A group of U.S. senators, including Robert Casey, Richard Blumenthal, John Fetterman, and Kirsten Gillibrand, has sent a letter to FTC Chair Lina Khan seeking information on the commission's efforts to combat the use of artificial intelligence (AI) in scams targeting older Americans.
Stressing the urgency of countering AI-driven fraud, the senators emphasized the need for comprehensive data on AI-related scams. In their letter, they urged the FTC to explain how it collects data on scams involving AI and how it ensures those scams are accurately represented in its Consumer Sentinel Network (Sentinel) database.
Consumer Sentinel is the FTC's investigative database, used by law enforcement agencies and containing reports on a wide range of scams. The senators posed four specific questions to Chair Khan about the FTC's practices for collecting data on AI scams.
First, they asked whether the FTC is able to identify AI-powered scams and whether such incidents are appropriately labeled in Sentinel. They also sought information on the commission's ability to detect generative AI scams that victims themselves may not recognize.
The lawmakers further requested a breakdown of Sentinel's data to show the prevalence and success rates of different scam types. Lastly, they asked whether the FTC itself uses AI to process the data Sentinel gathers.
Notably, Senator Casey, one of the signatories, chairs the Senate Special Committee on Aging, which focuses on issues affecting older Americans.
The senators' inquiry coincides with the release of global guidelines on November 27, endorsed by the U.S., the U.K., Australia, and 15 other nations. The guidelines aim to enhance the security of AI models by promoting a "secure by design" approach; however, they lack specific controls addressing image-generating models, deepfakes, data collection methods, and their impact on model training.
This collective effort by the senators reflects growing concerns about the exploitation of AI in perpetrating scams, particularly targeting vulnerable populations, and underscores the importance of regulatory measures to safeguard against such fraudulent activities.
