AI-generated child porn is about to make the CSAM problem much worse
- April 23, 2024
Just 5 to 8 percent of reports of child sexual abuse material (CSAM) to the CyberTipline ever lead to arrests, a new Stanford report found, due to a shortage of funding and resources, legal constraints, and a cascade of shortcomings in the process for reporting, prioritizing and investigating them. If those limitations aren’t addressed soon, the authors warn, the system could become unworkable as the latest AI image generators unleash a deluge of sexual imagery of virtual children that is increasingly “indistinguishable from real photos of children.”
“These cracks are going to become chasms in a world in which AI is generating brand-new CSAM,” said Alex Stamos, a Stanford University cybersecurity expert who co-wrote the report. While computer-generated child pornography presents its own problems, he said that the bigger risk is that “AI CSAM is going to bury the actual sexual abuse content,” diverting resources from actual children in need of rescue.
The report adds to a growing outcry over the proliferation of CSAM, which can ruin children’s lives, and the likelihood that generative AI tools will exacerbate the problem. It comes as Congress is considering a suite of bills aimed at protecting kids online, after senators grilled tech CEOs in a January hearing.
Among those is the Kids Online Safety Act, which would impose sweeping new requirements on tech companies to mitigate a range of potential harms to young users. Some child-safety advocates also are pushing for changes to the Section 230 liability shield for online platforms. Though their findings might seem to add urgency to that legislative push, the authors of the Stanford report focused their recommendations on bolstering the current reporting system rather than cracking down on online platforms.
“There’s lots of investment that could go into just improving the current system before you do anything that is privacy-invasive,” such as passing laws that push online platforms to scan for CSAM or requiring “back doors” for law enforcement in encrypted messaging apps, Stamos said. The former director of the Stanford Internet Observatory, he previously served as security chief at Facebook and Yahoo.
The report makes the case that the 26-year-old CyberTipline, which the nonprofit National Center for Missing and Exploited Children is authorized by law to operate, is “enormously valuable” yet “not living up to its potential.”
Among the key problems outlined in the report:
- “Low-quality” reporting of CSAM by some tech companies.
- A lack of resources, both financial and technological, at NCMEC.
- Legal constraints on both NCMEC and law enforcement.
- Law enforcement’s struggles to prioritize an ever-growing mountain of reports.
Now, all of those problems are set to be compounded by an onslaught of AI-generated child sexual content. Last year, the nonprofit child-safety group Thorn reported that it is seeing a proliferation of such images online amid a “predatory arms race” on pedophile forums.
While the tech industry has developed databases for detecting known examples of CSAM, pedophiles can now use AI to generate novel ones almost instantly. That may be partly because leading AI image generators have been trained on real CSAM, as the Stanford Internet Observatory reported in December.
When online platforms become aware of CSAM, they’re required under federal law to report it to the CyberTipline for NCMEC to examine and forward to the relevant authorities. But the law doesn’t require online platforms to look for CSAM in the first place. And constitutional protections against warrantless searches restrict the ability of either the government or NCMEC to pressure tech companies into doing so.
NCMEC, meanwhile, relies largely on an overworked team of human reviewers, the report finds, partly due to limited funding and partly because restrictions on handling CSAM make it hard to use AI tools for help.
To address these issues, the report calls on Congress to increase the center’s budget, clarify how tech companies can handle and report CSAM without exposing themselves to liability, and clarify the laws around AI-generated CSAM. It also calls on tech companies to invest more in detecting and carefully reporting CSAM, makes recommendations for NCMEC to improve its technology, and asks law enforcement to train its officers on how to investigate CSAM reports.
In theory, tech companies could help manage the influx of AI CSAM by working to identify and differentiate it in their reports, said Riana Pfefferkorn, a Stanford Internet Observatory research scholar who co-wrote the report. But under the current system, there’s “no incentive for the platform to look.”
Though the Stanford report does not endorse the Kids Online Safety Act, its recommendations include several of the provisions in the Report Act, which is more narrowly focused on CSAM reporting. The Senate passed the Report Act in December, and it awaits action in the House.
In a statement Monday, the National Center for Missing and Exploited Children said it appreciates Stanford’s “thorough consideration of the inherent challenges faced, not just by NCMEC, but by every stakeholder who plays a key role in the CyberTipline ecosystem.” The organization said it looks forward to exploring the report’s recommendations.