Child abuse images removed from AI image-generator training source, researchers say

Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a database used to train popular AI image-generator tools.

The LAION research database is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

That December report led LAION, the nonprofit Large-scale Artificial Intelligence Open Network, to immediately take down its dataset. Eight months later, LAION said in a blog post that it had worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up database for future AI research.

Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that are still able to produce child abuse imagery.

One of the LAION-based tools that Stanford identified as the “most popular model for generating explicit imagery” — an older and lightly filtered version of Stable Diffusion — remained easily accessible until Thursday, when the New York-based company Runway ML removed it from the AI model repository Hugging Face. Runway said in a statement Friday it was a “planned deprecation of research models and code that have not been actively maintained.”

The cleaned-up version of the LAION database comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children.

San Francisco’s city attorney earlier this month filed a lawsuit seeking to shut down a group of websites that enable the creation of AI-generated nudes of women and girls. The alleged distribution of child sexual abuse images on the messaging app Telegram is part of what led French authorities to bring charges on Wednesday against the platform’s founder and CEO, Pavel Durov.
