- cross-posted to:
- stablediffusion@lemmit.online
A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.
Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.
The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.
Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.
There’s no CSAM because there’s no child. Critical thinking is hard, I know.
Except when the model is trained on data containing CSAM
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
Now that’s 100% reprehensible. I didn’t read the link, but the only excuse I can think of is if it’s used to automatically recognise CSAM, so a human doesn’t have to look at it.
The link explains that images whose hashes match known CSAM were found in a dataset used to train a text-to-image model. There are screening tools that could have caught this, which the dataset’s creators failed to use. Gigantic and repugnant failure. Makes me want to never download a dataset.
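For anyone wondering what “hashes matching known CSAM” means mechanically, here’s a minimal sketch of hash-based blocklist filtering, assuming a plain text file of SHA-256 digests (the file names and functions are hypothetical; production systems use perceptual hashes such as PhotoDNA against lists maintained by organisations like NCMEC/IWF, so re-encoded copies still match):

```python
import hashlib
from pathlib import Path

def load_blocklist(path: str) -> set[str]:
    # Hypothetical blocklist format: one lowercase hex SHA-256 digest per line.
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip()
    }

def sha256_of(path: Path) -> str:
    # Hash the file in 1 MiB chunks so large images don't load fully into memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def filter_dataset(image_dir: str, blocklist: set[str]) -> list[Path]:
    # Keep only files whose exact hash is NOT on the blocklist.
    return [
        p for p in Path(image_dir).rglob("*")
        if p.is_file() and sha256_of(p) not in blocklist
    ]
```

Note that exact cryptographic hashes only catch byte-identical copies; any crop, resize, or re-encode produces a different digest, which is why real screening leans on perceptual hashing, and why the point below about unmatched photos matters.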
Now think of the photos that don’t have any matching hashes. Social media hosts a ton of CSAM, and as long as they scrape from Facebook/Insta/Twitter, or from porn sites with no verification system, there will continue to be CSAM in their training data.