Introduction

In a concerning revelation, a recent report by Thorn, a nonprofit organization focused on defending children from sexual abuse, finds that roughly one in ten minors say their classmates have used artificial intelligence to create explicit images of other kids.

The Survey Findings

Thorn surveyed more than 1,000 minors aged 9 to 17 across the U.S. The findings were alarming: 11% of respondents said they knew someone who had created AI-generated explicit images, 7% admitted to sharing such images, and nearly 20% had seen nonconsensual images.

Expert Opinions

“A lot of the times when we get these new technologies, what happens is sexual exploitation or those predators exploit these technologies,” said Lisa Thompson, vice president of the National Center on Sexual Exploitation.

The report underscores how mainstream this issue has become, with children themselves engaging in image-based sexual abuse against their peers.

Legislative Response

In response to findings like these, lawmakers introduced the Take It Down Act earlier this year. The legislation aims to ban nonconsensual explicit content, including AI-generated imagery, from being posted online and mandates that websites remove such content within 48 hours.

Conclusion

The misuse of AI by minors to create explicit images of their peers is a growing problem that demands immediate attention from parents and policymakers alike. How can we better educate our children about the responsible use of this technology?