The Rise of AI-generated Sexual Abuse Material
March 31, 2026
© Our Wave 2026. All rights reserved.

Artificial intelligence technology, commonly shortened to just “AI,” is a field of computer science in which machines and software are built to perform tasks that normally require human intelligence. AI has evolved from its theoretical foundations in the 1950s into powerful deep-learning models capable of generating content, perceiving the world at a near-human level, and taking autonomous, goal-directed actions.
Early systems were “symbolic AI,” in which humans explicitly defined the rules that shaped the output. The field has since moved to “machine learning,” in which machines learn patterns from data and generate content on their own. As a result, these tools have passed from the hands of the scientists, mathematicians, researchers, and engineers who created them into the hands of the general public.
Free AI tools that anyone can access have both pros and cons. They can have important and influential effects for the better. However, AI technology’s ability to cause immediate societal harm must be acknowledged and understood so that we can put a stop to dangerous and harmful behavior and work to prevent it from recurring.
The harms of easily accessible AI include the spread of misinformation and of deepfakes, which are realistic fake images, videos, or audio recordings. Within this category, AI-generated sexual abuse material in particular has unfortunately been on the rise.
The use of artificial intelligence to create non-consensual sexual content is often called “deepfake pornography” or “nudification.” It is a dangerous and rapidly accelerating problem that has resulted in widespread abuse online, targeting primarily women and children.
UN Women (2025) reported that “Deepfake pornography makes up 98 percent of all deepfake videos online, and 99 percent of the individuals targeted are women.” Horribly, there is also a massive and rapid rise in AI-generated child sexual abuse material (CSAM). The National Center for Missing and Exploited Children (2026) reported that the amount of this material created “skyrocketed from 4,700 in 2023 to more than 400,000 in just the first half of 2025.”
These reports reveal the disgusting reality that AI technology has brought to society. One of the scariest aspects of this situation is that the means of creating this material are accessible to the general public, as the technology allows those with minimal technical expertise to generate realistic nude images or sexually explicit videos from ordinary photos.
Cases of artificial intelligence being used to generate non-consensual sexual material are unfortunately prevalent today. A very real and very recent example took place in 2025 at North Carolina State University, the local university of Raleigh, North Carolina.
There, over thirty young women’s photographs were used to create pornographic material, entirely without their knowledge or consent, until a couple of the women discovered their photographs online and recognized others posted alongside them. WRAL News was one of the many organizations to cover the story, reporting that the suspect, Hayne Beard, “posted photographic images of a victim in a collection of computer-generated pornographic photos” (2025).
The warrants also state that of the thirty women who came forward, twenty-eight were NC State sorority members. All of these women had their photos manipulated and posted to a pornography site, where their faces were artificially imposed on nude women performing sexual acts. These heinous images were posted in a folder that directly referenced the university, as well as the women’s names, according to CBS 17.
AI-generated sexual abuse material causes severe, lasting psychological, professional, and social harm to those who are targeted. Even though the material is entirely fabricated by AI, victims experience trauma comparable to that of physical sexual assault, including anxiety, depression, and severe reputational damage.
The creation of these materials can cause deep emotional trauma, as targets of such content report feeling violated, humiliated, and dehumanized. They may feel powerless, as if control over their own image has been taken from them, which is deeply distressing, especially as they are forced to see their likenesses offered up non-consensually for the sexual gratification of others.
It is a type of hijacking of their identity, in which they may feel a loss of self-ownership. AI-generated sexual abuse material can also be created using the images of those who have already faced sexual trauma, retraumatizing those survivors.
Unfortunately, “sexually explicit deepfakes are often indistinguishable from real images or videos, enabling them to exploit, humiliate, or blackmail victims whose faces are superimposed onto the bodies of people engaged in sexual conduct without their consent,” said the Maryland Coalition Against Sexual Assault (MCASA).
Targets of this content often face severe reputational harm, which can make them less likely to be hired or to retain their existing jobs. Once deepfake pornography is created, anyone could purposely or accidentally find the content online simply by searching the target’s name.
Not only can this cause severe emotional distress, it can also disrupt and strain interpersonal relationships, whether at work, with family, between friends, or with a significant other. Even though it is not the target’s fault, their reputation, relationships, and opportunities can be significantly impacted.
The increasingly widespread creation of these materials is also fostering a harmful culture that normalizes sexual violence. When sexual violation is reduced to a single “click,” the victim is no longer perceived as a human being with feelings but as an object to be manipulated for others’ purposes.
The tools provided by AI technology are often user-friendly, mimicking apps like photo-editing filters, which portrays the act of violating someone as less of a crime and more of a creative exercise. The ease of creating such content feeds a culture of harassment and lowers the social cost of engaging in sexual abuse.
It is increasingly obvious that AI-generated sexual abuse material is becoming a significant issue in our modern world. While this does not mean AI is wholly evil or incapable of good, it does show the need for stronger legal and regulatory measures, technical preventative actions, and accountability from industry and platforms.
Societal pushback can start with ensuring the criminalization of the creation and possession of deepfake pornography. Encouragingly, governments are already being urged to expand the definition of child sexual abuse material (CSAM) to explicitly include AI-generated content.
For example, Thorn (2025) reports that the ENFORCE Act of 2025 “aims to close federal gaps in the U.S. by ensuring consistent penalties for AI-modified material.” Legislation like the TAKE IT DOWN Act allows survivors to seek redress and requires that platforms implement formal notice-and-removal procedures for such harmful content.
Technical preventative measures include placing greater emphasis on the actual creators of these AI tools, holding them accountable for the content created with their technology. Developers can be pressured to ensure that the datasets used to train their AI systems do not contain any CSAM or non-consensual intimate imagery (NCII).
There is also a push to mandate “content credentials,” a type of label or watermark embedded in AI-created files, which would make it easier for platforms to detect and filter AI-generated abuse. Companies are also being encouraged to perform rigorous “red teaming,” in which they find and block prompts that could lead to the generation of abusive material.
People who have been targeted by this abuse can use tools like StopNCII.org and NCMEC’s Take It Down service to generate digital “fingerprints” that automatically block the same material from being uploaded elsewhere. Educational institutions and families at home can define and reinforce the idea that deepfake pornography and similar abusive material are forms of cyberbullying. They can use programs like NetSmartz to teach children and parents how to recognize, resist, and report digital exploitation.
Spreading awareness and educating the public on what this content is, and how we can put an end to it legally and civilly, is a great way to gather support for the fight against it. Other important resources include the RAINN National Hotline, which offers 24/7 confidential support for survivors of sexual violence, including those targeted by AI-generated abuse, and Know2Protect (DHS), which runs awareness campaigns on recognizing online child exploitation.
We can practice “digital consent” by making it a general rule to always ask before posting photos of others, even in seemingly innocent contexts. This helps create a culture in which personal imagery is treated as private property, not something to be used and manipulated by others for harmful and malicious purposes.
If you see any suspicious activity or clearly AI-generated material, don’t repost it online or spread it around, even if your intention is to figure out what to do or to “bring awareness.” This will only cause more harm. Instead, report it immediately and in any way you can.
Secure your own data by using the highest possible privacy settings on your social media platforms and apps. While it is never a survivor’s fault if they are targeted, it’s a good practice to make it slightly more difficult for others to find and steal your content for malicious purposes.
If you know anyone who has been targeted by this type of material, stand by them and acknowledge that the material is fake and that they deserve the same support as any other survivor of sexual abuse. This helps build greater social understanding of the issue at hand and limits the reputational damage that often follows such scenarios.
With the modern world and its limitless technological advancements, it’s important to recognize that new systems introduce new issues into our personal and social world. The increased public use of AI has allowed cruel and malicious people to gain access to tools that generate harmful and distressing material.
It’s important to understand that this is no small problem, and that survivors of targeted deepfake pornography are just as valid in their experiences and trauma as survivors of any other form of sexual abuse. But there is hope: great pushes are being made in the legal and civil world to expand and strengthen laws protecting survivors, to implement technical safeguards in AI systems, and to spread awareness and important resources throughout society.