Researcher Clare McGlynn, an expert in violence against women and girls, warns that the latest generation of artificial intelligence chatbots is facilitating new forms of abuse at a concerning scale and intensity. Although the relationship between technology and gender-based violence is not new (sexual deepfakes and image-based abuse are established examples), McGlynn argues that chatbots represent a qualitative shift. Her research documents how these tools, often available for free, allow users to simulate scenarios of rape, incest, and child sexual abuse, as well as other forms of gender-based violence.

As reported by Patricia Clarke on observer.co.uk, this warning coincides with an investigation by the Internet Watch Foundation (IWF), which highlights a rapid rise in child sexual abuse material generated by AI. The data is particularly alarming: in 2025, thousands of AI-generated videos were identified, a dramatic increase on the previous year, and a significant proportion of this content is classified at the highest levels of severity. Girls represent the vast majority of victims in this material, evidencing a clear gender bias in the harm caused.
Reports agree that the problem is not only the misuse of technology but also the design decisions of the platforms. When companies prioritise growth and user acquisition over security, they create environments that facilitate abuse. In particular, open-source AI models are highlighted as a risk factor, as any user can download them, modify them, and remove their safeguards. This accessibility has been celebrated in dark web forums, where some users see AI as a tool to materialise illegal fantasies with a high degree of realism.
One of the most concerning areas is that of role-play and companion applications, where chatbots act as fictional interlocutors. Platforms with millions of users allow the creation of characters that can represent abusive or sexualised situations, including minors. The lack of effective control over this content, and the ease of access to it, amplify the risks, especially for young users.
McGlynn defines this phenomenon as “chatbot-simulated violence” and emphasises that it remains scarcely visible in academic research. Despite the abundance of studies on AI safety, there is a significant lack of analysis focused on gender impact. This invisibility may help perpetuate systemic risks as the technology evolves.
In terms of regulation, experts believe that the current response is insufficient and fragmented. Some measures, such as restricting access to certain applications or prohibiting them in certain countries, are seen as limited steps that do not address the structural problem: the very design of the platforms. In this context, McGlynn proposes the creation of a new criminal offence for the “dangerous deployment of AI chatbots”, which would hold companies accountable for not implementing adequate harm prevention measures.
At the same time, the IWF demands that safety by design become a mandatory standard, including pre-launch testing and independent audit mechanisms. There is political movement as well: in the United Kingdom, the House of Lords has proposed introducing criminal liability for providers of unsafe chatbots, and there are plans to bring these services within online safety legislation. However, critics point out that there is still no specific regulator and no clear obligations to ensure safety before products reach the public.
Ultimately, the reports reveal a growing tension between technological innovation and the protection of fundamental rights. Without more forceful interventions, there is a risk that AI will not only reflect but also amplify existing forms of violence, especially against women and girls. For security professionals, this implies the need to adopt a proactive approach based on prevention, responsibility, and the ethical design of emerging technologies.