Introduction: When Violence No Longer Needs a Body
Artificial intelligence is often
discussed through the language of innovation, efficiency, and progress.
Governments promote it. Universities celebrate it. Technology companies market
it as the future. Yet beneath this optimistic narrative lies another reality, one
that women are already experiencing in profoundly intimate and violent ways.
One of the clearest examples is the
rise of AI-generated deepfakes.
Deepfakes use artificial
intelligence to create fabricated but highly realistic images, videos, or audio
recordings that make people appear to say or do things they never did. While
the technology itself is not inherently gendered, its use overwhelmingly
targets women, particularly through the creation of non-consensual
sexual content. Female journalists, academics, students, celebrities,
politicians, and ordinary women have found their faces digitally inserted into
pornographic videos without consent, often with devastating psychological,
professional, and social consequences.
What makes this especially
disturbing is not only the violation itself, but the ease with which it can now
occur. A photograph taken from social media, a public interview clip, or a
university profile picture can become raw material for sexual exploitation.
Violence no longer requires physical proximity. It can be generated remotely,
anonymously, and at scale.
This essay argues that AI-generated
deepfakes represent a new form of gendered violence in which women’s bodies,
identities, and reputations become technologically reproducible and endlessly
manipulable. AI did not invent misogyny, but it has created new mechanisms
through which misogyny can operate faster, at wider scale, and with an alarming
veneer of legitimacy.
The Gendered Reality of Deepfake Abuse
Although deepfake technology has
multiple applications, research consistently shows that the overwhelming
majority of non-consensual deepfake content targets women in sexually explicit
ways. The technology has become deeply entangled with existing patterns of
misogyny, harassment, and sexual domination online.
This is significant because it
reveals something important about AI itself: technologies do not emerge outside
of culture. They absorb the values, desires, and violences already present
within society. Deepfakes did not suddenly create the objectification of women;
rather, they automated and intensified it.
The harm caused by deepfakes is
often minimised because the images or videos are “not real.” But this
distinction misunderstands the nature of violence. Psychological humiliation,
reputational destruction, fear, anxiety, and loss of professional credibility
are real consequences, regardless of whether the content is fabricated.
For women, especially those working
in public-facing professions, the threat extends beyond embarrassment.
Deepfakes can undermine authority, silence participation, and force withdrawal
from public spaces. Female politicians, academics, and journalists are
particularly vulnerable because credibility is already unevenly distributed
along gendered lines. A manipulated video does not emerge into a neutral
environment; it enters a culture already willing to scrutinise, sexualise, and
disbelieve women.
The Collapse of Trust and the Weaponisation of Doubt
One of the most dangerous aspects
of deepfake technology is its ability to destabilise trust itself.
Traditionally, photographs and
videos have functioned as forms of evidence. AI-generated media disrupts this
assumption by making fabrication increasingly difficult to detect. This creates
what some scholars describe as a “liar’s dividend,” where genuine evidence can
be dismissed as fake, while fabricated material can circulate as truth.
For women, this has profound
implications.
Women already navigate cultures in
which their testimony is frequently questioned, minimised, or reframed as
emotional exaggeration. Deepfake technology intensifies this dynamic by
introducing permanent uncertainty around visual evidence and identity. Women
may struggle not only to prove that something happened, but also to prove that
something did not happen.
This creates a particularly
gendered form of vulnerability. A woman can become digitally violated without
her participation, knowledge, or consent, while simultaneously carrying the
burden of disproving the fabrication.
The violence, therefore, is not
only sexual. It is epistemic. It attacks credibility, coherence, and
trustworthiness.
Deepfakes, Power, and Institutional Vulnerability
The rise of deepfake abuse also
raises urgent institutional questions, particularly within universities and
workplaces.
Institutions increasingly encourage
visibility. Staff and students are expected to maintain online professional
profiles, participate in digital engagement, attend recorded meetings, and
produce public-facing content. Yet this visibility also creates exposure.
Images and videos shared for legitimate professional purposes can be extracted
and repurposed into exploitative material.
Women in academia and leadership
positions may therefore experience a new form of technological precarity: the
awareness that professional visibility itself carries risk.
This matters because institutional
responses often lag behind technological realities. Policies around harassment,
misconduct, and safeguarding frequently remain grounded in older understandings
of abuse that separate “real” violence from digital harm. As a result, women
subjected to AI-generated exploitation may encounter confusion, minimisation,
or procedural gaps when seeking support.
And once again, certain women are
more exposed than others.
Black women and women of colour
often experience overlapping forms of racialised misogyny online, including
hypersexualisation, stereotyping, and disproportionate harassment. Deepfake
technologies do not erase these dynamics; they reproduce them in digital
form. AI systems trained within unequal societies inevitably inherit unequal
patterns of representation and exploitation.
The Illusion of Neutral Technology
Defenders of AI often argue that
technology itself is neutral and that responsibility lies solely with users.
But this argument is too simplistic.
Technologies are shaped by the
environments in which they are designed, funded, and deployed. Deepfake systems
are not emerging in a social vacuum; they are developing within digital
cultures that already normalise misogyny, harassment, and the commodification
of women’s bodies.
Moreover, many AI systems are built
under the assumption that innovation should move quickly, while ethical and
legal protections struggle to keep pace. The result is a familiar pattern:
women become the testing ground for technological harm long before institutions
decide the harm is serious enough to address.
Neutrality, in this context,
becomes a form of deflection.
Because when a technology
overwhelmingly harms one group in particular, it becomes increasingly difficult
to argue that its social effects are merely accidental.
Conclusion: Violence in the Age of Artificial Intimacy
AI-generated deepfakes reveal that
violence against women is evolving alongside technology. Harm no longer
requires physical contact, geographic proximity, or even direct interaction. A
woman’s image, voice, or likeness can now be manipulated, circulated, and
consumed without her consent, often by people she will never know.
What makes this especially
dangerous is the combination of realism, speed, and scale. Deepfakes transform
misogyny into something infinitely reproducible. They allow humiliation to
circulate rapidly while making accountability increasingly difficult to secure.
AI did not invent violence against
women. But it has created new infrastructures through which that violence can
operate quietly, anonymously, and with technological sophistication.
And perhaps that is what is most
unsettling.
Not simply that machines can
fabricate women’s bodies, but that society continues to treat those violations
as secondary harms until the damage becomes impossible to ignore.
Call to Action
If AI-generated violence against
women is treated as an unfortunate side effect of innovation rather than a
structural issue requiring urgent intervention, the consequences will only
deepen. Deepfakes are not harmless digital experiments. They are part of a
growing ecosystem of technological abuse that exploits the gaps between law,
ethics, and accountability.
Governments, universities,
technology companies, and institutions can no longer afford to respond
reactively. There must be stronger legal protections around non-consensual
AI-generated imagery, clearer institutional safeguarding policies, and greater
accountability for platforms that allow exploitative content to circulate
unchecked. AI development cannot continue to prioritise speed, profit, and
experimentation while treating women’s safety as an afterthought.
But regulation alone is not enough.
We also need a cultural shift in
how technological harm is understood. Violence does not become less real
because it is digital. Psychological humiliation, reputational destruction,
sexual exploitation, and fear are not diminished simply because a machine
helped produce them.
And women should not have to prove
catastrophic damage before their violation is taken seriously.
The conversation around AI must
therefore move beyond fascination with innovation and begin asking harder
questions about power, ethics, and who is expected to absorb the risks of
technological progress. Because if we continue to treat AI as neutral while
ignoring the unequal harms it produces, we are not witnessing the future of
technology.
We are witnessing the automation of
old violences in new forms.