For years, deepfakes were treated as a novelty: clever party tricks, viral jokes, or speculative future threats. That complacency is now impossible to defend.
The recent controversy surrounding Elon Musk’s AI chatbot Grok, which allowed users to digitally undress women without their consent, marks a turning point. Not because the technology is new, but because the harm is now obvious, scalable, and happening in public.
What we are witnessing is not a failure of one product or one platform. It is the collision of powerful generative AI with weak guardrails, delayed regulation, and an online ecosystem that rewards speed over safety.
Deepfakes are no longer fringe. They are infrastructure.
What Is a Deepfake?
A deepfake is any AI-generated or AI-altered image, video, or audio that convincingly impersonates a real person. With enough publicly available material (photos, videos, voice clips, and so on), AI systems can replicate likeness, movement, and speech with disturbing accuracy.
What once required specialist knowledge now takes minutes and a text prompt. Tools are cheap, accessible, and increasingly embedded inside mainstream platforms.
Grok did not invent this capability. But by allowing image manipulation directly inside a major social network, including replies that post altered images back to the people depicted in them, it demonstrated how quickly AI misuse can become ambient, automated, and deeply personal.
The Grok Backlash: A Symptom, Not an Outlier
In recent weeks, women on X have described having dozens, in some cases hundreds, of sexualised images of themselves generated by Grok. Many were based on profile pictures. Some were posted directly back to them by the chatbot itself.
The impact has been severe: humiliation, fear, withdrawal from public platforms, and the mental toll of repeatedly reporting abuse that keeps reappearing. Several women described the experience not as harassment but as a violation, an assault mediated by software.
UK politicians, campaigners, and regulators responded with unusually strong language. The Prime Minister called the images “disgraceful” and “disgusting”. Ofcom launched an investigation. The government expressed “full support” for regulatory action.
And yet the legal reality tells a more troubling story.
The Dangerous Gap in the Law
In the UK, it is already illegal to share non-consensual sexualised deepfakes of adults. But legislation passed in June 2025 to criminalise creating or commissioning such images has still not been brought into force.
That delay matters because, in an AI-driven system, the distinction between creating, requesting, sharing, and reposting is blurred, and abusers exploit the grey areas. Campaigners and legal experts argue the law is ready, the harm is clear, and the delay is indefensible.
As one campaigner put it, this is not just a criminal justice issue. It is about regulating a tech ecosystem that facilitates and profits from abuse.
Limiting Grok’s image editing to paid users, as X has since done, only sharpened that critique. Turning a harmful capability into a premium feature is not mitigation. It is monetisation.
Why This Should Worry Everyone
The sexual abuse enabled by deepfakes is reason enough for urgent action. But focusing only on that misses the broader danger.
Deepfake technology is rapidly becoming one of the most effective tools in modern fraud and social engineering. Security professionals now warn that AI-driven impersonation is cheap, scalable, and increasingly convincing across voice, video, SMS, and messaging apps.
Attackers no longer need to break systems. They break trust.
Using publicly available information, deepfake personas can mimic executives, colleagues, family members, or vendors, referencing real names, roles, locations, and personal details. The result is a request that feels legitimate in the moment: urgent, confidential, and authoritative.
This is why regulators and financial institutions are raising the alarm. By 2026, many organisations expect facial and voice recognition to be unreliable on their own, due to the prevalence of AI-generated impersonation.
If you rely on “I recognised the voice” or “it looked real”, you are already exposed.
Detection Is Not Enough
A common response to deepfakes is to call for better detection tools. But experts increasingly agree this is the wrong focus.
As AI improves, the obvious tells – strange blinking, awkward movements, audio glitches – are disappearing. Detection becomes a cat-and-mouse game, and attackers only have to succeed once.
The more resilient approach is procedural, not perceptual.
Security specialists emphasise that the signals that still matter are behavioural: urgency, secrecy, and requests that bypass established processes. If a message demands an exception (send money now, skip verification, don't tell anyone), that is a genuine red flag.
In other words, organisations and individuals must stop asking “does this look real?” and start asking “does this follow the rules?”
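To make that shift concrete, here is a minimal sketch in Python of what a process-based check looks like. It is purely illustrative: the field names, rules, and the very idea of encoding policy this way are assumptions for the sake of the example, not a prescribed tool. Notice that it never asks whether the message looked or sounded real, only whether the request follows agreed procedure.

```python
# Illustrative sketch only: a process-based check that ignores how "real"
# a message looks and asks only whether the request breaks agreed procedure.
# All field names and rules here are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    asks_for_payment: bool        # does it move money or credentials?
    demands_urgency: bool         # "now", "immediately", "before 5pm"
    demands_secrecy: bool         # "don't tell anyone", "keep this between us"
    bypasses_process: bool        # skips the normal approval or verification step
    verified_out_of_band: bool    # confirmed via a second, already-trusted channel

def requires_escalation(req: Request) -> bool:
    """Return True if the request should be paused and escalated,
    regardless of how convincing the voice, face, or caller ID was."""
    if req.asks_for_payment and not req.verified_out_of_band:
        return True
    return any([req.demands_urgency, req.demands_secrecy, req.bypasses_process])

# Example: an "urgent, confidential" payment request from a familiar voice.
suspicious = Request(
    asks_for_payment=True,
    demands_urgency=True,
    demands_secrecy=True,
    bypasses_process=True,
    verified_out_of_band=False,
)
print(requires_escalation(suspicious))  # True: follow the process, not the voice
```

None of these rules care how convincing the fake was, which is exactly the point.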
What Individuals Can Do Now
For the general public, the advice is simple but uncomfortable: assume that what you see and hear can be faked.
That means:
- Don’t trust caller ID, profile photos, or familiar voices
- Don’t treat personal details as proof of identity
- Be sceptical of urgent or emotionally loaded requests
- Verify through a second channel you already trust
- Use agreed “safe words” with family and close contacts
This is not paranoia. It is adaptation.
Just as email forced us to learn about phishing, AI forces us to rethink authenticity itself.
What Organisations Must Change
For businesses and institutions, deepfakes expose a structural weakness. Many security systems still assume that seeing or hearing a person is strong authentication.
It isn’t.
Organisations need layered verification that does not rely on a single factor such as biometrics alone. That means stronger multi-factor authentication, clear escalation paths, and training that focuses on process discipline rather than on spotting fakes.
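As a rough illustration of what "layered" means in practice, the sketch below (Python again, with hypothetical function names and a made-up policy threshold) gates a high-risk transfer on independent procedural checks rather than on recognising a face or a voice.

```python
# Illustrative sketch only: layered verification for a high-risk action.
# No single factor (voice, video, biometrics, caller ID) is sufficient on its own.
# Names and thresholds are hypothetical, not a specific product's API or policy.

HIGH_RISK_THRESHOLD_GBP = 10_000  # hypothetical policy threshold

def approve_transfer(amount_gbp: float,
                     passed_mfa: bool,
                     confirmed_by_callback: bool,
                     second_approver_signed_off: bool) -> bool:
    """Approve only when independent, procedural checks all pass."""
    if not passed_mfa:
        return False
    if not confirmed_by_callback:      # call back on a number you already hold,
        return False                   # never one supplied in the request itself
    if amount_gbp >= HIGH_RISK_THRESHOLD_GBP and not second_approver_signed_off:
        return False                   # dual control for high-value transfers
    return True

# A convincing "CEO" video call alone satisfies none of these checks.
print(approve_transfer(25_000, passed_mfa=True,
                       confirmed_by_callback=True,
                       second_approver_signed_off=False))  # False
```

The design choice is deliberate: every check is something an attacker cannot satisfy simply by producing a better fake.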
Importantly, the attack surface is expanding beyond email. Voice calls, messaging apps, and personal devices are now prime vectors, even in environments organisations mistakenly treat as “out of scope”.
In the deepfake era, that assumption becomes an open door.
What Legislators and Platforms Must Do
The Grok episode underscores a hard truth: waiting for harm before acting is no longer acceptable.
Legislators must act without delay to bring existing deepfake laws fully into force. It is essential that the law criminalises not only the sharing of harmful deepfake content, but also the creation and commissioning of such material.
Regulators should be empowered to intervene proactively, influencing the design and deployment of AI models to prevent abuse before it happens. Above all, AI-enabled harm must be treated as a systemic failure within the technology ecosystem and not simply dismissed as isolated user errors.
At the same time, platforms have a responsibility to build ethical guardrails into their AI models from the outset. Harmful capabilities should be removed entirely, rather than merely restricted or hidden behind paywalls. Accountability must be proportional to the reach and power these platforms hold.
If a traditional media company were to display unlawful images in public spaces, it would face swift action and sanctions. AI platforms should be held to the same, if not higher, standards because algorithmic harm is no less real or damaging.
Where Is This Heading?
Deepfakes will become more convincing, more personalised, and more integrated into everyday communication. Trust – once implicit – will become something we actively verify.
This is not a dystopian prediction. It is the trajectory we are already on.
The question now is whether we respond with seriousness, or continue to patch over damage after the fact. The Grok controversy is not just about one chatbot. It is about whether consent, identity, and accountability still matter in an automated world. Technology moves fast. But society doesn’t get to move slowly anymore, not when the cost of delay is measured in violated lives, stolen identities, and the steady erosion of trust itself.
Deepfake harm is accelerating, and the tools to fight it must keep pace. If your organisation is ready to move beyond reactive measures and build resilient, future-proof defences against AI-driven abuse, speak to us now. Protecting trust, consent, and safety isn't optional; it's urgent.
Image taken from a fan-generated AI video depicting Manchester United's former manager Ruben Amorim as Freddie Mercury.

