Is that really your face?
Deep fakes, creep walks, and the legal challenges of technology
There has been a recent flurry of stories about the gross and creepy deployment of technology. The most glaring and alarming has been the use of AI tools to generate child-sex abuse materials and non-consensual intimate imagery. Alongside them have come concerns about the filming and distribution of material that is not exactly sexual, but surreptitious and seedy. This week, the BBC ran a piece on the spate of men filming young women on nights out and uploading the footage to YouTube, often with sexual undertones, or to make provocative political points. These worries have been further heightened by the emergence of glasses with filming capabilities, and by concerns that they can be deployed covertly to produce content that is voyeuristic in the eyes of all but the law.
Both issues pose political and legal challenges. AI-generated images and covert filming both cause harm, distress, and upset to those involved. They also tend to cause a broader public concern. That someone can do this stuff is an outrage to morality, makes people feel less secure, and generally feels wrong. It sounds like the sort of thing that should be illegal, especially when done for profit or perversion, yet the law has proven reactive rather than preventative.
Generation of non-consensual intimate imagery has now been outlawed. The law change follows a pattern of closing lacunae in the law only after they become social problems. Under the last government, legislative action was brought against upskirting and the non-consensual sharing of intimate images. Yet this legislative whack-a-mole approach always means that thousands become victims before the state bestirs itself to action, and that gaps often remain around the edges of new laws. For example, for several years it was illegal to take a photo up a woman’s skirt without her consent, but not to take one down her top. Meanwhile, even now, using AI to generate non-intimate pictures designed to harass, distress or embarrass is probably still legal.
The evolving hodgepodge is the result partly of our overburdened legislature and piecemeal approach to legislation. It was only with the Online Safety Act that there was comprehensive engagement with the issue of non-consensual intimate images, nearly two decades after it first became an issue. There is, however, a broader, more philosophical issue here: many of the wrongs enabled by emerging technology sit poorly within our legal framework. Fundamentally, our law lacks a coherent concept for protecting identity itself.
At the risk of broad oversimplification, most of our laws can be considered property offences, offences against the person, or public order offences. Most everyday offences slot neatly into one of these. Theft interferes with what we own. Assault harms our bodies. Public order law exists to keep shared spaces functional and safe. The wrongs enabled by technology do not fit easily within these frameworks. If an image is created algorithmically, for example, there is no physical interference with the subject. Equally, lawful possession of an intimate image may imply legal ownership, and the right to do with it what you wish, even where moral ownership clearly does not exist. Each of our attempts to legislate around these problems has had to grapple with these challenges.
The issue of “creepy” filming makes this problem even clearer. In most cases, the filming itself is lawful. British law has historically taken a robust view of public space, treating it as a place where people must tolerate observation by others. But new technologies and online platforms have changed the consequences of being watched. Footage that might once have been fleeting can now be recorded permanently, edited, and distributed to vast audiences with little effort.
The harm arises not from the act of observation alone, but from the transformation of a person into content. Individuals can find themselves turned into objects of commentary, ridicule, or sexualisation without ever knowing they were filmed. Our revulsion at this sits awkwardly within existing legal frameworks, which remain far more comfortable regulating physical intrusion than regulating the appropriation of identity. That these videos often end up in monetised formats, pushing political points and/or with highly sexualised undertones, makes them feel wrong, but they remain almost perfectly legal. Even the civil courts have been reluctant to interfere with this, with privacy expectations in public spaces confined mainly to children.
Other legal systems handle this differently. Across many European jurisdictions, there is a “right to personality”. Its existence provides greater protection for individuals against intrusions into their personal identity. In Greece, for example, even taking pictures of people in public spaces requires their consent. In Germany, the “right to free development of the personality” includes significant control over one’s image, and this right has been further strengthened by specific laws governing intimate photos. In Spain, robust data protection laws prevent the publication of images on the internet without the subject’s consent.
At first glance, these approaches appear attractive. They recognise something that British law has often struggled to articulate: that identity itself can be vulnerable to misuse. By giving individuals greater control over how their likeness is captured, reproduced, and distributed, personality rights aim to guard against precisely the forms of harm that new technologies have made easier to inflict.
They also reflect a broader shift in how societies understand personal autonomy. If individuals have strong legal protections over their bodies and property, it is not immediately obvious why their image, voice, or digital likeness should be treated any differently. As artificial intelligence increasingly allows realistic replication of individuals without their involvement, the argument that identity deserves direct legal protection is likely to become more persuasive.
Personality rights can also offer clarity. Rather than relying on an awkward patchwork of privacy law, harassment statutes, and data protection regulation, they provide a more transparent framework for assessing whether a person’s identity has been exploited. In doing so, they promise a more preventative approach to harm, rather than the reactive legislative cycle that has characterised Britain’s response to image-based abuse.
Having this starting point would make it easier to legislate for new technology and new harms. We are already seeing how technology can accelerate faster than politics. The availability of generative AI has opened up a raft of image- and personality-based risks. So far, only intimate images have been legislated for; other types of harassing imagery, and the impersonation of voices, remain beyond the scope of the law. Equally, the use of facial recognition is becoming increasingly complex and contested, often without a clear legal framework.
Yet personality rights are not an unalloyed good. The same legal tools designed to protect individuals from exploitation risk introducing new constraints on public life, journalism, and artistic expression. In seeking to give people greater control over how they are represented, such rights inevitably raise difficult questions about who decides when observation, recording, or depiction crosses the line into misuse.
One immediate concern is the potential chilling effect on legitimate public interest activity. Documentary filmmaking, investigative journalism, and citizen recording of public events often rely on the ability to capture images without securing individual consent from every subject. While most personality-right regimes include exceptions for public-interest reporting, the boundaries of those exceptions are frequently contested and litigated. The result can be a legal environment in which recording public life becomes slower, riskier, and more vulnerable to challenge by those with the resources to pursue legal action. Several European countries have seen such laws used to suppress stories of genuine public interest, most famously the concealment of François Mitterrand’s ill health while he was President of France.
There is also a risk that personality rights disproportionately benefit those already well-positioned to defend their reputations. Wealthy individuals, corporations, and public figures are often better able to enforce image-based rights than ordinary citizens. What begins as a tool to protect vulnerable individuals from exploitation may therefore evolve into a mechanism that allows powerful actors to control how they are portrayed, suppress unflattering coverage, or discourage scrutiny altogether. In a legal system already renowned for claimant-friendly libel laws and punitively deployed privacy laws, personality rights could further entrench this privilege.
More broadly, personality rights challenge long-standing assumptions about public space. British law has traditionally accepted that public life involves a degree of unwanted visibility. Being observed, photographed, or recorded has generally been considered part of the social contract for shared spaces. Expanding legal control over personal likeness risks transforming public environments into permission-based zones, where everyday documentation becomes subject to negotiation or legal uncertainty.
There are also practical enforcement questions. Modern digital content spreads rapidly across jurisdictions, platforms, and anonymous networks. Granting individuals stronger legal control over their likeness does not automatically make misuse easier to prevent or remedy. Instead, it may create expectations of protection that are difficult to fulfil in practice, potentially shifting responsibility onto platforms and courts without meaningfully reducing the volume of harmful content. Existing legislation already suffers from this, and the problem can be compounded by political imperatives that push against enforcement, as we seem to be seeing in the government’s slow approach to dealing with Twitter.
Ultimately, the debate over personality rights reflects a broader tension in how modern societies understand identity itself. Emerging technologies increasingly treat human likeness as a form of transferable data. Images, voices, and behavioural patterns can be captured, reproduced, and redistributed with unprecedented ease. From a technological perspective, identity becomes simply another dataset: something that can be processed, repurposed, and monetised.
Legal systems, however, have traditionally approached identity in fragmented ways. They protect the body through offences against the person, property through theft and fraud, and reputation through defamation. What they have rarely attempted to regulate directly is the misuse of the self as representation. Personality rights seek to close that gap by recognising identity as something worthy of standalone legal protection.
The difficulty is that protecting identity inevitably introduces friction into public life. Observation, recording, and documentation are fundamental to journalism, artistic expression, and democratic accountability. Expanding legal control over personal likeness, therefore, requires navigating a difficult balance between two competing risks: allowing identity to become a freely exploitable commodity, or constructing legal boundaries that restrict legitimate scrutiny and expression.
In practice, the strongest case for personality rights may lie in limited and carefully targeted applications rather than wholesale legal transformation. There is a growing argument for stronger protection against commercial exploitation, AI impersonation, and the creation of sexualised or deceptive imagery without consent. These forms of misuse strike most directly at personal autonomy while posing fewer risks to legitimate public-interest activity. A piecemeal approach to legislation continues to risk lacunae. Sure, it is an offence now to create an intimate AI image – but what of one that shows someone committing a crime, or something else that damages their reputation?
At the same time, caution remains essential. Public space has long required a degree of tolerance for observation and documentation. Journalism, artistic expression, and civic accountability depend upon it. Overly expansive personality rights risk shifting public life from something shared and observable into something increasingly conditional and litigated.
Technological change is forcing societies to confront questions that legal systems have long avoided. As the boundaries between individuals and their digital representations blur, the challenge is not simply how to protect identity, but how to do so without undermining the openness that public life depends upon. How liberal societies strike that balance will shape their response to the next generation of technological change.
And now for something else…
My picks from around the web this week