The world around me tends to feel very polarised around artificial intelligence. I suspect two things are going on. The first is the simple divide between the natural enthusiast and the instinctive cynic. The former looks for reasons to be hopeful about anything, while the latter waits to shoot it down. The enthusiasts also tend to be broader tech evangelists, with a bias towards adopting and advocating for whatever new thing emerges.
The cynics, on the other hand, tend to be writers and artists. Some of their scepticism about the tech might be self-interest – after all, it is their skills which are most often threatened by generative AI. It goes beyond this, however. They are people who enjoy thinking, writing and creating, seeing it as something fulfilling rather than a chore. They no more understand why you'd outsource these things than an avid athlete understands why you'd sack off training.
My views fall somewhere in the middle. The technology seems impressive. In specific applications, it presents an opportunity for groundbreaking research. I've shared with fascination how it is unlocking secrets of the ancient world and bio-engineering. The more general models can be fun and helpful – even if their output tends towards the middling. For my own work, they can usefully take the place of an editor, helping to thrash out ideas and structure. Rarely does the writing impress me; it feels more like a caricature of prose than the real thing.
The personal nature of these takes on AI nods towards something else. However powerful the tool, its reality and impact will be shaped mainly by the users. People are the conduit between whatever the sand can "think up" and the real world. They are where its most adverse consequences lie, and the space in which policy responses might be most effective. The danger lies not so much in AI itself as in the reality that many fail to understand it and become thoroughly hooked on its worst excesses. It is less the programmes that are the problem than the people.
The mass adoption of generative AI has spread far beyond those who understand what it really does. Rather than an (admittedly complex) word-guessing machine, there is a tendency to view it as a knowledge engine. This combines with the human instinct to anthropomorphise, producing a tendency to see these chatbots as thinking like people do, only better. It's not so much that people think AIs can't be wrong; it's that they fail to recognise how and why they err. Those with only a loose understanding of the tech might grasp that it gives wrong answers, but not how those wrong answers are generated or why it tends towards plausible bullshit rather than "I don't know". After all, the process for arriving at correct and incorrect answers is broadly the same.
The failure to properly understand a technology leads to its worst uses. These are easy to spot on social media, especially on Twitter, where Grok has been integrated into the network. A vast swathe of users seemingly treat it as omniscient. It is tagged into conversations as the ultimate appeal to authority, used not just as a source but as the final arbiter of what is correct and incorrect. That is not the fault of the AI or those who created it, but rather misuse by those who have a poor understanding of what it can and cannot do.
Those who build these technologies are often blind to this. They inhabit worlds of people who have a highly attuned sense of how they work and where the limitations are, and who largely know how to operate them sensibly. Then they push them onto people with almost no tech literacy, who treat them as some sort of magic. A tool crafted by knowledgeable and skilful users is turned over to a base that is neither. It is a potent mix – indeed, at the time of writing, that very problem is unravelling, in public, on Meta's AI app, where people with a poor knowledge of the technology are treating it in ways any moderately informed person would be aghast at.
This also applies to many of the risks of misinformation. Generative AI does have the power to flood sites with it. Some of that will be organised and pushed by bad actors; some will be organic. Indeed, we are currently witnessing this phenomenon in relation to the LA riots. Here, though, supply is really the secondary problem. The issue with misinformation remains the demand for it. The awkward truth is not that people are being overwhelmed unwittingly, but that many crave nonsense that reaffirms their prejudices. It is why so much of it is so obviously bad.
Even before the rise of generative AI, this tendency was evident. In the early hours of the Russian invasion of Ukraine, video game footage was passed off as combat shots. In the Gaza war, countless images from Syria and other conflicts have been recycled as fresh evidence of new atrocities. AI is just an acceleration of this. The real, underlying challenge remains: many people don't engage critically with what they see, especially when it aligns with their existing beliefs.
The danger here is not so much the doctored images as what happens to the viewer. The rise of partisan and polarised systems creates alternative truth networks, where truthfulness and authenticity come second to in-group logic. The facts no longer matter. Indeed, believing the absurd sometimes becomes a demonstration of loyalty to the political clan. Eliot Higgins expands well on this point here. AI becomes a vehicle for this, but even without the magic power of the thinking computers, the instincts would be there. It may feed the beast, but it didn't create it.
On a less harmful level, the same is probably true of "AI slop". It is a frequent criticism of AI that it can only create poor stuff. By its very nature, it's derivative; its writing and art are a composite of things that have gone before. Much of the output is low-skilled and low-rent. But guess what? That's often what people like.
Entire industries have sustained themselves by churning out stuff that is neither particularly thoughtful nor aesthetically pleasing. Millions of books are sold by writers the smart set would turn their noses up at. Vettriano and Banksy both grew rich from the consumption of not-very-challenging art prints. Plenty of people buy crappy greeting cards with puerile humour for their friends and loved ones. The emergence of AI slop is perhaps just a replacement for the more traditional slop industries. You may be aghast that it is taking over social media, but really, that's because there is an insatiable demand for low-quality, emotionally charged images.
Even the most dystopian developments of AI seem to have been enabled rather than created by the technology. The people who fall in love with AI, or are radicalised by it, were likely veering that way anyway. They found in the tech a reflection of their worst impulses rather than a catalyst for them; there's every chance they would have slid into madness via some other route eventually. The same is probably true of the lazy students and workers who use it as a plagiarism machine. Yes, the machine has multiplied the opportunities for these things, but the desires have always lain in the hearts of the users.
It is hard to hold back the tide of technology. There are important parts of AI that will have to be strictly regulated. However, thought should also be given to how we, as individuals and as a society, interact with it. Most of the initial issues with AI are less intrinsic to the technology itself than deeply ingrained within us. It is highlighting problems rather than creating them.
The first step should be better public education about what AI is and how it works. Quite frankly, too many people in senior government seem bamboozled by it, drawn to it as a panacea for our problems. We need a better understanding of what these things actually are, how they work, and how that maps to our interactions with them. The idea of the omniscient voice in the computer has a potent allure; understanding that it is a sophisticated guessing machine trained on a billion books dispels this a little. To use these tools properly, every user needs to know how right and wrong apply to them and where their fallibilities lie.
Beyond that, we need a societal shift to challenge the power of misinformation. It is too easy to blame the vehicles of falsehoods, whether it is social networks or now large language models (LLMs). The fake news industry has developed because people tend to accept lies that affirm their opinions over truths that challenge them. Too many people have aligned themselves with partisan groups where loyalist logic takes precedence over reality. Yes – AI makes it easier to exploit this thinking, but it did not create it.
Ultimately, the problem isn't the AI. It's us. These systems reflect and amplify the instincts we already have—towards convenience, affirmation, spectacle, and tribalism. They don't invent these tendencies; they just scale them. That means the most significant challenges aren't technical but human: how we understand these tools, how we choose to use them, and what behaviours we tolerate when we do. If people continue to treat chatbots like oracles or flood the internet with low-quality content, the consequences won't be because the machines have become too smart—but because we didn't think hard enough about what we were doing.
Below the line, my picks from around the web this week…