The AI Con by Emily M Bender and Alex Hanna review – debunking myths of the AI revolution


At the beginning of this year, Keir Starmer announced an “AI opportunities action plan”, which promises to mainline AI “into the veins of this enterprising nation”. The implication that AI is a class-A injectable substance, liable to render the user stupefied and addicted, was presumably unintentional. But then what on earth did they mean about AI’s potential, and did they have any good reason to believe it?

Not according to the authors of this book, who are refreshingly sarcastic about what they think is just another tech bubble. What is sold to us as AI, they announce, is just “a bill of goods”: “A few major well-placed players are poised to accumulate significant wealth by extracting value from other people’s creative work, personal data, or labor, and replacing quality services with artificial facsimiles.”

Take the large language models (LLMs), such as ChatGPT, which essentially work like fancy auto­complete and routinely make up citations to nonexistent sources. They have been “trained” – as though they are lovable puppies – on vast databases of books as well as scrapings from websites. (Meta has deliberately ingested one such illegal database, LibGen, claiming it is “fair use”.) Meanwhile, “a survey conducted by the Society of Authors found that 26% of authors, translators, and illustrators surveyed had lost work due to generative AI.”

Better to think of LLMs, Bender and Hanna suggest, as “synthetic text-extruding machines”. “Like an industrial plastic process,” they explain, text databases “are forced through complicated machinery to produce a product that looks like communicative language, but without any intent or thinking mind behind it”. The same is true of other “generative” AI models that spit out images and music. They are all, the authors say, “synthetic media machines” – or, as I like to call them, giant plagiarism machines. “Both language models and text-to-image models will out-and-out plagiarize their inputs,” the authors write, noting that the New York Times is suing OpenAI for just this reason.

But reliance on AI is not just bad for artists in garrets; it’s bad for everyone, as Bender and Hanna persuasively argue. The fact that internet search results now start with an AI-generated summary, they point out, is likely to dull critical thinking – and not just because such summaries have in the past told people that they should eat rocks, but because “scanning a set of links gives us information about what information sources are available” and so builds “our understanding of the information landscape”.

The real appeal of AI, as the authors see it, is that it promises to make vast numbers of people redundant. They recount, for example, how the National Eating Disorders Association in the US replaced its hotline operators with a chatbot days after those operators voted to unionise. According to the World Economic Forum’s 2025 report, 40% of employers are planning to reduce staff headcounts as they adopt AI in the coming years.

I, for one, do not want to live in a cultural wasteland of AI-generated garbage. But, amusing as this book’s broadside against the giant plagiarism machines is, it tends to lump everything else that can be called “AI” in with them. And the authors do know better: “AI is a marketing term,” they note at the start. “It doesn’t refer to a coherent set of technologies.” They do allow, subsequently, that there are “sensible use-cases” for such tech, such as image processing that helps radiologists, but there are many more that go unmentioned.

Under a broader definition of “AI” as machine-learning systems, emerging tools can, according to a recent overview by the Economist, manage load on the electricity grid more effectively, cut the time required to inspect nuclear facilities and help reduce emissions in the trucking, shipping, steelmaking and mining industries. The British AI researcher Demis Hassabis won the Nobel prize in chemistry last year for his company DeepMind’s work on protein folding, which may yet have profound applications in drugmaking. And, less glamorously, machines can now automatically transcribe doctors’ notes: an example these authors present as a reason it’s bad that AI is infiltrating the NHS, but surely one that is a win-win for doctors and patients alike.

Nevertheless, Bender and Hanna are right to insist that each such case should be scrutinised for its utility, the biases it might smuggle in, and its propensity to destroy jobs that depend on human judgment. They cite a famous old rule from IBM: “A computer can never be held accountable, therefore a computer must never make a management decision.” But that is precisely why some in power want to hand decision‑making capacity to computers: it promises a sunlit utopia of profit without blame. Once AI is mainlined into our veins, we may be too doped up to care.
