A self-driving vehicle ploughs into an oncoming car, combusting the occupants and leaving those who survive battered and bruised and staring into their devices wondering who is to blame.
That’s the jumping-off point of Bruce Holsinger’s tech-lit bestseller Culpability, an exploration of agency and responsibility in the era of AI through the eyes of a lawyer, an ethicist and their screen-dependent offspring.
It’s also a broader description of our current moment as this self-propelling technology accelerates exponentially before it has been fitted with brakes, seatbelts, speed limits or a working GPS.
Working back from the crash, Holsinger skilfully weaves together the concurrent lines of causation: those who design the tech, those who deploy it, those who use it and, most profoundly, the spaces that overlap.
“Culpability” lies in these grey areas of legal and moral accountability where we are still thinking through formal and ethical rules of engagement, which themselves are strapped to the bonnet of this out-of-control vehicle.
Right now, there is justified focus on the responsibilities of those building the Large Language Models and taking them to market, even as they struggle to explain how they work or how they can be deployed safely.
When not dropping bombs on Iranian schoolgirls, the White House has been at war with its own broligarchy, demanding the right to use AI models to surveil its citizens and fire autonomous weapons. Spoiler: Anthropic pushed back; OpenAI bent over.
Closer to home, our policymakers are struggling to come up with a coherent response to how these models should be deployed. The federal government has eschewed a stand-alone AI Act, putting its faith in the messy process of updating a slew of existing laws.
One bright spot in this sea of inertia is the New South Wales parliament, which has just passed laws requiring work rosters and allocations to be transparent so that employers do not allow AI to abrogate their legal responsibility to provide a safe place of work.
But there is another actor in this story, for whom the technology might be a valuable tool or a material threat (or, more likely, both): us. With recent Essential polling suggesting more than 40% of Australian adults are already using generative AI, how we choose to use this technology is a critical piece of this puzzle.
It is here that the regulation of the motor vehicle, one of the transformational technologies of the 20th century, might offer a guide on how culpability could be distributed.
When cars were first invented in the late 19th century, it soon became apparent they were killing machines, with thousands of deaths registered in the first decade alone. By the end of the 20th century, the car had accounted for an estimated 60 million fatalities worldwide, including 200,000 in Australia.
The early regulatory responses to this danger now seem laughable: one intervention was a man walking ahead of the vehicle waving a red flag, while wealthy motorists lobbied against prescriptive rules, arguing that driving faster would be safer.
As the death toll rose, a system of shared culpability was negotiated. Government set rules for how the technology would be used, placing enforceable obligations on both the deployer and the end user. Manufacturers put safe machines on the road and submitted to rigorous testing, including compliance with national standards.
But the rules also placed individual onus on the driver for whom usage was a privilege that was conditional on certain behaviours, which evolved in response to emerging evidence: don’t speed, don’t drink, don’t text.
As a general-purpose technology, AI is a far more challenging beast than a car to control. Models are being released as soon as they are developed and being used in so many diffuse ways, all with the underlying narrative that speed is good.
We know the technology is already doing material damage: the body count on chatbot-assisted suicides is mounting, women and children are being violated by nudify apps, our culture is being illegally stripped and mined, and entire professions are at risk of being erased.
So how should we drive AI safely?
The first requirement is to take the time to understand AI and to use it mindfully and warily: recognise it burns energy, has a tendency to hallucinate, is programmed for sycophancy, is a compulsive thief and will make you dumber the more you use it.
A second challenge is transparency. Former chief scientist Alan Finkel has launched a voluntary trust mark that would give consumers the ability to choose accredited ‘Proudly Human’ cultural content to counteract the spread of AI-generated slop.
Giving people the tools to better understand how they use AI is not just good for human creators, it is also a critical step in educating people on the way these systems operate, which are often opaque.
Finally, we can exercise our power as voters to demand governments get serious about safety, not allowing the ‘opportunity at all costs’ shtick of industry to override its own accountability under the ruse of inevitability.
The politics of AI is still evolving and can swing wildly between crude state intervention and free-market neoliberalism, but what’s needed now is a more focused analysis of the relative power of creators, deployers and users.
This is not about shifting responsibility to the end user; rather, it is about ensuring we all take control of tools that claim the power to transform the world.
“AIs are not aliens from another world,” argues one of the characters in Culpability. “They are things of our all-too-human creation. [They] will only be as moral as we design them to be. Our morality in turn will be shaped by what we learn from them and how we adapt accordingly.”
Until we think through our shared culpability, we should not be issuing licences to anyone.
-
Peter Lewis is the executive director of Essential, a progressive strategic communications and research company that undertook research for Labor in the last election and conducts quantitative research for Guardian Australia. He is the host of Per Capita’s Burning Platforms podcast


















































