Big Tech deploys Orwellian doublespeak to mask its democratic corrosion | Peter Lewis


There is something fundamentally undemocratic about the constant stream of new AI releases as the industry scrambles to prove its value in the face of a rapidly inflating market bubble.

Each shiny new toy emerges from a hype machine that casts progress as inevitable and resistance as futile, as the industry builds moats around its advantage at the expense of everyday people.

In the past few weeks three new applications of AI have landed that could each have a profound impact on our shared reality: OpenAI’s new video platform, Sora; the scaling of a virtual companion called “Friend”; and Meta’s push to import its advertising model into chatbots.

The launch of Sora to a select group of users was juiced by a deepfake of OpenAI CEO Sam Altman shoplifting, which is totally on brand given how he has trained his model on the stolen property of creators. From porn to politics to performers’ IP, Sora comes with no discernible positive use case. It will simply flood the public square with slop, undermining any pretence of shared reality in pursuit of dopamine-charged clicks.

“Friend” is a wearable pendant that collects a user’s conversations and spatial movements to inform a sycophantic buddy whose job is to send supportive text messages. Another companion startup, “Replika”, embeds this connection with voice and a happy ending. What these tools will do to human connection, especially among young people navigating intimate personal relationships for the first time, appears to have been given very little thought.

Meanwhile, Meta has begun using chatbot interactions to target ads, evidence that the surveillance capital model that has so “enshittified” social media is about to cross the AI frontier. The customer becomes the product, their prompts shaping their user profiles, which in turn shape their behaviour to keep them producing more data to repurpose and exploit.

Thinking through the impact of each of these individual products is only a small part of the scrutiny they demand; it’s when we look at the intersections that come with rapid diffusion that we should get really alarmed.

Consider, for example, how these three products might connect with each other in the context of an election campaign: a political actor purchases advertising within a chatbot to lead users to its version of the truth, packaging targeted fake videos all reinforced by their little AI friend.

This is not science fiction; this is just the next step in the atomisation of our civic selves, a “politics of me” powered by misinformation and automated self-reinforcement that will further erode our capacity for coordinated collective action.

Big Tech deploys Orwellian doublespeak to mask its democratic corrosion; blandishments of “freedom” override accountability and regulation is decried as “state control” rather than the expression of our collective will.

“Techno-fascism” was a term coined more than a decade ago by historian Janis Mimura to describe the proactive union of industry and government power that overrides liberal norms, where individuals forgo their interests for a predetermined greater good.

While the “F-bomb” is liberally dropped by liberals, it seems an accurate description of the current relationship between tech, the state and us as technology asserts its power to shape social evolution.

Since Big Tech took the front row at Trump’s inauguration, Elon Musk’s Doge has scraped the US government’s databases; OpenAI’s Stargate project to drain the world’s energy has been greenlit, while moves to place guardrails and redlines around AI have been repudiated at home and abroad.

On a deeper level, Silicon Valley’s libertarian billionaires Peter Thiel and Marc Andreessen have propelled JD Vance to within a heartbeat of the presidency, while the work of Curtis Yarvin, who argues for the “CEO as king”, is gaining traction in all the wrong places.

Closer to home, Australia’s government appears frozen in the headlights, proselytising the “opportunity” of the technology and doing a “gap analysis” on existing laws while trying to avoid a trade war with the United States. All this would be perfectly sensible if the tech industry wasn’t a law unto itself.

If an AI product were a car or a new medicine, its impact would be tested and modelled before it was let loose on the community. But simply by being new, Big Tech seeks a free pass based on a trust it hasn’t earned.

Deferring these decisions to representatives elected via the three-year election cycle is not enough to meet such rapid evolutions. Doing nothing is actually doing something, adding to the concentration of tech power.

The good news is there are alternative approaches. When Uber came to Taiwan, the then digital minister Audrey Tang convened an extensive program of citizen juries to set the ground rules for how the company would operate, part of that nation’s “always-on” model of deliberative democracy.

British academic Dan McQuillan argues in his book Resisting AI – An Anti-Fascist Approach to Artificial Intelligence for workers and community councils to have a real say over the integration of technology into workplaces, schools and communities.

AI resistance is also a collective choice: to use AI mindfully, to give up our data sparingly, to ignore synthetic news and culture and demand to know when it is putting itself forward as something more genuine than it is, to start our thinking with a blank page.

The AI industry purports to provide us with what we as humans most yearn for: intelligence, agency, companionship. But it is these very qualities that are our only defence against its techno-fascist tendencies.

Peter Lewis is the executive director of Essential, a progressive strategic communications and research company that undertook research for Labor in the last election and conducts qualitative research for Guardian Australia. He is the host of Per Capita’s Burning Platforms podcast.
