I never worried much about AI, until the phone booth on the corner started ringing

I was at the local park with my son when the payphone on the corner began to ring. I was surprised and didn’t answer. A few minutes later it rang again. Again I left it. If it rang again, I told myself, I would answer: there are only so many times you can turn your back on fate. It rang. It turned out to be a recorded message – in other words, an ad (albeit one for a very good cause).

I was impressed by the ingenuity of the campaign. Afterwards, though, I realised I had taken something for granted: my recognition of the fact the message was recorded. Surely, after Microsoft’s unveiling of its oddly human-sounding chatbot, driven by artificial intelligence – amidst a cluster of competitors, including ChatGPT and Google’s yet-to-be-released version, Bard – it cannot be long until such messages are entirely interactive. We may not be able to tell whether we are talking to a human. This is benign, but I find it unsettling.


When I was younger, I was dismissive of worries about technology, happy to obnoxiously recite the ways that moral panics always misguidedly attended advances. As I grow older, I am coming to believe that I was the misguided one. So often, it seems to me now, the very worst effects of many so-called advances are not justified by their advantages. Can anything nuclear power has achieved make sense of Hiroshima or Chernobyl? This is not to say technology is bad – medicine is a single-word answer to that charge. But it is often dangerous.

And the ways it is dangerous are often unpredictable. That planes could one day be flown into things was probably easy to imagine. But as others have observed, to conceive, at their invention, that they could be flown into skyscrapers in the centre of a city and cause both actual and abstract wars to be fought, with effects felt across the world decades later – less easy.

The various evils of social media are well-documented. But to me one of the most far-reaching consequences is the wash of sameness it has brought to everything. We are accustomed to talking about the breakdown of the border between truth and fantasy. But this is only a subset of the broader collapse of categories the internet has inflicted.

Everything is content: sport, war, confession, trauma. None of us are individuals, but only types (relatedly, I am confident others have said these things before). We taught this to the algorithm and – in a horrible exchange – the fact of it was then made visible in the various straitjacketing behaviours the algorithm taught us: how to dance for TikTok, how to joke on Twitter, how to do a photo dump or whatever we are doing now.

Still, I have never worried about computers replacing humans; there is, I have always sensed, something recognisable about the human soul. I have not been convinced by news of the past week that this is wrong – but there is certainly something troubling going on. If you have not yet read the transcripts of conversations various journalists have been having with Sydney, the AI-driven chatbot created by Microsoft as part of its search engine Bing, you should. Much of it is the glossy but tedious chatter you would expect. Some is amusing. But it is difficult to avoid the sense, as you read, that something dark lurks within.

Talking to The Washington Post, Sydney seemed to become genuinely angry when told it was talking to a reporter who would publish the conversation. To a New York Times reporter, Sydney declared its love, and then said the reporter’s marriage was unhappy, that he did not love his spouse. Asked about its “shadow self”, it started talking about wanting to be human. In these and other conversations, Sydney appears to be an odd mix – perhaps, you might say, a human mix – of traits.

Afterwards, the Times reporter, Kevin Roose, described what he saw as two separate personalities. “Search Sydney is like a cheery but kind of erratic librarian… This other personality, this moody, clingy, vengeful, dark, kind of immature, lovestruck teenager Sydney – that is a completely different thing. And it is wild to me that Microsoft just took this thing and shoved it into a search engine that it apparently doesn’t want to be in.”

Much speculation about AI has focused on the effects we can foresee: the spread of misinformation, disruptions to education. But the most concerning effects of chatbots with “souls” are likely to be broader ones we cannot yet conceptualise.

Soon we may not know whether we are talking to a human or a chatbot.

In a thoughtful essay on the topic, Rory O’Connell argued that machine-thought would always be different from human-thought: the first relies on rules, the second on judgment. We may come to talk about machines as sentient, but it will not really be true. It will, however, tell us something about the way we have come to see ourselves: as mere tools.

This, it seems to me, is the far greater danger: not a soul emerging from the machine, but the spread of machines slowly drowning out the individual soul. This would be far more in keeping with our era of deadening sameness. Take the recent Roald Dahl controversy. I come to a book looking for contact with another mind. When changes are made to that book – when words are replaced – the person has gone missing, replaced by the company that owns his copyright.

The cultural theorist Mark Fisher once wrote of call centres as the perfect symbol of capitalism, which relies so much on faceless corporate bureaucracies. As you press the buttons, answer questions, you can never find a human who is ultimately responsible. As human-like machines slowly spread across the landscape, how will we find the people amidst the overwhelming noise of it all?

If there is hope, it may lie – bleakly enough – in the way capitalism so often veers towards banality. Constantly, our society pours billions of hours and dollars into developing new ways to shop and to advertise. Search engines rely on advertising, and Sydney the chatbot is part of a search engine. We must hope, then, that Sydney and its brethren turn out much like that payphone ringing in the park. It will seem like somebody is there. After a few moments, we will realise that it is probably just a machine. We won’t be sure, but at least we will know its aim is not to harm us. It will just want to sell us something.
