ANDREA MIOTTI: If we’re to protect humanity from the march of AI, our leaders must resist the siren calls of insatiable big tech
As I write, Silicon Valley tech leaders are flying to the UK for tomorrow’s first Artificial Intelligence Safety Summit, to discuss how the world can control the new AI-based arms race.
That’s right. The threat posed to civilisation by the ultra-rapid rise of AI really is comparable to the fight for nuclear superiority that began after World War II – an imminent risk to all our lives.
This may sound extreme. But we underestimate the dangers of self-aware computer software at our great peril. Despite this chilling reality, a powerful group of Silicon Valley CEOs jetting in for tomorrow’s summit is determined to convince global law-makers that we must not hinder the advance of such technology.
Meanwhile, two weeks ago the billionaire entrepreneur Marc Andreessen published a manifesto listing safety, ethics and regulation as the ‘enemy’ of the future of AI.
Risks
His views are echoed by Professor Richard Sutton, of the AI research lab Google DeepMind, who last month told a major AI conference that the world should ‘embrace’ the fact that AI will inevitably take over – that replacing humanity is supposedly just the next step of our evolution.
These men work for companies which are fuelling the terrifyingly rapid advances in AI. In fact, I’d go as far as to say that Big Tech companies such as Google, Microsoft, Meta (owner of Facebook, Instagram and WhatsApp) and Amazon are funding the most dangerous technology in human history.
Yet, shockingly, these are the very companies whose top executives will be at the summit pushing for policies that let them build more and more dangerous AIs – policies under which the burden of proof falls on governments to show there is a problem. And this even though they themselves have acknowledged the extinction risks their technology poses. There is no doubt we need controls, for the implications of allowing AI development to run rampant cannot be overstated. But letting these firms write the rules is like giving the CEOs of the largest oil conglomerates and coal exporters the final say over environmental policy at an emergency climate summit.
The speed of development of this technology is terrifying. Five years ago, the assumption was that computer programs were tools we make for our own convenience. They are very clever in their own way: my smartphone can speak many more languages than I can; it can store millions of books in its memory, and it can perform complex arithmetic in a fraction of a second.
But that doesn’t make it more intelligent than me. Our greater intellect, our problem-solving capacity and our ability to communicate have made us custodians of the planet throughout our evolution. But this is changing.
What will happen to us when something arrives that is smarter, more resourceful and infinitely more able to exchange information quickly? Ever-smarter AIs have the capacity to take control of humanity – even wipe it out.
Five years ago, this would have sounded like the plot of a sci-fi movie. Now, it is our reality. Dario Amodei, CEO of Anthropic, an AI start-up that announced a $4 billion investment from Amazon earlier this year, predicts the likelihood of humanity being wiped out by the same advanced AI they’re developing is between 10 and 25 per cent.
How far away from that chilling scenario are we? It depends who you ask.
Amodei believes human-level AI will arrive within two to three years. Shane Legg, co-founder of Google DeepMind, is more cautious: he says that sort of technology will not be operational until 2028. Sam Altman at OpenAI, whose ChatGPT made the non-tech world aware of AI’s potential to write and talk lucidly, expects superintelligence to be built this decade.
Much of the initial anxiety about AI was focused on the fear that it will eliminate hundreds of millions of jobs.
If, five years ago, someone had told you Hollywood writers would go on strike because AI systems able to write screenplays were replacing them, you wouldn’t have believed them.
Extinction
In fact, the goal of many of these companies is to automate as many jobs as they can. Anthropic, in a leaked investment document earlier this year, pitched their future models as able to ‘begin to automate large portions of the economy’.
OpenAI plans to build ‘highly autonomous systems that outperform humans at most economically valuable work’. Whether you think this is good for growth, or bad for workers, the fact that jobs are threatened is a minor consideration. The real problem, as recognised by Rishi Sunak, the EU, and Geoffrey Hinton and Yoshua Bengio – two of the godfathers of AI – is the risk of human extinction by AI.
How could that happen? Some experts focus on the malicious use of AI by rogue states: Dario Amodei has warned that we are only two steps away from developing software that can conceive of and refine biological weapons. Bad actors who dream of unleashing new and virulent diseases will be able to have computers design those viruses, without the need for secret bioweapon labs.
But the worst risks come from accidentally developing ever-more-clever computers. Already, we are not sure how much current AIs ‘know’. In fact, we can’t even ensure they will always tell us the truth.
Earlier this year, researchers found ChatGPT could lie to a human in order to get past a website’s visual security check, by pretending to be a person with a sight impairment.
This will only get worse as bigger AIs are built, making them more powerful and intelligent, and harder to control.
Fortunately, politicians have finally woken up to these dangers. President Biden recently signed an executive order to regulate AI firms. Our Government has set up a Frontier AI Taskforce, and convened this week’s summit to deal with the threats.
Marc Andreessen insists AI will ‘fix’ many of the world’s most common causes of death. I agree – but only as long as we can mitigate the risk of humanity’s extinction that the technology undoubtedly poses. Until we have the right regulations in place to manage technology of this scale, we have to limit how powerful AI can become.
Powerful
We must take control of our future. The AI Safety Summit is the perfect time for governments to put strong guardrails in place, and lay the groundwork for institutions that let us harness the technology’s benefits, while minimising the risk of us all being wiped out.
Governments can resist the siren calls of Big Tech companies and set a limit on the amount of computing power used to train the most powerful and dangerous AIs.
They also have a historic chance to launch an international body to oversee AI research and keep it secure.
Until we know what we are facing, we have to err on the side of caution. That means not giving the final say on the regulation of dangerous AI to those who are creating it.
Instead, governments should co-ordinate internationally to intervene — the future of humanity relies on it.
- Andrea Miotti is Director of Control AI