
AI Expert Believes Elon Musk’s Letter Doesn’t Go Far Enough, “Absolutely Everyone on Earth Will Die”

An AI specialist with more than two decades of experience in the field has criticized the open letter calling for a six-month pause on the development of powerful AI systems, saying it does not go far enough.

Decision theorist and Machine Intelligence Research Institute researcher Eliezer Yudkowsky recently argued in an opinion piece that the six-month "pause" on training "AI systems more powerful than GPT-4," called for by Tesla CEO Elon Musk and hundreds of other innovators and experts, understates the "severity of the predicament." He would go further, imposing a moratorium on new large AI training runs that is "indefinite and worldwide."

More than a thousand people, including Elon Musk and Apple co-founder Steve Wozniak, signed the open letter, published by the Future of Life Institute, which says that safety guidelines for AI development ought to be devised and overseen by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter. This, in Yudkowsky’s opinion, is not enough.

Yudkowsky argued in an article for Time that “the fundamental problem is not ‘human-competitive’ intelligence” (as the open letter puts it), but rather “what happens once AI gets to smarter-than-human intelligence.”

"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," he claims. "Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"

According to Yudkowsky, the danger lies in the possibility that a superintelligent artificial intelligence might defy its programmers and put profit above people.

This is not the "Terminator" scenario, he writes: "Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow."

Yudkowsky cautions that there is currently no plan for coping with a superintelligence that decides the best way to solve the problem it has been given is to destroy all life on Earth.

He also questions whether it would even be ethical to own learning models that have become "self-aware," and whether AI researchers could tell if that had happened.

He thinks that six months is insufficient time to formulate a strategy: “It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence — not perfect safety, safety in the sense of ‘not killing literally everyone’ — could very reasonably take at least half that long.”

To prevent the creation of such advanced AI systems, Yudkowsky advocates worldwide cooperation, even between adversaries like the United States and China. He considers this more important than "preventing a full nuclear exchange," arguing that countries should accept some risk of nuclear conflict "if that's what it takes to reduce the risk of large AI training runs."

Yudkowsky says, “Shut it all down.” “Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries.”

Yudkowsky's dire warning comes as AI software sees increasingly widespread use. OpenAI's recently unveiled chatbot ChatGPT has stunned users with its ability to write songs, generate content, and even produce code.

"We've got to be careful here," Sam Altman, co-founder and CEO of OpenAI, said earlier this month. "I think people should be happy that we are a little bit scared of this."
