Elon Musk and Tech Leaders Want a Massive AI Pause

In an open letter published Tuesday, Elon Musk joined other tech leaders and professionals in AI, computer science, and related fields in asking leading artificial intelligence laboratories to halt the development of AI systems more capable than GPT-4, citing risks to human society.

Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and Sapiens author Yuval Noah Harari are among the more than 1,000 signatories of the Future of Life Institute’s open letter. It calls for a public, verifiable, six-month pause on the training of such systems.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity as shown by extensive research and acknowledged by top AI labs,” the letter states.

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we risk loss of control of our civilization?”

The open letter follows the release of OpenAI’s GPT-4, the large language model that powers the premium version of the popular chatbot ChatGPT. According to OpenAI, GPT-4 can tackle more complex tasks and produce more nuanced output than prior versions.

Systems like GPT-4 must be trained on enormous amounts of data to answer questions and complete other tasks. ChatGPT, which debuted in November, can write professional emails, plan trips, write code, and score well on exams such as the bar exam.

OpenAI did not immediately comment.

On its website, however, OpenAI acknowledges the importance of ensuring that AI systems “generally smarter than humans” serve humanity’s best interests: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

Google, Microsoft, Adobe, Snapchat, DuckDuckGo, and Grammarly are just a few of the firms that have introduced new services that leverage generative AI capabilities since the beginning of the year.

OpenAI’s own research has demonstrated the risks these AI capabilities carry. Generative AI systems may cite questionable sources or, as OpenAI has noted, “increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others.”

AI experts are worried about where the technology is headed and about the rush by some corporations to release products without proper safeguards, or even an understanding of the potential dangers.
