Elon Musk, Steve Wozniak and a host of other technology leaders and artificial intelligence experts are urging AI labs to halt the development of powerful new AI systems, pointing in an open letter to potential risks to society.
The letter calls on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." It was published by the Future of Life Institute and signed by more than 1,000 people, including Musk, who argue that shared safety protocols, developed and audited by independent overseers, are needed to guide the future of AI systems. GPT-4 is OpenAI's latest deep learning model, which the lab says "exhibits human-level performance on various professional and academic benchmarks."
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter reads.
The letter warns that, at this point, no one can "understand, predict, or reliably control" the powerful new tools being developed in AI labs. The undersigned tech experts cite the risk of propaganda and falsehoods being spread by AI-generated articles that appear genuine, and even the possibility that AI programs could outperform workers and render jobs obsolete.
AI EXPERTS WEIGH THREATS, BENEFITS OF CHATGPT FOR PEOPLE, JOBS AND INFORMATION: "DYSTOPIC WORLD"
The "Welcome to ChatGPT" lettering from the U.S. company OpenAI is displayed on a computer screen. (Silas Stein/Picture Alliance via Getty Images)
"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter reads.
“In parallel, AI developers must collaborate with policymakers to dramatically accelerate the development of robust AI governance systems.”
ARTIFICIAL INTELLIGENCE "GODFATHER" ON AI POSSIBLY WIPING OUT HUMANITY: "IT'S NOT UNTHINKABLE"

Tesla CEO Elon Musk and more than 1,000 technology leaders and experts in artificial intelligence call for a temporary pause in the development of AI systems that are more powerful than OpenAI’s GPT-4 and warn of risks to society and civilization. (Michael Gonzalez/Getty Images)
The signatories, who include Emad Mostaque, CEO of Stability AI, researchers at Alphabet's DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell, emphasize that AI development in general should not be halted, writing that their letter calls merely for "a stepping back from the dangerous race to ever-larger, unpredictable black-box models with emergent capabilities."
According to the European Union's transparency register, the Future of Life Institute is funded primarily by the Musk Foundation, as well as the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.
ARTIFICIAL INTELLIGENCE EXPERTS ADDRESS BIAS IN CHATGPT: "VERY HARD TO PREVENT BIAS FROM HAPPENING"

The ChatGPT information page on OpenAI's website, displayed on a laptop in the Brooklyn borough of New York on Thursday, Jan. 12, 2023. (Gabby Jones/Bloomberg via Getty Images)
Musk, whose electric car company Tesla uses AI for its Autopilot system, has previously expressed concerns about the rapid development of AI.
Since its release last year, OpenAI's Microsoft-backed ChatGPT has prompted rivals to accelerate development of similar large language models and companies to incorporate generative AI into their products.
Notably missing from the letter’s signatories was Sam Altman, CEO of OpenAI.
Reuters contributed to this report.