In an open letter, Elon Musk and more than 1,000 other influential figures in the tech world call for a six-month pause on all “giant AI experiments.”
The letter asks that no AI system more powerful than OpenAI’s GPT-4 be trained during the pause.
Concern that AI could outmatch humans is growing by the day.
Not long ago, it was hard to imagine AI posing a serious danger to society. Now it is no secret that the technology is advancing so quickly that efforts to mitigate its risks cannot keep up. The guardrails are off.
More than a thousand people, including Elon Musk, have signed an open letter warning that these risks will soon materialize unless development of powerful AI systems slows down.
According to Reuters, the signatories include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI pioneers Yoshua Bengio and Stuart Russell. The letter was issued by the Future of Life Institute, which is funded primarily by the Musk Foundation, Founders Pledge, and the Silicon Valley Community Foundation.
The tweet below confirms the news:
Elon Musk Calls to Stop New AI for 6 Months, Fearing Risks to Society
He joins over 1,000 experts who are worried about the tech's rapid expansion. https://t.co/F8HIMGyNJ8
— Deepak Mohoni (@deepakmohoni) March 30, 2023
What Does the Letter Say?
The ask is significant: the group wants all “giant AI experiments” halted for six months.
Elon Musk joins a group worried about the dangers of AI development
In the letter, the signatories call for a six-month pause on the development of AI systems more powerful than OpenAI’s GPT-4.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here.”
The signatories say AI could bring about a “fundamental change in the history of life on Earth,” yet the planning and management needed to match that potential are missing. This is especially true, they argue, because AI labs remain locked in an “out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”
As AI systems become competitive with humans at general tasks, the letter poses a series of “should we” questions: should we let machines flood our information channels with propaganda, automate away jobs, develop nonhuman minds that might eventually replace us, or risk losing control of civilization in the rush to build ever better neural networks?
As expected, though, not everyone agrees. OpenAI CEO Sam Altman has not signed the letter, and AI researcher Johanna Björklund of Umeå University told Reuters the AI worry is overblown. “These kinds of claims are meant to generate hype,” Björklund said. “They are meant to scare people. I don’t think the handbrake needs to be pulled.”
OpenAI itself has said that, at some point, it may be important to obtain independent review before training future systems, and that the most advanced efforts should agree to limit the rate of growth of the compute used to create new models.
The open letter responds: “We agree. That point is now.”