Open letter from Elon Musk and many others calling for AI research moratorium draws challenges

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and thousands of AI researchers and other figures have signed an open letter calling for a moratorium on research into more advanced AI technologies. However, the letter has been challenged by many experts, and even by some of its signatories, with critics accusing it of fueling AI hype, carrying falsified signatures and misrepresenting research papers.

The open letter was written by the Future of Life Institute, a nonprofit organization whose mission is “to reduce global catastrophic and existential risk from powerful technologies.” Specifically, the institute focuses on mitigating long-term “existential” risks to humanity, such as superintelligent AI, a cause Elon Musk supports and to which he donated $10 million in 2015.

The letter reads, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. We therefore call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter also clarifies that this “does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” This refers to the AI race among large tech companies such as Microsoft and Google, which have released many new AI products over the past year.

Other notable signatories include Emad Mostaque, CEO of image-generator maker Stability AI, author and historian Yuval Noah Harari and Pinterest co-founder Evan Sharp, among others. The letter was also signed by employees of companies competing in the AI race, including Google sister company DeepMind and Microsoft. Notably, no one from OpenAI, the company that developed and commercialized the GPT family of AI models, signed it.

Despite a verification process, the letter initially carried many false signatures, including people posing as OpenAI CEO Sam Altman and Meta Chief AI Scientist Yann LeCun. The Future of Life Institute has since cleaned up the list and paused the display of new signatures while it verifies each one.

Still, the open letter has caused an uproar and drawn scrutiny from many AI researchers, including some of its own signatories. Some signatories have walked back their positions, some signatures attributed to prominent figures have turned out to be fake, and a growing number of AI researchers and experts have publicly objected to the letter’s framing and proposed approach.

Gary Marcus, a professor of psychology and neuroscience at New York University, said, “The letter isn’t perfect, but the spirit of it is correct.” Meanwhile, Stability AI CEO Emad Mostaque tweeted that OpenAI is a truly “open” AI company, “so I don’t think a six-month pause in training is the best idea, and I don’t agree with a lot of the points in the letter, but some parts of it are certainly interesting.”

AI experts have criticized the letter for furthering “AI hype” while failing to list, or call for concrete action on, the dangers AI poses today. Some argue that it promotes a long-termist but somewhat unrealistic worldview, one criticized as harmful and anti-democratic because it favours the super-rich and licenses morally dubious behaviour in the name of distant goals.

Emily M. Bender, a professor in the Department of Linguistics at the University of Washington and co-author of the paper cited at the beginning of the open letter, tweeted that the letter is “rife with AI hype” and misuses her research. The letter states that “extensive research” shows AI systems with human-competitive intelligence can pose profound risks to society and humanity. Bender countered that her research refers specifically to current large language models and their use in oppressive systems, a threat far more concrete and urgent than the hypothetical future AI threat assumed in the open letter.

Bender went on to write, “We wrote our full paper in late 2020, pointing out the problems with this headlong rush to ever-larger language models without considering the risks. But the risks and harms have never been about ‘AI being too powerful’; rather, they are about concentrating power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem.”

Sasha Luccioni, a research scientist at AI startup Hugging Face, said in an interview, “The open letter is essentially misleading: it draws everyone’s attention to the hypothesized power and harms of large language models and proposes very vague, almost ineffective solutions, instead of focusing on those harms and addressing them here and now, for example by demanding more transparency around the training data and capabilities of LLMs, or by requiring legislation that dictates where and when they can be used.”

Arvind Narayanan, an associate professor of computer science at Princeton University, said the open letter is full of AI hype and “makes it more difficult to address real, ongoing AI harms.”

The open letter asks several questions: “Should we automate all jobs, including those that give people a sense of accomplishment? Should we cultivate non-human minds that may eventually surpass human intelligence and replace us? Should we continue to develop AI at the risk of losing control of civilization?”

In response, Narayanan called these questions “nonsense” and “ridiculous.” Whether computers will replace humans and take over human civilization, he argued, is a very distant question, part of a long-termist mindset that distracts us from the issues at hand. After all, AI is already being integrated into people’s work and reducing the need for certain occupations; it is not a “non-human mind” on the verge of making us “obsolete.”

Narayanan added: “I think these are legitimate long-term concerns, but they have been invoked again and again to distract people from present harms, including very real information security and safety risks! Addressing those security risks will require us to work together in good faith. Unfortunately, the hype in this letter, including its exaggeration of AI capabilities and of existential risk to humanity, may lead to AI models being locked down even further, making the risks harder to address.”

However, many of the open letter’s signatories have defended their decision to sign. Yoshua Bengio, founder and scientific director of the Mila research institute, said the six-month moratorium is necessary for governing bodies, including governments, to understand, audit and verify AI systems and ensure they are safe for the public. He added that there is a dangerous concentration of power, that AI tools could destabilize democracy, and that “there is a conflict between democratic values and the way these tools are developed.”

Max Tegmark, a professor of physics at MIT affiliated with the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) and president of the Future of Life Institute, said the worst-case scenario is that humanity gradually loses control of civilization. The risk right now, he says, is that “we lose control to a group of unelected power players in technology companies who have too much influence.”

Such comments speak of the future in broad strokes, hinting at fears of losing control of civilization, but offer no concrete measures beyond the call for a six-month moratorium.

Timnit Gebru, a computer scientist and founder of the Distributed AI Research Institute (DAIR), tweeted that, ironically, the call for a moratorium on training AI models more powerful than GPT-4 fails to address the many concerns surrounding GPT-4 itself.
