Beijing, April 3 morning news. According to reports, experts have voiced concern after their research findings were cited in an open letter, co-signed by Elon Musk, that calls for an urgent moratorium on artificial intelligence research.
The open letter, dated March 22, had garnered more than 1,800 signatures by Friday local time. The letter calls for a six-month hiatus in the development of a system “more powerful” than Microsoft-backed artificial intelligence research firm OpenAI’s new GPT-4, which can carry out human-like conversations, compose music and summarize lengthy documents.
Since the release of GPT-4’s predecessor, ChatGPT, last year, competitors have been racing to launch similar products.
The open letter says AI systems with “human competitive intelligence” pose far-reaching risks to humans, citing 12 studies by experts, including university academics and current and former employees of OpenAI, Google and its subsidiary DeepMind.
Since then, civil society groups in the United States and the European Union have been pressuring lawmakers to restrain OpenAI’s research. OpenAI did not immediately respond to a reporter’s request for comment.
The open letter was sent by the Future of Life Institute, which is largely funded by the Musk Foundation. Critics accuse the institute of prioritizing imaginary apocalyptic scenarios while ignoring more pressing concerns posed by artificial intelligence, such as racist or sexist biases programmed into machines.
The research cited in the letter includes the well-known paper "On the Dangers of Stochastic Parrots," co-authored by Margaret Mitchell. She previously led AI ethics research at Google and is now the chief ethics scientist at the AI company Hugging Face.
But Mitchell herself criticized the letter, saying it is not clear what counts as "more powerful than GPT-4." "The letter treats many questionable ideas as hypothetical facts, and it sets out a series of AI priorities and narratives that benefit the proponents of the Future of Life Institute," she said, adding, "Right now, we don't have the privilege of ignoring active harm."
Mitchell's co-authors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter, with the latter calling some of its claims "insane."
Max Tegmark, president of the Future of Life Institute, said the move was not intended to hinder OpenAI's corporate advantage.
"It's quite hilarious. I've seen people say, 'Elon Musk is trying to slow down the competition,'" he said, adding that Elon Musk was not involved in drafting the letter. "This is not about one company."
The current risk
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the letter’s reference to her research. Last year she co-authored a research paper that argued that the widespread use of artificial intelligence already poses serious risks.
Her research argues that the application of AI systems today could have implications for climate change, nuclear war and other decisions related to threats to survival.
“AI does not need to be at the human level to exacerbate these risks,” she said.
“There are also some non-survival risks that are really, really important, but those risks are not given high priority.”
Asked to comment on the criticisms, Tegmark said the short- and long-term risks of artificial intelligence should both be taken seriously.
"If we cite someone, it just means we think they agree with the sentence quoted. It doesn't mean they endorse the letter, or that we endorse all of their ideas," he said.
Another expert cited in the open letter is Dan Hendrycks, director of the California-based Center for AI Safety. He supports the letter and says it is wise to consider "black swan" events: events that seem unlikely to happen but would have devastating consequences if they did.
The open letter also warned that generative AI tools could be used to flood the Internet with "propaganda and lies."
Dori-Hacohen said it was "quite rich" of Elon Musk to sign the open letter, citing sources such as the civil society organization Common Cause, which has documented an increase in misinformation on Twitter since Elon Musk's acquisition of the platform.
Twitter will soon introduce a new fee structure for accessing its research data, which could hinder research on the topic.
"This directly affects my lab's work, and the work of other people who study misinformation and disinformation," Dori-Hacohen said. "We're in a bind."