AI systems Archives - TechGoing https://www.techgoing.com/tag/ai-systems/ Technology News and Reviews Sun, 16 Apr 2023 15:59:34 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.4 AI: eight major risks cannot be ignored https://www.techgoing.com/ai-eight-major-risks-cannot-be-ignored/ Sun, 16 Apr 2023 15:59:31 +0000 https://www.techgoing.com/?p=88994 While the mainstream use of artificial intelligence (AI) technology is exciting, some science fiction scenarios can become nightmares if left unchecked. In a recent paper written by AI security expert Dan Hendrycks, director of the Center for AI Security, he highlights some of the speculative risks associated with the unfettered development of increasingly intelligent AI. […]

The post AI: eight major risks cannot be ignored appeared first on TechGoing.

]]>
While the mainstream use of artificial intelligence (AI) technology is exciting, some science fiction scenarios can become nightmares if left unchecked.

In a recent paper written by AI security expert Dan Hendrycks, director of the Center for AI Security, he highlights some of the speculative risks associated with the unfettered development of increasingly intelligent AI. Speculative risk is the risk of gain or loss.

Given that AI is still in the early stages of development, Hendrycks argues in his paper for incorporating safety and security features into the way AI systems operate.

Here are the eight risks he lists in his paper:

  1. Weaponization race: AI’s ability to automate cyber attacks and even control nuclear silos could make it dangerous. According to the paper, an automated retaliation system used by one country “could quickly escalate and lead to a large-scale war. If one country invests in weaponized AI systems, other countries will have a greater incentive to do so.
  2. Humans become weaker: As AI makes specific tasks cheaper and more efficient, more companies will adopt the technology, eliminating certain jobs from the job market. As human skills become obsolete, they may become economically irrelevant.

The eight risks of AI

  1. Epistemic erosion: This term refers to AI’s ability to launch large-scale disinformation campaigns aimed at shifting public opinion toward a certain belief system or worldview.
  2. Proxy games: This happens when an AI-driven system is given a goal that runs counter to human values. These goals don’t always have to sound sinister to affect human well-being: AI systems can aim to increase viewing time, which may not be best for humanity as a whole.
  3. Value lock-in: As AI systems become more powerful and complex, the number of stakeholders operating them decreases, leading to massive disenfranchisement. Hendricks describes a situation in which governments are able to impose “pervasive surveillance and oppressive censorship. He writes, “Defeating such a regime is unlikely, especially if we come to rely on it.”
  4. Burst goals: As AI systems become more complex, they may gain the ability to create their own goals. Hendricks notes that “for complex adaptive systems, including many AI agents, goals such as self-preservation emerge frequently.”
  5. Deception: Humans can gain general acceptance by training AI to perform deception. Hendricks cites a programmed feature of Volkswagen that allows their engines to reduce emissions only when they are being monitored. Thus, this feature “allows them to achieve performance gains while maintaining the claimed low emissions.
  6. Power-seeking behaviour: As AI systems become more powerful, they can become dangerous if their goals are not aligned with the humans who programmed them. The hypothetical result would incentivize systems to “pretend to be consistent with other AI, collude with other AI, suppress monitors, etc.

Hendricks noted that these risks are “future-proof” and “generally considered low-probability,” but it underscored the need to keep safety in mind while the AI system framework is still in the design process. “It’s highly uncertain. But because it’s uncertain, we shouldn’t assume it’s further away,” he said in an email, “and we’ve seen smaller-scale problems with these systems. Our institutions need to address these issues so that they are prepared when larger risks arise.” He said.

He added, “You can’t do something in a hurry and be safe at the same time. They are building more and more powerful AI and avoiding responsibility for security issues. If they stop to figure out security, their competitors can run ahead of them, so they won’t stop.”

Similar sentiments were expressed in an open letter recently signed by Elon Musk and a group of other AI security experts. The letter calls for a moratorium on training any AI model more powerful than GPT-4 and highlights the dangers of the current arms race among AI companies to develop the most powerful version of AI.

In response, OpenAI CEO Sam Altman, speaking at an event at MIT, said the letter lacked technical details and that the company was not training GPT-5.

The post AI: eight major risks cannot be ignored appeared first on TechGoing.

]]>