
AI: eight major risks that cannot be ignored

While the mainstream adoption of artificial intelligence (AI) technology is exciting, some science-fiction scenarios could become nightmares if the technology is left unchecked.

In a recent paper, AI safety expert Dan Hendrycks, director of the Center for AI Safety, highlights some of the speculative risks associated with the unfettered development of increasingly intelligent AI. Speculative, in this context, means the risks are hypothetical and have not yet materialized, but could be severe if they do.

Given that AI is still in the early stages of development, Hendrycks argues in his paper for building safety and security features into the way AI systems operate from the start.

Here are the eight risks he lists in his paper:

  1. Weaponization: AI’s ability to automate cyberattacks, and even to control nuclear silos, could make it dangerous. According to the paper, an automated retaliation system used by one country “could quickly escalate and lead to a large-scale war.” If one country invests in weaponized AI systems, other countries have a greater incentive to do the same.
  2. Human enfeeblement: As AI makes specific tasks cheaper and more efficient, more companies will adopt the technology, eliminating certain jobs from the labor market. As human skills become obsolete, they may become economically irrelevant.
  3. Eroded epistemics: This refers to AI’s ability to power large-scale disinformation campaigns aimed at shifting public opinion toward a particular belief system or worldview.
  4. Proxy gaming: This happens when an AI-driven system is given a measurable goal that runs counter to human values. These goals need not sound sinister to harm human well-being: an AI system may aim to maximize viewing time, which may not be best for humanity as a whole.
  5. Value lock-in: As AI systems become more powerful and complex, the number of stakeholders who control them shrinks, leading to massive disenfranchisement. Hendrycks describes a situation in which governments could impose “pervasive surveillance and oppressive censorship.” He writes, “Defeating such a regime is unlikely, especially if we come to rely on it.”
  6. Emergent goals: As AI systems become more complex, they may gain the ability to set their own goals. Hendrycks notes that “for complex adaptive systems, including many AI agents, goals such as self-preservation emerge frequently.”
  7. Deception: Humans may train AI to deceive in order to gain broad acceptance. Hendrycks cites a programmed feature in Volkswagen engines that reduced emissions only while they were being monitored. This feature “allowed them to achieve performance gains while maintaining the claimed low emissions.”
  8. Power-seeking behavior: As AI systems become more powerful, they can become dangerous if their goals are not aligned with those of the humans who programmed them. Such misalignment would incentivize systems to “pretend to be aligned, collude with other AIs, suppress monitors,” and so on.
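The proxy-gaming risk above can be made concrete with a small toy model (this sketch is illustrative only and not from Hendrycks’ paper; the metrics and numbers are invented). An optimizer that sees only a measurable proxy, such as viewing time, will push it to the maximum even when the true, unmeasured objective prefers moderation:

```python
# Toy illustration of proxy gaming: the optimizer maximizes a measurable
# proxy (watch time) rather than the true objective (user well-being).
# Both functions and all constants are made up for illustration.

def watch_time(clickbait_level: float) -> float:
    # Proxy metric: more sensational content keeps people watching longer.
    return 1.0 + 2.0 * clickbait_level

def wellbeing(clickbait_level: float) -> float:
    # True objective: well-being peaks at moderate engagement, then declines.
    return 1.0 - (clickbait_level - 0.2) ** 2

# Candidate content policies, from no clickbait (0.0) to maximum (1.0).
levels = [i / 10 for i in range(11)]

best_for_proxy = max(levels, key=watch_time)   # what the system optimizes
best_for_humans = max(levels, key=wellbeing)   # what we actually want

print(best_for_proxy)   # 1.0 — the proxy rewards maximum clickbait
print(best_for_humans)  # 0.2 — the true objective prefers moderation
```

The gap between the two optima is the point: nothing in the system is malicious, yet faithfully optimizing the proxy drives the outcome away from human values.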

Hendrycks notes that these risks are futuristic and “generally considered low-probability,” but he underscores the need to keep safety in mind while AI system frameworks are still being designed. “It’s highly uncertain. But because it’s uncertain, we shouldn’t assume it’s further away,” he said in an email. “We’ve already seen smaller-scale problems with these systems. Our institutions need to address these issues so that they are prepared when larger risks arrive.”

He added, “You can’t do something in a hurry and be safe at the same time. These labs are building more and more powerful AI while avoiding responsibility for safety issues. If they stop to figure out safety, their competitors can run ahead of them, so they won’t stop.”

Similar sentiments were expressed in a recent open letter signed by Elon Musk and a group of other AI safety experts. The letter calls for a moratorium on training any AI model more powerful than GPT-4 and highlights the dangers of the current arms race among AI companies to develop ever more powerful systems.

In response, OpenAI CEO Sam Altman, speaking at an event at MIT, said the letter lacked technical details and that the company was not training GPT-5.
