Powerful new artificial intelligence (AI) tools have been proliferating at a startling rate over the past six months: chatbots that carry on conversations like real people, coding assistants that write software automatically, and image generators that conjure pictures from a simple text prompt. So-called generative AI (AIGC) has suddenly become ubiquitous, and increasingly powerful.
But in the last week, a backlash against the AI craze has begun to emerge. Thousands of technologists and academics, joined by Tesla and Twitter CEO Elon Musk, signed an open letter warning of the “grave risks to humanity” and calling for a six-month pause on training the most powerful AI language models.
Meanwhile, an AI research nonprofit filed a complaint asking the U.S. Federal Trade Commission (FTC) to investigate OpenAI, the company that created ChatGPT, and to halt further commercial releases of its GPT-4 software.
Italian regulators also moved to block ChatGPT altogether, citing data privacy violations.
Perhaps it is understandable that there are calls to pause or slow down AI research. AI applications that seemed unthinkable just a few years ago are now rapidly penetrating social media, schools, workplaces and even politics. In the face of this dizzying change, some have issued pessimistic predictions that AI could eventually lead to the demise of human civilization.
The good news is that the hype and fear around all-powerful AI may be overblown. While Google’s Bard and Microsoft’s Bing are impressive, they are a long way from being “Skynet.”
The bad news is that fears about the rapid evolution of AI have already become a reality. This is not because AI will become smarter than humans, but because humans are already using AI to oppress, exploit, and deceive each other in ways that existing institutions are not prepared for. Moreover, the more powerful people believe AI to be, the more likely individuals and businesses are to delegate to it tasks it cannot actually handle.
Beyond the pessimistic doomsday predictions, we can get a first glimpse of AI’s impact on the foreseeable future from two reports released last week. The first report, released by U.S. investment bank Goldman Sachs, focuses on assessing the impact of AI on the economy and labor market; the second report, published by Europol, focuses on the potential criminal misuse of AI.
From an economic perspective, the latest AI trend focuses on automating tasks that once could only be done by humans. Like power looms, mechanized assembly lines and automated teller machines, AIGC promises to do certain types of work cheaper and more efficiently than humans.
But cheaper and more efficient doesn’t always mean better, as anyone who has dealt with a grocery store self-checkout machine, an automated telephone answering system or a customer service chatbot can attest. Unlike previous waves of automation, AIGC can mimic humans and even impersonate them in some cases. This opens the door both to widespread deception and to the temptation for employers to believe that AI can replace human employees, even when it cannot.
Goldman Sachs’ research estimates that AIGC will change about 300 million jobs worldwide, resulting in tens of millions of job losses, but will also drive significant economic growth. Goldman Sachs’ numbers are not necessarily accurate, however; after all, the bank has a history of missed predictions. In 2016, it forecast that virtual reality headsets could become as popular as smartphones.
What’s most interesting about Goldman Sachs’ AI analysis is its industry-by-industry breakdown of which jobs are likely to be augmented by language models and which are likely to be replaced altogether. The Goldman researchers ranked white-collar tasks on a difficulty scale of 1 to 7, with “reviewing forms for completeness” at level 1, “ruling on a complex motion in court” at level 6, and tasks up to level 4 considered candidates for automation. The conclusion is that administrative support and paralegal jobs are the most likely to be replaced by AI, while occupations such as management and software development will become more productive.
The report optimistically predicts that this generation of AI could eventually grow global GDP by 7 percent as companies reap more benefits from employees with AI skills. But Goldman Sachs also predicts that about 7 percent of Americans will see their careers eliminated in the process, and more will have to learn the technology to stay employed. In other words, even if AIGC’s net impact is positive, the result could still be massive job losses and the gradual displacement of human workers, in the office and in everyday life, by machines.
In the meantime, many companies are already eager to cut corners by automating tasks that AI cannot reliably handle, as when the tech site CNET automatically generated error-ridden financial articles. When AI goes awry, already marginalized groups may be disproportionately affected. While ChatGPT and its ilk are exciting, the developers of today’s large language models have yet to address the problem of biased training data, which has already embedded racial bias into AI applications such as face recognition and crime risk assessment algorithms. Just last week, yet another Black man was wrongfully jailed because of a facial recognition mismatch.
More worryingly, AIGC could in some cases be used for intentional harm. The Europol report details how AIGC can be used to help people commit crimes, such as fraud and cyber attacks.
For example, chatbots can generate text in a specific style and even mimic particular people’s voices, which could make them a powerful tool for phishing scams. The strengths of language models in writing software could put malicious code generation within anyone’s reach. Their ability to provide personalized, contextualized, step-by-step advice could serve as a ready guide for criminals who want to break into a home, blackmail someone, or build a pipe bomb. We have already seen how synthetic images can spread false narratives on social media, raising renewed concerns that deepfakes could distort election campaigns.
It is worth noting that what makes language models vulnerable to abuse is not that they possess broad intelligence, but that they are fundamentally limited in what they actually understand. Current leading chatbots are trained to refuse when they detect an attempt to use them for nefarious purposes. But as Europol points out, “Security measures to prevent ChatGPT from delivering potentially malicious code are only effective if the model understands what it is doing.” As the long list of documented jailbreaks and vulnerabilities shows, self-understanding remains one of the technology’s weaknesses.
With all these risks in mind, you don’t have to believe in an apocalyptic scenario to see the value of slowing AIGC’s development and giving society more time to adapt. OpenAI itself was founded as a non-profit organization on the premise that AI could be built more responsibly, free from the pressure of meeting quarterly revenue goals.
But OpenAI is now locked in a neck-and-neck race with tech giants that are laying off their AI ethics teams, and the horse may have left the barn anyway. As academic AI experts Sayash Kapoor and Arvind Narayanan point out, the main driver of innovation in language models now is not the push for ever-larger models but the integration of existing models into a variety of applications and tools. They argue that regulators should approach AI tools as a matter of product safety and consumer protection, rather than trying to contain AI like a nuclear weapon.
Perhaps the most important thing in the short term is for technologists, business leaders, and regulators to put aside the fear and hype and gain a deeper understanding of AIGC’s strengths and weaknesses so they can be more cautious in adopting it. However things play out, if AI works as advertised, its impact will be disruptive. But overestimating its capabilities will make it more harmful, not less.