“I have always believed that AI (artificial intelligence) is the most profound technology that humans are studying, deeper than fire, electricity, or anything we have done in the past.”
On April 16, in an interview on the CBS program “60 Minutes,” Sundar Pichai, CEO of Google and its parent company Alphabet, shared his views on the rapid development of artificial intelligence, voiced concern about its potential threats to society, and warned that society needs to prepare for technologies that have already been rolled out.
During the interview, correspondent Scott Pelley tried out several of Google’s AI projects, including its chatbot Bard, and said he was left “speechless” and found them “disturbing.”
“Our society needs to adapt to it,” Sundar Pichai told Pelley, adding that every industry could be affected and that “knowledge workers” of all kinds, including writers, accountants, architects, and software engineers, are likely to be upended by artificial intelligence.
“This is going to affect every product in every company,” he said. “For example, you might be a radiologist, and five to ten years from now you have an AI collaborator. You come in the morning with, say, a hundred things to get through, and it might say, ‘These are the most serious cases you need to look at first.’”
Pelley also toured other Google divisions with advanced AI products, including DeepMind, whose robots can play soccer without having learned the skill from humans. Google also demonstrated a robot that recognized objects and fetched Pelley the apple he asked for.
Warning of the consequences of artificial intelligence, Sundar Pichai said the problem of disinformation, fake news, and fake imagery would become “much larger,” and that “it could do harm.”
Last month, Google rolled out its AI chatbot, Bard, to the public, after Microsoft introduced OpenAI’s GPT technology into its search engine, Bing.
In recent weeks, however, public figures from many fields have begun to express concern about the consequences of this rapid development. In March, Elon Musk, Steve Wozniak, and dozens of academics signed an open letter calling for an immediate moratorium on training “experiments” with large language models “more powerful than GPT-4.” More than 25,000 people have signed it.
Pelley commented on the show: “Competitive pressures between giants like Google and startups you’ve never heard of are propelling humanity into the future, ready or not.”
Google has released a document outlining “recommendations for regulating AI,” but Sundar Pichai said society must adapt quickly, with regulations, laws that punish abuse, and treaties among nations to ensure AI is safe for the world and “aligned with human values, including morality.”
“It’s not something a company can decide on its own,” he said. “That’s why I think the development of this field needs to include not only engineers but also social scientists, ethicists, philosophers, and so on.”
When asked whether society is ready for an AI technology like Bard, Sundar Pichai replied: “On the one hand, I don’t think so, because the speed at which we, as social institutions, can think and adapt doesn’t seem to match the pace at which the technology is developing.” Even so, he said he is optimistic because, compared with past technologies, “the number of people who are starting to worry about its impact” has grown early in its development.