Google CEO Sundar Pichai responded to criticism of Bard, the company’s experimental artificial intelligence chatbot, by promising that Google will soon upgrade it. “We obviously have more capable models,” Pichai said in an interview with the New York Times’ Hard Fork podcast. “Soon, maybe as this [podcast] goes live, we’ll upgrade Bard to some of our more capable PaLM models, which will bring more capabilities, both in terms of reasoning and coding; it can answer mathematical questions better. So you’ll see progress over the course of the next week.”
Pichai noted that Bard is running on a “lightweight and efficient version of LaMDA,” an artificial intelligence language model focused on delivering conversations. “In some ways, I think we’re putting a modified Civic in a race against a more powerful car,” Pichai said. By contrast, PaLM is a newer language model; it’s bigger, and Google claims it’s more capable at handling tasks like common-sense reasoning and coding problems.
First released to public users on March 21, Bard has failed to garner the attention or praise earned by OpenAI’s ChatGPT and Microsoft’s Bing chatbot, and it has simply never been as useful as its rivals. Like any general-purpose chatbot, it can answer a wide variety of questions, but its answers are often less fluent and imaginative, and it fails to draw on reliable data sources.
Pichai said part of the reason for Bard’s limited capabilities is a sense of caution within Google. He said, “It was important to me not to [roll out] a more capable model until we could be completely sure we could handle it well.”
Pichai also confirmed that he has been discussing the work with Google co-founders Larry Page and Sergey Brin (“Sergey has been hanging out with our engineers for a while”), and that while he himself never issued the infamous “Code Red” to accelerate development, someone at the company may have “sent an email saying there was a Code Red.”
Pichai also discussed concerns that the development of artificial intelligence is currently moving too fast and may pose a threat to society. Many in the artificial intelligence and technology community have been warning of the dangerous race dynamics that currently exist between companies including OpenAI, Microsoft and Google. Earlier this week, an open letter signed by Elon Musk and top AI researchers called for a six-month moratorium on the development of these AI systems.
“In this area, I think it’s important to hear concerns,” Pichai said of the open letter calling for a moratorium. “And I think it makes sense to be concerned about it… It will take a lot of debate, no one has all the answers, and no one company can get it right.” He added that artificial intelligence is too important an area not to regulate, but suggested it may be better to apply existing regulations, such as those covering privacy and health care, to AI rather than create new laws specifically for it.
Some experts worry about immediate risks, such as chatbots’ tendency to spread misinformation, while others warn of longer-term threats: because these systems are difficult to control, they could be used to cause disruption once they are connected to the wider internet. Some believe current programs are also approaching what is known as artificial general intelligence, or AGI: systems that are at least as capable as humans across a wide range of tasks.
“To me, these systems will be very, very capable, so it hardly matters whether you reach AGI or not,” Pichai said. “Can we have an AI system that can cause disinformation on a large scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate that and evolve to meet that moment.”