At last month’s Google I/O developer conference, Google revealed for the first time the large language model it is developing, Gemini. According to Wired, Demis Hassabis, co-founder of DeepMind and CEO of Google DeepMind, shared further details about Gemini in a recent interview: the system combines the technology behind AlphaGo with a large language model, with the goal of giving it new capabilities, such as planning and problem-solving, that surpass OpenAI’s GPT-4.
Hassabis said: “At a high level, you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of large models. We also have some new innovations that will be very interesting.” Development is expected to take several months and could cost tens or hundreds of millions of dollars. OpenAI CEO Sam Altman said in April that creating GPT-4 cost more than $100 million.
Once completed, Gemini could play an important role in Google’s competition with generative artificial intelligence technologies such as ChatGPT. Google pioneered many of the technologies behind recent breakthroughs in artificial intelligence, but it has chosen to be cautious in developing and deploying products based on them.
Since the advent of ChatGPT, Google has launched its own chatbot, Bard, and applied generative artificial intelligence to its search engine and many other products. To accelerate AI research, the company merged Hassabis’ group DeepMind with Google’s main AI lab, Brain, in April to create Google DeepMind. The new team, Hassabis said, will bring together two groups that have played fundamental roles in the advancement of artificial intelligence. “If you look at where we are in AI, I’d say 80 or 90 percent of innovation comes from one or the other,” Hassabis said. “Both organizations have done a lot of great things over the past decade.”
Hassabis said the extraordinary potential benefits of AI, such as scientific discoveries in areas like health or climate, make it necessary for humanity to continue developing the technology. He also believes a mandatory moratorium is impractical because it would be nearly impossible to enforce. “If done correctly, it will be the most beneficial technology for humanity,” he said.
That doesn’t mean Hassabis advocates letting AI development proceed haphazardly. DeepMind began exploring the possible risks of AI long before ChatGPT and has an “AI safety” group led by Shane Legg, the company’s other co-founder. Last month, Hassabis joined a number of other high-profile AI figures in signing a statement warning that AI could pose risks comparable to nuclear war or a pandemic.
One of the biggest challenges right now, says Hassabis, is identifying what risks more capable AI might pose: “I don’t think researchers in the field have a consensus yet on what those risks are, or how big they are.”
Hassabis said he expects Google to take a leadership role in the development and deployment of artificial intelligence, and to work with other companies and governments to develop sound rules and standards. “We hope to be able to set an example for the entire industry,” he said, “and we hope to be able to work with others to advance the safe and responsible development of this technology.”
Hassabis said he is confident about the future of artificial intelligence but is not taking it lightly. “I think we need to be cautious and humble,” he said. “It’s a very powerful technology and we need to have a lot of respect for it.”