
Google CEO interview record: Ten AI issues are related to human survival and development


“AI Outpost” — Facing the formidable ChatGPT, Google CEO Sundar Pichai said in an interview on Monday that in some areas of artificial intelligence (AI), the company does lag behind. However, he is in no rush to push the company forward quickly, because caution is key.

Here are excerpts from Sundar Pichai’s interview:

  1. How satisfied are you with the current state of Google’s chatbot, Bard, compared to the progress of competitors?

We’ve done better in some areas and lagged behind in others. I think it’s in a very, very early stage right now.

  2. Does Google’s generative AI product make some mistakes that make you uncomfortable?

There is definitely a trade-off. This is exciting because there are new use cases, and people have responded to them. It’s uncomfortable because the output is machine-generated, and sometimes it makes things up.

  3. Does Google see this as a race to the death?

It’s a competitive time, but I’ve been building Google as an AI-native company for a long time. I think we are in a better position in the transition to AI than we were in the transition to mobile platforms.

  4. In 2017, Google researchers published a groundbreaking paper introducing AI techniques that are now used by ChatGPT. Will Google change the way it publishes AI research?

It has been a long process of discovery, dedicated to solving these ambitious problems. These questions attract the best minds in the world and help fuel our virtuous cycle of cutting-edge innovation. That has largely remained unchanged so far. In the future, as AI research makes its way into products, we may consider which research results to keep proprietary. But do I expect Google to keep actively publishing research in this area? Yes.

  5. You have said that tech companies have to be careful not to see AI as just a race, but Google and others are really racing to market. What are you doing now to ensure that morality is not cast aside?

We have been cautious. In some areas, we choose not to be first to market with products. We already have good structures around responsible AI. You’ll continue to see that, and we’ll take it slowly.

  6. In May of this year, dozens of prominent AI researchers, including some leaders at Google, jointly signed an open letter warning that AI poses a risk of human extinction. How seriously do you take this issue? What should we do?

We do make sure we focus on risks like bias, misinformation, and security incidents. Deepfake technology will pose serious problems. We need to take all of this seriously.

  7. Many of Google’s new AI products are launched as experiments. How much potential do you think they have for becoming a permanent part of Google Search?

These products will become part of the mainstream search experience. There are some things we want to make sure we get right. For example, people go to Google and type in questions like, “What’s the Tylenol dosage for my 3-year-old?” There’s no room for error in the answer to that kind of question.

  8. The industry agrees that some kind of regulation of AI is needed. What does this mean in practice?

It should be considered in the context of existing regulations, with regulation proportionate to the use case and its associated risks.

  9. Are you worried that Google researchers will leave to form competitors or join companies such as OpenAI?

By my count, Googlers have left to create over 2,000 startups, which I think is pretty good. Some of them are our future cloud customers. Some of them have come back. I think it’s a healthy flow.

  10. Earlier this year, Microsoft CEO Satya Nadella said that Microsoft’s move to integrate AI into search had made Google “dance,” a reference to the Silicon Valley adage that big companies struggle to stay agile. Is that fair?

I think he said that to get you to ask me that question. It’s all part of the game.
