According to leaked internal documents, artificial intelligence will be the central theme of this year's Google I/O developer conference. The company plans to release a series of generative AI feature updates, including the launch of a general-purpose large language model (LLM).
The internal documents show that Google will launch its latest and most advanced LLM, PaLM 2, which supports more than 100 languages and has operated internally under the codename "Unified Language Model." Google has also put it through extensive coding and math tests, as well as tests of creative writing and analysis.
The documents also show that at the event, Google will announce how artificial intelligence can "help people reach their full potential," including "generative experiences" in Bard and Search. Sundar Pichai, chief executive of Google and its parent company Alphabet, will speak to the developers in attendance about the company's advances in artificial intelligence.
Google’s update comes amid heightened competition in the artificial intelligence space, with both the company and Microsoft racing to incorporate chat AI technology into their products. Microsoft is using its investment in ChatGPT creator OpenAI to bolster its Bing search engine, while Google has moved quickly to try to integrate its Bard technology across different teams and launch its own LLM.
Google first announced the PaLM language model in April 2022. In March, the company unveiled an API for PaLM, along with a set of enterprise AI tools it says will help companies "generate text, images, code, video, audio, and more from simple natural language prompts."
Last month, Google said its medical LLM, dubbed “Med-PaLM 2,” could answer “expert doctor-level” medical exam questions with 85 percent accuracy.
Google also plans to share progress made with Bard and Search, offering what it calls "generative experiences," including using Bard for coding, math, and "logic," and expanding it to Japanese and Korean, the documents show. Google has been working on a series of more powerful Bard models and officially rolled out Bard as an experiment in March.
Google is also internally developing a version of its multimodal model called “Multi-Bard,” which uses larger data sets and can help with complex math and coding problems, according to another internal document. Additionally, Google tested versions called “Big Bard” and “Giant Bard.”
Google also plans to expand its "Workspace AI collaborator" to include template generation in Sheets and image generation in Slides and Meet. In March, the company said it would give a small group of users access to artificial intelligence features in Gmail and Google Docs as part of a test, and that it plans to introduce more generative AI features in its Meet, Sheets, and Slides apps.
One of the images shows a slideshow sidebar with a chat box where users can enter text and optionally generate an image from it. Other updates include new use cases for the image recognition tool Google Lens. Google will show improvements to its camera- and voice-based "multisearch" technology, building on last year's feature that let users ask questions about an image they were looking at.
Earlier reports said that, outside the field of artificial intelligence, Google will show off its new foldable phone, the Pixel Fold. The company claims the Pixel Fold will have "the most durable hinge ever on a foldable phone," and will offer a phone trade-in option. Google says the Pixel Fold's biggest selling points are its water resistance and pocketable size.