Google yesterday opened up its chatbot Bard, a lightweight, optimized version of LaMDA, Google's general-purpose pretrained large language model, to users in the UK and the US. But the bot made an embarrassing mistake on its first day of public testing: when a user asked how soon Google would shut it down, Bard incorrectly replied that Google had already shut it down.
Factual errors in chatbot responses aren't new; they are one of the main problems with today's large language models, and Google said flatly yesterday that Bard's answers would contain factual errors. So why report this at all? Not to pick on Bard, but because it is an interesting mistake.
It turns out the source of Bard's incorrect answer was a joking post on Hacker News, in which a user teased that Google would shut down Bard within a year, noting that Google has indeed shut down many of its own services in the past.
Google also admits that Bard contains some factual errors because it pulls its information from the web, and Microsoft's new Bing has a similar problem. This is one of the biggest challenges for artificial intelligence based on large language models (LLMs). Both Google and Microsoft will improve their information-retrieval processes over time, but occasional problems will remain. In addition, Bard is still in preview, so it needs to go through more public testing.
How to make these chatbots better at identifying false information is a problem these tech giants need to consider seriously.