According to Reuters, four people familiar with the matter said that even as Google's parent company Alphabet promotes its chatbot Bard worldwide, it is warning employees to be careful when using chatbots, including Bard itself.
Alphabet has advised employees not to enter confidential material into AI chatbots, the people said. Google confirmed this, citing a long-standing policy on safeguarding information.
Alphabet has also reminded its engineers to avoid directly using computer code generated by chatbots, some of the people said. Google said that while Bard may make inappropriate code suggestions, it can still help programmers, and added that it aims to be transparent about the limitations of its technology.
Google’s caution reflects a broader trend in corporate security standards of warning employees against using publicly available chat programs. A growing number of businesses around the world have put safeguards in place against AI chatbots, including Samsung, Amazon and Deutsche Bank. Apple has reportedly done the same, but the company did not respond to a request for comment.
According to a survey by the professional networking site Fishbowl, 43% of professionals were using ChatGPT or other AI tools as of January this year, many without telling their bosses. The survey drew nearly 12,000 respondents, including employees of top U.S. companies.
Notably, Google is currently rolling Bard out to more than 180 countries, and in a privacy notice updated on June 1 it states: “Do not include confidential or sensitive information in your Bard conversations.”