Shortly before launching its artificial intelligence chatbot Bard to the public in March, Google reportedly asked its employees to test the new tool.
According to a screenshot of an internal discussion, one employee concluded that Bard was a “pathological liar”. Another employee said that Bard was “embarrassing”. One employee wrote that when they asked Bard how to land a plane, the chatbot often gave advice that would lead to a crash. Another employee said that Bard’s advice on diving “could lead to serious injury or death”.
According to several current and former employees, as well as internal Google documents, Google, previously a highly trusted internet search giant, provided low-quality information through Bard in order to keep up with its competitors, while placing a low priority on its commitment to ethics in technology. Google had pledged that its AI ethics research team would double in size and that it would devote more resources to assessing the potential harms of AI technology. However, with rival OpenAI’s chatbot ChatGPT making its debut in November 2022 and becoming hugely popular, Google began rushing to incorporate generative AI technology into its most important products.
For the technology itself, this means significantly faster development, and it could have a profound impact on society. Current and former Google employees say the technology-ethics team the company promised to strengthen is now voiceless and demoralised. They revealed that employees responsible for the safety and ethical implications of new products have been told not to hinder or try to stifle any of the generative AI tools in development.
Google’s goal is to revitalise its maturing search business around this cutting-edge technology, get ahead of Microsoft-backed OpenAI, push generative AI applications into tens of millions of phones and homes worldwide, and win the race.
“The ethics of AI has taken a back seat,” said Meredith Whittaker, president of the Signal Foundation and a former Google manager. “If tech ethics can’t be prioritised over profits and growth, they ultimately won’t work.”
In response, Google said that responsible AI remains the company’s top priority. Google spokesman Brian Gabriel said, “We will continue to invest in teams that apply AI principles to technology.” The team working on responsible AI research lost at least three people in a round of layoffs in January, including technical governance and project directors. The layoffs affected about 12,000 employees at Google and its parent company.
For years, Google has led much of the research underpinning current AI developments, yet at the time of ChatGPT’s launch, it had yet to integrate user-friendly generative AI technology into its products. Google employees say that in the past, Google has been careful to assess its own capabilities and ethical considerations when applying new technologies to search and other important products.
Last December, however, Google’s senior management issued a “red alert” and changed its appetite for risk. Employees say Google’s leadership team has decided that the public will forgive the flaws in new products as long as they are positioned as “experiments”. However, Google still needed to get the tech ethics team on board. That month, Jen Gennai, head of AI governance, convened a meeting of the Responsible Innovation Team, which is responsible for upholding Google’s AI principles.
Gennai suggested that some compromises might be necessary if Google was to accelerate the pace of product launches. Google scores its products in several important categories to gauge whether they are ready for public release. In some areas, such as child safety, engineers still need to resolve 100% of potential problems. But in other areas, she said at the meeting, Google may not have time to keep waiting. She said: “‘Fairness’ may not be, it doesn’t need to be, at 99 points. In this area, we may just need to get to 80 or 85, or higher, to meet the needs of a product launch.” “Fairness” in this context means reducing the product’s bias.
In February, a Google employee wrote in an internal messaging group: “The problem with Bard is not just that it’s useless: please don’t release this product.” The message was viewed by nearly 7,000 people, many of whom felt that Bard’s answers were contradictory, or even flatly wrong, on simple matters of fact. The following month, however, sources say Gennai overruled a risk assessment submitted by her team members, which had concluded that Bard was not ready and could cause harm. Shortly afterwards, Google opened Bard to the public, describing it as an “experiment”.
In a statement, Gennai said the decision was not hers alone. After the team made its assessment, she “made a list of potential risks and escalated the analysis” to senior managers in product, research and business lines. That group then judged, she said, that it “could move forward with a limited experimental release, including continued pre-training, strengthened guardrails and appropriate disclaimers”.
As a whole, Silicon Valley is still grappling with the tension between competitive pressure and technological safety. In a recent presentation, the Center for Humane Technology said the ratio of AI researchers to those focused on AI safety was 30 to 1, suggesting that people who raise concerns about AI inside large organisations are often lone voices.
As the development of AI accelerates, there is growing concern about its adverse impact on society. Large language models, the technology behind ChatGPT and Bard, ingest huge amounts of text from news stories, social media posts and other internet sources, which is then used to train software that automatically predicts and generates content in response to a user’s input or questions. This means that, by their very nature, these products have the potential to produce offensive, harmful or inaccurate speech.
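The mechanism described here, learning statistical patterns from text and then predicting a likely continuation, can be illustrated with a deliberately tiny sketch. The bigram word model below is an assumption for illustration only: real systems such as those behind ChatGPT and Bard use neural networks sampling over vast token vocabularies, not word-frequency tables.

```python
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 5) -> str:
    """Repeatedly predict the next word, starting from `start`."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Pick the most frequent follower. Real models instead sample
        # from a probability distribution over many thousands of tokens.
        out.append(max(followers, key=followers.get))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(generate(model, "the"))  # → "the cat sat on the cat"
```

Even this toy version shows the property the article highlights: the model fluently reproduces patterns from its training data with no notion of whether the output is true, harmless or sensible.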
However, ChatGPT created a stir when it debuted, and by the beginning of this year there was no turning back. In February, Google began releasing a range of generative AI products, starting with the Bard chatbot, followed by an upgrade to its video platform YouTube. At the time, Google said that YouTube creators would soon be able to virtually change outfits in videos, as well as use generative AI to create “fantastical film settings”. Two weeks later, Google announced new AI features for Google Cloud, showing how services such as Google Docs and Slides could automatically create presentations, sales training documents and emails. On the same day, Google also announced that it would incorporate generative AI into its healthcare product line. Employees said they were concerned that this pace of development left Google too little time to study the potential harms of AI.
How to develop cutting-edge AI technology ethically has long been a topic of debate within Google. In recent years, Google’s AI products have made some notorious mistakes. In one embarrassing incident in 2015, for example, Google Photos mislabelled a photo of a Black software developer and his friend as “gorillas”.
Three years later, Google said that instead of fixing the underlying artificial intelligence technology, it had removed all results for the search terms “gorilla”, “chimpanzee” and “monkey”. According to Google, “a group of experts from diverse backgrounds” evaluated the solution. The company has also set up an AI ethics department to proactively conduct research to ensure that AI is fairer to users.
However, according to several current and former employees, a major turning point was the departure of AI researchers Timnit Gebru and Margaret Mitchell, who together had led Google’s AI ethics team. They left in December 2020 and February 2021 respectively, amid a controversy over the fairness of Google’s AI research. In the years that followed, Samy Bengio, the computer scientist who oversaw their work, and several other researchers also left to join Google’s competitors.
In the wake of the scandal, Google also took steps to try to restore its public reputation. The responsible AI team was reorganised under the leadership of Marian Croak, then vice president of engineering. She promised to double the size of the AI ethics team and to strengthen the team’s ties with the rest of the company.
But even after the public announcement, some found it difficult to work on AI ethics at Google. One former employee said that when they asked to work on fairness in machine learning, they were routinely discouraged from doing so, to the point that it affected their performance review. Managers protested that the ethical questions being raised about AI were getting in the way of their “real jobs”, the former employee said.
Those who continue to work on AI ethics at Google face a dilemma: how to keep their jobs while continuing to advance their research in this area. Nyalleng Moorosi, a former Google researcher, is now a senior researcher at the Distributed AI Research Institute, which was founded by Gebru. She said AI ethics research positions exist precisely to say that a technology may not be ready for a large-scale release and that the company needs to slow down.
Two employees said that, to date, Google’s ethical reviews of AI products and features have been almost entirely voluntary, with the exception of research-paper publication and Google Cloud’s reviews of customer collaborations and product launches. AI research involving biometrics, identity or children must be reviewed by Gennai’s team as a “sensitive topic”, but other projects do not necessarily require it. Even so, some employees contact the AI ethics team without being asked to.
Nevertheless, when employees on Google’s product and engineering teams try to understand why the company’s AI has been slow to market, what they often notice is the public’s concern over AI ethics. Some at Google believe that new technology should be put in the public’s hands more quickly, so that it can be improved through public feedback.
Another former employee said that before the “red alert” was issued, it could be difficult for Google engineers even to access the company’s most advanced AI models. He said engineers would often brainstorm by experimenting with other companies’ generative AI models to explore the technology’s possibilities, and only later look at how to implement it within the company.
Gaurav Nemade, a former Google product manager who worked on the company’s chatbot until 2020, said: “There’s no doubt in my mind that the ‘red alert’ and OpenAI’s prodding have brought positive changes to Google. Can they really be the leader and challenge OpenAI at its own game?” Recent developments, such as Samsung considering replacing Google with Microsoft’s Bing, which has adopted ChatGPT’s technology, as the default search engine on its phones, show how important a first-mover advantage in technology can be.
Some Google employees said they believe Google has done sufficient safety checks on its latest generative AI products, and that Bard is safer than rival chatbots. But with the immediate priority being to release generative AI products quickly, they say, any further objections on ethical grounds feel futile.
The teams working on the new AI features are currently siloed, making it difficult for the average Google employee to see the full extent of the company’s AI technology. In the past, employees could also voice their concerns openly through company email groups and internal channels, but Google has since introduced community guidelines that restrict them from doing so, citing the need to reduce toxicity. Several employees said they see these restrictions as a way of controlling speech.
“This brings frustration,” Mitchell said. “Our feeling was, what the hell are we doing?” Even without an explicit directive from Google to stop working on tech ethics, the current climate makes clear that employees doing this type of work feel unsupported, and they may end up doing less of it as a result. When Google’s management does discuss the ethics of technology openly, they tend to explore a hypothetical future in which all-powerful technology escapes human control, rather than the everyday scenarios that could already cause harm. Some industry insiders have criticised this approach as a form of marketing.
El-Mahdi El-Mhamdi, who worked as a research scientist at Google, left the company in February this year over its refusal to tackle the ethical issues of artificial intelligence head-on. He said a paper he co-authored late last year suggested that it is mathematically impossible for a foundational AI model to be simultaneously large, robust and privacy-preserving.
He said that Google used his employment to challenge the research he was involved in. Rather than defend his academic work under those terms, he gave up his affiliation with Google and kept only his academic one. “If you want to stay at Google, you have to serve the whole system, not be in conflict with it,” he said.