
Microsoft and Google cut AI ethics employees, causing industry concerns


With ChatGPT booming, technology giants such as Microsoft and Google are cutting AI ethics staff, raising concerns across the industry.

As artificial intelligence (AI) technology becomes more widely used in consumer products, concerns about the safety of the new technology are reportedly growing, especially as large tech companies cut back on their “AI ethics” teams.

Companies such as Microsoft, Meta, Google, Amazon and Twitter are cutting members of their “responsible AI” teams, which advise on the safety of consumer products that use artificial intelligence.

Industry experts say the staffing cuts at major tech companies are worrisome because the technology could be misused as more people begin to experiment with new AI tools. These concerns have been heightened by the success of ChatGPT, the artificial intelligence chatbot launched by OpenAI.

Andrew Strait, a former ethics and policy researcher at Alphabet’s AI research company DeepMind and deputy director of the Ada Lovelace Institute, said, “It’s alarming that members of these teams are being let go at a time when arguably more of them are needed than ever before.”

In January, Microsoft disbanded its “ethics and society” team, which had led its early work in the area. Microsoft said fewer than 10 people were laid off, and that hundreds of people still work on responsible AI across the company.

“We have significantly increased our ‘responsible AI’ efforts and are working to institutionalize them company-wide,” said Natasha Crampton, Microsoft’s chief responsible AI officer.

Meanwhile, under new owner Elon Musk, Twitter has cut more than half of its staff, including its AI ethics team. That team had fixed a bias in Twitter’s image-cropping algorithm that favored white faces.

Twitter has not commented on this.

Amazon’s streaming platform Twitch laid off its AI ethics team last week, leaving each team that develops AI products responsible for bias-related issues on its own, people familiar with the matter said.

Twitch declined to comment.

Last September, Facebook’s parent company, Meta, disbanded its “responsible innovation” team. The team, which consisted of about 20 engineers and ethicists, was tasked with evaluating civil rights and ethics on Instagram and Facebook.

Meta has not yet commented.

Josh Simons, a former Facebook AI ethics researcher and author of Algorithms for the People, said, “Responsible AI teams are one of the only internal bastions of large tech companies. The speed with which they are being abolished leaves big tech’s algorithms at the mercy of advertising interests, to the detriment of the well-being of children, vulnerable people, and our democracy.”

Meanwhile, another concern is that the large language models behind chatbots such as ChatGPT can confidently produce false statements, which could be used for nefarious purposes such as spreading disinformation or cheating on exams.

Michael Luck, director of the Institute of Artificial Intelligence at King’s College London, said, “We’re starting to realize that people can’t fully anticipate some of the consequences of these new technologies, and it’s important to keep a close eye on them.”
