More than 30 contractors who helped train the language models behind ChatGPT, the popular chatbot, were fired in March this year, according to sources familiar with the matter and internal communications.
Screenshots from an internal Slack channel show that San Francisco-based outsourcing firm Invisible Technologies had let go of 31 contractors as of March 16. OpenAI, meanwhile, continues to hire across its company.
The screenshots also show hundreds of Invisible Technologies contractors, described as “advanced AI data trainers”, working with OpenAI to help train its GPT chatbots. One contractor said the company’s AI data trainers were responsible for improving the models’ coding skills, enhancing their creative writing, or training them to refuse to respond to certain topics. The contractor asked to remain anonymous because of a confidentiality agreement, though their identity and employment were verified.
Kamron Palizban, vice president of operations at Invisible Technologies, addressed the layoffs at an all-staff meeting in March. In a leaked recording of the meeting, he said that OpenAI was looking to reduce the number of contractors because of changing business needs. Palizban also said at the meeting that many of the laid-off contractors had been working on projects that did not deliver a high enough return on investment for OpenAI.
OpenAI drastically reduces its contractor workforce
Invisible Technologies’ relationship with OpenAI offers a rare glimpse into the ChatGPT maker’s data-training operations, which OpenAI has largely kept secret.
The restructuring of OpenAI’s contract with Invisible Technologies followed reports that OpenAI had expanded its contractor workforce for six consecutive months. As of January this year, OpenAI had hired nearly 1,000 data-annotation contractors in regions such as Eastern Europe and Latin America, sources close to the situation said.
The layoffs at Invisible Technologies came just two months after Microsoft invested $10 billion in OpenAI. And Invisible Technologies isn’t the only outsourcing company that has worked with OpenAI.
A Time investigation revealed that in February 2022, Sama, another outsourcing company with headquarters in San Francisco, ended its relationship with OpenAI; its data-labelling employees in Kenya had been tasked with reviewing harmful content such as descriptions of sexual abuse, hate speech and violence.
In a statement to Time, an OpenAI spokesperson explained that “classifying and filtering harmful text and images is a necessary step to minimise the amount of violent and pornographic content contained in training data and helps create tools that can detect harmful content.”
The job of an artificial intelligence trainer
According to contractors at Invisible Technologies, the most basic duties of an AI trainer include reviewing conversations between the AI and its users to identify potentially illegal, privacy-invasive, offensive or error-filled messages. The contractors interviewed described a typical workday this way:
When they start a shift, they first open the internal work browser and check the team's task list. They might click on a task such as "Have a conversation about a random topic with browsing disabled" and then type a query into the message box.
After the query is submitted, OpenAI's model generates four responses. The contractor evaluates each response by opening a drop-down menu and selecting the type of error present, such as factual errors, spelling or grammatical errors, or harassment. The contractor then rates each response on a scale of one to seven, with seven representing a 'mostly perfect' answer.
Next, the contractor must write an ideal response and submit it to complete the task. The contractors said the results are sent to quality checkers at OpenAI and Invisible Technologies, and the process repeats for each subsequent task.
Referring to OpenAI at the meeting, Palizban said, “They’re at a stage where they’re about to get more clarity on where they’re going.”
Grace Matelich, partner and operations manager at Invisible Technologies, said in the recorded meeting that underperforming contractors were dismissed based on performance metrics such as the “quality” and “throughput” of completed tasks.
Matelich said that underperforming contractors, as well as new hires who did not “meet the certification threshold”, were let go, although some were given the option to move to a different OpenAI team. She added: “If you’re still here today, I want you to know that it’s because I believe in your ability to do a great job.”