It was reported that Dylan Patel, chief analyst at SemiAnalysis, a chip industry research firm, estimated that because ChatGPT runs on expensive computing infrastructure, operating it may cost as much as $700,000 per day (currently about 4.816 million CNY).
Patel pointed out that ChatGPT requires enormous computing power to respond to user input, whether writing cover letters, generating lesson plans, or helping users polish their résumés. “Most of the cost comes from the expensive servers they require,” he said.
Moreover, Patel’s original estimate was based on OpenAI’s GPT-3 model; ChatGPT may now be even more expensive to run after adopting the newer GPT-4 model.
OpenAI has not responded to these estimates.
Patel and Afzal Ahmad, another analyst at SemiAnalysis, said that while outside observers have previously focused on how training the large language model behind ChatGPT may cost hundreds of millions of dollars, the operating expenses, in other words the cost of AI inference, far exceed the training cost at any reasonable scale of deployment. “Indeed, on a weekly basis, ChatGPT’s inference costs exceed its training costs,” they pointed out.
Companies using OpenAI’s language models have also been paying high prices for the past few years. Startup Latitude developed an AI dungeon game that generates storylines based on user input. The company’s CEO, Nick Walton, said that the cost of running the model, together with the corresponding Amazon AWS cloud servers, reached $200,000 a month in 2021 (currently about 1.376 million CNY). Walton ultimately decided to switch to a language software provider backed by AI21 Labs, which helped him cut the company’s AI costs in half, to $100,000 a month (currently about 688,000 CNY).
“We joke that we have human workers and AI workers, and we spend roughly the same amount on each,” Walton said in an interview. “We’re spending hundreds of thousands of dollars a month on AI, and we’re not a big startup, so it’s a huge expense.”
Recently, it was reported that in order to reduce the running cost of generative AI models, Microsoft has been developing an artificial intelligence chip code-named “Athena.” The project started in 2019. Around the same period, Microsoft reached a $1 billion investment agreement with OpenAI that required OpenAI to run its models exclusively on Microsoft’s Azure cloud servers.
Two considerations lie behind Microsoft’s chip project. According to people familiar with the matter, Microsoft executives realized they were behind Google and Amazon in developing their own chips. At the same time, Microsoft has been looking for cheaper alternatives to Nvidia’s GPUs.
Microsoft currently has about 300 employees working on the chip, which could be released as early as next year for internal use by Microsoft and OpenAI, the sources said. Microsoft declined to comment for this story.