
OpenAI CEO: Large language models are approaching the limits of scale, and bigger is not necessarily better


Sam Altman, co-founder and CEO of OpenAI, was interviewed at MIT’s Imagine Action event on April 16, where he discussed trends and safety issues in large language models (LLMs).

Altman argued that we are approaching the limits of LLM scale, and that a larger model is not necessarily a better one; chasing scale can become the pursuit of a number for its own sake. Parameter count, he said, is no longer an important indicator of model quality, and there will be more ways to improve the capability and usefulness of models in the future. He drew an analogy to the chip speed races of the past, noting that today we care more about whether a chip can do the job than about its raw clock speed. OpenAI’s goal, he said, is to provide the world with the most capable, useful, and safe models, not to obsess over parameter counts.

Altman also responded to the open letter calling on OpenAI to pause development of more powerful AI for six months. He agreed that safety standards need to rise as capabilities grow, but argued that the letter lacks technical detail and accuracy. He said OpenAI had spent more than six months working on safety before releasing GPT-4, and had brought in external auditors and “red teams” to test the model. “I also agree that as capabilities become more robust, safety standards must improve. But I think the letter is missing most of the technical nuance about where we need to pause. An earlier version of the letter claimed we were training GPT-5. We’re not, and won’t be for some time, so in that sense it’s kind of silly. But we are doing other things on top of GPT-4 that I think raise various safety issues that the letter doesn’t address at all. So I think it’s very important to be cautious and to be increasingly rigorous about safety. I don’t think the recommendations in the letter are the ultimate solution to this problem,” he said.

OpenAI has been working in this field for seven years and has put in effort that most others are not willing to, Altman said. He said he is willing to openly discuss safety issues and the model’s limitations because it is the right thing to do. He admitted that he and other company representatives sometimes say “dumb things” that turn out to be wrong, but that he is willing to take that risk because the technology demands dialogue.
