Size doesn't matter for AI models, says OpenAI CEO Sam Altman

DragonSlayer101

In brief: Artificial intelligence models have grown ever larger over the past few years, but that will no longer be the case going forward, according to OpenAI CEO Sam Altman. Speaking at an MIT event earlier this month, Altman said that future progress in AI won't come from giant models that feed on copious amounts of data, and that developers will instead have to find "other ways" to make them better.

Altman's statement is seemingly at odds with how OpenAI has developed ChatGPT, the viral chatbot that has thrilled, amused, and scared tech enthusiasts in equal measure with its ability to write poems, solve mathematical equations, debug code, and more. The current version of ChatGPT is based on GPT-4, which is believed to be one of the most sophisticated large language models (LLMs) available today.

Despite the unqualified success of GPT-4, Altman believes that size will not define the quality of AI models going forward. According to him, there is currently too much emphasis on parameter count, much like the gigahertz race in chips during the 1990s and early 2000s. That is likely to change, he argued, with future AI models continuing to improve even as their parameter counts shrink markedly.

As reported by Wired, GPT-2 had 1.5 billion parameters, while GPT-3 pushed that to a whopping 175 billion. OpenAI has not revealed the parameter count of GPT-4, however, probably because size is no longer the defining criterion for the company's famed large language model.

There has also been speculation that training GPT-4 cost $100 million, though OpenAI had never confirmed the figure until now. When asked about it at the MIT event, Altman said it actually cost more than that, but he did not elaborate any further.

Meanwhile, the success of ChatGPT has not only intrigued the general public, it has also unnerved some of the biggest names in technology. That includes the likes of Elon Musk, Steve Wozniak, and Tristan Harris, who were among the technologists and researchers who signed an open letter last month calling on all AI companies to pause the training of systems more powerful than GPT-4 for at least six months.

When asked about it, Altman said the letter was right that AI systems need safety guardrails in place before being released to the public. However, he also argued that some of the concerns raised in the letter are simply misplaced. For example, he said OpenAI is not training GPT-5, contrary to what was claimed in an early version of the letter. He added that the company is constantly working on safety issues and will continue to do so to make sure its AI models don't end up doing more harm than good.
