Researchers create reasoning model for under $50 that performs similarly to OpenAI's o1

zohaibahd

Staff
Why it matters: Everyone's coming up with new and innovative ways to work around the massive costs of training new AI models. After DeepSeek's impressive debut, which shook Silicon Valley, a group of researchers has developed an open-source rival that reportedly matches the reasoning abilities of OpenAI's o1.

Stanford and University of Washington researchers devised a technique to create a new AI model dubbed "s1." They have already open-sourced it on GitHub, along with the code and data used to build it. A paper published last Friday explained how the team achieved these results through clever technical tricks.

Rather than training a reasoning model from scratch – an expensive endeavor costing millions – they took an off-the-shelf language model and "fine-tuned" it using distillation. They extracted the reasoning capabilities from one of Google's AI models, specifically Gemini 2.0 Flash Thinking Experimental, then trained the base model to mimic its step-by-step problem-solving process on a small dataset.
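In practice, the distillation step mostly amounts to collecting the teacher's step-by-step answers and saving them as training targets. Below is a minimal sketch of that collection loop using Google's google-generativeai Python SDK; the model ID, prompt wording, and file names are illustrative assumptions, not the team's actual pipeline.

```python
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
# Model ID is an assumption for illustration.
teacher = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# In the real pipeline this would be the 1,000 curated questions.
questions = ["If 3x + 7 = 22, what is x?"]

with open("distillation_data.jsonl", "w") as f:
    for q in questions:
        resp = teacher.generate_content(
            f"Solve step by step, then state the final answer.\n\n{q}"
        )
        # Save the question and the teacher's full reasoning trace
        # as one supervised training example.
        f.write(json.dumps({"question": q, "trace": resp.text}) + "\n")
```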

Others have used this approach before. In fact, distillation is what OpenAI accused DeepSeek of doing. However, the Stanford/UW team found an ultra-low-cost way to implement it through "supervised fine-tuning."

This process involves explicitly teaching the model how to reason using curated examples. Their full dataset consisted of only 1,000 carefully selected questions, paired with solutions pulled from Google's model.
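Supervised fine-tuning itself is just standard next-token training on those (question, reasoning trace) pairs. Here is a hedged sketch using Hugging Face transformers; the tiny stand-in model and hand-written example are for illustration only, since s1 reportedly fine-tuned the much larger Qwen2.5-32B-Instruct, which needs a multi-GPU setup.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in model so the sketch runs on one GPU (or CPU).
name = "Qwen/Qwen2.5-0.5B-Instruct"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# One hand-written pair; the real dataset had 1,000 distilled examples.
pairs = [{"question": "If 3x + 7 = 22, what is x?",
          "trace": "Subtract 7 from both sides: 3x = 15. Divide by 3: x = 5."}]

def collate(batch):
    texts = [f"Question: {b['question']}\nReasoning: {b['trace']}" for b in batch]
    enc = tok(texts, return_tensors="pt", padding=True, truncation=True)
    enc["labels"] = enc["input_ids"].clone()  # plain next-token objective
    return enc

loader = DataLoader(pairs, batch_size=1, collate_fn=collate)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for _ in range(3):  # a few passes over the tiny dataset
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```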

TechCrunch notes that the training process took 30 minutes using 16 Nvidia H100 GPUs. Of course, these GPUs cost a small fortune – around $25,000 per unit – but renting them for the job works out to under $50 in cloud compute credits.
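As a rough sanity check (the rental rate is an assumption, not a figure from the paper): at roughly $6 per H100-hour, 16 GPUs × 0.5 hours ≈ $48, which lines up with the under-$50 claim.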

The researchers also discovered a neat trick to boost s1's capabilities even further: they instructed the model to "wait" before providing its final answer. This command gave it more time to check its reasoning and arrive at slightly improved solutions.
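Conceptually, this is an inference-time loop: instead of accepting the model's first attempt at a final answer, you append "Wait" and let it keep generating. A rough sketch is below; the stand-in model, prompt format, and round count are illustrative assumptions rather than the exact s1 setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative stand-in, not s1 itself
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Question: If 3x + 7 = 22, what is x?\nReasoning:"
ids = tok(prompt, return_tensors="pt").input_ids

for _ in range(2):  # force up to two extra rounds of self-checking
    ids = model.generate(ids, max_new_tokens=128)
    # Rather than accept the answer, append "Wait" so the model
    # re-reads its own reasoning and continues thinking.
    wait = tok(" Wait,", return_tensors="pt").input_ids
    ids = torch.cat([ids, wait], dim=-1)

final = model.generate(ids, max_new_tokens=128)
print(tok.decode(final[0], skip_special_tokens=True))
```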

The model is not without its caveats. Since the team used Google's model as its teacher, there is the question of whether s1's skills – impressive for their minuscule cost – can scale up to match the best AI has to offer. There is also the potential for Google to protest the use of its model's outputs; it could be waiting to see how OpenAI's case against DeepSeek goes.

 
Just wondering, if it's only $50 of compute time, do these cloud providers do any analysis of what Joe Bloggs is getting them to analyse? E.g. a better bomb, better 0-day hacks, etc. The last one is a goody: if you could build an AI with all known vulnerabilities and hacking methods, you could probably make good coin selling it on white or dark markets.
 
This reminds me of how Databricks made Dolly a couple of years ago. At the time, they essentially took Llama (or was it Alpaca?) that was not an instruct version (so not a chatbot) and fed it Q&A crowdsourced from within their company to turn it into a chatbot. It seems like we are back to that kind of example-driven learning/distillation, only this time for reasoning models. If the prior pattern holds, we should expect to see more native reasoning models throughout the rest of this year from all the major open-source (and proprietary) providers.
 
I trust my own, traditional decision-maker...

[image: decision-maker.png]

Beats AI every time ;)
 
I have checked almost all the publicly available LLMs (→ lmarena.ai), from LLaMA 1 to o3; the most impressive to me so far is Phi-4 14B.