AI researchers at Stanford University and the University of Washington have created s1, a capable reasoning model trained for under $50 in cloud computing credits. The result challenges the dominance of expensive proprietary AI models and raises questions about how accessible, and how commoditized, advanced AI may become.
The team started with an off-the-shelf base model from Qwen, an AI lab owned by the Chinese tech giant Alibaba. Using distillation, they extracted reasoning capabilities from another AI system by fine-tuning on its output responses. In s1's case, the teacher was Google's Gemini 2.0 Flash Thinking Experimental model, which is accessible through Google AI Studio.
Training s1 required only 1,000 carefully selected questions, each paired with an answer and the reasoning steps produced by Gemini 2.0 Flash Thinking Experimental. The process took 30 minutes or less on 16 Nvidia H100 GPUs, and Stanford researcher Niklas Muennighoff estimated that the required compute could be rented today for about $20.
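To make the distillation step concrete, the sketch below shows how a question, a teacher model's reasoning trace, and its final answer might be packed into a single supervised fine-tuning record for the student model. The function name, field names, and `<think>` tag format are hypothetical illustrations, not the actual s1 data format.

```python
# Hedged sketch of assembling one distillation training record.
# All names and formats here are assumptions for illustration only.

def make_training_example(question, teacher_reasoning, teacher_answer):
    """Pack a teacher model's reasoning trace and final answer into one
    supervised fine-tuning record for the student model."""
    return {
        "prompt": question,
        # The student learns to reproduce the teacher's full chain of
        # thought, then emit the final answer after the closing tag.
        "target": f"<think>{teacher_reasoning}</think>\n{teacher_answer}",
    }

# One of ~1,000 curated triples (content invented for the example).
example = make_training_example(
    "What is 17 * 24?",
    "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    "408",
)
```

A dataset this small explains the low cost: fine-tuning on 1,000 such records is a far lighter workload than pretraining a model from scratch.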

Despite the roughly $20 cost of training on those 1,000 questions, s1 performs comparably to OpenAI's o1 on math and coding benchmarks. The researchers also found that instructing s1 to "wait" during its reasoning process, extending its thinking time, improved the accuracy of its answers.
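The "wait" trick can be sketched as a test-time intervention in the decode loop: when the model tries to end its reasoning too early, the stop marker is suppressed and the token "Wait" is appended instead, nudging the model to keep thinking. This is a minimal sketch under stated assumptions; `generate_step`, the `</think>` marker, and the token budget are hypothetical stand-ins, not s1's actual implementation.

```python
# Hedged sketch of forcing longer reasoning at inference time.
# `generate_step` is a hypothetical stand-in for a real token-by-token
# decoder; the marker and budget are illustrative assumptions.

END_OF_THINKING = "</think>"

def generate_with_budget_forcing(generate_step, prompt, min_think_tokens):
    """Decode until end-of-thinking, but force continued reasoning
    until at least `min_think_tokens` tokens have been produced."""
    tokens = []
    while True:
        tok = generate_step(prompt, tokens)
        if tok == END_OF_THINKING and len(tokens) < min_think_tokens:
            # Suppress the early stop; appending "Wait" prompts the
            # model to continue (and often double-check) its reasoning.
            tokens.append("Wait")
            continue
        tokens.append(tok)
        if tok == END_OF_THINKING:
            return tokens

# Toy demonstration with a scripted "model" that tries to stop early.
script = iter(["a", END_OF_THINKING, "b", "c", END_OF_THINKING])
out = generate_with_budget_forcing(lambda p, t: next(script), "2+2?", 3)
```

In the toy run, the first stop attempt is replaced by "Wait" and decoding continues until the budget is met, which is the intuition behind why extra thinking time improved s1's answers.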
The development of s1 shows that building capable AI has become far more accessible, but it also casts doubt on the massive investments major AI labs make in their proprietary models. OpenAI has accused DeepSeek of misusing its API for distillation, and Google's terms prohibit users from reverse-engineering its models to build competing services. If techniques like the distillation behind s1 can reproduce advanced AI capabilities at a fraction of the cost, the future model investments of major AI companies may face harder questions.