Google DeepMind CEO Demis Hassabis has praised Chinese AI company DeepSeek’s latest model as “probably the best work” to emerge from China. However, he emphasized that despite its impact on global markets, the model does not introduce any groundbreaking scientific advancements.
DeepSeek made headlines last month when it published a research paper claiming that its AI model was trained at significantly lower costs than those of leading AI firms, using less sophisticated Nvidia chips. This revelation triggered a sell-off in tech stocks and reignited debates over whether major companies are overspending on AI infrastructure.
While acknowledging the quality of DeepSeek’s engineering, Hassabis argued that its success lies in optimization rather than innovation. “Despite the hype, there’s no actual new scientific advance … it’s using known techniques [in AI],” he explained, suggesting that the attention around DeepSeek has been “exaggerated a little bit.” By contrast, he pointed to Google’s newly released Gemini 2.0 Flash models as more efficient than DeepSeek’s offering.

Skepticism also remains over DeepSeek’s claims of low-cost development, with some experts suggesting that its actual training expenses were higher than reported.
Hassabis also weighed in on artificial general intelligence (AGI), the notion of AI systems matching or surpassing human intelligence across all cognitive tasks. He believes AGI could be achieved within the next five years.
“I think we’re close now, you know, maybe we are only, you know, perhaps 5 years or something away from a system like that which would be pretty extraordinary,” he said at a Google event in Paris ahead of the AI Action Summit. He stressed the importance of preparing society for the implications of AGI, ensuring its benefits are shared while mitigating potential risks.

His perspective aligns with that of OpenAI CEO Sam Altman, who recently expressed confidence that AGI is within reach. However, other prominent figures in the field, including AI researchers Max Tegmark and Yoshua Bengio, have warned of the dangers such systems could pose, particularly the risk of humans losing control over them.