Wonderful Engineering

Chinese Scientists Claim To Have Built An AI Model 100 Times Faster Than ChatGPT

China’s AI researchers are making a bold claim: they’ve built a model that doesn’t just process language like a machine, but more like the human brain. The new system, called SpikingBrain1.0, has been described as the first “brain-like” large language model and could shake up how artificial intelligence is built.

According to Windows Central, the model uses an unusual approach to language. Instead of looking at every single word in a sentence equally, it zooms in on the closest ones first and only later considers the rest. That’s closer to how our own brains work when making sense of speech or text, where context builds gradually rather than all at once.
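The article doesn't specify the mechanism, but "focusing on the closest words first" is how windowed (local) attention works, a known way to cut attention cost. A minimal sketch, assuming a causal local window (the function name, window size, and all numbers here are illustrative, not from SpikingBrain1.0):

```python
import numpy as np

def local_attention_weights(scores, window):
    """Mask attention scores so each token attends only to itself and the
    `window - 1` tokens before it, then softmax each row."""
    n = scores.shape[0]
    mask = np.full((n, n), -np.inf)
    for i in range(n):
        lo = max(0, i - window + 1)
        mask[i, lo:i + 1] = 0.0  # keep self plus recent neighbors
    masked = scores + mask
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.normal(size=(6, 6))   # toy pairwise relevance scores
w = local_attention_weights(scores, window=3)
# each row sums to 1; weights outside the 3-token window are exactly 0
```

Full attention fills every entry of the mask; restricting it to a band is what lets context build up gradually, layer by layer, rather than all at once.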

The team behind it says this selective process makes SpikingBrain1.0 run much more efficiently. They claim it can be up to 100 times faster than some current open-source models while needing far less training data. In an era when training the biggest AI models costs billions of dollars and demands massive computing power, a system that gets smarter while consuming less could be a big deal.
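Speedups of that order are plausible on paper: full self-attention scores every token pair (roughly n² comparisons), while a windowed scheme scores only n·w. A back-of-envelope check with illustrative numbers (neither the context length nor the window size comes from the article):

```python
# Full attention compares every token pair; windowed attention compares
# each token only to its w nearest neighbors.
def attention_pairs(n, window=None):
    return n * window if window else n * n

n, w = 8192, 128                       # hypothetical context and window
full = attention_pairs(n)              # 8192 * 8192 comparisons
local = attention_pairs(n, window=w)   # 8192 * 128 comparisons
speedup = full / local                 # -> 64.0 with these numbers
```

Whether the claimed 100x holds up end to end depends on far more than this one term, but it shows why shrinking the attention span is the first lever labs reach for.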

The current crop of LLMs works very differently from what's claimed for SpikingBrain1.0 (Image credit: Windows Central)

Hardware independence is another part of the pitch. Reports suggest the model was trained and run on MetaX processors, chips developed inside China rather than relying on Nvidia’s powerful GPUs. If that proves true, it would mark a strategic step in China’s push to develop advanced AI systems without depending on US technology.

Of course, the big question is whether the hype matches reality. Independent tests haven’t been published yet, and skeptics warn that simplifying attention might limit performance on complex language tasks like long conversations or subtle reasoning. Critics also note that many AI breakthroughs look impressive in early demonstrations but stumble when scaled up to millions of users.

Still, this isn’t happening in a vacuum. Global labs are racing to make AI more efficient, whether through smaller models like Mistral or Google’s Gemini Nano or through new hardware optimized for AI inference. If SpikingBrain1.0 delivers what its creators promise, it could force rivals to rethink how they balance speed, cost, and intelligence.

For now, it’s an intriguing experiment with big ambitions. But until researchers outside the project can test it, the real measure of whether this “brain-like” AI actually thinks more like us remains an open question.
