Scientists in China have developed a new chip, with a twist: it’s analog, meaning it performs calculations on its own physical circuits rather than via the binary 1s and 0s of standard digital processors.
What’s more, its creators say the new chip is capable of outperforming top-end graphics processing units (GPUs) from Nvidia and AMD by as much as 1,000 times.
In a new study published Oct. 13 in the journal Nature Electronics, researchers from Peking University said their device tackled two key bottlenecks: the energy and data constraints digital chips face in emerging fields like artificial intelligence (AI) and 6G, and the “century-old problem” of poor precision and impracticality that has limited analog computing.
When put to work on complex communications problems, including the matrix inversions used in massive multiple-input multiple-output (MIMO) wireless systems, the chip matched the accuracy of standard digital processors while using about 100 times less energy.
With further tuning, the researchers said the device then outperformed top-end GPUs like the Nvidia H100 and AMD Vega 20 by as much as 1,000 times. Both chips are major players in AI model training; Nvidia’s H100, for instance, is the successor to the A100 graphics cards that OpenAI used to train ChatGPT.
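To make the workload concrete, here is a minimal NumPy sketch of why MIMO receivers need matrix inversion. This is an illustrative zero-forcing detector, a standard textbook use of inversion in MIMO, not the paper's exact benchmark; the sizes and channel model are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MIMO setup: 8 receive antennas, 4 transmitted symbols.
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))  # channel matrix
x = np.sign(rng.standard_normal(4)) + 0j                            # BPSK symbols (+1/-1)
y = H @ x                                                           # noiseless received signal

# Zero-forcing detection: recover x via the matrix inversion the article refers to.
x_hat = np.linalg.inv(H.conj().T @ H) @ H.conj().T @ y
```

In a noiseless channel the recovered symbols `x_hat` match the transmitted `x` exactly; real receivers must redo this inversion every time the channel changes, which is why fast inversion hardware matters.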
The new device is built from arrays of resistive random-access memory (RRAM) cells that store and process data by adjusting how easily electricity flows through each cell.
Unlike digital processors that compute in binary 1s and 0s, the analog design processes information as continuous electrical currents across its network of RRAM cells. By processing data directly within its own hardware, the chip avoids the energy-intensive task of shuttling information between itself and an external memory source.
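The physics behind "processing data directly within its own hardware" can be sketched in a few lines. In a resistive crossbar, each cell's conductance encodes a matrix weight; applying input voltages to the rows and summing the resulting column currents (Ohm's and Kirchhoff's laws) yields a whole matrix-vector product at once. This is a generic toy model of the principle, not the team's actual circuit; the numbers are arbitrary.

```python
import numpy as np

# Toy model of an RRAM crossbar: each cell's conductance G[i][j] stores a
# matrix weight. Inputs arrive as row voltages, and each column current is
# the dot product sum_i V[i] * G[i][j] by Ohm's and Kirchhoff's laws.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])       # conductances encoding a 3x2 matrix
V = np.array([0.1, 0.2, 0.3])    # input voltages applied to the rows

I = V @ G  # column currents: the entire matrix-vector product in one "step"
```

A digital chip would perform each of those multiply-adds sequentially (or across many cores); here the physics does them all simultaneously, which is where the claimed throughput and energy advantages come from.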
“With the rise of applications using vast amounts of data, this creates a challenge for digital computers, particularly as traditional device scaling becomes increasingly challenging,” the researchers said in the study. “Benchmarking shows that our analogue computing approach could offer a 1,000 times higher throughput and 100 times better energy efficiency than state-of-the-art digital processors for the same precision.”
Old tech, new tricks
Analog computing isn’t new — quite the opposite, in fact. The Antikythera mechanism, discovered off the coast of Greece in 1901, is estimated to have been built more than 2,000 years ago. It used interlocking gears to perform calculations.
For most of modern computing history, however, analog technology has been written off as an impractical alternative to digital processors. This is because analog systems rely on continuous physical signals to process information — for example, a voltage or electric current. These are much more difficult to control precisely than the two stable states (1 and 0) that digital computers have to work with.
Where analog systems excel is in speed and efficiency. Because they don’t need to break calculations down into long strings of binary code — instead representing them as physical operations on the chip’s circuitry — analog chips can handle large volumes of information simultaneously while using far less energy.
This becomes particularly significant in data- and energy-intensive applications like AI, where digital processors face limitations in how much information they can process sequentially, as well as in future 6G communications — where networks will have to process huge volumes of overlapping wireless signals in real time.
The researchers said that recent advances in memory hardware could make analog computing viable once again. The team configured the chip’s RRAM cells into two circuits: one that provided a fast but approximate calculation, and a second that refined and fine-tuned the result over subsequent iterations until it landed on a more precise number.
Configuring the chip in this way meant that the team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.
Future improvements to the chip’s circuitry could boost its performance even more, the researchers said. Their next goal is to build larger, fully integrated chips capable of handling more complex problems at faster speeds.
it sounds like a pretty genius breakthrough…i hope this will be useful in reducing the huge footprint left by western chips and their thirsty data centers (or at least the chinese ones)
I think the current AI companies in the west would absolutely love to jump on this given that the only entity that is actually profiting off of AI right now is Nvidia.
Old tech, new tricks
This is really underselling this. If this can be scaled up for manufacturing and applications it is an absolute game changer. Thanks for the share!
1,000× faster with 100× less energy? Nuclear fusion by 2030? Restoring large parts of the desert into rich forest environments?
China’s going to genetically alter its citizens to be born with innate dialectical materialism and live 200 years, possessing all current knowledge about Marxism.
In 2050 the first words of all babies in China will be “Revisionists! You are all revisionists!”
??? Even assuming this works, wouldn’t it just get bottlenecked when all communication has to run through a DAC and then an ADC to talk to the rest of the computer (which is still digital)? What am I missing here?
So they are addressing the issue of matrix inversion, something that can take a long time to compute depending on the size of the matrix. Though there is an overhead converting to and from analog, the data throughput is not the bottleneck.
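A back-of-the-envelope sketch of the point above: converting data through DAC/ADC scales with the amount of data (roughly the n² matrix entries), while inverting an n×n matrix scales as n³ operations, so the conversion overhead shrinks relative to the compute as matrices grow. The counts below are rough asymptotic illustrations, not measured figures.

```python
# Rough operation counts: conversion scales with the data (O(n^2) matrix
# entries plus O(n) vector entries), while classical inversion scales as
# O(n^3), so conversion overhead shrinks relative to compute as n grows.
for n in (16, 256, 4096):
    invert_ops = n ** 3        # classical matrix inversion cost
    convert_ops = n ** 2 + n   # DAC for the matrix + ADC for the result
    print(n, invert_ops / convert_ops)
```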
Flash ADC/DAC are pretty fast
Here is the paper published in Nature for anyone who wants to read more: http://archive.today/2025.11.01-172706/https://www.nature.com/articles/s41928-025-01477-0
Thank you!
o7
I’m skeptical of this. If anyone finds anything conclusive on this, let me know; cba to research it.
You can read the paper. It has been published in Nature. Note that their experiment is about doing matrix inversion with a 16x16 matrix, so I think the whole thing is still experimental. Even so, it’s remarkable: it’s specialized hardware for a specific kind of computation that is much faster and more energy efficient.
I think if china BTFO’d nvidia’s shit right this second right now to the degree of “1000x faster” the U.S. would have launched nukes already, so I’m kinda skeptical of a very clickbait title
don’t @ me about being a china hater btw my most used emoji is probably

It is clickbait. And like other hype-driven pop-science outlets, they exaggerate things a bit. But if you have interest and knowledge in the area, you can read the paper in Nature.
If China bails out the western AI industry with this I’m going to be a bit mad.
What won’t China do these days? Analog chips? I didn’t think I’d live to see this. It’s a great breakthrough in analog chip design.