The AI revolution is reshaping our world, and quantum technology is poised to redefine computing as we know it. Now, imagine the possibilities when AI and quantum join forces.
DeepMind has just released AlphaQubit, an AI model that enhances error correction in quantum computing. A couple of days later, Google presented its new quantum processor, Willow—a major milestone in quantum computing, with its increased qubit count, improved error correction, and longer coherence times pushing us closer to practical quantum applications. In recent months, progress in quantum computing has accelerated. What's happening?
The Major Challenge
Quantum computers could solve humanity's toughest problems in seconds—but they're highly sensitive to noise. Even slight disturbances like heat, vibrations or cosmic rays cause frequent errors, making the results of the computations neither reliable nor useful. To address this problem, modern quantum computing employs error correction techniques.
“Without error correction, all information processing, and hence all knowledge-creation, is necessarily bounded. Error correction is the beginning of infinity.” – David Deutsch
One popular error correction method, used by tech giants like Google and IBM, is the surface code, where multiple physical qubits encode a single logical qubit, enabling error detection and correction. Despite this, modern quantum systems still experience about one error per 1,000 operations—a far cry from the 1-in-a-trillion error rate needed for practical quantum computers. The challenge lies in decoding, as qubit noise varies dynamically and unpredictably.
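To build intuition for why redundancy helps, here is a toy Python sketch of a classical 3-bit repetition code. This is only an analogy of my own, not the surface code itself: the surface code protects quantum states on a 2D grid of qubits and also handles phase errors. But the core idea is the same: parity checks reveal where an error happened without reading the encoded value directly.

```python
# Classical 3-bit repetition code: a toy analogy for the principle behind
# the surface code. Redundancy plus parity checks expose single errors
# without ever reading the encoded value itself.

def encode(bit: int) -> list[int]:
    return [bit] * 3                      # one logical bit -> three physical bits

def syndrome(codeword: list[int]) -> tuple[int, int]:
    # Parity checks between neighbouring bits: nonzero parity flags an error
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword: list[int]) -> list[int]:
    # Map each syndrome pattern to the position of the flipped bit
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

word = encode(1)          # [1, 1, 1]
word[2] ^= 1              # a single "physical" error: [1, 1, 0]
print(syndrome(word))     # (0, 1) -> error located on bit 2
print(correct(word))      # [1, 1, 1] -> logical bit recovered
```

In the quantum setting, the analogous parity checks are stabilizer measurements, and their outcomes (the "syndrome") are exactly what a decoder must interpret.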
AlphaQubit Explained
This is where DeepMind's new AlphaQubit AI model comes in. AlphaQubit is a neural network trained to detect and correct errors in quantum computations—essentially acting as a spellchecker for quantum systems. Built on the Transformer architecture (used in models like GPT-4 and Google’s Gemini), AlphaQubit was trained on thousands of simulated examples and fine-tuned using experimental data from Google’s Sycamore quantum processor.
How does it work if the errors are random? Each logical qubit is encoded in several physical qubits. Every microsecond, we run consistency checks (stabilizer measurements) on these physical qubits. For a distance-3 code, each round yields an 8-bit syndrome, which is fed into the neural network. The network combines the new readings with its existing internal state, so the state evolves at each step. At the end of the experiment, the network makes a prediction: it outputs 1 if it thinks an error occurred, or 0 if it didn’t, along with the probability that the error occurred.
In this way, AlphaQubit predicts whether a logical qubit has flipped during an experiment by analyzing real-time data from the physical qubits, updating its internal state with each new reading, and producing a probability-based prediction at the end. To learn more about how it works, check out the video.
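To make this loop concrete, here is a minimal sketch in Python (PyTorch) of a recurrent syndrome decoder. This is not DeepMind's actual architecture: AlphaQubit is Transformer-based, and the layer sizes here are arbitrary. The GRU simply illustrates the state-update loop described above: fold in one syndrome per round, then read out a probability at the end.

```python
import torch
import torch.nn as nn

class RecurrentDecoder(nn.Module):
    """Toy stand-in for a learned syndrome decoder (not AlphaQubit itself).

    Each error-correction round produces an 8-bit syndrome (distance-3 code).
    The network folds every new syndrome into a hidden state and, after the
    final round, outputs the probability that a logical error occurred.
    """

    def __init__(self, syndrome_bits: int = 8, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=syndrome_bits, hidden_size=hidden_dim,
                          batch_first=True)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
        # syndromes: (batch, rounds, syndrome_bits), values in {0, 1}
        _, h_final = self.rnn(syndromes.float())  # fold rounds into state
        logit = self.readout(h_final[-1])         # final state -> logit
        return torch.sigmoid(logit).squeeze(-1)   # P(logical error)

# Usage: decode a batch of 25-round syndrome histories.
decoder = RecurrentDecoder()
syndromes = torch.randint(0, 2, (4, 25, 8))       # 4 experiments, 25 rounds
p_error = decoder(syndromes)                      # probabilities in (0, 1)
prediction = (p_error > 0.5).long()               # 1 = logical flip detected
```

A real decoder would be trained on labelled syndrome histories (simulated and experimental), which is exactly the pretrain-then-finetune recipe described above.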
According to the paper, DeepMind’s AlphaQubit outperforms all existing decoders, achieving 98.5% error correction accuracy and reducing errors by 30% compared to the best correction methods. This is crucial for enabling quantum computers to handle long, complex computations and tackle challenging problems in quantum physics, chemistry, and cryptography.
The results look great on paper, but many challenges remain, primarily around speed. Superconducting quantum processors perform millions of consistency checks per second, and the decoder must consume syndrome data at that same rate. Currently, these decoders are about 10 times too slow, even though they’re already faster than any previous decoders. Improving both speed and accuracy to achieve error rates as low as 1 in a trillion operations is their current focus—essential for the next big leap in quantum computing.
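As a back-of-envelope illustration of that speed gap (with assumed numbers, not figures from the paper):

```python
# Real-time decoding budget. The cycle time and decoder latency below are
# illustrative assumptions, not measurements from the AlphaQubit paper.
syndrome_round_us = 1.0     # assumed: one consistency-check round per microsecond
decoder_latency_us = 10.0   # assumed: decoder needs ~10 us per round ("10x too slow")

rounds_per_second = 1e6 / syndrome_round_us
print(f"Processor produces {rounds_per_second:.0f} syndrome rounds per second")
print(f"Budget: {syndrome_round_us} us per round; decoder takes "
      f"{decoder_latency_us} us -> "
      f"{decoder_latency_us / syndrome_round_us:.0f}x too slow")
```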
Quantum Spring
In recent months, progress in quantum computing has been accelerating. Let me know in the comments if you've noticed this too. After a "quantum winter" over the past couple of years, the field is now regaining momentum. This resurgence is partially driven by the exchange of advancements between AI and quantum computing, fueling improvements in quantum hardware, algorithms, and error correction.
“Quantum computation is a distinctively new way of harnessing nature. It will be the first technology that allows useful tasks to be performed in collaboration between parallel universes.” - David Deutsch
Combining Quantum Technology and AI
So, what happens when we combine AI with quantum computing? There are at least three possible outcomes:
1. AI Accelerating Quantum Computing Progress
AI is already driving advancements in quantum computing, especially in error correction and in designing more efficient quantum algorithms, such as circuits with fewer T-gates (as Mike from Google Quantum mentioned in our conversation).
2. Quantum Technology Accelerating AI
AI models are increasingly limited by hardware efficiency and scalability. As AI models grow larger, the demand for computing power outpaces classical hardware. Quantum computing could address this, accelerating the training of neural networks that are too large for classical systems.
3. AI Reducing Quantum Computing Applications
There's an ongoing debate about whether AI may reduce the need for certain quantum computing applications, since AI can often find patterns that make complex problems tractable on classical computers. Let me know your thoughts on mixing quantum and AI technologies in the comments.
I believe AI and quantum computing are not competing technologies but are instead synergistic. The next computing revolution could be driven by a combination of both. Major tech giants are heavily investing in quantum technology, which has led to a significant surge in quantum computing stocks—reflected in my investment portfolio. Have you noticed it as well?
This Newsletter issue is sponsored by AMD. Click here to test Ryzen™ PRO laptops for free and experience the benefits they can bring to your business.
Happy Holidays!🎄
Warm Regards, Anastasi