Microsoft and Quantinuum improve quantum error rate by 800 times

Microsoft and Quantinuum quantum computing researchers just announced significant progress in reducing error rates using qubit virtualization technology. The method pairs Quantinuum’s high-fidelity H2 trapped-ion quantum computer with Microsoft’s syndrome extraction and error correction software. The breakthrough lays the foundation for the development of larger, more reliable quantum computers that can solve problems far beyond the capabilities of classical machines.

This is the great promise of quantum computing: the potential to tackle challenges that even classical supercomputers cannot. However, the error rates of existing quantum hardware must improve significantly before that promise can be realized, which is why error correction is such an important area of quantum computing research.

Let’s take a look at how Quantinuum’s hardware and Microsoft’s qubit virtualization system work together to achieve this important breakthrough.

From physical qubits to logical qubits

Scientists from both companies combined Quantinuum’s high-fidelity hardware with Microsoft’s qubit virtualization system to create four stable logical qubits out of 30 physical qubits. The result is a record-breaking logical error rate 800 times lower than the underlying physical error rate.

A few years ago, Google scientists estimated that roughly 1,000 physical qubits would be needed to create a single logical qubit. As the error correction performance achieved in this new study demonstrates, the number can be much lower in practice.

A logical error rate better than the underlying physical error rate is a sign that fault-tolerant quantum computers may be easier to achieve than previously thought. Microsoft estimates that a quantum machine equipped with 100 reliable logical qubits could solve many scientific problems that are currently intractable for classical computers.

Keep in mind that Microsoft and Quantinuum still have work to do to make this happen. Continued efforts to address the limitations identified in this groundbreaking study should make future results even better.

A shared vision

During the briefing, Matt Zanner, principal program manager at Microsoft, and Dr. Jenni Strabley, senior director of product management at Quantinuum, reviewed the four-year history of quantum collaboration between the two organizations. Both companies are committed to enabling quantum computing at scale and share the vision of using fault-tolerant computing to create hybrid classical-quantum supercomputers that solve world-class problems.

Zanner said Microsoft is fully committed to the path to quantum at scale: “We have many different pillars of work that are aligned with the overall mission, and quantum computing is at the forefront.”

Microsoft plans to integrate quantum computing into its existing Azure Quantum Elements product, which already integrates HPC and AI; the quantum workloads may run on a Quantinuum machine. You can read more about Azure Quantum Elements in my previous Forbes article, which describes how Microsoft researchers used HPC and AI to screen 32 million candidate materials in search of new, more efficient materials for lithium-ion batteries.

Microsoft and Quantinuum also share interests in chemistry and materials science. Quantinuum offers a cutting-edge quantum chemistry platform called InQuanto that enables complex simulations of molecules and materials. The platform complements Microsoft’s Azure Quantum Elements.

Dr. Strabley said a key factor in achieving these advances is the close collaboration between Quantinuum and Microsoft, both end-to-end companies with expertise spanning hardware and software. Their respective error correction and logical qubit mapping teams exchange ideas and co-create new solutions to drive quantum computing forward.

Eliminating quantum noise

We must have a viable solution for quantum error correction before we can build quantum machines capable of solving complex problems in areas such as climate modeling, large-scale financial optimization, and high-order physics simulations. However, error correction is elusive and complex because of the no-cloning theorem, which makes it impossible to copy quantum information the way a classical computer copies bits. Joint research by Microsoft and Quantinuum may lead to solutions that remove this barrier.
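
To see why copying is off the table, here is a minimal sketch of the standard no-cloning argument (a textbook derivation, not anything specific to this study): if a single unitary could clone two different arbitrary states, the linearity of quantum mechanics would force those states to be identical or orthogonal.

```latex
% Sketch of the no-cloning theorem. Suppose a single unitary U could clone
% arbitrary states:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
%   U(|\phi\rangle \otimes |0\rangle) = |\phi\rangle \otimes |\phi\rangle
% Unitaries preserve inner products, so comparing the two equations gives
\[
  \langle \phi | \psi \rangle = \langle \phi | \psi \rangle^{2},
\]
% which forces the overlap to be 0 or 1: only mutually orthogonal states can
% be copied, never an arbitrary unknown superposition.
```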

Because quantum information cannot be directly copied, correcting qubit errors relies on an alternative approach: logical qubits. In this study, quantum information was encoded across 30 entangled physical qubits on Quantinuum’s machine, which together formed four reliable logical qubits. To be useful, a logical qubit must have a lower error rate than the physical qubits used to create it. Microsoft’s qubit virtualization system incorporates error correction technology to deliver that reliability.

Microsoft uses a method called active syndrome extraction to diagnose and fix qubit errors without destroying the quantum state. Depending on the QEC code used, syndrome measurements can determine whether an error occurred, as well as its location and type. Because Microsoft’s approach accounts for noise at the logical qubit level, overall reliability is significantly improved. The effect is similar to the signal improvement provided by noise-canceling headphones: noisy physical qubits are turned into highly reliable logical qubits.
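
To make syndrome extraction concrete, here is a minimal sketch using the three-qubit bit-flip repetition code, written with the open-source Qiskit library. This is a toy illustration of the general technique; it is not the Steane or Carbon code, nor the Microsoft/Quantinuum software used in the study.

```python
# Ancilla-based syndrome extraction on the 3-qubit bit-flip code (toy example).
# Requires: pip install qiskit
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)   # qubits 0-2: data, qubits 3-4: syndrome ancillas

# Encode one logical qubit: a|0> + b|1>  ->  a|000> + b|111>.
qc.cx(0, 1)
qc.cx(0, 2)

# Inject a deliberate bit-flip on data qubit 1 for demonstration.
qc.x(1)

# Syndrome extraction: each ancilla records the parity of a pair of data
# qubits, so the error is diagnosed without measuring the encoded state itself.
qc.cx(0, 3); qc.cx(1, 3)    # ancilla 3 <- parity of data qubits 0 and 1
qc.cx(1, 4); qc.cx(2, 4)    # ancilla 4 <- parity of data qubits 1 and 2
qc.measure(3, 0)
qc.measure(4, 1)

# Syndrome (1, 1) uniquely points at qubit 1; a decoder would apply X there.
print(qc)
```

Measuring only parities is the key move: the ancillas reveal where the error sits while leaving the superposition encoded in the data qubits intact.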

The success of this experiment also relies on having a high-performance quantum computer. Quantinuum’s H2 features a state-of-the-art trapped-ion, shuttle-based processor with best-in-class 99.8% two-qubit gate fidelity, along with 32 fully connected qubits and a unique quantum charge-coupled device (QCCD) architecture.

Notably, Quantinuum also has extensive experience with logical qubits. It published the first research paper demonstrating a fault-tolerant, end-to-end circuit of entangled logical qubits using real-time error correction. That work was also the first time two error-corrected logical qubits performed a circuit with higher fidelity than their constituent physical qubits. You can read my article about it here.

Prior to the H2 launch, I was invited to attend a quantum briefing in Broomfield, Colorado. I’ve also written a detailed white paper on the machine’s capabilities and features, which you can read here. In short, the H2’s benchmark results are very impressive.

Real-time error correction and post-selection

This study not only provides valuable quantum error correction data, but also produces interesting results because it applies two error correction codes in two different ways, allowing a comparison between methods. Specifically, the Steane code is used for real-time error correction, and the Carbon code is used with post-selection.

The Steane code uses seven physical qubits to encode one logical qubit. The researchers used it to implement proactive, real-time error correction, which requires additional ancilla qubits to detect and correct any errors that occur during the computation.
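
For illustration, the sketch below shows how Steane-code syndromes pinpoint an error, using the textbook construction of the [[7,1,3]] Steane code from the classical [7,4,3] Hamming code. This is standard material, not code from the study.

```python
# Decoding a single bit-flip error on the Steane code (textbook construction).
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code; the Steane code reuses it
# for both its X-type and Z-type stabilizer generators. Column j, read as a
# binary number (top row = least significant bit), equals j + 1.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(error):
    """Syndrome of a bit-flip pattern over the 7 data qubits (arithmetic mod 2)."""
    return (H @ error) % 2

e = np.zeros(7, dtype=int)
e[4] = 1                    # a single X error on qubit 4 (0-indexed)
s = syndrome(e)
position = int("".join(str(bit) for bit in s[::-1]), 2) - 1
print(s, "-> flip qubit", position)   # [1 0 1] -> flip qubit 4
```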

The Carbon code, by contrast, is more circuit-efficient, has a larger code distance, and supports post-selection when necessary. The larger a code’s distance, the more errors it can withstand. The Carbon code’s circuit efficiency and error correction capability also keep the number of runs discarded in post-selection to a minimum.

Compared to the Steane code, the Carbon code has a much higher threshold and can tolerate higher error rates. To help maintain the integrity of quantum information, the Carbon code is constructed so that errors produce identifiable syndromes that can be corrected or, failing that, filtered out through post-selection.
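
The relationship between distance and errors is a standard coding-theory fact (not specific to this study): a code of distance d can correct up to t arbitrary single-qubit errors while detecting up to d − 1 of them.

```latex
\[
  t_{\text{correct}} = \left\lfloor \frac{d - 1}{2} \right\rfloor,
  \qquad
  t_{\text{detect}} = d - 1 .
\]
```

Detection reaching further than correction is precisely the gap post-selection exploits: errors beyond the correction radius can still be flagged, and the affected run discarded rather than mis-corrected.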

Insights gained from running two error correction methods

While both codes suppressed the logical error rate significantly below the physical error rate, the Carbon code showed the larger gain: an 800-fold reduction, compared with the Steane code’s still impressive 500-fold reduction. The performance difference between the two codes is likely due to the Carbon code’s greater error correction capability: its syndrome extraction is more efficient and therefore introduces fewer errors, and its larger code distance lets it tolerate more of them.
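
To put those factors in perspective, here is a purely illustrative back-of-the-envelope calculation. It takes the article’s 99.8% two-qubit gate fidelity as a stand-in physical error rate; the study’s actual rates are defined per logical circuit, so the resulting numbers are indicative only.

```python
# Illustrative only: scaling an assumed physical error rate by the reported
# suppression factors. 0.002 comes from the 99.8% two-qubit gate fidelity
# quoted above; it is a stand-in, not the paper's measured physical rate.
p_physical = 1 - 0.998

for code, factor in [("Steane", 500), ("Carbon", 800)]:
    print(f"{code}: ~{p_physical / factor:.1e} effective logical error rate")
# Steane: ~4.0e-06 effective logical error rate
# Carbon: ~2.5e-06 effective logical error rate
```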

One reason for using post-selection is to demonstrate that some errors can be detected but not reliably corrected. When such an error is detected during a run, the run can be discarded rather than allowed to contaminate the results.

Under certain conditions, post-selection may also be more robust to noise. For example, if a syndrome measurement produces a false positive in error correction mode, the unnecessary corrective action that follows will itself introduce noise. In error detection mode, by contrast, the flagged data is simply discarded without any further action.
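
In software terms, post-selection is simply a filter over completed runs. Below is a minimal sketch, assuming a hypothetical per-run record of whether the observed syndromes were correctable; this data shape is invented for illustration and is not the actual Quantinuum/Microsoft tooling.

```python
# Post-selection as a filter: keep runs whose errors were correctable,
# discard runs whose syndromes flagged an uncorrectable error.
runs = [
    {"result": 0, "correctable": True},
    {"result": 1, "correctable": False},  # detected but uncorrectable: discard
    {"result": 0, "correctable": True},
]

kept = [run["result"] for run in runs if run["correctable"]]
discard_rate = 1 - len(kept) / len(runs)
print(f"kept {len(kept)}/{len(runs)} runs, discard rate {discard_rate:.0%}")
```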

In the experiments, error correction was successfully applied to most runs. In a small number of runs, the researchers detected errors that the code could not fix, so those runs were discarded. The vast majority of errors in this study were corrected before the encoded information was corrupted, and only a small proportion were uncorrectable.

The research team stated that there was no technical reason why on-the-fly decoding could not have been used in all experiments. Running both methods gave the scientists a way to compare the impact of each.

Next steps for Quantinuum and Microsoft

In addition to their joint work, both Microsoft and Quantinuum have internal roadmaps to drive future growth. Longer term, Quantinuum is considering quantum machines with 1,000 logical qubits. At today’s encoding ratio, that would require roughly 7,500 physical qubits.

Quantinuum plans to launch a new H-series quantum computer called Helios in 2025. Dr. Strabley explained that Helios will be offered both as a cloud-based service and as an on-premises deployment. Based on the recent announcement with Microsoft, she expects Helios to support 10 or more logical qubits, which she sees as a rapid expansion of the system’s capability compared to previous generations.

Meanwhile, once Microsoft integrates highly reliable logical qubits into Azure Quantum Elements, the product will combine high-performance cloud computing, advanced artificial intelligence models, and improved quantum computing capabilities. Microsoft plans to use logical qubits to scale hybrid supercomputers to a reliability of just one error per 100 million operations.

The two companies also share a common interest in topological research. Quantinuum’s work focuses on using non-Abelian states for quantum information processing and on braiding non-Abelian anyons to create universal gates. Meanwhile, Microsoft’s research is focused on developing topological qubits that take advantage of built-in error protection and digital control. So far, the Microsoft research team has made significant progress on topological qubits.

Wrap up

Microsoft’s qubit virtualization system, combined with Quantinuum’s trapped-ion quantum computer and its QCCD architecture, accomplished what was impossible a year ago: executing 14,000 experiments without a single error. This is not just an incremental advance but an important step forward in quantum error correction.

The success of this research doesn’t just benefit these two companies; it impacts the entire quantum ecosystem and provides evidence that reliable logical qubits can play an important role in solving future problems. This work points to a future in which thousands of reliable logical qubits help create solutions to complex scientific challenges, from chemistry and materials science to drug discovery, clean energy research, financial modeling, logistics optimization, and climate prediction.
