Why 'erasure' could be key to practical quantum computing
Researchers have discovered a new method for correcting errors in the calculations of quantum computers, potentially clearing a major obstacle to a powerful new realm of computing.
In conventional computers, fixing errors is a well-developed field. Every cellphone requires checks and fixes to send and receive data over messy airwaves. Quantum computers offer enormous potential to solve certain complex problems that are impossible for conventional computers, but this power depends on harnessing extremely fleeting behaviors of subatomic particles. These computing behaviors are so ephemeral that even looking in on them to check for errors can cause the whole system to collapse.
In a theoretical paper published Aug. 9 in Nature Communications, an interdisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, and collaborators Yue Wu and Shruti Puri at Yale University and Shimon Kolkowitz at the University of Wisconsin-Madison, showed that they could dramatically improve a quantum computer’s tolerance for faults, and reduce the amount of redundant information needed to isolate and fix errors. The new technique increases the acceptable error rate four-fold, from 1% to 4%, which is practical for quantum computers currently in development.
“The fundamental challenge to quantum computers is that the operations you want to do are noisy,” said Thompson, meaning that calculations are prone to myriad modes of failure.
In a conventional computer, an error can be as simple as a bit of memory accidentally flipping from a 1 to a 0, or as messy as one wireless router interfering with another. A common approach for handling such faults is to build in some redundancy, so that each piece of data is compared with duplicate copies. However, that approach increases the amount of data needed and creates more possibilities for errors. Therefore, it only works when the vast majority of information is already correct. Otherwise, checking wrong data against wrong data leads deeper into a pit of error.
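The threshold behavior described above can be illustrated with the simplest redundancy scheme, a three-copy repetition code with majority voting. This is a minimal sketch for intuition only, not the error-correction scheme used in the research:

```python
# Why redundancy only helps below a threshold error rate:
# a 3-copy repetition code decoded by majority vote.

def logical_error_rate(p):
    """Probability that a majority vote over 3 noisy copies is wrong.

    Each copy flips independently with probability p; the vote fails
    when 2 or 3 of the copies flip: 3*p^2*(1-p) + p^3.
    """
    return 3 * p**2 * (1 - p) + p**3

# Below the threshold, redundancy suppresses errors:
print(logical_error_rate(0.01))  # ~0.000298, far better than 0.01
# Above it, checking wrong data against wrong data makes things worse:
print(logical_error_rate(0.6))   # 0.648, worse than 0.6
```

When the baseline error rate is small, adding copies drives the failure rate down sharply; when it is too large, the duplicates are more likely to agree on the wrong answer, which is exactly the "pit of error" the passage describes.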
“If your baseline error rate is too high, redundancy is a bad strategy,” Thompson said. “Getting below that threshold is the main challenge.”
Rather than focusing solely on reducing the number of errors, Thompson’s team essentially made errors more visible. The team delved deeply into the actual physical causes of error and engineered their system so that the most common source of error effectively erases the damaged data rather than merely corrupting it. Thompson said this behavior represents a particular kind of error known as an “erasure error,” which is fundamentally easier to weed out than data that is corrupted but still looks like all the other data.
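The advantage of erasure errors over silent corruption can be shown with the same toy repetition code. In this hypothetical sketch (not the team's actual system), an erasure flags its own location, so any surviving copy recovers the data, whereas an unflagged flip must be outvoted:

```python
# A minimal illustration of why known-location "erasure" errors are
# easier to correct than silent corruption. With 3 copies, majority
# voting tolerates only 1 unflagged flip, but erasure decoding
# tolerates 2 erasures, because erased copies announce themselves.

def decode_with_erasures(copies):
    """copies: list of 0/1 values, with None marking erased copies.

    Since errors here only erase (never silently flip) data, every
    surviving copy is trustworthy, so the first one suffices.
    """
    surviving = [c for c in copies if c is not None]
    return surviving[0]

print(decode_with_erasures([None, None, 1]))  # recovers 1 despite 2 erasures
```

Because the decoder knows exactly which copies failed, it never has to guess which data to distrust, which is the core reason erasure errors are fundamentally easier to weed out.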