More stories

  • AI sees what doctors miss: Fatty liver disease hidden in chest x-rays

    Fatty liver disease, caused by the accumulation of fat in the liver, is estimated to affect one in four people worldwide. Left untreated, it can lead to serious complications such as cirrhosis and liver cancer, making early detection and treatment crucial.
    Currently, standard tests for diagnosing fatty liver disease include ultrasound, CT, and MRI, which require costly specialized equipment and facilities. In contrast, chest X-rays are performed more frequently, are relatively inexpensive, and involve low radiation exposure. Although chest X-rays are primarily used to examine the condition of the lungs and heart, they also capture part of the liver, making it possible to detect signs of fatty liver disease. However, the relationship between chest X-rays and fatty liver disease has rarely been studied in depth.
    Therefore, a research group led by Associate Professor Sawako Uchida-Kobayashi and Associate Professor Daiju Ueda at Osaka Metropolitan University’s Graduate School of Medicine developed an AI model that can detect the presence of fatty liver disease from chest X-ray images.
    In this retrospective study, a total of 6,599 chest X-ray images from 4,414 patients were used to develop an AI model trained against controlled attenuation parameter (CAP) scores. The model proved highly accurate, with an area under the receiver operating characteristic curve (AUC) ranging from 0.82 to 0.83.
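    As a rough illustration of the evaluation step, the sketch below derives binary fatty-liver labels from CAP scores and computes an ROC AUC for a set of model predictions. It is a minimal sketch, not the authors' pipeline: the 248 dB/m CAP cutoff, the synthetic scores, and the placeholder predictions are all assumptions; a real system would generate the predictions with a deep learning model trained on the X-ray images.

```python
# Minimal sketch (not the study's code): label derivation from CAP scores
# and AUC evaluation. All data below are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical CAP scores (dB/m); ~248 dB/m is a commonly cited steatosis
# cutoff, used here only as an assumption.
cap_scores = rng.normal(260, 40, size=1000)
labels = (cap_scores >= 248).astype(int)

# Placeholder model outputs standing in for CNN predictions from X-rays.
pred_probs = np.clip(0.25 * labels + rng.normal(0.4, 0.2, size=1000), 0, 1)

print(f"AUC: {roc_auc_score(labels, pred_probs):.2f}")  # study reports 0.82-0.83
```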
    “The development of diagnostic methods using easily obtainable and inexpensive chest X-rays has the potential to improve fatty liver detection. We hope it can be put into practical use in the future,” stated Associate Professor Uchida-Kobayashi.

  • Quantum computers just got an upgrade – and it’s 10× more efficient

    Quantum computers can solve extraordinarily complex problems, unlocking new possibilities in fields such as drug development, encryption, AI, and logistics. Now, researchers at Chalmers University of Technology in Sweden have developed a highly efficient amplifier that activates only when reading information from qubits. Thanks to its smart design, it consumes just one-tenth of the power required by the best amplifiers available today. This reduces qubit decoherence and lays the foundation for more powerful quantum computers with significantly more qubits and enhanced performance.
    Bits, the building blocks of a conventional computer, can only ever take the value 1 or 0. By contrast, the building blocks of a quantum computer, quantum bits or qubits, can exist in superpositions that combine 1 and 0 in any proportion. This means that a 20-qubit quantum computer can represent over a million different states simultaneously. This phenomenon, called superposition, is one of the key reasons quantum computers can solve exceptionally complex problems beyond the capabilities of today’s conventional supercomputers.
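    The “over a million” figure follows directly from the exponential growth of the state space: n qubits span 2^n basis states. A one-line check:

```python
# n qubits span 2**n basis states; 20 qubits already exceed a million.
n_qubits = 20
print(2 ** n_qubits)  # 1048576
```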
    Amplifiers are essential – but cause decoherence
    To utilize a quantum computer’s computational power, qubits must be measured and converted into interpretable information. This process requires extremely sensitive microwave amplifiers to ensure that these weak signals are accurately detected and read. However, reading quantum information is an extremely delicate business: even the slightest temperature fluctuation, noise, or electromagnetic interference can cause qubits to lose their quantum state, rendering the information unusable. Because the amplifiers themselves dissipate heat, they also contribute to decoherence. As a result, researchers in this field are always in pursuit of more efficient qubit amplifiers. Now, Chalmers researchers have taken an important step forward with their new, highly efficient amplifier.
    “This is the most sensitive amplifier that can be built today using transistors. We’ve now managed to reduce its power consumption to just one-tenth of that required by today’s best amplifiers – without compromising performance. We hope and believe that this breakthrough will enable more accurate readout of qubits in the future,” says Yin Zeng, a doctoral student in terahertz and millimeter wave technology at Chalmers, and the first author of the study published in the journal IEEE Transactions on Microwave Theory and Techniques.
    An essential breakthrough in scaling up quantum computers
    This advance could be significant in scaling up quantum computers to accommodate far more qubits than today. Chalmers has been actively engaged in this field for many years through a national research program, the Wallenberg Centre for Quantum Technology. As the number of qubits increases, so does the computer’s computational power and capacity to handle highly complex calculations. However, larger quantum systems also require more amplifiers, increasing overall power consumption and, with it, the heat that drives qubit decoherence.

    “This study offers a solution for the future upscaling of quantum computers, where the heat generated by these qubit amplifiers poses a major limiting factor,” says Jan Grahn, professor of microwave electronics at Chalmers and Yin Zeng’s principal supervisor.
    Activated only when needed
    Unlike other low-noise amplifiers, the new amplifier developed by the Chalmers researchers is pulse-operated, meaning that it is activated only when needed for qubit amplification rather than being always switched on.
    “This is the first demonstration of low-noise semiconductor amplifiers for quantum readout in pulsed operation that preserves performance while drastically reducing power consumption compared with the current state of the art,” says Jan Grahn.
    Since quantum information is transmitted in pulses, one of the key challenges was to ensure that the amplifier switches on quickly enough to keep pace with the qubit readout. The Chalmers team addressed this with a control algorithm that optimizes the amplifier’s pulsed operation. To validate their approach, they also developed a novel technique for measuring the noise and amplification of a pulse-operated low-noise microwave amplifier.
    “We used genetic programming to enable smart control of the amplifier. As a result, it responded much faster to the incoming qubit pulse, in just 35 nanoseconds,” says Yin Zeng.
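    The article does not detail the genetic-programming setup, so the toy sketch below substitutes a generic genetic algorithm: candidate control settings for the bias pulse are scored by an invented cost (slow turn-on and overshoot are penalized), and the fittest candidates are mutated over generations. Every name and number in it is an assumption for illustration only.

```python
# Toy genetic-algorithm sketch of "smart" pulse control (illustrative only;
# the cost function and parameters are invented, not the Chalmers design).
import random

def cost(params):
    rise_time, overshoot = params
    # Penalize slow turn-on and overshoot beyond a 5% tolerance (assumed).
    return rise_time + 5.0 * max(0.0, overshoot - 0.05)

def mutate(params, scale=0.05):
    return tuple(max(0.0, p + random.gauss(0.0, scale)) for p in params)

random.seed(1)
population = [(random.uniform(0.0, 1.0), random.uniform(0.0, 0.2))
              for _ in range(20)]

for _ in range(50):                       # evolve for 50 generations
    population.sort(key=cost)
    parents = population[:5]              # keep the 5 fittest candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = min(population, key=cost)
print(f"best rise time {best[0]:.3f}, overshoot {best[1]:.3f}")
```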

    More information about the study:
    Read the article “Pulsed HEMT LNA Operation for Qubit Readout” in IEEE Transactions on Microwave Theory and Techniques.
    The article is authored by Yin Zeng and Jan Grahn, both active at the Terahertz and Millimeter Wave Technology Laboratory at the Department of Microtechnology and Nanoscience at Chalmers University of Technology, and by Jörgen Stenarson and Peter Sobis, both active at Low Noise Factory AB.
    The amplifier has been developed using the Kollberg Laboratory at Chalmers University of Technology and at Low Noise Factory AB in Gothenburg, Sweden.
    The research project is funded by the Chalmers Centre for Wireless Infrastructure Technology (WiTECH) and by the Vinnova program Smarter electronic systems.

  • Your CT scan could reveal a hidden heart risk—and AI just learned how to find it

    Mass General Brigham researchers, working with the United States Department of Veterans Affairs (VA), have developed a new AI tool that searches previously collected CT scans to identify individuals with high coronary artery calcium (CAC) levels, which place them at greater risk for cardiovascular events. Their research, published in NEJM AI, showed that the tool, called AI-CAC, had high accuracy and predictive value for future heart attacks and 10-year mortality. The findings suggest that implementing such a tool widely may help clinicians assess their patients’ cardiovascular risk.
    “Millions of chest CT scans are taken each year, often in healthy people, for example to screen for lung cancer. Our study shows that important information about cardiovascular risk is going unnoticed in these scans,” said senior author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham. “Our study shows that AI has the potential to change how clinicians practice medicine and enable physicians to engage with patients earlier, before their heart disease advances to a cardiac event.”
    Chest CT scans can detect calcium deposits in the heart and arteries that increase the risk of a heart attack. The gold standard for quantifying CAC uses “gated” CT scans, which synchronize to the heartbeat to reduce motion during the scan. But most chest CT scans obtained for routine clinical purposes are “nongated.”
    The researchers recognized that CAC could still be detected on these nongated scans, which led them to develop AI-CAC, a deep learning algorithm that analyzes nongated scans and quantifies CAC to help predict the risk of cardiovascular events. They trained the model on chest CT scans collected as part of the usual care of veterans across 98 VA medical centers, then tested AI-CAC’s performance on 8,052 CT scans to simulate CAC screening in routine imaging tests.
    The researchers found the AI-CAC model was 89.4% accurate at determining whether a scan contained CAC or not. For those with CAC present, the model was 87.3% accurate at determining whether the score was higher or lower than 100, indicating a moderate cardiovascular risk. AI-CAC was also predictive of 10-year all-cause mortality — those with a CAC score of over 400 had a 3.49 times higher risk of death over a 10-year period than patients with a score of zero. Of the patients the model identified as having very high CAC scores (greater than 400), four cardiologists verified that almost all of them (99.2%) would benefit from lipid lowering therapy.
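    For readers keeping track of the thresholds in that paragraph, here is the score-to-risk mapping as a small sketch. The cutoffs at 0, 100, and 400 come from the article; the category wording is an assumption, and this is of course not the AI-CAC model itself, which reads the scans.

```python
# Sketch of the CAC thresholds cited in the article (0, 100, 400).
# Category labels are illustrative, not clinical guidance.
def cac_risk_category(cac_score: float) -> str:
    if cac_score == 0:
        return "no detectable coronary calcium"
    if cac_score < 100:
        return "calcium present, below the moderate-risk threshold"
    if cac_score <= 400:
        return "moderate risk (score of 100 or more)"
    return "very high risk (score over 400); lipid-lowering therapy often considered"

for score in (0, 50, 150, 900):
    print(score, "->", cac_risk_category(score))
```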
    “At present, VA imaging systems contain millions of nongated chest CT scans that may have been taken for other purposes, but only around 50,000 gated studies. This presents an opportunity for AI-CAC to leverage routinely collected nongated scans for cardiovascular risk evaluation and to enhance care,” said first author Raffi Hagopian, MD, a cardiologist and researcher in the Applied Innovations and Medical Informatics group at the VA Long Beach Healthcare System. “Using AI for tasks like CAC detection can help shift medicine from a reactive approach to the proactive prevention of disease, reducing long-term morbidity, mortality, and healthcare costs.”
    Limitations to the study include the fact that the algorithm was developed on an exclusively veteran population. The team hopes to conduct future studies in the general population and test whether the tool can assess the impact of lipid-lowering medications on CAC scores.
    Authorship: In addition to Aerts, Mass General Brigham authors include Simon Bernatz and Leonard Nürnberg. Additional authors include Raffi Hagopian, Timothy Strebel, Gregory A. Myers, Erik Offerman, Eric Zuniga, Cy Y. Kim, Angie T. Ng, James A. Iwaz, Sunny P. Singh, Evan P. Carey, Michael J. Kim, R. Spencer Schaefer, Jeannie Yu, and Amilcare Gentili.
    Funding: This work was funded by the Veterans Affairs health care system.

  • Artificial intelligence isn’t hurting workers—It might be helping

    As artificial intelligence reshapes workplaces worldwide, a new study provides early evidence suggesting AI exposure has not, thus far, caused widespread harm to workers’ mental health or job satisfaction. In fact, the data reveals that AI may even be linked to modest improvements in worker physical health, particularly among employees with less than a college degree.
    But the authors caution that it is too soon to draw definitive conclusions.
    The paper, “Artificial Intelligence and the Wellbeing of Workers,” published June 23 in Scientific Reports, draws on two decades of longitudinal data from the German Socio-Economic Panel. With that rich data, the researchers — Osea Giuntella of the University of Pittsburgh and the National Bureau of Economic Research (NBER), Luca Stella of the University of Milan and the Berlin School of Economics, and Johannes King of the German Ministry of Finance — explored how workers in AI-exposed occupations have fared relative to workers in less-exposed roles.
    “Public anxiety about AI is real, but the worst-case scenarios are not inevitable,” said Professor Stella, who is also affiliated with independent European bodies the Center for Economic Studies (CESifo) and the Institute for Labor Economics (IZA). “So far, we find little evidence that AI adoption has undermined workers’ well-being on average. If anything, physical health seems to have slightly improved, likely due to declining job physical intensity and overall job risk in some of the AI-exposed occupations.”
    Yet the study also highlights reasons for caution.
    The analysis relies primarily on a task-based measure of AI exposure — considered more objective — but alternative estimates based on self-reported exposure reveal small negative effects on job and life satisfaction. In addition, the sample excludes younger workers and only covers the early phases of AI diffusion in Germany.
    “We may simply be too early in the AI adoption curve to observe its full effects,” Stella emphasized. “AI’s impact could evolve dramatically as technologies advance, penetrate more sectors, and alter work at a deeper level.”
    Key findings from the study include:

      • No significant average effects of AI exposure on job satisfaction, life satisfaction, or mental health.
      • Small improvements in self-rated physical health and health satisfaction, especially among lower-educated workers.
      • Evidence of reduced physical job intensity, suggesting that AI may alleviate physically demanding tasks.
      • A modest decline in weekly working hours, without significant changes in income or employment rates.
      • Small but negative effects of self-reported AI exposure on subjective well-being, reinforcing the need for more granular future research.

    Owing to data availability, the study focuses on Germany — a country with strong labor protections and a gradual pace of AI adoption. The co-authors noted that outcomes may differ in more flexible labor markets or among younger cohorts entering increasingly AI-saturated workplaces.
    “This research is an early snapshot, not the final word,” said Pitt’s Giuntella, who has previously conducted significant research into the effects of robotics on households, labor, and different types of workers. “As AI adoption accelerates, continued monitoring of its broader impacts on work and health is essential. Technology alone doesn’t determine outcomes — institutions and policies will decide whether AI enhances or erodes the conditions of work.”

  • Quantum dice: Scientists harness true randomness from entangled photons

    Randomness is incredibly useful. People often draw straws, throw dice or flip coins to make fair choices. Random numbers can enable auditors to make completely unbiased selections. Randomness is also key in security; if a password or code is an unguessable string of numbers, it’s harder to crack. Many of our cryptographic systems today use random number generators to produce secure keys.
    But how do you know that a random number is truly random? Classical computer algorithms can only create pseudo-random numbers, and someone with enough knowledge of the algorithm or the system could manipulate it or predict the next number. An expert in sleight of hand could rig a coin flip to guarantee a heads or tails result. Even the most careful coin flips can have bias; with enough study, their outcomes could be predicted.
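    A quick way to see the problem with pseudo-randomness: a software generator is completely determined by its seed, so anyone who knows the algorithm and the seed can reproduce every “random” draw.

```python
# Pseudo-random numbers are reproducible: same seed, same sequence.
import random

random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]

random.seed(42)  # an attacker who learns the seed can do exactly this
second = [random.randint(0, 9) for _ in range(5)]

print(first, second, first == second)  # identical: fully predictable
```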
    “True randomness is something that nothing in the universe can predict in advance,” said Krister Shalm, a physicist at the National Institute of Standards and Technology (NIST). Even if a random number generator used seemingly random processes in nature, it would be hard to verify that those numbers are truly random, Shalm added.
    Einstein believed that nature isn’t random, famously saying, “God does not play dice with the universe.” Scientists have since proved that Einstein was wrong. Unlike dice or computer algorithms, quantum mechanics is inherently random. Carrying out a quantum experiment called a Bell test, Shalm and his team have transformed this source of true quantum randomness into a traceable and certifiable random-number service.
    “If God does play dice with the universe, then you can turn that into the best random number generator that the universe allows,” Shalm said. “We really wanted to take that experiment out of the lab and turn it into a useful public service.”
    To make that happen, NIST researchers and their colleagues at the University of Colorado Boulder created the Colorado University Randomness Beacon (CURBy). CURBy produces random numbers automatically and broadcasts them daily through a website for anyone to use.
    At the heart of this service is the NIST-run Bell test, which provides truly random results. This randomness acts as a kind of raw material that the rest of the researchers’ setup “refines” into random numbers published by the beacon.

    The Bell test measures pairs of “entangled” photons whose properties are correlated even when separated by vast distances. When researchers measure an individual particle, the outcome is random, but the properties of the pair are more correlated than classical physics allows, enabling researchers to verify the randomness. Einstein called this quantum nonlocality “spooky action at a distance.”
    This is the first random number generator service to use quantum nonlocality as a source of its numbers, and the most transparent source of random numbers to date. That’s because the results are certifiable and traceable to a greater extent than ever before.
    “CURBy is one of the first publicly available services that operates with a provable quantum advantage. That’s a big milestone for us,” Shalm explained. “The quality and origin of these random bits can be directly certified in a way that conventional random number generators are unable to.”
    NIST performed one of the first complete experimental Bell tests in 2015, which firmly established that quantum mechanics is truly random. In 2018, NIST pioneered methods to use these Bell tests to build the world’s first sources of true randomness.
    However, turning these quantum correlations into random numbers is hard work. NIST’s first breakthrough demonstrations of the Bell test required months of setup to run for a few hours, and it took a great deal of time to collect enough data to generate 512 bits of true randomness. Shalm and the team spent the past few years building the experiment to be robust and to run automatically so it can provide random numbers on demand. In its first 40 days of operation, the protocol produced random numbers 7,434 times out of 7,454 attempts, a 99.7% success rate.
    The process starts by generating a pair of entangled photons inside a special nonlinear crystal. The photons travel via optical fiber to separate labs at opposite ends of the hall. Once the photons reach the labs, their polarizations are measured. The outcomes of these measurements are truly random. This process is repeated 250,000 times per second.

    NIST passes millions of these quantum coin flips to a computer program at the University of Colorado Boulder. Special processing steps and strict protocols are used to turn the outcomes of the quantum measurements on entangled photons into 512 random bits of binary code (0s and 1s). The result is a set of random bits that no one, not even Einstein, could have predicted. In some sense, this system acts as the universe’s best coin flip.
    NIST and its collaborators added the ability to trace and verify every step in the randomness generation process. They developed the Twine protocol, a novel set of quantum-compatible blockchain technologies that enable multiple different entities to work together to generate and certify the randomness from the Bell test. The Twine protocol marks each set of data for the beacon with a hash. Hashes are used in blockchain technology to mark sets of data with a digital fingerprint, allowing each block of data to be identified and scrutinized.
    The Twine protocol allows any user to verify the data behind each random number, explained Jasper Palfree, a research assistant on the project at the University of Colorado Boulder. The protocol can expand to let other random number beacons join the hash graph, creating a network of randomness that everyone contributes to but no individual controls.
    Intertwining these hash chains acts as a timestamp, linking the data for the beacon together into a traceable data structure. It also provides security, allowing Twine protocol participants to immediately spot manipulation of the data.
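    The Twine protocol itself is more involved, but the core idea of hash-chaining beacon outputs can be sketched in a few lines: each block’s digest folds in the previous digest, so altering any earlier block changes every digest after it. This is a generic illustration, not the actual Twine implementation.

```python
# Generic hash-chain sketch (not the Twine protocol): tampering with any
# block changes all subsequent digests, so manipulation is detectable.
import hashlib

def chain_digests(blocks):
    prev = b""
    digests = []
    for block in blocks:
        digest = hashlib.sha256(prev + block).digest()
        digests.append(digest)
        prev = digest
    return digests

daily_outputs = [b"random-bits-day-1", b"random-bits-day-2", b"random-bits-day-3"]
for day, digest in enumerate(chain_digests(daily_outputs), start=1):
    print(f"day {day}: {digest.hex()[:16]}...")
```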
    “The Twine protocol lets us weave together all these other beacons into a tapestry of trust,” Palfree added.
    Turning a complex quantum physics problem into a public service is exactly why this work appealed to Gautam Kavuri, a graduate student on the project. The whole process is open source and available to the public, allowing anyone to not only check their work, but even build on the beacon to create their own random number generator.
    CURBy can be used anywhere an independent, public source of random numbers would be useful, such as selecting jury candidates, making a random selection for an audit, or assigning resources through a public lottery.
    “I wanted to build something that is useful. It’s this cool thing that is the cutting edge of fundamental science,” Kavuri added. “NIST is a place where you have that freedom to pursue projects that are ambitious but also will give you something useful.”

  • Affordances in the brain: The human superpower AI hasn’t mastered

    How do you intuitively know that you can walk on a footpath and swim in a lake? Researchers from the University of Amsterdam have discovered unique brain activations that reflect how we can move our bodies through an environment. The study not only sheds new light on how the human brain works, but also shows where artificial intelligence is lagging behind. According to the researchers, AI could become more sustainable and human-friendly if it incorporated this knowledge about the human brain.
    When we see a picture of an unfamiliar environment — a mountain path, a busy street, or a river — we immediately know how we could move around in it: walk, cycle, swim or not go any further. That sounds simple, but how does your brain actually determine these action opportunities?
    PhD student Clemens Bartnik and a team of co-authors show how we make estimates of possible actions thanks to unique brain patterns. The team, led by computational neuroscientist Iris Groen, also compared this human ability with a large number of AI models, including ChatGPT. “AI models turned out to be less good at this and still have a lot to learn from the efficient human brain,” Groen concludes.
    Viewing images in the MRI scanner
    Using an MRI scanner, the team investigated what happens in the brain when people look at various photos of indoor and outdoor environments. The participants used a button to indicate whether the image invited them to walk, cycle, drive, swim, boat or climb. At the same time, their brain activity was measured.
    “We wanted to know: when you look at a scene, do you mainly see what is there — such as objects or colors — or do you also automatically see what you can do with it?” says Groen. “Psychologists call the latter ‘affordances’ — opportunities for action; imagine a staircase that you can climb, or an open field that you can run through.”
    Unique processes in the brain
    The team discovered that certain areas in the visual cortex become active in a way that cannot be explained by visible objects in the image. “What we saw was unique,” says Groen. “These brain areas not only represent what can be seen, but also what you can do with it.” The brain did this even when participants were not given an explicit action instruction. “These action possibilities are therefore processed automatically,” says Groen. “Even if you do not consciously think about what you can do in an environment, your brain still registers it.”

    The research thus demonstrates for the first time that affordances are not only a psychological concept, but also a measurable property of our brains.
    What AI doesn’t understand yet
    The team also compared how well AI algorithms — such as image recognition models or GPT-4 — can estimate what you can do in a given environment; they proved worse than humans at predicting possible actions. “When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn’t match the models’ internal calculations,” Groen explains.
    “Even the best AI models don’t give exactly the same answers as humans, even though it’s such a simple task for us,” Groen says. “This shows that our way of seeing is deeply intertwined with how we interact with the world. We connect our perception to our experience in a physical world. AI models can’t do that because they only exist in a computer.”
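    One standard way to make such brain-model comparisons is representational similarity analysis (RSA); whether the Amsterdam team used exactly this method is an assumption here. The sketch below compares pairwise dissimilarity structure between placeholder brain patterns and model features; because the placeholder data are random, the printed correlation hovers near zero, whereas structured real data would reveal genuine alignment or its absence.

```python
# Hedged RSA sketch with placeholder data (not the study's analysis).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_scenes = 30

brain_patterns = rng.normal(size=(n_scenes, 500))  # e.g., voxel responses
model_features = rng.normal(size=(n_scenes, 256))  # e.g., network activations

# Representational dissimilarity: pairwise distances across scenes.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"brain-model representational similarity: rho = {rho:.2f}")
```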
    AI can still learn from the human brain
    The research thus touches on larger questions about the development of reliable and efficient AI. “As more sectors — from healthcare to robotics — use AI, it is becoming important that machines not only recognize what something is, but also understand what it can do,” Groen explains. “For example, a robot that has to find its way in a disaster area, or a self-driving car that can tell a bike path apart from a driveway.”
    Groen also points out the sustainability aspect of AI. “Current AI training methods use a huge amount of energy and are often only accessible to large tech companies. More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly.”

  • Half of today’s jobs could vanish—Here’s how smart countries are future-proofing workers

    Artificial intelligence is spreading into many aspects of life, from communications and advertising to grading tests. But with the growth of AI comes a shake-up in the workplace.
    New research from the University of Georgia is shedding light on how different countries are preparing for how AI will impact their workforces.
    According to previous research, almost half of today’s jobs could vanish over the next 20 years. But it’s not all doom and gloom.
    Researchers also estimate that 65% of current elementary school students will have jobs in the future that don’t exist now. Most of these new careers will require advanced AI skills and knowledge.
    To tackle these challenges, governments around the world are taking steps to help their citizens gain the skills they’ll need. The present study examined 50 countries’ national AI strategies, focusing on policies for education and the workforce.
    Learning what other countries are doing could help the U.S. improve its own plans for workforce preparation in the era of AI, the researcher said.

    “AI skills and competencies are very important,” said Lehong Shi, author of the study and an assistant research scientist at UGA’s Mary Frances Early College of Education. “If you want to be competitive in other areas, it’s very important to prepare employees to work with AI in the future.”
    Some countries place a greater focus on training and education
    Shi used six indicators to evaluate each country’s prioritization of AI workforce training and education: the plan’s objective, how goals will be reached, examples of projects, how success will be measured, how projects will be supported, and the timeline for each project.
    Each nation was classified as giving high, medium, or low priority to preparing an AI-competent workforce, depending on how thoroughly each aspect of its plan was detailed.
    Of the countries studied, only 13 gave high prioritization to training the current workforce and improving AI education in schools. Eleven of those were European countries, with Mexico and Australia being the two exceptions. This may be because European nations tend to have more resources for training and cultures of lifelong learning, the researcher said.
    The United States was one of 23 countries that considered workforce training and AI education a medium priority, with a less detailed plan compared to countries that saw them as a high priority.

    Different countries prioritize different issues when it comes to AI preparation
    Some common themes emerged between countries, even when their approaches to AI differed. For example, almost every nation aimed to establish or improve AI-focused programs in universities. Some also aimed to improve AI education for K-12 students.
    On-the-job training was also a priority for more than half the countries, with some offering industry-specific training programs or internships. However, few focused on vulnerable populations such as the elderly or unemployed through programs to teach them basic AI skills.
    Shi stressed that just because a country gives lower priority to education and workforce preparation doesn’t mean AI isn’t on its radar. Some Asian countries, for example, put more effort into improving national security and health care rather than education.
    Cultivating interest in AI could help students prepare for careers
    Some countries took a lifelong approach to developing these specialized skills. Germany, for instance, emphasized creating a culture that encourages interest in AI. Spain started teaching kids AI-related skills as early as preschool.
    Of the many actions governments took, Shi noted one area that needs more emphasis when preparing future AI-empowered workplaces. “Human soft skills, such as creativity, collaboration and communication cannot be replaced by AI,” Shi said. “And they were only mentioned by a few countries.”
    Developing these sorts of “soft skills” is key to making sure students and employees continue to have a place in the workforce.
    This study was published in Human Resource Development Review.

  • Quantum breakthrough: ‘Magic states’ now easier, faster, and way less noisy

    For decades, quantum computers that perform calculations millions of times faster than conventional computers have remained a tantalizing yet distant goal. However, a new breakthrough in quantum physics may have just sped up the timeline.

    In an article published in PRX Quantum, researchers from the Graduate School of Engineering Science and the Center for Quantum Information and Quantum Biology at The University of Osaka devised a method that can be used to prepare high-fidelity “magic states” for use in quantum computers with dramatically less overhead and unprecedented accuracy.

    Quantum computers harness the fantastic properties of quantum mechanics, such as entanglement and superposition, to perform calculations much more efficiently than classical computers can. Such machines could catalyze innovations in fields as diverse as engineering, finance, and biotechnology. But before this can happen, a significant obstacle must be overcome.

    “Quantum systems have always been extremely susceptible to noise,” says lead researcher Tomohiro Itogawa. “Even the slightest perturbation in temperature or a single wayward photon from an external source can easily ruin a quantum computer setup, making it useless. Noise is absolutely the number one enemy of quantum computers.”

    Thus, scientists have become very interested in building so-called fault-tolerant quantum computers, which are robust enough to continue computing accurately even when subject to noise. Magic state distillation, in which a single high-fidelity quantum state is prepared from many noisy ones, is a popular method for creating such systems. But there is a catch.

    “The distillation of magic states is traditionally a very computationally expensive process because it requires many qubits,” explains Keisuke Fujii, senior author. “We wanted to explore whether there was any way of expediting the preparation of the high-fidelity states necessary for quantum computation.”

    Following this line of inquiry, the team was inspired to create a “level-zero” version of magic state distillation, in which a fault-tolerant circuit is developed at the physical-qubit or “zeroth” level rather than at higher, more abstract levels. In addition to requiring far fewer qubits, the new method reduced spatial and temporal overhead by a factor of roughly several dozen compared with the traditional version in numerical simulations.

    Itogawa and Fujii are optimistic that the era of quantum computing is not as far off as we imagine. Whether one calls it magic or physics, this technique certainly marks an important step toward the development of larger-scale quantum computers that can withstand noise.