More stories

  • Your CT scan could reveal a hidden heart risk—and AI just learned how to find it

    Mass General Brigham researchers, working with the United States Department of Veterans Affairs (VA), have developed a new AI tool that sifts through previously collected CT scans to identify individuals whose high coronary artery calcium (CAC) levels place them at greater risk for cardiovascular events. Their research, published in NEJM AI, showed that the tool, called AI-CAC, had high accuracy and predictive value for future heart attacks and 10-year mortality. The findings suggest that implementing such a tool widely could help clinicians assess their patients’ cardiovascular risk.
    “Millions of chest CT scans are taken each year, often in healthy people, for example to screen for lung cancer. Our study shows that important information about cardiovascular risk is going unnoticed in these scans,” said senior author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham. “Our study shows that AI has the potential to change how clinicians practice medicine and enable physicians to engage with patients earlier, before their heart disease advances to a cardiac event.”
    Chest CT scans can detect calcium deposits in the heart and arteries that increase the risk of a heart attack. The gold standard for quantifying CAC uses “gated” CT scans, which synchronize to the heartbeat to reduce motion during the scan. But most chest CT scans obtained for routine clinical purposes are “nongated.”
    The researchers recognized that CAC could still be detected on these nongated scans, which led them to develop AI-CAC, a deep learning algorithm to probe through the nongated scans and quantify CAC to help predict the risk of cardiovascular events. They trained the model on chest CT scans collected as part of the usual care of veterans across 98 VA medical centers and then tested AI-CAC’s performance on 8,052 CT scans to simulate CAC screening in routine imaging tests.
    The researchers found the AI-CAC model was 89.4% accurate at determining whether a scan contained CAC or not. For those with CAC present, the model was 87.3% accurate at determining whether the score was higher or lower than 100, a threshold indicating moderate cardiovascular risk. AI-CAC was also predictive of 10-year all-cause mortality — those with a CAC score of over 400 had a 3.49 times higher risk of death over a 10-year period than patients with a score of zero. Of the patients the model identified as having very high CAC scores (greater than 400), four cardiologists verified that almost all of them (99.2%) would benefit from lipid-lowering therapy.
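    As a rough illustration of how the score thresholds above translate into risk bands (an invented helper for illustration only, not the AI-CAC model or clinical guidance):

```python
def cac_risk_band(cac_score: float) -> str:
    """Map a coronary artery calcium (CAC) score to a coarse risk band.

    The cut-points of 100 and 400 mirror those discussed in the article;
    the labels are illustrative, not diagnostic categories.
    """
    if cac_score == 0:
        return "no detectable coronary calcium"
    if cac_score < 100:
        return "calcium present, below the moderate-risk threshold"
    if cac_score <= 400:
        return "moderate risk"
    return "very high score (the group flagged for lipid-lowering therapy)"

print(cac_risk_band(450))  # very high score
```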
    “At present, VA imaging systems contain millions of nongated chest CT scans that may have been taken for another purpose, compared with only around 50,000 gated studies. This presents an opportunity for AI-CAC to leverage routinely collected nongated scans for purposes of cardiovascular risk evaluation and to enhance care,” said first author Raffi Hagopian, MD, a cardiologist and researcher in the Applied Innovations and Medical Informatics group at the VA Long Beach Healthcare System. “Using AI for tasks like CAC detection can help shift medicine from a reactive approach to the proactive prevention of disease, reducing long-term morbidity, mortality and healthcare costs.”
    Limitations to the study include the fact that the algorithm was developed on an exclusively veteran population. The team hopes to conduct future studies in the general population and test whether the tool can assess the impact of lipid-lowering medications on CAC scores.
    Authorship: In addition to Aerts, Mass General Brigham authors include Simon Bernatz, and Leonard Nürnberg. Additional authors include Raffi Hagopian, Timothy Strebel, Gregory A. Myers, Erik Offerman, Eric Zuniga, Cy Y. Kim, Angie T. Ng, James A. Iwaz, Sunny P. Singh, Evan P. Carey, Michael J. Kim, R. Spencer Schaefer, Jeannie Yu, and Amilcare Gentili.
    Funding: This work was funded by the Veterans Affairs health care system.

  • Artificial intelligence isn’t hurting workers—It might be helping

    As artificial intelligence reshapes workplaces worldwide, a new study provides early evidence suggesting AI exposure has not, thus far, caused widespread harm to workers’ mental health or job satisfaction. In fact, the data reveals that AI may even be linked to modest improvements in worker physical health, particularly among employees with less than a college degree.
    But the authors caution: It is way too soon to draw definitive conclusions.
    The paper, “Artificial Intelligence and the Wellbeing of Workers,” published June 23 in Scientific Reports, uses two decades of longitudinal data from the German Socio-Economic Panel. Using that rich data, the researchers — Osea Giuntella of the University of Pittsburgh and the National Bureau of Economic Research (NBER), Luca Stella of the University of Milan and the Berlin School of Economics, and Johannes King of the German Ministry of Finance — explored how workers in AI-exposed occupations have fared in contrast to workers in less-exposed roles.
    “Public anxiety about AI is real, but the worst-case scenarios are not inevitable,” said Professor Stella, who is also affiliated with independent European bodies the Center for Economic Studies (CESifo) and the Institute for Labor Economics (IZA). “So far, we find little evidence that AI adoption has undermined workers’ well-being on average. If anything, physical health seems to have slightly improved, likely due to declining job physical intensity and overall job risk in some of the AI-exposed occupations.”
    Yet the study also highlights reasons for caution.
    The analysis relies primarily on a task-based measure of AI exposure — considered more objective — but alternative estimates based on self-reported exposure reveal small negative effects on job and life satisfaction. In addition, the sample excludes younger workers and only covers the early phases of AI diffusion in Germany.
    “We may simply be too early in the AI adoption curve to observe its full effects,” Stella emphasized. “AI’s impact could evolve dramatically as technologies advance, penetrate more sectors, and alter work at a deeper level.”
    Key findings from the study include:
    • No significant average effects of AI exposure on job satisfaction, life satisfaction, or mental health.
    • Small improvements in self-rated physical health and health satisfaction, especially among lower-educated workers.
    • Evidence of reduced physical job intensity, suggesting that AI may alleviate physically demanding tasks.
    • A modest decline in weekly working hours, without significant changes in income or employment rates.
    • Self-reported AI exposure suggests small but negative effects on subjective well-being, reinforcing the need for more granular future research.
    Because of data availability, the study focuses on Germany — a country with strong labor protections and a gradual pace of AI adoption. The co-authors noted that outcomes may differ in more flexible labor markets or among younger cohorts entering increasingly AI-saturated workplaces.
    “This research is an early snapshot, not the final word,” said Pitt’s Giuntella, who previously conducted significant research into the effect of robotics on households and labor, and on types of workers. “As AI adoption accelerates, continued monitoring of its broader impacts on work and health is essential. Technology alone doesn’t determine outcomes — institutions and policies will decide whether AI enhances or erodes the conditions of work.”

  • Quantum dice: Scientists harness true randomness from entangled photons

    Randomness is incredibly useful. People often draw straws, throw dice or flip coins to make fair choices. Random numbers can enable auditors to make completely unbiased selections. Randomness is also key in security; if a password or code is an unguessable string of numbers, it’s harder to crack. Many of our cryptographic systems today use random number generators to produce secure keys.
    But how do you know that a random number is truly random? Classical computer algorithms can only create pseudo-random numbers, and someone with enough knowledge of the algorithm or the system could manipulate it or predict the next number. An expert in sleight of hand could rig a coin flip to guarantee a heads or tails result. Even the most careful coin flips can have bias; with enough study, their outcomes could be predicted.
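    A minimal sketch of why algorithmic randomness is predictable: anyone who knows (or can recover) the seed of a pseudo-random generator can reproduce its entire output stream.

```python
import random

# Two pseudo-random generators seeded identically emit identical "random" draws,
# which is why algorithmic randomness can, in principle, be predicted or rigged.
gen_a = random.Random(2024)
gen_b = random.Random(2024)

draws_a = [gen_a.randint(0, 9) for _ in range(5)]
draws_b = [gen_b.randint(0, 9) for _ in range(5)]
assert draws_a == draws_b  # the "coin flips" were never unpredictable
```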
    “True randomness is something that nothing in the universe can predict in advance,” said Krister Shalm, a physicist at the National Institute of Standards and Technology (NIST). Even if a random number generator used seemingly random processes in nature, it would be hard to verify that those numbers are truly random, Shalm added.
    Einstein believed that nature isn’t random, famously saying, “God does not play dice with the universe.” Scientists have since proved that Einstein was wrong. Unlike dice or computer algorithms, quantum mechanics is inherently random. Carrying out a quantum experiment called a Bell test, Shalm and his team have transformed this source of true quantum randomness into a traceable and certifiable random-number service.
    “If God does play dice with the universe, then you can turn that into the best random number generator that the universe allows,” Shalm said. “We really wanted to take that experiment out of the lab and turn it into a useful public service.”
    To make that happen, NIST researchers and their colleagues at the University of Colorado Boulder created the Colorado University Randomness Beacon (CURBy). CURBy produces random numbers automatically and broadcasts them daily through a website for anyone to use.
    At the heart of this service is the NIST-run Bell test, which provides truly random results. This randomness acts as a kind of raw material that the rest of the researchers’ setup “refines” into random numbers published by the beacon.

    The Bell test measures pairs of “entangled” photons whose properties are correlated even when separated by vast distances. When researchers measure an individual particle, the outcome is random, but the properties of the pair are more correlated than classical physics allows, enabling researchers to verify the randomness. Einstein called this quantum nonlocality “spooky action at a distance.”
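    One standard way to make “more correlated than classical physics allows” precise is the CHSH form of the Bell inequality (the textbook expression, not necessarily the exact statistic NIST reports):

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local (classical) model obeys |S| <= 2, while entangled photons can reach
% |S| = 2*sqrt(2); measuring |S| > 2 rules out predetermined outcomes, which is
% the sense in which the results are certifiably random.
```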
    This is the first random number generator service to use quantum nonlocality as a source of its numbers, and the most transparent source of random numbers to date. That’s because the results are certifiable and traceable to a greater extent than ever before.
    “CURBy is one of the first publicly available services that operates with a provable quantum advantage. That’s a big milestone for us,” Shalm explained. “The quality and origin of these random bits can be directly certified in a way that conventional random number generators are unable to.”
    NIST performed one of the first complete experimental Bell tests in 2015, which firmly established that quantum mechanics is truly random. In 2018, NIST pioneered methods to use these Bell tests to build the world’s first sources of true randomness.
    However, turning these quantum correlations into random numbers is hard work. NIST’s first breakthrough demonstrations of the Bell test required months of setup to run for a few hours, and it took a great deal of time to collect enough data to generate 512 bits of true randomness. Shalm and the team spent the past few years building the experiment to be robust and to run automatically so it can provide random numbers on demand. In its first 40 days of operation, the protocol produced random numbers 7,434 times out of 7,454 attempts, a 99.7% success rate.
    The process starts by generating a pair of entangled photons inside a special nonlinear crystal. The photons travel via optical fiber to separate labs at opposite ends of the hall. Once the photons reach the labs, their polarizations are measured. The outcomes of these measurements are truly random. This process is repeated 250,000 times per second.

    NIST passes millions of these quantum coin flips to a computer program at the University of Colorado Boulder. Special processing steps and strict protocols are used to turn the outcomes of the quantum measurements on entangled photons into 512 random bits of binary code (0s and 1s). The result is a set of random bits that no one, not even Einstein, could have predicted. In some sense, this system acts as the universe’s best coin flip.
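    As a loose sketch of that final “refining” step (a stand-in only; the beacon’s actual pipeline uses certified randomness extractors with provable guarantees), a large batch of raw measurement outcomes can be condensed into 512 output bits:

```python
import hashlib

def distill_512_bits(raw_outcomes: list) -> str:
    """Condense many raw 0/1 measurement outcomes into a 512-bit string.

    SHA-512 stands in for the certified extraction the beacon really uses;
    the point is the shape of the pipeline: many imperfect bits in, 512 out.
    """
    digest = hashlib.sha512(bytes(raw_outcomes)).digest()  # 64 bytes
    return "".join(f"{byte:08b}" for byte in digest)       # 512 bits

# Placeholder input; the real beacon processes millions of outcomes per run.
print(len(distill_512_bits([0, 1, 1, 0] * 10_000)))  # 512
```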
    NIST and its collaborators added the ability to trace and verify every step in the randomness generation process. They developed the Twine protocol, a novel set of quantum-compatible blockchain technologies that enable multiple different entities to work together to generate and certify the randomness from the Bell test. The Twine protocol marks each set of data for the beacon with a hash. Hashes are used in blockchain technology to mark sets of data with a digital fingerprint, allowing each block of data to be identified and scrutinized.
    The Twine protocol allows any user to verify the data behind each random number, explained Jasper Palfree, a research assistant on the project at the University of Colorado Boulder. The protocol can expand to let other random number beacons join the hash graph, creating a network of randomness that everyone contributes to but no individual controls.
    Intertwining these hash chains acts as a timestamp, linking the data for the beacon together into a traceable data structure. It also provides security, allowing Twine protocol participants to immediately spot manipulation of the data.
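    A minimal sketch of the hash-chaining idea (greatly simplified; the actual Twine protocol coordinates multiple independent parties and quantum-compatible commitments):

```python
import hashlib
import json
import time

def make_block(data: str, prev_hash: str) -> dict:
    """Append a beacon record that commits to the hash of its predecessor.

    Because each block's fingerprint depends on the previous block, altering
    any earlier record invalidates every later hash, so tampering is obvious.
    """
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block("512 random bits from run 1", prev_hash="0" * 64)
second = make_block("512 random bits from run 2", prev_hash=genesis["hash"])
```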
    “The Twine protocol lets us weave together all these other beacons into a tapestry of trust,” Palfree added.
    Turning a complex quantum physics problem into a public service is exactly why this work appealed to Gautam Kavuri, a graduate student on the project. The whole process is open source and available to the public, allowing anyone to not only check their work, but even build on the beacon to create their own random number generator.
    CURBy can be used anywhere an independent, public source of random numbers would be useful, such as selecting jury candidates, making a random selection for an audit, or assigning resources through a public lottery.
    “I wanted to build something that is useful. It’s this cool thing that is the cutting edge of fundamental science,” Kavuri added. “NIST is a place where you have that freedom to pursue projects that are ambitious but also will give you something useful.”

  • Affordances in the brain: The human superpower AI hasn’t mastered

    How do you intuitively know that you can walk on a footpath and swim in a lake? Researchers from the University of Amsterdam have discovered unique brain activations that reflect how we can move our bodies through an environment. The study not only sheds new light on how the human brain works, but also shows where artificial intelligence is lagging behind. According to the researchers, AI could become more sustainable and human-friendly if it incorporated this knowledge about the human brain.
    When we see a picture of an unfamiliar environment — a mountain path, a busy street, or a river — we immediately know how we could move around in it: walk, cycle, swim or not go any further. That sounds simple, but how does your brain actually determine these action opportunities?
    PhD student Clemens Bartnik and a team of co-authors show how we make estimates of possible actions thanks to unique brain patterns. The team, led by computational neuroscientist Iris Groen, also compared this human ability with a large number of AI models, including ChatGPT. “AI models turned out to be less good at this and still have a lot to learn from the efficient human brain,” Groen concludes.
    Viewing images in the MRI scanner
    Using an MRI scanner, the team investigated what happens in the brain when people look at various photos of indoor and outdoor environments. The participants used a button to indicate whether the image invited them to walk, cycle, drive, swim, boat or climb. At the same time, their brain activity was measured.
    “We wanted to know: when you look at a scene, do you mainly see what is there — such as objects or colors — or do you also automatically see what you can do with it,” says Groen. “Psychologists call the latter ‘affordances’ — opportunities for action; imagine a staircase that you can climb, or an open field that you can run through.”
    Unique processes in the brain
    The team discovered that certain areas in the visual cortex become active in a way that cannot be explained by visible objects in the image. “What we saw was unique,” says Groen. “These brain areas not only represent what can be seen, but also what you can do with it.” The brain did this even when participants were not given an explicit action instruction. “These action possibilities are therefore processed automatically,” says Groen. “Even if you do not consciously think about what you can do in an environment, your brain still registers it.”

    The research thus demonstrates for the first time that affordances are not only a psychological concept, but also a measurable property of our brains.
    What AI doesn’t understand yet
    The team also compared how well AI algorithms — such as image recognition models or GPT-4 — can estimate what you can do in a given environment. The models were worse than humans at predicting possible actions. “When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn’t match the models’ internal calculations,” Groen explains.
    “Even the best AI models don’t give exactly the same answers as humans, even though it’s such a simple task for us,” Groen says. “This shows that our way of seeing is deeply intertwined with how we interact with the world. We connect our perception to our experience in a physical world. AI models can’t do that because they only exist in a computer.”
    AI can still learn from the human brain
    The research thus touches on larger questions about the development of reliable and efficient AI. “As more sectors — from healthcare to robotics — use AI, it is becoming important that machines not only recognize what something is, but also understand what it can do,” Groen explains. “For example, a robot that has to find its way in a disaster area, or a self-driving car that can tell apart a bike path from a driveway.”
    Groen also points out the sustainable aspect of AI. “Current AI training methods use a huge amount of energy and are often only accessible to large tech companies. More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly.”

  • Half of today’s jobs could vanish—Here’s how smart countries are future-proofing workers

    Artificial intelligence is spreading into many aspects of life, from communications and advertising to grading tests. But with the growth of AI comes a shake-up in the workplace.
    New research from the University of Georgia is shedding light on how different countries are preparing for how AI will impact their workforces.
    According to previous research, almost half of today’s jobs could vanish over the next 20 years. But it’s not all doom and gloom.
    Researchers also estimate that 65% of current elementary school students will have jobs in the future that don’t exist now. Most of these new careers will require advanced AI skills and knowledge.
    “Human soft skills, such as creativity, collaboration and communication cannot be replaced by AI.” — Lehong Shi, College of Education
    To tackle these challenges, governments around the world are taking steps to help their citizens gain the skills they’ll need. The present study examined 50 countries’ national AI strategies, focusing on policies for education and the workforce.
    Learning what other countries are doing could help the U.S. improve its own plans for workforce preparation in the era of AI, the researcher said.

    “AI skills and competencies are very important,” said Lehong Shi, author of the study and an assistant research scientist at UGA’s Mary Frances Early College of Education. “If you want to be competitive in other areas, it’s very important to prepare employees to work with AI in the future.”
    Some countries put larger focus on training, education
    Shi used six indicators to evaluate how strongly each country prioritizes AI workforce training and education: the plan’s objective, how goals will be reached, examples of projects, how success will be measured, how projects will be supported and the timelines for each project.
    Each nation was classified as giving high, medium or low priority to preparing an AI-competent workforce, depending on how thoroughly each aspect of its plan was detailed.
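    A hypothetical sketch of how such a rubric could be tallied (the six indicators follow the article; the numeric scoring and thresholds are invented for illustration and are not Shi’s method):

```python
INDICATORS = [
    "objective", "how_goals_are_reached", "example_projects",
    "success_measures", "project_support", "project_timelines",
]

def priority_level(detail_scores: dict) -> str:
    """Classify a national AI strategy as high, medium, or low priority.

    `detail_scores` rates how thoroughly each indicator is detailed (0-2);
    the thresholds below are illustrative only.
    """
    total = sum(detail_scores.get(ind, 0) for ind in INDICATORS)
    if total >= 9:
        return "high"
    if total >= 5:
        return "medium"
    return "low"

example = {ind: 2 for ind in INDICATORS[:4]}  # detailed on four of six indicators
print(priority_level(example))                # "medium" under these thresholds
```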
    Of the countries studied, only 13 gave high prioritization to training the current workforce and improving AI education in schools. Eleven of those were European countries, with Mexico and Australia being the two exceptions. This may be because European nations tend to have more resources for training and cultures of lifelong learning, the researcher said.
    The United States was one of 23 countries that considered workforce training and AI education a medium priority, with a less detailed plan compared to countries that saw them as a high priority.

    Different countries prioritize different issues when it comes to AI preparation
    Some common themes emerged between countries, even when their approaches to AI differed. For example, almost every nation aimed to establish or improve AI-focused programs in universities. Some also aimed to improve AI education for K-12 students.
    On-the-job training was also a priority for more than half the countries, with some offering industry-specific training programs or internships. However, few focused on vulnerable populations such as the elderly or unemployed through programs to teach them basic AI skills.
    Shi stressed that just because a country gives less prioritization to education and workforce preparation doesn’t mean AI isn’t on its radar. Some Asian countries, for example, put more effort into improving national security and health care rather than education.
    Cultivating interest in AI could help students prepare for careers
    Some countries took a lifelong approach to developing these specialized skills. Germany, for instance, emphasized creating a culture that encourages interest in AI. Spain started teaching kids AI-related skills as early as preschool.
    Of the many actions governments took, Shi noted one area that needs more emphasis when preparing future AI-empowered workplaces. “Human soft skills, such as creativity, collaboration and communication cannot be replaced by AI,” Shi said. “And they were only mentioned by a few countries.”
    Developing these sorts of “soft skills” is key to making sure students and employees continue to have a place in the workforce.
    This study was published in Human Resource Development Review.

  • Quantum breakthrough: ‘Magic states’ now easier, faster, and way less noisy

    For decades, quantum computers that perform calculations millions of times faster than conventional computers have remained a tantalizing yet distant goal. However, a new breakthrough in quantum physics may have just sped up the timeline.
    In an article published in PRX Quantum, researchers from the Graduate School of Engineering Science and the Center for Quantum Information and Quantum Biology at The University of Osaka devised a method that can be used to prepare high-fidelity “magic states” for use in quantum computers with dramatically less overhead and unprecedented accuracy.
    Quantum computers harness the fantastic properties of quantum mechanics such as entanglement and superposition to perform calculations much more efficiently than classical computers can. Such machines could catalyze innovations in fields as diverse as engineering, finance, and biotechnology. But before this can happen, there is a significant obstacle that must be overcome.
    “Quantum systems have always been extremely susceptible to noise,” says lead researcher Tomohiro Itogawa. “Even the slightest perturbation in temperature or a single wayward photon from an external source can easily ruin a quantum computer setup, making it useless. Noise is absolutely the number one enemy of quantum computers.”
    Thus, scientists have become very interested in building so-called fault-tolerant quantum computers, which are robust enough to continue computing accurately even when subject to noise. Magic state distillation, in which a single high-fidelity quantum state is prepared from many noisy ones, is a popular method for creating such systems. But there is a catch.
    “The distillation of magic states is traditionally a very computationally expensive process because it requires many qubits,” explains Keisuke Fujii, senior author. “We wanted to explore if there was any way of expediting the preparation of the high-fidelity states necessary for quantum computation.”
    Following this line of inquiry, the team was inspired to create a “level-zero” version of magic state distillation, in which a fault-tolerant circuit is developed at the physical qubit or “zeroth” level as opposed to higher, more abstract levels. In addition to requiring far fewer qubits, the new method reduced spatial and temporal overhead by roughly several dozen times compared with the traditional version in numerical simulations.
    Itogawa and Fujii are optimistic that the era of quantum computing is not as far off as we imagine. Whether one calls it magic or physics, this technique certainly marks an important step toward the development of larger-scale quantum computers that can withstand noise.

  • MIT’s tiny 5G receiver could make smart devices last longer and work anywhere

    MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
    The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
    The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
    Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
    The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
    “This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
    He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.

    A new standard
    A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
    Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
    But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
    “This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
    To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.

    One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
    In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
    Shrinking the circuit
    Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
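    In rough terms (the standard textbook relation, with an illustrative gain symbol rather than the paper’s design values), a feedback capacitor C_f wrapped around an amplifier of voltage gain -A appears at the input as an effective capacitance of

```latex
C_{\text{in,eff}} = C_f \, (1 + A)
```

    so a physically small on-chip capacitor can do the filtering work of one roughly (1 + A) times larger.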
    “This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
    Their receiver has an active area of less than 0.05 square millimeters.
    One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
    In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
    To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
    Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
    “Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
    Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
    Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
    This research is supported, in part, by the National Science Foundation.

  • Scientists create ‘universal translator’ for quantum tech

    UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave to optical signals and vice versa.
    The technology could serve as a universal translator for quantum computers — enabling them to talk to each other over long distances and converting up to 95 per cent of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.
    “It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his PhD at UBC’s faculty of applied science and the UBC Blusson Quantum Matter Institute.
    “Most importantly, this device preserves the quantum connections between distant particles and works in both directions. Without that, you’d just have expensive individual computers. With it, you get a true quantum network.”
    How it works
    Quantum computers process information using microwave signals. But to send that information across cities or continents, it needs to be converted into optical signals that travel through fibre optic cables. These signals are so fragile that even tiny disturbances during translation can destroy them.
    That’s a problem for entanglement, the phenomenon quantum computers rely on, where two particles remain connected regardless of distance. Einstein called it “spooky action at a distance.” Losing that connection means losing the quantum advantage. The UBC device, described in npj Quantum Information, could enable long-distance quantum communication while preserving these entangled links.

    The silicon solution
    The team’s model is a microwave-optical photon converter that can be fabricated on a silicon wafer. The breakthrough lies in tiny engineered flaws, magnetic defects intentionally embedded in silicon to control its properties. When microwave and optical signals are precisely tuned, electrons in these defects convert one signal to the other without absorbing energy, avoiding the instability that plagues other transformation methods.
    The device also runs efficiently at extremely low power — just millionths of a watt. The authors outlined a practical design that uses superconducting components, materials that conduct electricity perfectly, alongside this specially engineered silicon.
    What’s next
    While the work is still theoretical, it marks an important step in quantum networking.
    “We’re not getting a quantum internet tomorrow — but this clears a major roadblock,” says the study’s senior author Dr. Joseph Salfi, an assistant professor in the department of electrical and computer engineering and principal investigator at UBC Blusson QMI.
    “Currently, reliably sending quantum information between cities remains challenging. Our approach could change that: silicon-based converters could be built using existing chip fabrication technology and easily integrated into today’s communication infrastructure.”
    Eventually, quantum networks could enable virtually unbreakable online security, GPS that works indoors, and the power to tackle problems beyond today’s reach such as designing new medicines or predicting weather with dramatically improved accuracy.