More stories


    Scientists brew “quantum ink” to power next-gen night vision

    Manufacturers of infrared cameras face a growing problem: the toxic heavy metals in today’s infrared detectors are increasingly banned under environmental regulations, forcing companies to choose between performance and compliance.
    This regulatory pressure is slowing the broader adoption of infrared detectors across civilian applications, just as demand in fields like autonomous vehicles, medical imaging and national security is accelerating.
    In a paper published in ACS Applied Materials & Interfaces, researchers at NYU Tandon School of Engineering reveal a potential solution that uses environmentally friendly quantum dots to detect infrared light without relying on mercury, lead, or other restricted materials.
    The researchers’ use of colloidal quantum dots upends the age-old, expensive, and tedious processing of infrared detectors. Traditional devices are fabricated through slow, ultra-precise methods that place atoms almost one by one across the pixels of a detector — much like assembling a puzzle piece by piece under a microscope.
    Colloidal quantum dots are instead synthesized entirely in solution, more like brewing ink, and can be deposited using scalable coating techniques similar to those used in roll-to-roll manufacturing for packaging or newspapers. This shift from painstaking assembly to solution-based processing dramatically reduces manufacturing costs and opens the door to widespread commercial applications.
    “The industry is facing a perfect storm where environmental regulations are tightening just as demand for infrared imaging is exploding,” said Ayaskanta Sahu, associate professor in the Department of Chemical and Biomolecular Engineering (CBE) at NYU Tandon and the study’s senior author. “This creates real bottlenecks for companies trying to scale up production of thermal imaging systems.”
    Another challenge the researchers addressed was making the quantum dot ink conductive enough to relay signals from incoming light. They achieved this using a technique called solution-phase ligand exchange, which tailors the quantum dot surface chemistry to enhance performance in electronic devices. Unlike traditional fabrication methods that often leave cracked or uneven films, this solution-based process yields smooth, uniform coatings in a single step — ideal for scalable manufacturing.

    The resulting devices show remarkable performance: they respond to infrared light on the microsecond timescale (a human eye blink, by comparison, lasts on the order of a tenth of a second, roughly a hundred thousand times longer) and they can detect signals as faint as a nanowatt of light.
    “What excites me is that we can take a material long considered too difficult for real devices and engineer it to be more competitive,” said graduate researcher Shlok J. Paul, lead author on the study. “With more time this material has the potential to shine deeper in the infrared spectrum where few materials exist for such tasks.”
    This work adds to earlier research from the same lead researchers that developed new transparent electrodes using silver nanowires. Those electrodes remain highly transparent to infrared light while efficiently collecting electrical signals, addressing one component of the infrared camera system.
    Combined with their earlier transparent electrode work, these developments address both major components of infrared imaging systems. The quantum dots provide environmentally compliant sensing capability, while the transparent electrodes handle signal collection and processing.
    This combination addresses challenges in large-area infrared imaging arrays, which require high-performance detection across wide areas and signal readout from millions of individual detector pixels. The transparent electrodes allow light to reach the quantum dot detectors while providing electrical pathways for signal extraction.
    “Every infrared camera in a Tesla or smartphone needs detectors that meet environmental standards while remaining cost-effective,” Sahu said. “Our approach could help make these technologies much more accessible.”
    The performance still falls short of the best heavy-metal-based detectors in some measurements. However, the researchers expect continued advances in quantum dot synthesis and device engineering could reduce this gap.
    In addition to Sahu and Paul, the paper’s authors are Letian Li, Zheng Li, Thomas Kywe, and Ana Vataj, all from NYU Tandon CBE. The work was supported by the Office of Naval Research and the Defense Advanced Research Projects Agency.


    Caltech’s massive 6,100-qubit array brings the quantum future closer

    Quantum computers will need large numbers of qubits to tackle challenging problems in physics, chemistry, and beyond. Unlike classical bits, qubits can exist in two states at once — a phenomenon called superposition. This quirk of quantum physics gives quantum computers the potential to perform certain complex calculations better than their classical counterparts, but it also means the qubits are fragile. To compensate, researchers are building quantum computers with extra, redundant qubits to correct any errors. That is why robust quantum computers will require hundreds of thousands of qubits.
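    Superposition itself can be illustrated with a toy state vector: a qubit carries amplitudes for both 0 and 1 at once, and measurement returns one outcome at random with probabilities given by the squared amplitudes. A generic textbook sketch, not tied to the paper's hardware:

```python
import math
import random

# A qubit in equal superposition: |psi> = (|0> + |1>) / sqrt(2)
alpha = beta = 1 / math.sqrt(2)

# Born rule: measuring yields 0 with probability |alpha|^2, 1 with |beta|^2
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2

random.seed(0)
samples = [0 if random.random() < p0 else 1 for _ in range(10_000)]
freq0 = samples.count(0) / len(samples)
print(f"P(0) = {p0:.2f}, observed frequency over 10,000 shots = {freq0:.2f}")
```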
    Now, in a step toward this vision, Caltech physicists have created the largest qubit array ever assembled: 6,100 neutral-atom qubits trapped in a grid by lasers. Previous arrays of this kind contained only hundreds of qubits.
    This milestone comes amid a rapidly growing race to scale up quantum computers. There are several approaches in development, including those based on superconducting circuits, trapped ions, and neutral atoms, as used in the new study.
    “This is an exciting moment for neutral-atom quantum computing,” says Manuel Endres, professor of physics at Caltech. “We can now see a pathway to large error-corrected quantum computers. The building blocks are in place.” Endres is the principal investigator of the research published on September 24 in Nature. Three Caltech graduate students led the study: Hannah Manetsch, Gyohei Nomura, and Elie Bataille.
    The team used optical tweezers — highly focused laser beams — to trap thousands of individual cesium atoms in a grid. To build the array of atoms, the researchers split a laser beam into 12,000 tweezers, which together held 6,100 atoms in a vacuum chamber. “On the screen, we can actually see each qubit as a pinpoint of light,” Manetsch says. “It’s a striking image of quantum hardware at a large scale.”
    A key achievement was showing that this larger scale did not come at the expense of quality. Even with more than 6,000 qubits in a single array, the team kept them in superposition for about 13 seconds — nearly 10 times longer than what was possible in previous similar arrays — while manipulating individual qubits with 99.98 percent accuracy. “Large scale, with more atoms, is often thought to come at the expense of accuracy, but our results show that we can do both,” Nomura says. “Qubits aren’t useful without quality. Now we have quantity and quality.”
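    One way to see why quality matters at scale: per-operation accuracy compounds across a computation, so even 99.98 percent fidelity decays quickly without error correction. Standard arithmetic, not figures from the paper:

```python
fidelity = 0.9998  # per-operation accuracy reported by the team

# Probability that N consecutive operations all succeed is fidelity ** N.
for n_ops in (100, 1_000, 10_000):
    print(f"{n_ops:>6} operations: {fidelity ** n_ops:.3f} chance of zero errors")
```

    Even at this fidelity, ten thousand uncorrected operations succeed only about one time in seven, which is why the next milestone is error correction rather than raw scale alone.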
    The team also demonstrated that they could move the atoms hundreds of micrometers across the array while maintaining superposition. The ability to shuttle qubits is a key feature of neutral-atom quantum computers that enables more efficient error correction compared with traditional, hard-wired platforms like superconducting qubits.

    Manetsch compares the task of moving the individual atoms while keeping them in a state of superposition to balancing a glass of water while running. “Trying to hold an atom while moving is like trying to not let the glass of water tip over. Trying to also keep the atom in a state of superposition is like being careful to not run so fast that water splashes over,” she says.
    The next big milestone for the field is implementing quantum error correction at the scale of thousands of physical qubits, and this work shows that neutral atoms are a strong candidate to get there. “Quantum computers will have to encode information in a way that’s tolerant to errors, so we can actually do calculations of value,” Bataille says. “Unlike in classical computers, qubits can’t simply be copied due to the so-called no-cloning theorem, so error correction has to rely on more subtle strategies.”
    Looking ahead, the researchers plan to link the qubits in their array together in a state of entanglement, where particles become correlated and behave as one. Entanglement is a necessary step for quantum computers to move beyond simply storing information in superposition; entanglement will allow them to begin carrying out full quantum computations. It is also what gives quantum computers their ultimate power — the ability to simulate nature itself, where entanglement shapes the behavior of matter at every scale. The goal is clear: to harness entanglement to unlock new scientific discoveries, from revealing new phases of matter to guiding the design of novel materials and modeling the quantum fields that govern space-time.
    “It’s exciting that we are creating machines to help us learn about the universe in ways that only quantum mechanics can teach us,” Manetsch says.
    The new study, “A tweezer array with 6100 highly coherent atomic qubits,” was funded by the Gordon and Betty Moore Foundation, the Weston Havens Foundation, the National Science Foundation via its Graduate Research Fellowship Program and the Institute for Quantum Information and Matter (IQIM) at Caltech, the Army Research Office, the U.S. Department of Energy including its Quantum Systems Accelerator, the Defense Advanced Research Projects Agency, the Air Force Office for Scientific Research, the Heising-Simons Foundation, and the AWS Quantum Postdoctoral Fellowship. Other authors include Caltech’s Kon H. Leung, the AWS Quantum senior postdoctoral scholar research associate in physics, as well as former Caltech postdoctoral scholar Xudong Lv, now at the Chinese Academy of Sciences.


    AI-powered smart bandage heals wounds 25% faster

    As a wound heals, it goes through several stages: clotting to stop bleeding, immune system response, scabbing, and scarring.
    A wearable device called “a-Heal,” designed by engineers at the University of California, Santa Cruz, aims to optimize each stage of the process. The system uses a tiny camera and AI to detect the stage of healing and deliver a treatment in the form of medication or an electric field. The system responds to the unique healing process of the patient, offering personalized treatment.
    The portable, wireless device could make wound therapy more accessible to patients in remote areas or with limited mobility. Initial preclinical results, published in the journal npj Biomedical Innovations, show the device successfully speeds up the healing process.
    Designing a-Heal
    A team of UC Santa Cruz and UC Davis researchers, sponsored by the DARPA-BETR program and led by UC Santa Cruz Baskin Engineering Endowed Chair and Professor of Electrical and Computer Engineering (ECE) Marco Rolandi, designed a device that combines a camera, bioelectronics, and AI for faster wound healing. The integration of all three in one device makes it a “closed-loop system” — one of the first of its kind for wound healing, as far as the researchers are aware.
    “Our system takes all the cues from the body, and with external interventions, it optimizes the healing progress,” Rolandi said.
    The device uses an onboard camera, developed by fellow Associate Professor of ECE Mircea Teodorescu and described in a Communications Biology study, to take photos of the wound every two hours. The photos are fed into a machine learning (ML) model, which the researchers call the “AI physician,” developed by Associate Professor of Applied Mathematics Marcella Gomez and run on a nearby computer.

    “It’s essentially a microscope in a bandage,” Teodorescu said. “Individual images say little, but over time, continuous imaging lets AI spot trends, wound healing stages, flag issues, and suggest treatments.”
    The AI physician uses the image to diagnose the wound stage and compares that to where the wound should be along a timeline of optimal wound healing. If the image reveals a lag, the ML model applies a treatment: either medicine, delivered via bioelectronics; or an electric field, which can enhance cell migration toward wound closure.
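    That diagnose-compare-treat loop can be caricatured in a few lines. This is a hypothetical rule-based stand-in with an invented four-day-per-stage timeline; the real a-Heal system chooses treatments with its ML model, not a fixed rule like this:

```python
# Healing stages from the article, in order.
STAGES = ["clotting", "immune response", "scabbing", "scarring"]

def choose_treatment(observed_stage: int, elapsed_days: float) -> str:
    """Compare the diagnosed stage to an idealized healing timeline."""
    # Assumed optimal pace: one stage roughly every 4 days (invented).
    expected_stage = min(int(elapsed_days // 4), len(STAGES) - 1)
    if observed_stage >= expected_stage:
        return "no intervention"
    # A lagging wound gets treated; the real device tunes drug dose or
    # electric-field strength rather than picking from fixed options.
    if expected_stage - observed_stage == 1:
        return "deliver drug dose"
    return "apply electric field"

print(choose_treatment(observed_stage=1, elapsed_days=9))
```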
    The treatment topically delivered through the device is fluoxetine, a selective serotonin reuptake inhibitor that controls serotonin levels in the wound and improves healing by decreasing inflammation and increasing wound tissue closure. The dose, determined in preclinical studies by the Isseroff group at UC Davis to optimize healing, is administered by bioelectronic actuators on the device, developed by Rolandi. An electric field, optimized to improve healing in prior work by UC Davis’ Min Zhao and Roslyn Rivkah Isseroff, is also delivered through the device.
    The AI physician determines the optimal dosage of medication to deliver and the magnitude of the applied electric field. After the therapy has been applied for a certain period of time, the camera takes another image, and the process starts again.
    While in use, the device transmits images and data such as healing rate to a secure web interface, so a human physician can intervene manually and fine-tune treatment as needed. The device attaches directly to a commercially available bandage for convenient and secure use.
    To assess the potential for clinical use, the UC Davis team tested the device in preclinical wound models. In these studies, wounds treated with a-Heal followed a healing trajectory about 25% faster than standard of care. These findings highlight the promise of the technology not only for accelerating closure of acute wounds, but also for jump-starting stalled healing in chronic wounds.

    AI reinforcement
    The AI model used for this system, developed by a team led by Associate Professor of Applied Mathematics Marcella Gomez, uses a reinforcement learning approach, described in a study in the journal Bioengineering, to mimic the diagnostic approach used by physicians.
    Reinforcement learning is a technique in which a model is designed to fulfill a specific end goal, learning through trial and error how to best achieve that goal. In this context, the model is given a goal of minimizing time to wound closure, and is rewarded for making progress toward that goal. It continually learns from the patient and adapts its treatment approach.
    The reinforcement learning model is guided by an algorithm that Gomez and her students created, called Deep Mapper and described in a preprint study, which processes wound images to quantify the stage of healing relative to normal progression, mapping it along the healing trajectory. As time passes with the device on a wound, Deep Mapper learns a linear dynamic model of the healing so far and uses it to forecast how healing will continue to progress.
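    The forecasting step can be sketched for a single healing score: fit a linear dynamic model to the trajectory observed so far, then extrapolate. A toy scalar model with synthetic numbers, far simpler than Deep Mapper itself:

```python
# Toy "learn a linear dynamic model, then forecast": assume a scalar
# wound-size score h obeying h[t+1] ≈ a * h[t], estimate a by least squares.
history = [1.00, 0.82, 0.67, 0.55, 0.45]  # synthetic measurements

num = sum(h0 * h1 for h0, h1 in zip(history, history[1:]))
den = sum(h0 * h0 for h0 in history[:-1])
a = num / den

forecast = history[-1]
for _ in range(3):  # extrapolate three steps ahead
    forecast *= a
print(f"estimated decay factor a = {a:.3f}, 3-step forecast = {forecast:.3f}")
```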
    “It’s not enough to just have the image, you need to process that and put it into context. Then, you can apply the feedback control,” Gomez said.
    This technique makes it possible for the algorithm to learn in real-time the impact of the drug or electric field on healing, and guides the reinforcement learning model’s iterative decision making on how to adjust the drug concentration or electric-field strength.
    Now, the research team is exploring the potential for this device to improve healing of chronic and infected wounds.
    This research was supported by the Defense Advanced Research Projects Agency and the Advanced Research Projects Agency for Health.


    AI breakthrough finds life-saving insights in everyday bloodwork

    Routine blood samples, such as those taken daily at any hospital and tracked over time, could help predict the severity of an injury and even provide insights into mortality after spinal cord damage, according to a recent University of Waterloo study.
    The research team utilized advanced analytics and machine learning, a type of artificial intelligence, to assess whether routine blood tests could serve as early warning signs for spinal cord injury patient outcomes.
    More than 20 million people worldwide were affected by spinal cord injury in 2019, with 930,000 new cases each year, according to the World Health Organization. Traumatic spinal cord injury often requires intensive care and is characterized by variable clinical presentations and recovery trajectories, complicating diagnosis and prognosis, especially in emergency departments and intensive care units.
    “Routine blood tests could offer doctors important and affordable information to help predict risk of death, the presence of an injury and how severe it might be,” said Dr. Abel Torres Espín, a professor in Waterloo’s School of Public Health Sciences.
    The researchers sampled hospital data from more than 2,600 patients in the U.S. They used machine learning to analyze millions of data points and discover hidden patterns in common blood measurements, such as electrolytes and immune cells, taken during the first three weeks after a spinal cord injury.
    They found that these patterns could help forecast recovery and injury severity, even without early neurological exams, which are not always reliable as they depend on a patient’s responsiveness.
    “While a single biomarker measured at a single time point can have predictive power, the broader story lies in multiple biomarkers and the changes they show over time,” said Dr. Marzieh Mussavi Rizi, a postdoctoral scholar in Torres Espín’s lab at Waterloo.
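    One simple way to turn repeated blood tests into model inputs is to summarize each biomarker's trajectory with a few statistics. This is purely illustrative, with invented values, and is not the study's actual pipeline:

```python
def trajectory_features(values: list[float]) -> dict[str, float]:
    """Summarize a biomarker's time course as mean, net change, and slope."""
    n = len(values)
    mean = sum(values) / n
    change = values[-1] - values[0]
    # Least-squares slope against time index 0..n-1
    t_mean = (n - 1) / 2
    num = sum((t - t_mean) * (v - mean) for t, v in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return {"mean": mean, "change": change, "slope": num / den}

sodium = [138, 141, 145, 149]  # hypothetical daily measurements
print(trajectory_features(sodium))
```

    Features like these, computed per biomarker per patient, are the kind of input a downstream classifier could use; the study's models are richer, working directly with trajectories over the first three weeks.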

    The models, which do not rely on early neurological assessment, were accurate in predicting mortality and injury severity as early as one to three days after admission to the hospital, compared with standard non-specific severity measures that are typically taken within the first day of arrival in intensive care.
    The research also found that accuracy increased over time as more blood tests became available. Although other measures, such as MRI and fluid omics-based biomarkers, can also provide objective data, they are not always readily accessible across medical settings. Routine blood tests, on the other hand, are economical, easy to obtain, and available in every hospital.
    “Prediction of injury severity in the first days is clinically relevant for decision-making, yet it is a challenging task through neurological assessment alone,” Torres Espín said. “We show the potential to predict whether an injury is motor complete or incomplete with routine blood data early after injury, and an increase in prediction performance as time progresses.
    “This foundational work can open new possibilities in clinical practice, allowing for better-informed decisions about treatment priorities and resource allocation in critical care settings for many physical injuries.”
    The study, “Modeling trajectories of routine blood tests as dynamic biomarkers for outcome in spinal cord injury,” was published in npj Digital Medicine.


    Can meditation apps really reduce stress, anxiety, and insomnia?

    Do you have a meditation app on your smartphone, computer or wearable device? Well, you’re not alone.
    There are now thousands of meditation apps available worldwide, the top 10 of which have been collectively downloaded more than 300 million times. What’s more, early work on these digital meditation platforms shows that even relatively brief usage can lead to benefits, from reduced depression, anxiety, and stress to improved insomnia symptoms.
    “Meditation apps, such as Calm and Headspace, have been enormously popular in the commercial market,” said J. David Creswell, a health psychologist at Carnegie Mellon University and lead author of a review paper on meditation apps, published today in the journal American Psychologist. “What they’re doing now is not only engaging millions of users every day, but they’re also creating new scientific opportunities and challenges.”
    One huge boon provided by meditation apps for users is access.
    “You can imagine a farmer in rural Nebraska not having many available opportunities to go to traditional group-based meditation programs, and now they have an app in their pocket which is available 24/7,” said Creswell, who is the William S. Dietrich II Professor in Psychology and Neuroscience.
    Meditation apps also provide scientists with opportunities to scale up their research.
    “Historically, I might bring 300 irritable bowel syndrome patients into my lab and study the impacts of meditation on pain management,” said Creswell. “But now I’m thinking, how do we harness the capacity of meditation apps and wearable health sensors to study 30,000 irritable bowel syndrome patients across the world?”
    Combined with products that measure heart rate and sleep patterns, such as Fitbit and the Apple Watch, meditation apps now also have the capacity to incorporate biometrics into meditation practices like never before.

    The biggest takeaway, though, is that meditation apps are fundamentally changing the way these practices are distributed to the general public. Scientific studies of use patterns show that meditation apps account for 96 percent of overall users in the mental health app marketplace.
    “Meditation apps dominate the mental health app market,” said Creswell. “And this paper is really the first to lay out the new normal and challenge researchers and tech developers to think in new ways about the disruptive nature of these apps and their reach.”
    Meditation apps challenge users to train their minds, in small initial training doses
    As with in-person meditation training, meditation apps start by meeting users where they are. Introductory courses may focus on breathing or mindfulness, but they tend to do so in small doses, the merits of which are still being debated.
    According to the data, just 10 to 21 minutes of meditation app exercises done three times a week is enough to see measurable results.
    “Of course, that looks really different from the daily meditation practice you might get within an in-person group-based meditation program, which might be 30 to 45 minutes a day,” said Creswell.

    The a la carte nature of meditation through a smartphone app may appeal to those pressed for time or without the budget for in-person coaching sessions. Users may also find it comforting to know that they have access to guided meditation on-demand, rather than at scheduled places, days, and times.
    “Maybe you’re waiting in line at Starbucks, and you’ve got three minutes to do a brief check-in mindfulness training practice,” said Creswell.
    Finally, as meditation apps continue to evolve, Creswell believes integration of AI, such as meditation-guiding chatbots, will only become more common, offering even more personalization. This could mark an important development for meditation adoption at large, as offerings go from one-size-fits-all group classes to training sessions tailored to the individual.
    “People use meditation for different things, and there’s a big difference between someone looking to optimize their free-throw shooting performance and someone trying to alleviate chronic pain,” said Creswell, who has trained Olympic athletes in the past.
    The elephant in the room
    Of course, with new technology comes new challenges, and for meditation apps, continued engagement remains a huge problem.
    “The engagement problem is not specific to meditation apps,” said Creswell. “But the numbers are really sobering. Ninety-five percent of participants who download a meditation app aren’t using it after 30 days.”
    If the meditation app industry is going to succeed, it will need to find ways to keep its users engaged, as apps like Duolingo have. But overall, Creswell said the market demand is clearly there.
    “People are suffering right now. There are just unbelievably high levels of stress and loneliness in the world, and these tools have tremendous potential to help,” he said.
    “I don’t think there is ever going to be a complete replacement for a good, in-person meditation group or teacher,” said Creswell. “But I think meditation apps are a great first step for anyone who wants to dip their toes in and start training up their mindfulness skills. The initial studies show that these meditation apps help with symptom relief and even reduce stress biomarkers.”


    Tiny new lenses, smaller than a hair, could transform phone and drone cameras

    A new approach to manufacturing multicolor lenses could inspire a new generation of tiny, cheap, and powerful optics for portable devices such as phones and drones.
    The design uses layers of metamaterials to simultaneously focus a range of wavelengths from an unpolarized source over a large diameter, overcoming a major limitation of metalenses, said the paper’s first author, Mr Joshua Jordaan, from the Research School of Physics at the Australian National University and the ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS).
    “Our design has a lot of nice features that make it applicable to practical devices.”
    “It’s easy to manufacture because it has a low aspect ratio and each layer can be fabricated individually and then packaged together. It’s also polarisation insensitive, and is potentially scalable through mature semiconductor nanofabrication platforms,” Mr Jordaan said.
    The project was led by researchers from the Friedrich Schiller University Jena in Germany as part of the International Research Training Group Meta-ACTIVE. The paper reporting their design is published in Optics Express.
    Metalenses are mere fractions of the width of a human hair thick, orders of magnitude thinner than conventional lenses. They can be designed to have properties such as focal lengths that would be impossibly short for conventional optics.
    Initially the team attempted to focus multiple wavelengths with a single layer, but they came up against some fundamental constraints, Mr Jordaan said.

    “It turns out the maximum group-delay attainable in a single-layer metasurface has physical limitations, and these in turn set upper bounds on the product of the numerical aperture, physical diameter and operating bandwidth.”
    “To work at the wavelength range we needed, a single layer would either have to have a very small diameter, which would defeat the purpose of the design, or basically have such a low numerical aperture that it’s hardly focusing the light at all,” he said.
    “We realized we needed a more complex structure, which then led to a multi-layer approach.”
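    The single-layer limit has a simple geometric core: rays from the lens edge travel farther to the focus than the axial ray, and a flat lens must supply that difference as group delay, which grows with radius and numerical aperture. A back-of-envelope sketch with invented example numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def required_group_delay(radius_m: float, na: float) -> float:
    """Edge-ray vs. axial-ray path difference, divided by c.

    With focal length f and radius R, NA = R / sqrt(R**2 + f**2), so the
    edge ray to the focus is longer by sqrt(R**2 + f**2) - f.
    """
    f = radius_m * math.sqrt(1 - na ** 2) / na
    return (math.hypot(radius_m, f) - f) / C

# Example: a 1 mm diameter lens at NA 0.5 (illustrative numbers)
dt = required_group_delay(radius_m=0.5e-3, na=0.5)
print(f"required group-delay span: {dt * 1e15:.0f} fs")
```

    A single metasurface layer can only provide a limited group-delay span, so pushing up radius, numerical aperture, or bandwidth eventually forces either a tiny lens or a multi-layer structure.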
    With the design shifted to several metalens layers, the team approached the problem with an inverse-design algorithm based on shape optimization, using a parameterization that allowed many degrees of freedom.
    They guided the software to search for metasurface shapes that, for a single wavelength, created simple resonances in both the electric and magnetic dipole, known as Huygens resonances. By employing resonances, the team were able to improve on previous designs by other groups, and develop metalens designs that were polarization independent, and had greater tolerances in manufacturing specifications – crucial in the quest to scale fabrication to industrial quantities.
    The optimization routine came up with a library of metamaterial elements in a surprising range of shapes, such as rounded squares, four-leaf clovers and propellers.

    These tiny shapes, around 300 nm tall and 1000 nm wide, spanned the full range of phase shifts, from zero to two pi, enabling the team to create a phase gradient map to achieve any arbitrary focusing pattern – although they were initially just aiming for a simple ring structure of a conventional lens.
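    For a conventional focusing pattern, a phase-gradient map of this kind is the standard hyperbolic lens profile, wrapped into the zero-to-two-pi range the element library spans. Generic metalens math with assumed parameters, not the team's design:

```python
import math

def lens_phase(r_m: float, f_m: float, wavelength_m: float) -> float:
    """Hyperbolic focusing phase at radius r, wrapped to [0, 2*pi)."""
    phi = (2 * math.pi / wavelength_m) * (f_m - math.hypot(r_m, f_m))
    return phi % (2 * math.pi)

# Sample the profile across the lens (illustrative parameters)
f, lam = 100e-6, 1.0e-6  # 100 µm focal length, 1 µm wavelength
for r in (0.0, 10e-6, 20e-6):
    print(f"r = {r * 1e6:4.0f} µm -> phase = {lens_phase(r, f, lam):.2f} rad")
```

    Each sampled phase value would then be realized by picking the library element (rounded square, clover, propeller) providing the closest phase shift at that position.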
    “We could, for example, focus different wavelengths into different locations to create a colour router,” Mr Jordaan said.
    However, the multilayer approach is limited to a maximum of around five different wavelengths, Mr Jordaan said.
    “The problem is you need structures large enough to be resonant at the longest wavelength, without getting diffraction from the shorter wavelengths,” he said.
    Within these constraints, Mr Jordaan said, the ability to make metalenses that collect a lot of light will be a boon for future portable imaging systems.
    “The metalenses we have designed would be ideal for drones or earth-observation satellites, as we’ve tried to make them as small and light as possible,” he said.


    Scientists just made atoms talk to each other inside silicon chips

    UNSW engineers have made a significant advance in quantum computing: they created ‘quantum entangled states’ – where two separate particles become so deeply linked they no longer behave independently – using the spins of two atomic nuclei. Such states of entanglement are the key resource that gives quantum computers their edge over conventional ones.
    The research was published on Sept. 18 in the journal Science, and is an important step towards building large-scale quantum computers – one of the most exciting scientific and technological challenges of the 21st century.
    Lead author Dr Holly Stemp says the achievement unlocks the potential to build the future microchips needed for quantum computing using existing technology and manufacturing processes.
    “We succeeded in making the cleanest, most isolated quantum objects talk to each other, at the scale at which standard silicon electronic devices are currently fabricated,” she says.
    The challenge facing quantum computer engineers has been to balance two opposing needs: shielding the computing elements from external interference and noise, while still enabling them to interact to perform meaningful computations. This is why so many different types of hardware are still in the race to become the first operating quantum computer: some are very good at performing fast operations but suffer from noise; others are well shielded from noise but are difficult to operate and scale up.
    The UNSW team has invested in a platform that – until now – could be placed in the second camp. They have used the nuclear spin of phosphorus atoms, implanted in a silicon chip, to encode quantum information.
    “The spin of an atomic nucleus is the cleanest, most isolated quantum object one can find in the solid state,” says Scientia Professor Andrea Morello, UNSW School of Electrical Engineering & Telecommunications.

    “Over the last 15 years, our group has pioneered all the breakthroughs that made this technology a real contender in the quantum computing race. We already demonstrated that we could hold quantum information for over 30 seconds – an eternity, in the quantum world – and perform quantum logic operations with less than 1% errors.
    “We were the first in the world to achieve this in a silicon device, but it all came at a price: the same isolation that makes atomic nuclei so clean, makes it hard to connect them together in a large-scale quantum processor.”
    Until now, the only way to operate multiple atomic nuclei was for them to be placed very close together inside a solid, and to be surrounded by one and the same electron.
    “Most people think of an electron as the tiniest subatomic particle, but quantum physics tells us that it has the ability to ‘spread out’ in space, so that it can interact with multiple atomic nuclei,” says Dr Holly Stemp, who conducted this research at UNSW and is now a postdoctoral researcher at MIT in Boston.
    “Even so, the range over which the electron can spread is quite limited. Moreover, adding more nuclei to the same electron makes it very challenging to control each nucleus individually.”
    Making atomic nuclei talk through electronic ‘telephones’
    “By way of metaphor one could say that, until now, nuclei were like people placed in a sound-proof room,” Dr Stemp says.

    “They can talk to each other as long as they are all in the same room, and the conversations are really clear. But they can’t hear anything from the outside, and there are only so many people who can fit inside the room. This mode of conversation doesn’t ‘scale’.
    “With this breakthrough, it’s as if we gave people telephones to communicate with other rooms. All the rooms are still nice and quiet on the inside, but now we can have conversations between many more people, even if they are far away.”
    The ‘telephones’ are, in fact, electrons. Mark van Blankenstein, another author on the paper, explains what’s really going on at the sub-atomic level.
    “By their ability to spread out in space, two electrons can ‘touch’ each other at quite some distance. And if each electron is directly coupled to an atomic nucleus, the nuclei can communicate through that.”
    So how far apart were the nuclei involved in the experiments?
    “The distance between our nuclei was about 20 nanometers – one thousandth of the width of a human hair,” says Dr Stemp.
    “That doesn’t sound like much, but consider this: if we scaled each nucleus to the size of a person, the distance between the nuclei would be about the same as that between Sydney and Boston!”
    She adds that 20 nanometers is the scale at which modern silicon computer chips are routinely manufactured to work in personal computers and mobile phones.
    “You have billions of silicon transistors in your pocket or in your bag right now, each one about 20 nanometers in size. This is our real technological breakthrough: getting our cleanest and most isolated quantum objects talking to each other at the same scale as existing electronic devices. This means we can adapt the manufacturing processes developed by the trillion-dollar semiconductor industry, to the construction of quantum computers based on the spins of atomic nuclei.”
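    Dr Stemp’s scale analogy can be sanity-checked with a little arithmetic. The sketch below uses assumptions not stated in the article: the empirical nuclear-radius formula r ≈ 1.2 fm × A^(1/3), mass number A = 31 for phosphorus, and a 1.8 m “person.”

    ```python
    # Rough sanity check of the scale analogy: if a phosphorus nucleus
    # were blown up to the size of a person, how far apart would the two
    # nuclei (20 nm apart in the chip) appear?
    # Assumptions (not from the article): r ~ 1.2 fm * A^(1/3) for the
    # nuclear radius, A = 31 for phosphorus, and a 1.8 m "person".

    def scaled_separation(separation_m=20e-9, mass_number=31, person_m=1.8):
        nucleus_diameter_m = 2 * 1.2e-15 * mass_number ** (1 / 3)  # ~7.5 fm
        return separation_m * (person_m / nucleus_diameter_m)

    print(f"{scaled_separation() / 1e3:,.0f} km")  # thousands of kilometres
    ```

    Under these assumptions the scaled separation comes out in the thousands of kilometres, i.e. an intercontinental distance, which is the point of the analogy.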
    A scalable way forward
    Despite the exotic nature of the experiments, the researchers say these devices remain fundamentally compatible with the way all current computer chips are built. The phosphorus atoms were introduced into the chip by the team of Professor David Jamieson at the University of Melbourne, using an ultra-pure silicon slab supplied by Professor Kohei Itoh at Keio University in Japan.
    By removing the need for the atomic nuclei to be attached to the same electron, the UNSW team has swept aside the biggest roadblock to the scale-up of silicon quantum computers based on atomic nuclei.
    “Our method is remarkably robust and scalable. Here we just used two electrons, but in the future we can add even more electrons, and force them into an elongated shape, to spread the nuclei even further apart,” Prof. Morello says.
    “Electrons are easy to move around and to ‘massage’ into shape, which means the interactions can be switched on and off quickly and precisely. That’s exactly what is needed for a scalable quantum computer.”


    Shocking study exposes widespread math research fraud

    An international team of authors led by Ilka Agricola, professor of mathematics at the University of Marburg, Germany, has investigated fraudulent practices in the publication of research results in mathematics on behalf of the German Mathematical Society (DMV) and the International Mathematical Union (IMU), documenting systematic fraud over many years. The results of the study were recently published on the preprint server arxiv.org and in the Notices of the American Mathematical Society (AMS) and have since caused a stir among mathematicians.
    To address the problem, the study also provides recommendations for the publication of research results in mathematics.
    Nowadays, research quality is often no longer measured directly by the content of publications, but increasingly by commercial indicators such as authors’ publication and citation counts or the “reputation” (impact factor) of journals. These indicators are calculated in a non-transparent manner, without the involvement of the scientific community, by commercial providers who use them to boost sales of their databases worldwide. Fraudulent companies offer their services specifically to optimize these metrics. This is worthwhile for both individuals and institutions, because a higher position in, say, a university ranking means better access to funding and (in an international context) the possibility of charging higher tuition fees and attracting more applicants. The collateral damage is a high percentage of publications whose sole purpose is to boost the indicators, but which no one reads because they contain no new scientific findings or are even flawed.
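    Since the impact factor drives much of the behaviour described here, a minimal sketch of how it is conventionally computed may help; all numbers below are invented for the example.

    ```python
    # Illustrative computation of a journal impact factor, the journal
    # "reputation" metric the study criticizes. A given year's figure is
    # the number of citations received that year to items published in
    # the two preceding years, divided by the number of citable items in
    # those two years. All numbers here are made up.

    def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        return citations_to_prev_two_years / citable_items_prev_two_years

    # e.g. 300 citations in 2024 to the 150 items published in 2022-2023:
    print(impact_factor(300, 150))  # prints 2.0
    ```

    The study’s point is that such a ratio is easy to inflate: both the numerator (citations) and the denominator’s composition can be bought.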
    The study cites some striking examples. Based on its database, the market leader for metrics, Clarivate Inc., calculated in 2019 that the university with the most world-class researchers in mathematics was a university in Taiwan — where mathematics is not even offered as a subject. Megajournals, which print anything as long as the authors pay for it, now publish more articles per year than all reputable mathematics journals (which do not charge authors) combined. Fraudsters anonymously sell everything that influences these indicators, from articles to citations.
    “‘Fake science’ is not only annoying, it is a danger to science and society,” emphasizes IMU Secretary General Prof. Christoph Sorger. “Because you don’t know what is valid and what is not. Targeted disinformation undermines trust in science and also makes it difficult for us mathematicians to decide which results can be used as a basis for further research.” DMV President Prof. Jürg Kramer added: “The recommendations developed by the commission are a call to all of us to work toward a system change.”