More stories

  • Could artificial intelligence help or hurt scientific research articles?

    Since its introduction to the public in November 2022, ChatGPT, an artificial intelligence system, has substantially grown in use, creating written stories, graphics, art and more with just a short prompt from the user. But when it comes to scientific, peer-reviewed research, could the tool be useful?
    “Right now, many journals do not want people to use ChatGPT to write their articles, but a lot of people are still trying to use it,” said Melissa Kacena, PhD, vice chair of research and a professor of orthopaedic surgery at the Indiana University School of Medicine. “We wanted to study whether ChatGPT is able to write a scientific article and what are the different ways you could successfully use it.”
    The researchers took three different topics — fractures and the nervous system, Alzheimer’s disease and bone health, and COVID-19 and bone health — and prompted the subscription version of ChatGPT ($20/month) to create scientific articles about them. The researchers took three different approaches for the original draft of each article — all human, all ChatGPT, or a combination. The study is published as a compilation of 12 articles in a new, special edition of Current Osteoporosis Reports.
    “The standard way of writing a review article is to do a literature search, write an outline, start writing, and then faculty members revise and edit the draft,” Kacena said. “We collected data about how much time it takes for this human method and how much time it takes for ChatGPT to write and then for faculty to edit the different articles.”
    In the articles written only by ChatGPT, up to 70% of the references were wrong. But when the researchers used an AI-assisted approach with more human involvement, they saw more plagiarism, especially when they gave the tool more references up front. Overall, the use of AI decreased the time spent writing an article but required more extensive fact-checking.
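    The reference problem lends itself to automated spot-checking. As a rough illustration only (not part of the IU study), the Python sketch below queries the public Crossref API for each citation string and flags those without a convincing bibliographic match; the word-overlap heuristic, the threshold of four shared words, and the sample citation are invented for this example, and any flagged or unflagged reference would still need human verification.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def best_crossref_match(citation_text: str) -> dict | None:
    """Return Crossref's top bibliographic match for a free-text citation, or None."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": citation_text, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

def flag_suspect_references(citations: list[str], min_shared_words: int = 4) -> list[str]:
    """Flag citations whose claimed wording is not echoed by the top Crossref hit."""
    suspect = []
    for citation in citations:
        match = best_crossref_match(citation)
        title = ((match or {}).get("title") or [""])[0].lower()
        # Coarse heuristic: count matched-title words that also appear in the citation string.
        shared = sum(word in citation.lower() for word in title.split())
        if match is None or shared < min_shared_words:
            suspect.append(citation)
    return suspect

if __name__ == "__main__":
    refs = ["Smith J, et al. Fracture healing and neural signaling. J Orthop Res. 2021."]  # hypothetical citation
    print(flag_suspect_references(refs))
```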
    Another concern is with the writing style used by ChatGPT. Even though the tool was prompted to use a higher level of scientific writing, the words and phrases were not necessarily written at the level someone would expect to see from a researcher.
    “It was repetitive writing and even if it was structured the way you learn to write in school, it was scary to know there were maybe incorrect references or wrong information,” said Lilian Plotkin, PhD, professor of anatomy, cell biology and physiology at the IU School of Medicine and coauthor on five of the papers.

    Jill Fehrenbacher, PhD, associate professor of pharmacology and toxicology at the school and coauthor on nine of the papers, said she believes even though many scientific journals do not want authors to use ChatGPT, many people still will — especially non-native English speakers.
    “People may still write everything themselves, but then put it into ChatGPT to fix their grammar or help with their writing, so I think we need to look at how do we shepherd people in using it appropriately and even helping them?” Fehrenbacher said. “We hope to provide a guide for the scientific community so that if people are going to use it, here are some tips and advice.”
    “I think it’s here to stay, but we need to understand how we can use it in an appropriate manner that won’t compromise someone’s reputation or spread misinformation,” Kacena said.
    Faculty and students from several departments and centers across the IU School of Medicine were involved, including orthopaedic surgery; anatomy, cell biology and physiology; pharmacology and toxicology; radiology and imaging sciences; anesthesia; the Stark Neuroscience Research Institute; the Indiana Center for Musculoskeletal Health; and the IU School of Dentistry. Authors are also affiliated with the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Eastern Virginia Medical School in Norfolk, Virginia, and Mount Holyoke College in South Hadley, Massachusetts.

  • Doctors have more difficulty diagnosing disease when looking at images of darker skin

    When diagnosing skin diseases based solely on images of a patient’s skin, doctors do not perform as well when the patient has darker skin, according to a new study from MIT researchers.
    The study, which included more than 1,000 dermatologists and general practitioners, found that dermatologists accurately characterized about 38 percent of the images they saw, but only 34 percent of those that showed darker skin. General practitioners, who were less accurate overall, showed a similar decrease in accuracy with darker skin.
    The research team also found that assistance from an artificial intelligence algorithm could improve doctors’ accuracy, although those improvements were greater when diagnosing patients with lighter skin.
    While this is the first study to demonstrate physician diagnostic disparities across skin tone, other studies have found that the images used in dermatology textbooks and training materials predominantly feature lighter skin tones. That may be one factor contributing to the discrepancy, the MIT team says, along with the possibility that some doctors may have less experience in treating patients with darker skin.
    “Probably no doctor is intending to do worse on any type of person, but it might be the fact that you don’t have all the knowledge and the experience, and therefore on certain groups of people, you might do worse,” says Matt Groh PhD ’23, an assistant professor at the Northwestern University Kellogg School of Management. “This is one of those situations where you need empirical evidence to help people figure out how you might want to change policies around dermatology education.”
    Groh is the lead author of the study, which appears today in Nature Medicine. Rosalind Picard, an MIT professor of media arts and sciences, is the senior author of the paper.
    Diagnostic discrepancies
    Several years ago, an MIT study led by Joy Buolamwini PhD ’22 found that facial-analysis programs had much higher error rates when predicting the gender of darker skinned people. That finding inspired Groh, who studies human-AI collaboration, to look into whether AI models, and possibly doctors themselves, might have difficulty diagnosing skin diseases on darker shades of skin — and whether those diagnostic abilities could be improved.

    “This seemed like a great opportunity to identify whether there’s a social problem going on and how we might want to fix that, and also identify how to best build AI assistance into medical decision-making,” Groh says. “I’m very interested in how we can apply machine learning to real-world problems, specifically around how to help experts be better at their jobs. Medicine is a space where people are making really important decisions, and if we could improve their decision-making, we could improve patient outcomes.”
    To assess doctors’ diagnostic accuracy, the researchers compiled an array of 364 images from dermatology textbooks and other sources, representing 46 skin diseases across many shades of skin.
    Most of these images depicted one of eight inflammatory skin diseases, including atopic dermatitis, Lyme disease, and secondary syphilis, as well as a rare form of cancer called cutaneous T-cell lymphoma (CTCL), which can appear similar to an inflammatory skin condition. Many of these diseases, including Lyme disease, can present differently on dark and light skin.
    The research team recruited subjects for the study through Sermo, a social networking site for doctors. The total study group included 389 board-certified dermatologists, 116 dermatology residents, 459 general practitioners, and 154 other types of doctors.
    Each of the study participants was shown 10 of the images and asked for their top three predictions for what disease each image might represent. They were also asked if they would refer the patient for a biopsy. In addition, the general practitioners were asked if they would refer the patient to a dermatologist.
    “This is not as comprehensive as in-person triage, where the doctor can examine the skin from different angles and control the lighting,” Picard says. “However, skin images are more scalable for online triage, and they are easy to input into a machine-learning algorithm, which can estimate likely diagnoses speedily.”
    The researchers found that, not surprisingly, specialists in dermatology had higher accuracy rates: They classified 38 percent of the images correctly, compared to 19 percent for general practitioners.

    Both of these groups lost about four percentage points in accuracy when trying to diagnose skin conditions based on images of darker skin — a statistically significant drop. On images of darker skin, dermatologists were also less likely to refer cases of CTCL for biopsy, but more likely to refer noncancerous skin conditions for biopsy.
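    The article does not spell out how “accurately characterized” was scored against each doctor’s three guesses, so the short sketch below simply computes both top-1 and top-3 accuracy, broken down by skin-tone group, for a toy table of responses. The column names, disease labels and data are invented for illustration and are not the study’s data.

```python
import pandas as pd

# Hypothetical response table: one row per (doctor, image) pair.
responses = pd.DataFrame({
    "skin_tone": ["light", "light", "dark", "dark", "dark", "light"],
    "truth":     ["atopic dermatitis", "CTCL", "Lyme disease", "CTCL",
                  "secondary syphilis", "Lyme disease"],
    "guesses":   [["atopic dermatitis", "psoriasis", "eczema"],
                  ["psoriasis", "CTCL", "eczema"],
                  ["tinea", "psoriasis", "eczema"],
                  ["CTCL", "eczema", "psoriasis"],
                  ["secondary syphilis", "drug eruption", "psoriasis"],
                  ["eczema", "tinea", "psoriasis"]],
})

# Top-1: the first guess matches; top-3: any of the three guesses matches.
responses["top1"] = [g[0] == t for g, t in zip(responses.guesses, responses.truth)]
responses["top3"] = [t in g for g, t in zip(responses.guesses, responses.truth)]

# Accuracy by skin-tone group, mirroring the light-vs-dark comparison in the study.
print(responses.groupby("skin_tone")[["top1", "top3"]].mean())
```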
    A boost from AI
    After evaluating how doctors performed on their own, the researchers also gave them additional images to analyze with assistance from an AI algorithm the researchers had developed. The researchers trained this algorithm on about 30,000 images, asking it to classify the images as one of the eight diseases that most of the images represented, plus a ninth category of “other.”
    This algorithm had an accuracy rate of about 47 percent. The researchers also created another version of the algorithm with an artificially inflated success rate of 84 percent, allowing them to evaluate whether the accuracy of the model would influence doctors’ likelihood to take its recommendations.
    “This allows us to evaluate AI assistance with models that are currently the best we can do, and with AI assistance that could be more accurate, maybe five years from now, with better data and models,” Groh says.
    Both of these classifiers are equally accurate on light and dark skin. The researchers found that using either of these AI algorithms improved accuracy for both dermatologists (up to 60 percent) and general practitioners (up to 47 percent).
    They also found that doctors were more likely to take suggestions from the higher-accuracy algorithm after it provided a few correct answers, but they rarely incorporated AI suggestions that were incorrect. This suggests that the doctors are highly skilled at ruling out diseases and won’t take AI suggestions for a disease they have already ruled out, Groh says.
    “They’re pretty good at not taking AI advice when the AI is wrong and the physicians are right. That’s something that is useful to know,” he says.
    While dermatologists using AI assistance showed similar increases in accuracy when looking at images of light or dark skin, general practitioners showed greater improvement on images of lighter skin than darker skin.
    “This study allows us to see not only how AI assistance influences, but how it influences across levels of expertise,” Groh says. “What might be going on there is that the PCPs don’t have as much experience, so they don’t know if they should rule a disease out or not because they aren’t as deep into the details of how different skin diseases might look on different shades of skin.”
    The researchers hope that their findings will help stimulate medical schools and textbooks to incorporate more training on patients with darker skin. The findings could also help to guide the deployment of AI assistance programs for dermatology, which many companies are now developing.
    The research was funded by the MIT Media Lab Consortium and the Harold Horowitz Student Research Fund.

  • One person can supervise ‘swarm’ of 100 unmanned autonomous vehicles

    Research involving Oregon State University has shown that a “swarm” of more than 100 autonomous ground and aerial robots can be supervised by one person without subjecting the individual to an undue workload.
    The findings represent a big step toward efficiently and economically using swarms in a range of roles from wildland firefighting to package delivery to disaster response in urban environments.
    “We don’t see a lot of delivery drones yet in the United States, but there are companies that have been deploying them in other countries,” said Julie A. Adams of the OSU College of Engineering. “It makes business sense to deploy delivery drones at a scale, but it will require a single person be responsible for very large numbers of these drones. I’m not saying our work is a final solution that shows everything is OK, but it is the first step toward getting additional data that would facilitate that kind of a system.”
    The results, published in Field Robotics, stem from the Defense Advanced Research Projects Agency’s program known as OFFSET, short for Offensive Swarm-Enabled Tactics. Adams was part of a group that received an OFFSET grant in 2017.
    During the course of the four-year project, researchers deployed swarms of up to 250 autonomous vehicles — multi-rotor aerial drones and ground rovers — able to gather information in “concrete canyon” urban surroundings where line-of-sight, satellite-based communication is impaired by buildings. The information the swarms collect during their missions at military urban training sites has the potential to help keep U.S. troops and civilians safer.
    Adams was a co-principal investigator on one of two swarm system integrator teams that developed the system infrastructure and integrated the work of other teams focused on swarm tactics, swarm autonomy, human-swarm teaming, physical experimentation and virtual environments.
    “The project required taking off-the-shelf technologies and building the autonomy needed for them to be deployed by a single human called the swarm commander,” said Adams, the associate director for deployed systems and policy at OSU’s Collaborative Robotics and Intelligent Systems Institute. “That work also required developing not just the needed systems and the software, but also the user interface for that swarm commander to allow a single human to deploy these ground and aerial systems.”
    Collaborators with Smart Information Flow Technologies developed a virtual reality interface called I3 that lets the commander control the swarm with high-level directions.

    “The commanders weren’t physically driving each individual vehicle, because if you’re deploying that many vehicles, they can’t — a single human can’t do that,” Adams said. “The idea is that the swarm commander can select a play to be executed and can make minor adjustments to it, like a quarterback would in the NFL. The objective data from the trained swarm commanders demonstrated that a single human can deploy these systems in built environments, which has very broad implications beyond this project.”
    Testing took place at multiple Department of Defense Combined Armed Collective Training Facilities. Each multiday field exercise introduced additional vehicles, and every 10 minutes swarm commanders provided information about their workload and how stressed or fatigued they were.
    During the final field exercise, featuring more than 100 vehicles, the commanders’ workload levels were also assessed through physiological sensors that fed information into an algorithm that estimates someone’s sensory channel workload levels and their overall workload.
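    The press release does not detail the workload algorithm itself, so the snippet below is only a hedged sketch of the general idea: normalize physiological features into per-channel workload scores, combine them into an overall estimate, and flag when that estimate crosses an overload threshold. The channel names, weights and threshold value are illustrative assumptions, not the ones used in the OFFSET experiments.

```python
from dataclasses import dataclass

# Illustrative sensory/resource channels; the study's actual channels and
# weighting scheme are not specified in this article.
CHANNELS = ("visual", "auditory", "cognitive", "motor", "speech")
OVERLOAD_THRESHOLD = 0.7  # hypothetical normalized threshold

@dataclass
class WorkloadEstimate:
    per_channel: dict[str, float]   # each in [0, 1]
    overall: float                  # weighted mean in [0, 1]
    overloaded: bool

def estimate_workload(sensor_features: dict[str, float],
                      weights: dict[str, float] | None = None) -> WorkloadEstimate:
    """Combine normalized physiological features (one per channel) into a workload estimate."""
    weights = weights or {ch: 1.0 for ch in CHANNELS}
    per_channel = {ch: min(max(sensor_features.get(ch, 0.0), 0.0), 1.0) for ch in CHANNELS}
    total_weight = sum(weights[ch] for ch in CHANNELS)
    overall = sum(weights[ch] * per_channel[ch] for ch in CHANNELS) / total_weight
    return WorkloadEstimate(per_channel, overall, overall > OVERLOAD_THRESHOLD)

# Example: features already normalized from heart rate, eye tracking, posture, etc.
print(estimate_workload({"visual": 0.8, "cognitive": 0.9, "auditory": 0.3}))
```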
    “The swarm commanders’ workload estimate did cross the overload threshold frequently, but just for a few minutes at a time, and the commander was able to successfully complete the missions, often under challenging temperature and wind conditions,” Adams said.

  • Computer-engineered DNA to study cell identities

    A new computer program allows scientists to design synthetic DNA segments that indicate, in real time, the state of cells. Reported by the Gargiulo lab in Nature Communications, the tool will be used to screen for drugs against cancer or viral infections, or to improve gene and cell-based immunotherapies.
    All the cells in our body have the same genetic code, and yet they can differ in their identities, functions and disease states. Telling one cell apart from another in a simple manner, in real time, would prove invaluable for scientists trying to understand inflammation, infections or cancers. Now, scientists at the Max Delbrück Center have created an algorithm that can design such tools: segments of DNA called “synthetic locus control regions” (sLCRs) that reveal the identity and state of cells. These sLCRs can be used in a variety of biological systems. The findings, by the lab of Dr Gaetano Gargiulo, head of the Molecular Oncology Lab, are reported in Nature Communications.
    “This algorithm enables us to create precise DNA tools for marking and studying cells, offering new insights into cellular behaviors,” says Gargiulo, senior author of the study. “We hope this research opens doors to a more straightforward and scalable way of understanding and manipulating cells.”
    This effort began when Dr Carlos Company, a former graduate student at the Gargiulo lab and co-first author of the study, started to invest energy into making the design of the DNA tools automated and accessible to other scientists. He coded an algorithm that can generate tools to understand basic cellular processes as well as disease processes such as cancers, inflammation and infections.
    “This tool allows researchers to examine the way cells transform from one type to another. It is particularly innovative because it compiles all the crucial instructions that direct these changes into a simple synthetic DNA sequence. In turn, this simplifies studying complex cellular behaviors in important areas like cancer research and human development,” says Company.
    Algorithm to make a tailored DNA tool
    The computer program is named “logical design of synthetic cis-regulatory DNA” (LSD). The researchers input the known genes and transcription factors associated with the specific cell states they want to study, and the program uses this information to identify DNA segments (promoters and enhancers) that control gene activity in the cells of interest. This information is sufficient to discover functional sequences, so scientists do not have to know the precise genetic or molecular reason behind a cell’s behavior; they just have to construct the sLCR.

    The program looks within the genomes of either humans or mice to find places where transcription factors are highly likely to bind, says Yuliia Dramaretska, a graduate student at the Gargiulo lab and co-first author. It spits out a list of relevant 150-base-pair sequences, which likely act as the active promoters and enhancers for the condition being studied.
    “It’s not giving a random list of those regions, obviously,” she says. “The algorithm is actually ranking them and finding the segments that will most efficiently represent the phenotype you want to study.”
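    The published LSD code is not reproduced here, but its core idea is simple: score candidate genomic windows by how strongly the chosen transcription factors are predicted to bind, then rank them. The toy sketch below does this with ordinary position weight matrix (PWM) scanning; the motif, scoring scheme, 10-bp scan step and random “genome fragment” are all invented, while the real program works from curated motif models and genome annotations and applies its own ranking criteria.

```python
import numpy as np

BASES = "ACGT"
WINDOW = 150  # the article describes 150-base-pair candidate regions

def pwm_score(window: str, pwm: np.ndarray) -> float:
    """Best log-odds match of a PWM (4 x motif_length) anywhere within a window."""
    m = pwm.shape[1]
    best = float("-inf")
    for i in range(len(window) - m + 1):
        score = sum(pwm[BASES.index(b), j] for j, b in enumerate(window[i:i + m]) if b in BASES)
        best = max(best, score)
    return best

def rank_windows(sequence: str, pwms: list[np.ndarray], top_n: int = 5):
    """Rank 150-bp windows by their combined best-match scores across all PWMs."""
    scored = []
    for start in range(0, len(sequence) - WINDOW + 1, 10):  # 10-bp step for speed
        window = sequence[start:start + WINDOW]
        scored.append((sum(pwm_score(window, pwm) for pwm in pwms), start, window))
    return sorted(scored, reverse=True)[:top_n]

# Toy example: a 6-bp motif with a strong preference for "ACGTCA" (hypothetical factor).
toy_pwm = np.full((4, 6), -1.0)
for j, base in enumerate("ACGTCA"):
    toy_pwm[BASES.index(base), j] = 2.0

genome_fragment = "".join(np.random.default_rng(0).choice(list(BASES), 2000)) + "ACGTCA" * 3
top = rank_windows(genome_fragment, [toy_pwm])
print([(round(score, 1), start) for score, start, _ in top])  # best-scoring window starts
```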
    Like a lamp inside the cells
    Scientists can then make a tool, called a “synthetic locus control region” (sLCR), which includes the generated sequence followed by a DNA segment encoding a fluorescent protein. “The sLCRs are like an automated lamp that you can put inside of the cells. This lamp switches on only under the conditions you want to study,” says Dr Michela Serresi, a researcher at the Gargiulo lab and co-first author. The color of the “lamp” can be varied to match different states of interest, so that scientists can look under a fluorescence microscope and immediately know the state of each cell from its color. “We can follow with our eyes the color in a petri dish when we give a treatment,” Serresi says.
    The scientists have validated the utility of the computer program by using it to screen for drugs in SARS-CoV-2-infected cells, as published last year in Science Advances. They also used it to find mechanisms implicated in brain cancers called glioblastomas, where no single treatment works. “In order to find treatment combinations that work for specific cell states in glioblastomas, you not only need to understand what defines these cell states, but you also need to see them as they arise,” says Dr Matthias Jürgen Schmitt, a researcher at the Gargiulo lab and co-first author, who used the tools in the lab to showcase their value.
    Now, imagine immune cells engineered in the lab as a gene therapy to kill a type of cancer. When infused into the patient, not all these cells will work as intended. Some will be potent, while others may be in a dysfunctional state. Funded by a European Research Council grant, the Gargiulo lab will be using this system to study the behavior of these delicate anti-cancer cell-based therapeutics during manufacturing. “With the right collaborations, this method holds potential for advancing treatments in areas like cancer, viral infections, and immunotherapies,” Gargiulo says.

  • Direct view of tantalum oxidation that impedes qubit coherence

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and DOE’s Pacific Northwest National Laboratory (PNNL) have used a combination of scanning transmission electron microscopy (STEM) and computational modeling to get a closer look and deeper understanding of tantalum oxide. When this amorphous oxide layer forms on the surface of tantalum — a superconductor that shows great promise for making the “qubit” building blocks of a quantum computer — it can impede the material’s ability to retain quantum information. Learning how the oxide forms may offer clues as to why this happens — and potentially point to ways to prevent quantum coherence loss. The research was recently published in the journal ACS Nano.
    The paper builds on earlier research by a team at Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University that was conducted as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center in which Princeton is a key partner.
    “In that work, we used X-ray photoemission spectroscopy at NSLS-II to infer details about the type of oxide that forms on the surface of tantalum when it is exposed to oxygen in the air,” said Mingzhao Liu, a CFN scientist and one of the lead authors on the study. “But we wanted to understand more about the chemistry of this very thin layer of oxide by making direct measurements,” he explained.
    So, in the new study, the team partnered with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department to use advanced STEM techniques that enabled them to study the ultrathin oxide layer directly. They also worked with theorists at PNNL who performed computational modeling that revealed the most likely arrangements and interactions of atoms in the material as they underwent oxidation. Together, these methods helped the team build an atomic-level understanding of the ordered crystalline lattice of tantalum metal, the amorphous oxide that forms on its surface, and intriguing new details about the interface between these layers.
    “The key is to understand the interface between the surface oxide layer and the tantalum film because this interface can profoundly impact qubit performance,” said study co-author Yimei Zhu, a physicist from CMPMS, echoing the wisdom of Nobel laureate Herbert Kroemer, who famously asserted, “The interface is the device.”
    Emphasizing that “quantitatively probing a mere one-to-two-atomic-layer-thick interface poses a formidable challenge,” Zhu noted, “we were able to directly measure the atomic structures and bonding states of the oxide layer and tantalum film as well as identify those of the interface using the advanced electron microscopy techniques developed at Brookhaven.”
    “The measurements reveal that the interface consists of a ‘suboxide’ layer nestled between the periodically ordered tantalum atoms and the fully disordered amorphous tantalum oxide. Within this suboxide layer, only a few oxygen atoms are integrated into the tantalum crystal lattice,” Zhu said.

    The combined structural and chemical measurements offer a crucially detailed perspective on the material. Density functional theory calculations then helped the scientists validate and gain deeper insight into these observations.
    “We simulated the effect of gradual surface oxidation by gradually increasing the number of oxygen species at the surface and in the subsurface region,” said Peter Sushko, one of the PNNL theorists.
    By assessing the thermodynamic stability, structure, and electronic property changes of the tantalum films during oxidation, the scientists concluded that while the fully oxidized amorphous layer acts as an insulator, the suboxide layer retains features of a metal.
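    The article summarizes the modeling in words, but the thermodynamic bookkeeping behind statements like these is straightforward: each partially oxidized surface is judged by its formation energy per oxygen atom relative to the clean tantalum slab and molecular O2. The sketch below spells out that standard formula with placeholder energies; the actual values come from the team’s density functional theory calculations and are not reproduced here.

```python
def oxidation_energy_per_O(e_oxidized_slab: float,
                           e_clean_slab: float,
                           n_oxygen: int,
                           e_O2: float) -> float:
    """Formation energy per added O atom (eV), referenced to 1/2 of molecular O2.

    Negative values mean adding that much oxygen is thermodynamically favorable.
    """
    return (e_oxidized_slab - e_clean_slab - 0.5 * n_oxygen * e_O2) / n_oxygen

# Hypothetical total energies (eV) for a clean Ta slab, the same slab with
# increasing oxygen content, and an O2 molecule; placeholders, not DFT results.
E_CLEAN, E_O2 = -500.0, -9.86
for n_O, e_slab in [(2, -518.0), (4, -535.5), (8, -568.0)]:
    print(f"n_O = {n_O}: {oxidation_energy_per_O(e_slab, E_CLEAN, n_O, E_O2):+.2f} eV per O")
```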
    “We always thought if the tantalum is oxidized, it becomes completely amorphous, with no crystalline order at all,” said Liu. “But in the suboxide layer, the tantalum sites are still quite ordered.”
    With the presence of both fully oxidized tantalum and a suboxide layer, the scientists wanted to understand which part is most responsible for the loss of coherence in qubits made of this superconducting material.
    “It’s likely the oxide has multiple roles,” Liu said.

    First, he noted, the fully oxidized amorphous layer contains many lattice defects. That is, the locations of the atoms are not well defined. Some atoms can shift around to different configurations, each with a different energy level. Though these shifts are small, each one consumes a tiny bit of electrical energy, which contributes to loss of energy from the qubit.
    “This so-called two-level system loss in an amorphous material brings parasitic and irreversible loss to the quantum coherence — the ability of the material to hold onto quantum information,” Liu said.
    But because the suboxide layer is still crystalline, “it may not be as bad as people were thinking,” Liu said. Maybe the more-fixed atomic arrangements in this layer will minimize two-level system loss.
    Then again, he noted, because the suboxide layer has some metallic characteristics, it could cause other problems.
    “When you put a normal metal next to a superconductor, that could contribute to breaking up the pairs of electrons that move through the material with no resistance,” he noted. “If the pair breaks into two electrons again, then you will have loss of superconductivity and coherence. And that is not what you want.”
    Future studies may reveal more details and strategies for preventing loss of superconductivity and quantum coherence in tantalum.
    This research was funded by the DOE Office of Science (BES). In addition to the experimental facilities described above, this research used computational resources at CFN and at the National Energy Research Scientific Computing Center (NERSC) at DOE’s Lawrence Berkeley National Laboratory. CFN, NSLS-II, and NERSC are DOE Office of Science user facilities.

  • Magnesium protects tantalum, a promising material for making qubits

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have discovered that adding a layer of magnesium improves the properties of tantalum, a superconducting material that shows great promise for building qubits, the basis of quantum computers. As described in a paper just published in the journal Advanced Materials, a thin layer of magnesium keeps tantalum from oxidizing, improves its purity, and raises the temperature at which it operates as a superconductor. All three may increase tantalum’s ability to hold onto quantum information in qubits.
    This work builds on earlier studies in which a team from Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University sought to understand the tantalizing characteristics of tantalum, and then worked with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department and theorists at DOE’s Pacific Northwest National Laboratory (PNNL) to reveal details about how the material oxidizes.
    Those studies showed why oxidation is an issue.
    “When oxygen reacts with tantalum, it forms an amorphous insulating layer that saps tiny bits of energy from the current moving through the tantalum lattice. That energy loss disrupts quantum coherence — the material’s ability to hold onto quantum information in a coherent state,” explained CFN scientist Mingzhao Liu, a lead author on the earlier studies and the new work.
    While the oxidation of tantalum is usually self-limiting — a key reason for its relatively long coherence time — the team wanted to explore strategies to further restrain oxidation to see if they could improve the material’s performance.
    “The reason tantalum oxidizes is that you have to handle it in air and the oxygen in air will react with the surface,” Liu explained. “So, as chemists, can we do something to stop that process? One strategy is to find something to cover it up.”
    All this work is being carried out as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center. While ongoing studies explore different kinds of cover materials, the new paper describes a promising first approach: coating the tantalum with a thin layer of magnesium.

    “When you make a tantalum film, it is always in a high-vacuum chamber, so there is not much oxygen to speak of,” said Liu. “The problem always happens when you take it out. So, we thought, without breaking the vacuum, after we put the tantalum layer down, maybe we can put another layer, like magnesium, on top to block the surface from interacting with the air.”
    Studies using transmission electron microscopy to image structural and chemical properties of the material, atomic layer by atomic layer, showed that the strategy to coat tantalum with magnesium was remarkably successful. The magnesium formed a thin layer of magnesium oxide on the tantalum surface that appears to keep oxygen from getting through.
    “Electron microscopy techniques developed at Brookhaven Lab enabled direct visualization not only of the chemical distribution and atomic arrangement within the thin magnesium coating layer and the tantalum film but also of the changes of their oxidation states,” said Yimei Zhu, a study co-author from CMPMS. “This information is extremely valuable in comprehending the material’s electronic behavior,” he noted.
    X-ray photoelectron spectroscopy studies at NSLS-II revealed the impact of the magnesium coating on limiting the formation of tantalum oxide. The measurements indicated that an extremely thin layer of tantalum oxide — less than one nanometer thick — remains confined directly beneath the magnesium/tantalum interface without disrupting the rest of the tantalum lattice.
    “This is in stark contrast to uncoated tantalum, where the tantalum oxide layer can be more than three nanometers thick — and significantly more disruptive to the electronic properties of tantalum,” said study co-author Andrew Walter, a lead beamline scientist in the Soft X-ray Scattering & Spectroscopy program at NSLS-II.
    Collaborators at PNNL then used computational modeling at the atomic scale to identify the most likely arrangements and interactions of the atoms based on their binding energies and other characteristics. These simulations helped the team develop a mechanistic understanding of why magnesium works so well.

    At the simplest level, the calculations revealed that magnesium has a higher affinity for oxygen than tantalum does.
    “While oxygen has a high affinity to tantalum, it is ‘happier’ to stay with the magnesium than with the tantalum,” said Peter Sushko, one of the PNNL theorists. “So, the magnesium reacts with oxygen to form a protective magnesium oxide layer. You don’t even need that much magnesium to do the job. Just two nanometers of thickness of magnesium almost completely blocks the oxidation of tantalum.”
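    One way to put numbers on that “affinity” argument is to compare the standard formation enthalpies of the competing oxides per mole of oxygen atoms, which is what the small sketch below does. The values used (roughly −602 kJ/mol for MgO and −2046 kJ/mol for Ta2O5) are approximate literature figures, not the paper’s computed energetics, and should be checked against a thermochemistry table before being relied on.

```python
# Approximate standard enthalpies of formation (kJ per mole of compound);
# treat these as ballpark literature values, not the paper's calculated energies.
OXIDES = {
    "MgO":   {"dHf": -601.6, "O_atoms": 1},
    "Ta2O5": {"dHf": -2046.0, "O_atoms": 5},
}

def enthalpy_per_mole_O(name: str) -> float:
    """Formation enthalpy normalized per mole of oxygen atoms in the oxide."""
    oxide = OXIDES[name]
    return oxide["dHf"] / oxide["O_atoms"]

for name in OXIDES:
    print(f"{name}: {enthalpy_per_mole_O(name):7.1f} kJ per mol O")

# MgO comes out roughly 190 kJ/mol O more exothermic, which is the sense in which
# oxygen is "happier" bound to magnesium than to tantalum.
```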
    The scientists also demonstrated that the protection lasts a long time: “Even after one month, the tantalum is still in pretty good shape. Magnesium is a really good oxygen barrier,” Liu concluded.
    The magnesium had an unexpected beneficial effect: It “sponged out” inadvertent impurities in the tantalum and, as a result, raised the temperature at which it operates as a superconductor.
    “Even though we are making these materials in a vacuum, there is always some residual gas — oxygen, nitrogen, water vapor, hydrogen. And tantalum is very good at sucking up these impurities,” Liu explained. “No matter how careful you are, you will always have these impurities in your tantalum.”
    But when the scientists added the magnesium coating, they discovered that its strong affinity for the impurities pulled them out. The resulting purer tantalum had a higher superconducting transition temperature.
    That could be very important for applications because most superconductors must be kept very cold to operate. In these ultracold conditions, most of the conducting electrons pair up and move through the material with no resistance.
    “Even a slight elevation in the transition temperature could reduce the number of remaining, unpaired electrons,” Liu said, potentially making the material a better superconductor and increasing its quantum coherence time.
    “There will have to be follow-up studies to see if this material improves qubit performance,” Liu said. “But this work provides valuable insights and new materials design principles that could help pave the way to the realization of large-scale, high-performance quantum computing systems.”

  • A sleeker facial recognition technology tested on Michelangelo’s David

    Many people are familiar with facial recognition systems that unlock smartphones and game systems or allow access to our bank accounts online. But the current technology can require boxy projectors and lenses. Now, researchers report in ACS’ Nano Letters a sleeker 3D surface imaging system with flatter, simplified optics. In proof-of-concept demonstrations, the new system recognized the face of Michelangelo’s David just as well as an existing smartphone system.
    3D surface imaging is a common tool used in smartphone facial recognition, as well as in computer vision and autonomous driving. These systems typically consist of a dot projector that contains multiple components: a laser, lenses, a light guide and a diffractive optical element (DOE). The DOE is a special kind of lens that breaks the laser beam into an array of about 32,000 infrared dots. So, when a person looks at a locked screen, the facial recognition system projects an array of dots onto most of their face, and the device’s camera reads the pattern created to confirm the identity. However, dot projector systems are relatively large for small devices such as smartphones. So, Yu-Heng Hong, Hao-Chung Kuo, Yao-Wei Huang and colleagues set out to develop a more compact facial recognition system that would be nearly flat and require less energy to operate.
    To do this, the researchers replaced a traditional dot projector with a low-power laser and a flat gallium arsenide surface, significantly reducing the imaging device’s size and power consumption. They etched the top of this thin metallic surface with a nanopillar pattern, which creates a metasurface that scatters light as it passes through the material. In this prototype, the low-powered laser light scatters into 45,700 infrared dots that are projected onto an object or face positioned in front of the light source. Like the dot projector system, the new system incorporates a camera to read the patterns that the infrared dots create.
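    For readers wondering how a camera turns a projected dot pattern into 3D shape: in standard structured-light systems, each dot’s lateral shift (disparity) between its expected and observed position on the sensor encodes depth through triangulation, Z = f·B/d. The snippet below is a generic sketch of that relation, not the authors’ reconstruction pipeline, and the baseline, focal length and disparity values are made up.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_px: float,
                         baseline_m: float) -> np.ndarray:
    """Classic triangulation for a projector-camera pair: Z = f * B / d.

    disparity_px: per-dot shift (pixels) between expected and observed position.
    """
    d = np.clip(disparity_px, 1e-6, None)  # avoid division by zero
    return focal_length_px * baseline_m / d

# Hypothetical numbers: 1.4 mm projector-camera baseline, 600 px focal length.
disparities = np.array([2.0, 2.4, 3.1, 5.0])   # pixels, one per matched infrared dot
depths = depth_from_disparity(disparities, focal_length_px=600.0, baseline_m=1.4e-3)
print(np.round(depths, 3))  # depths in meters; closer dots shift more
```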
    In tests of the prototype, the system accurately identified a 3D replica of Michelangelo’s David by comparing the infrared dot patterns to online photos of the famous statue. Notably, it accomplished this using five to 10 times less power and on a platform with a surface area about 230 times smaller than a common dot-projector system. The researchers say their prototype demonstrates the usefulness of metasurfaces for effective small-scale low-power imaging solutions for facial recognition, robotics and extended reality.
    The authors acknowledge funding from Hon Hai Precision Industry, the National Science and Technology Council in Taiwan, and the Ministry of Education in Taiwan.

  • A physical qubit with built-in error correction

    Researchers at the universities of Mainz, Olomouc, and Tokyo succeeded in generating a logical qubit from a single light pulse that has the inherent capacity to correct errors.
    There has been significant progress in the field of quantum computing. Big global players, such as Google and IBM, are already offering cloud-based quantum computing services. However, quantum computers cannot yet help with problems that occur when standard computers reach the limits of their capacities because the availability of qubits or quantum bits, i.e., the basic units of quantum information, is still insufficient. One of the reasons for this is that bare qubits are not of immediate use for running a quantum algorithm.
    While the binary bits of conventional computers store information as fixed values of either 0 or 1, qubits can represent 0 and 1 at the same time, so that their value is a matter of probability. This is known as quantum superposition. Qubits are also very susceptible to external influences, which means that the information they store can readily be lost. In order to ensure that quantum computers supply reliable results, it is necessary to generate a genuine entanglement that joins together several physical qubits to form a logical qubit. Should one of these physical qubits fail, the other qubits will retain the information. However, one of the main difficulties preventing the development of functional quantum computers is the large number of physical qubits required.
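    The idea that “should one of these physical qubits fail, the other qubits will retain the information” is exactly what the textbook three-qubit repetition code does for bit-flip errors. The numpy sketch below works through that standard example (it illustrates the general principle, not the photonic scheme reported here): it encodes one logical qubit into three physical qubits, flips one of them, reads two parity checks, and undoes the error.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode one logical qubit a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
logical = np.zeros(8)
logical[0b000], logical[0b111] = a, b

# A bit-flip error on physical qubit 0 (the leftmost one).
corrupted = kron(X, I, I) @ logical

# Parity checks Z1Z2 and Z2Z3 identify which qubit flipped without
# disturbing the encoded amplitudes (their eigenvalues here are +/-1).
s12 = np.sign(corrupted @ kron(Z, Z, I) @ corrupted)
s23 = np.sign(corrupted @ kron(I, Z, Z) @ corrupted)

# Syndrome table: (-1, +1) -> qubit 0, (-1, -1) -> qubit 1, (+1, -1) -> qubit 2.
correction = {(-1, 1): kron(X, I, I), (-1, -1): kron(I, X, I),
              (1, -1): kron(I, I, X), (1, 1): kron(I, I, I)}
recovered = correction[(int(s12), int(s23))] @ corrupted

print(np.allclose(recovered, logical))  # True: the logical state is restored
```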
    Advantages of a photon-based approach
    Many different concepts are being employed to make quantum computing viable. Large corporations currently rely on superconducting solid-state systems, for example, but these have the disadvantage that they only function at temperatures close to absolute zero. Photonic concepts, on the other hand, work at room temperature. Single photons usually serve as physical qubits here. These photons, which are, in a sense, tiny particles of light, inherently operate more rapidly than solid-state qubits but, at the same time, are more easily lost. To avoid qubit losses and other errors, it is necessary to couple several single-photon light pulses together to construct a logical qubit — as in the case of the superconductor-based approach.
    A qubit with the inherent capacity for error correction
    Researchers at the University of Tokyo together with colleagues from Johannes Gutenberg University Mainz (JGU) in Germany and Palacký University Olomouc in the Czech Republic have recently demonstrated a new means of constructing a photonic quantum computer. Rather than using a single photon, the team employed a laser-generated light pulse that can consist of several photons. “Our laser pulse was converted to a quantum optical state that gives us an inherent capacity to correct errors,” stated Professor Peter van Loock of Mainz University. “Although the system consists only of a laser pulse and is thus very small, it can — in principle — eradicate errors immediately.” Thus, there is no need to generate individual photons as qubits via numerous light pulses and then have them interact as logical qubits. “We need just a single light pulse to obtain a robust logical qubit,” added van Loock. In other words, a physical qubit is already equivalent to a logical qubit in this system — a remarkable and unique concept. However, the logical qubit experimentally produced at the University of Tokyo was not yet of a sufficient quality to provide the necessary level of error tolerance. Nonetheless, the researchers have clearly demonstrated that it is possible to transform non-universally correctable qubits into correctable qubits using the most innovative quantum optical methods.
    The corresponding research results have recently been published in Science. They are based on a collaboration going back some 20 years between the experimental group of Akira Furusawa in Japan and the theoretical team of Peter van Loock in Germany.