More stories

  • Magnesium protects tantalum, a promising material for making qubits

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have discovered that adding a layer of magnesium improves the properties of tantalum, a superconducting material that shows great promise for building qubits, the basis of quantum computers. As described in a paper just published in the journal Advanced Materials, a thin layer of magnesium keeps tantalum from oxidizing, improves its purity, and raises the temperature at which it operates as a superconductor. All three may increase tantalum’s ability to hold onto quantum information in qubits.
    This work builds on earlier studies in which a team from Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University sought to understand the tantalizing characteristics of tantalum, and then worked with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department and theorists at DOE’s Pacific Northwest National Laboratory (PNNL) to reveal details about how the material oxidizes.
    Those studies showed why oxidation is an issue.
    “When oxygen reacts with tantalum, it forms an amorphous insulating layer that saps tiny bits of energy from the current moving through the tantalum lattice. That energy loss disrupts quantum coherence — the material’s ability to hold onto quantum information in a coherent state,” explained CFN scientist Mingzhao Liu, a lead author on the earlier studies and the new work.
    While the oxidation of tantalum is usually self-limiting — a key reason for its relatively long coherence time — the team wanted to explore strategies to further restrain oxidation to see if they could improve the material’s performance.
    “The reason tantalum oxidizes is that you have to handle it in air and the oxygen in air will react with the surface,” Liu explained. “So, as chemists, can we do something to stop that process? One strategy is to find something to cover it up.”
    All this work is being carried out as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center. While ongoing studies explore different kinds of cover materials, the new paper describes a promising first approach: coating the tantalum with a thin layer of magnesium.

    “When you make a tantalum film, it is always in a high-vacuum chamber, so there is not much oxygen to speak of,” said Liu. “The problem always happens when you take it out. So, we thought, without breaking the vacuum, after we put the tantalum layer down, maybe we can put another layer, like magnesium, on top to block the surface from interacting with the air.”
    Studies using transmission electron microscopy to image structural and chemical properties of the material, atomic layer by atomic layer, showed that the strategy to coat tantalum with magnesium was remarkably successful. The magnesium formed a thin layer of magnesium oxide on the tantalum surface that appears to keep oxygen from getting through.
    “Electron microscopy techniques developed at Brookhaven Lab enabled direct visualization not only of the chemical distribution and atomic arrangement within the thin magnesium coating layer and the tantalum film but also of the changes of their oxidation states,” said Yimei Zhu, a study co-author from CMPMS. “This information is extremely valuable in comprehending the material’s electronic behavior,” he noted.
    X-ray photoelectron spectroscopy studies at NSLS-II revealed the impact of the magnesium coating on limiting the formation of tantalum oxide. The measurements indicated that an extremely thin layer of tantalum oxide — less than one nanometer thick — remains confined directly beneath the magnesium/tantalum interface without disrupting the rest of the tantalum lattice.
    “This is in stark contrast to uncoated tantalum, where the tantalum oxide layer can be more than three nanometers thick — and significantly more disruptive to the electronic properties of tantalum,” said study co-author Andrew Walter, a lead beamline scientist in the Soft X-ray Scattering & Spectroscopy program at NSLS-II.
    Collaborators at PNNL then used computational modeling at the atomic scale to identify the most likely arrangements and interactions of the atoms based on their binding energies and other characteristics. These simulations helped the team develop a mechanistic understanding of why magnesium works so well.

    At the simplest level, the calculations revealed that magnesium has a higher affinity for oxygen than tantalum does.
    “While oxygen has a high affinity to tantalum, it is ‘happier’ to stay with the magnesium than with the tantalum,” said Peter Sushko, one of the PNNL theorists. “So, the magnesium reacts with oxygen to form a protective magnesium oxide layer. You don’t even need that much magnesium to do the job. Just two nanometers of thickness of magnesium almost completely blocks the oxidation of tantalum.”
    The scientists also demonstrated that the protection lasts a long time: “Even after one month, the tantalum is still in pretty good shape. Magnesium is a really good oxygen barrier,” Liu concluded.
    The magnesium had an unexpected beneficial effect: It “sponged out” inadvertent impurities in the tantalum and, as a result, raised the temperature at which the tantalum operates as a superconductor.
    “Even though we are making these materials in a vacuum, there is always some residual gas — oxygen, nitrogen, water vapor, hydrogen. And tantalum is very good at sucking up these impurities,” Liu explained. “No matter how careful you are, you will always have these impurities in your tantalum.”
    But when the scientists added the magnesium coating, they discovered that its strong affinity for the impurities pulled them out. The resulting purer tantalum had a higher superconducting transition temperature.
    That could be very important for applications because most superconductors must be kept very cold to operate. In these ultracold conditions, most of the conducting electrons pair up and move through the material with no resistance.
    “Even a slight elevation in the transition temperature could reduce the number of remaining, unpaired electrons,” Liu said, potentially making the material a better superconductor and increasing its quantum coherence time.
    “There will have to be follow-up studies to see if this material improves qubit performance,” Liu said. “But this work provides valuable insights and new materials design principles that could help pave the way to the realization of large-scale, high-performance quantum computing systems.”

  • A sleeker facial recognition technology tested on Michelangelo’s David

    Many people are familiar with facial recognition systems that unlock smartphones and game systems or allow access to our bank accounts online. But the current technology can require boxy projectors and lenses. Now, researchers report in ACS’ Nano Letters a sleeker 3D surface imaging system with flatter, simplified optics. In proof-of-concept demonstrations, the new system recognized the face of Michelangelo’s David just as well as an existing smartphone system.
    3D surface imaging is a common tool used in smartphone facial recognition, as well as in computer vision and autonomous driving. These systems typically consist of a dot projector that contains multiple components: a laser, lenses, a light guide and a diffractive optical element (DOE). The DOE is a special kind of lens that breaks the laser beam into an array of about 32,000 infrared dots. So, when a person looks at a locked screen, the facial recognition system projects an array of dots onto most of their face, and the device’s camera reads the pattern created to confirm the identity. However, dot projector systems are relatively large for small devices such as smartphones. So, Yu-Heng Hong, Hao-Chung Kuo, Yao-Wei Huang and colleagues set out to develop a more compact facial recognition system that would be nearly flat and require less energy to operate.
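    Whether the dots come from a conventional dot projector or from a newer flat optic, the depth information itself is typically recovered by triangulation: each projected dot shifts sideways in the camera image by an amount that depends on how far away the surface is. The short sketch below illustrates only that generic structured-light principle; the focal length and baseline values are illustrative assumptions, not parameters from the study.

        # Minimal sketch of the triangulation idea behind dot-projector depth
        # sensing (generic structured-light geometry, not the authors' optics).
        # The projector and camera sit a small distance apart, so each infrared
        # dot appears shifted in the camera image by a "disparity" that depends
        # on the depth of the surface it lands on.

        FOCAL_LENGTH_PX = 1400.0   # assumed camera focal length, in pixels
        BASELINE_M = 0.01          # assumed projector-camera separation, 1 cm

        def depth_from_disparity(disparity_px: float) -> float:
            """Depth (in meters) of a dot observed with the given disparity."""
            return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

        # Dots landing on a nearby nose shift more than dots on the cheeks, so
        # the map of disparities encodes the 3D shape of the face being scanned.
        for disparity in (70.0, 47.0, 35.0):   # example disparities, in pixels
            print(f"disparity {disparity:5.1f} px -> depth {depth_from_disparity(disparity):.3f} m")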
    To do this, the researchers replaced a traditional dot projector with a low-power laser and a flat gallium arsenide surface, significantly reducing the imaging device’s size and power consumption. They etched the top of this thin semiconductor surface with a nanopillar pattern, creating a metasurface that scatters light as it passes through the material. In this prototype, the low-power laser light scatters into 45,700 infrared dots that are projected onto an object or face positioned in front of the light source. Like the dot projector system, the new system incorporates a camera to read the pattern created by the infrared dots.
    In tests of the prototype, the system accurately identified a 3D replica of Michelangelo’s David by comparing the infrared dot patterns to online photos of the famous statue. Notably, it accomplished this using five to 10 times less power and on a platform with a surface area about 230 times smaller than a common dot-projector system. The researchers say their prototype demonstrates the usefulness of metasurfaces for effective small-scale low-power imaging solutions for facial recognition, robotics and extended reality.
    The authors acknowledge funding from Hon Hai Precision Industry, the National Science and Technology Council in Taiwan, and the Ministry of Education in Taiwan.

  • A physical qubit with built-in error correction

    Researchers at the universities of Mainz, Olomouc, and Tokyo succeeded in generating a logical qubit from a single light pulse that has the inherent capacity to correct errors.
    There has been significant progress in the field of quantum computing. Big global players, such as Google and IBM, already offer cloud-based quantum computing services. However, quantum computers cannot yet help with the problems that arise when standard computers reach the limits of their capacity, because the number of available qubits, or quantum bits (the basic units of quantum information), is still insufficient. One reason is that bare qubits are not of immediate use for running a quantum algorithm.
    While the binary bits of conventional computers store information as fixed values of either 0 or 1, qubits can represent 0 and 1 at the same time, so that their value is a matter of probability. This is known as quantum superposition. It also makes qubits very susceptible to external influences, which means the information they store can easily be lost. To ensure that quantum computers deliver reliable results, several physical qubits must be joined together through genuine entanglement to form a logical qubit: should one of the physical qubits fail, the others retain the information. However, one of the main difficulties preventing the development of functional quantum computers is the large number of physical qubits required.
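    The redundancy idea can be illustrated with a deliberately simplified classical analogy: a repetition code that spreads one logical bit over several physical bits and recovers it by majority vote. Real quantum codes are more involved, since they must also protect superpositions without directly reading the encoded value, but the sketch below (a toy example, not the scheme used in this study) shows why a redundantly encoded logical bit survives errors that would destroy a single physical bit.

        import random

        # Simplified classical analogy of a logical qubit: spread one logical bit
        # over several physical bits and recover it by majority vote, so the
        # information survives even if one physical bit is flipped by noise.
        # (Quantum codes must also preserve superpositions, which this toy
        # example does not capture.)

        def encode(logical_bit, n_physical=3):
            return [logical_bit] * n_physical

        def noisy_channel(bits, flip_prob=0.1):
            return [b ^ 1 if random.random() < flip_prob else b for b in bits]

        def decode(bits):
            return int(sum(bits) > len(bits) / 2)   # majority vote

        random.seed(0)
        trials = 100_000
        bare_errors = sum(decode(noisy_channel([1])) != 1 for _ in range(trials))
        coded_errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
        print(f"error rate, single physical bit:    {bare_errors / trials:.3f}")   # roughly 0.10
        print(f"error rate, 3-bit logical encoding: {coded_errors / trials:.3f}")  # roughly 0.03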
    Advantages of a photon-based approach
    Many different concepts are being employed to make quantum computing viable. Large corporations currently rely on superconducting solid-state systems, for example, but these have the disadvantage that they only function at temperatures close to absolute zero. Photonic concepts, on the other hand, work at room temperature. Single photons usually serve as physical qubits here. These photons, which are, in a sense, tiny particles of light, inherently operate more rapidly than solid-state qubits but, at the same time, are more easily lost. To avoid qubit losses and other errors, it is necessary to couple several single-photon light pulses together to construct a logical qubit — as in the case of the superconductor-based approach.
    A qubit with the inherent capacity for error correction
    Researchers at the University of Tokyo together with colleagues from Johannes Gutenberg University Mainz (JGU) in Germany and Palacký University Olomouc in the Czech Republic have recently demonstrated a new means of constructing a photonic quantum computer. Rather than using a single photon, the team employed a laser-generated light pulse that can consist of several photons. “Our laser pulse was converted to a quantum optical state that gives us an inherent capacity to correct errors,” stated Professor Peter van Loock of Mainz University. “Although the system consists only of a laser pulse and is thus very small, it can — in principle — eradicate errors immediately.” Thus, there is no need to generate individual photons as qubits via numerous light pulses and then have them interact as logical qubits. “We need just a single light pulse to obtain a robust logical qubit,” added van Loock. In other words, a physical qubit is already equivalent to a logical qubit in this system — a remarkable and unique concept. However, the logical qubit experimentally produced at the University of Tokyo was not yet of a sufficient quality to provide the necessary level of error tolerance. Nonetheless, the researchers have clearly demonstrated that it is possible to transform non-universally correctable qubits into correctable qubits using the most innovative quantum optical methods.
    The corresponding research results have recently been published in Science. They are based on a collaboration going back some 20 years between the experimental group of Akira Furusawa in Japan and the theoretical team of Peter van Loock in Germany.

  • AI learns through the eyes and ears of a child

    AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input — much more than children receive when learning how to understand and speak a language. The best AI systems train on text with a word count in the trillions, whereas children receive just millions per year.
    Due to this enormous data gap, researchers have been skeptical that recent AI advances can tell us much about human learning and development. An ideal test for demonstrating a connection would involve training an AI model, not on massive data from the web, but on only the input that a single child receives. What would the model be able to learn then?
    A team of New York University researchers ran this exact experiment. They trained a multimodal AI system through the eyes and ears of a single child, using headcam video recordings from when the child was six months old through their second birthday. They examined whether the AI model could learn words and concepts present in a child’s everyday experience.
    Their findings, reported in the latest issue of the journal Science, showed that the model, or neural network, could, in fact, learn a substantial number of words and concepts using limited slices of what the child experienced. That is, the video only captured about 1% of the child’s waking hours, but that was sufficient for genuine language learning.
    “We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts,” says Wai Keen Vong, a research scientist at NYU’s Center for Data Science and the paper’s first author. “Our results demonstrate how recent algorithmic advances paired with one child’s naturalistic experience have the potential to reshape our understanding of early language and concept acquisition.”
    “By using AI models to study the real language-learning problem faced by children, we can address classic debates about what ingredients children need to learn words — whether they need language-specific biases, innate knowledge, or just associative learning to get going,” adds Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and the paper’s senior author. “It seems we can get more with just learning than commonly thought.”
    Vong, Lake, and their NYU colleagues, Wentao Wang and Emin Orhan, analyzed a child’s learning process captured on first-person video — via a light, head-mounted camera — on a weekly basis beginning at six months of age and continuing through 25 months, using more than 60 hours of footage. The footage contained approximately a quarter of a million word instances (i.e., the number of words communicated, many of them repeated), each linked to video frames of what the child saw when those words were spoken. It covered a wide range of activities across development, including mealtimes, reading books, and the child playing.

    The NYU researchers then trained a multimodal neural network with two separate modules: one that takes in single video frames (the vision encoder) and another that takes in the transcribed child-directed speech (the language encoder). These two encoders were combined and trained using an algorithm called contrastive learning, which aims to learn useful input features and their cross-modal associations. For instance, when a parent says something in view of the child, it is likely that some of the words used refer to something the child can see, meaning comprehension is instilled by linking visual and linguistic cues.
    “This provides the model a clue as to which words should be associated with which objects,” explains Vong. “Combining these cues is what enables contrastive learning to gradually determine which words belong with which visuals and to capture the learning of a child’s first words.”
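    The contrastive objective described here can be sketched in a few lines: embeddings of matched frame and utterance pairs are pulled together while mismatched pairs are pushed apart. The sketch below is a generic CLIP-style version with random stand-in embeddings, not the authors' code; in the study, the embeddings would come from the trained vision and language encoders.

        import numpy as np

        # Generic sketch of a contrastive (InfoNCE-style) objective over paired
        # frame and utterance embeddings -- an illustration of the idea, not the
        # study's implementation. Matched pairs (same index) should score higher
        # than all mismatched pairs.

        rng = np.random.default_rng(0)
        batch, dim = 8, 64
        frame_emb = rng.normal(size=(batch, dim))                    # vision-encoder outputs (stand-ins)
        text_emb = frame_emb + 0.1 * rng.normal(size=(batch, dim))   # paired utterance embeddings

        def l2_normalize(x):
            return x / np.linalg.norm(x, axis=1, keepdims=True)

        def contrastive_loss(a, b, temperature=0.07):
            a, b = l2_normalize(a), l2_normalize(b)
            logits = a @ b.T / temperature                 # similarity of every frame to every utterance
            log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
            idx = np.arange(len(a))                        # the i-th frame matches the i-th utterance
            return -log_probs[idx, idx].mean()             # reward matched pairs, penalize the rest

        print(f"contrastive loss on this toy batch: {contrastive_loss(frame_emb, text_emb):.3f}")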
    After training the model, the researchers tested it using the same kinds of evaluations used to measure word learning in infants — presenting the model with the target word and an array of four different image options and asking it to select the image that matches the target word. Their results showed that the model was able to learn a substantial number of the words and concepts present in the child’s everyday experience. Furthermore, for some of the words the model learned, it could generalize them to very different visual instances than those seen at training, reflecting an aspect of generalization also seen in children when they are tested in the lab.
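    That evaluation procedure can be mimicked directly with embeddings: score the target word against each of the four candidate images and pick the best match. The sketch below uses random stand-in vectors purely to illustrate the selection step; it is not the evaluation code from the study.

        import numpy as np

        # Illustrative sketch of the four-alternative test described above: embed
        # the target word, embed four candidate images, and choose the image whose
        # embedding is most similar to the word's. Random stand-in vectors are used
        # here; in the study they come from the trained encoders.

        rng = np.random.default_rng(1)
        dim = 64
        word_emb = rng.normal(size=dim)
        candidates = rng.normal(size=(4, dim))
        candidates[2] = word_emb + 0.05 * rng.normal(size=dim)   # make option 2 the true match

        def cosine(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        scores = [cosine(word_emb, img) for img in candidates]
        print("similarity per candidate:", np.round(scores, 3))
        print("model's choice:", int(np.argmax(scores)))          # expected: 2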
    “These findings suggest that this aspect of word learning is feasible from the kind of naturalistic data that children receive while using relatively generic learning mechanisms such as those found in neural networks,” observes Lake.
    The work was supported by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (N6600119C4030) and the National Science Foundation (1922658). Participation of the child was approved by the parents and the methodology was approved by NYU’s Institutional Review Board.

  • Photonics-based wireless link breaks speed records for data transmission

    From coffee-shop customers who connect their laptop to the local Wi-Fi network to remote weather monitoring stations in the Antarctic, wireless communication is an essential part of modern life. Researchers worldwide are currently working on the next evolution of communication networks, called “beyond 5G” or 6G networks. To enable the near-instantaneous communication needed for applications like augmented reality or the remote control of surgical robots, ultra-high data speeds will be needed on wireless channels. In a study published recently in IEICE Electronics Express, researchers from Osaka University and IMRA AMERICA have found a way to increase these data speeds by using lasers to reduce the noise in the system.
    To pack in large amounts of data and keep responses fast, the sub-terahertz band, which extends from 100 GHz to 300 GHz, will be used by 6G transmitters and receivers. A sophisticated approach called “multi-level signal modulation” is used to further increase the data transmission rate of these wireless links. However, when operating at the top end of these extremely high frequencies, multi-level signal modulation becomes highly sensitive to noise. To work well, it relies on precise reference signals, and when these signals begin to shift forward and backward in time (a phenomenon called “phase noise”), the performance of multi-level signal modulation drops.
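    To see why phase noise is so damaging to multi-level modulation, consider a toy simulation: rotate the points of a 16-QAM constellation by random phase jitter and count how often the receiver's nearest-point decision changes. The constellation and noise levels below are illustrative assumptions and unrelated to the actual 300 GHz system.

        import numpy as np

        # Toy illustration (not the study's system) of why multi-level modulation
        # is sensitive to phase noise: jitter the phase of 16-QAM symbols and count
        # how often the nearest constellation point -- the receiver's decision --
        # changes.

        rng = np.random.default_rng(0)
        levels = np.array([-3, -1, 1, 3])
        constellation = np.array([i + 1j * q for i in levels for q in levels])   # 16-QAM

        def symbol_error_rate(phase_noise_rms_rad, n_symbols=50_000):
            tx = rng.choice(constellation, size=n_symbols)
            rx = tx * np.exp(1j * rng.normal(0.0, phase_noise_rms_rad, n_symbols))
            nearest = np.argmin(np.abs(rx[:, None] - constellation[None, :]), axis=1)
            return np.mean(constellation[nearest] != tx)

        for rms_deg in (1, 3, 5, 10):
            ser = symbol_error_rate(np.deg2rad(rms_deg))
            print(f"phase-noise RMS {rms_deg:2d} deg -> symbol error rate {ser:.4f}")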
    “This problem has limited 300-GHz communications so far,” says Keisuke Maekawa, lead author of the study. “However, we found that at high frequencies, a signal generator based on a photonic device had much less phase noise than a conventional electrical signal generator.”
    Specifically, the team used a stimulated Brillouin scattering laser, which employs interactions between sound and light waves, to generate a precise signal. They then set up a 300 GHz-band wireless communication system that employs the laser-based signal generator in both the transmitter and receiver. The system also used on-line digital signal processing (DSP) to demodulate the signals in the receiver and increase the data rate.
    “Our team achieved a single-channel transmission rate of 240 gigabits per second,” says Tadao Nagatsuma, PI of the project. “This is the highest transmission rate obtained so far in the world using on-line DSP.”
    As 5G spreads across the globe, researchers are working hard to develop the technology that will be needed for 6G, and the results of this study are a significant step toward 300 GHz-band wireless communication. The researchers anticipate that with multiplexing techniques (where more than one channel can be used) and more sensitive receivers, the data rate can be increased to 1 terabit per second, ushering in a new era of near-instantaneous global communication.

  • Hexagonal copper disk lattice unleashes spin wave control

    A collaborative group of researchers has potentially developed a means of controlling spin waves by creating a hexagonal pattern of copper disks on a magnetic insulator. The breakthrough is expected to lead to greater efficiency and miniaturization of communication devices in fields such as artificial intelligence and automation technology.
    Details of the study were published in the journal Physical Review Applied on January 30, 2024.
    In a magnetic material, the spins of electrons are aligned. When these spins undergo coordinated movement, they generate ripples in the magnetic order, dubbed spin waves. Spin waves generate little heat and offer an abundance of advantages for next-generation devices.
    Implementing spin waves in semiconductor circuits, which conventionally rely on electrical currents, could lessen power consumption and promote high integration. Since spin waves are waves, they tend to propagate in random directions unless controlled by structures and other means. As such, elements capable of generating, propagating, superimposing, and measuring spin waves are being competitively developed worldwide.
    “We leveraged the wavelike nature of spin waves to successfully control their propagation directly,” points out Taichi Goto, associate professor at Tohoku University’s Electrical Communication Research Institute, and co-author of the paper. “We did so by first developing an excellent magnetic insulator material called magnetic garnet film, which has low spin wave losses. We then periodically arranged small copper disks with diameters less than 1 mm on this film.”
    By arranging copper disks in a hexagonal pattern resembling snowflakes, Goto and his colleagues could effectively reflect the spin waves. Furthermore, by rotating the magnonic crystal and changing the incident angle of spin waves, the researchers revealed that the frequency at which the magnonic band gap occurs remains largely unchanged in the range from 10 to 30 degrees. This suggests the potential for the two-dimensional magnonic crystal to freely control the propagation direction of spin waves.
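    The angle study boils down to a simple analysis: measure the crystal's transmission spectrum at several incident angles, locate the band gap as the dip in transmission, and check how much its center frequency moves. The sketch below applies that logic to synthetic spectra; the frequencies and dip shapes are invented stand-ins, not the published data.

        import numpy as np

        # Sketch of the analysis implied above, run on synthetic stand-in spectra
        # (invented numbers, not the published measurements): find the band gap as
        # the frequency of minimum transmission at each incident angle and compare.

        freqs_ghz = np.linspace(4.0, 6.0, 401)                     # assumed frequency sweep

        def synthetic_spectrum(gap_center_ghz, gap_depth_db=20.0, width_ghz=0.1):
            return -gap_depth_db * np.exp(-((freqs_ghz - gap_center_ghz) / width_ghz) ** 2)

        # Pretend measurements at 10, 20, and 30 degrees with a nearly stationary gap.
        spectra = {10: synthetic_spectrum(5.00), 20: synthetic_spectrum(5.01), 30: synthetic_spectrum(4.99)}

        for angle_deg, s21_db in spectra.items():
            gap_freq = freqs_ghz[np.argmin(s21_db)]
            print(f"incident angle {angle_deg:2d} deg -> band-gap center {gap_freq:.2f} GHz")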
    Goto notes the novelty of their findings: “To date, there have been no experimental confirmations of changes in the spin wave incident angle for a two-dimensional magnonic crystal comprising a magnetic insulator and copper disks, making this the world’s first report.”
    Looking ahead, the team hopes to demonstrate directional control of spin waves using two-dimensional magnonic crystals and to develop functional components that utilize this technology.

  • How to run a password update campaign efficiently and with minimal IT costs

    Updating passwords for all users of a company or institution’s internal computer systems is stressful and disruptive to both users and IT professionals. Many studies have looked at user struggles and password best practices. But very little research has been done to determine how a password update campaign can be conducted most efficiently and with minimal IT costs. Until now.
    A team of computer scientists at the University of California San Diego partnered with the campus’ Information Technology Services to analyze the messaging for a campuswide mandatory password change impacting almost 10,000 faculty and staff members. The team found that email notifications to update passwords potentially yielded diminishing returns after three messages. They also found that a prompt to update passwords while users were trying to log in was effective for those who had ignored email reminders. Researchers also found that users whose jobs didn’t require much computer use struggled the most with the update.
    To the team’s knowledge, it’s the first time an empirical analysis of a mandatory password update has been conducted at this large a scale and in the wild, rather than as part of a simulation or controlled experiment.
    The research team hopes that lessons from their analysis will be helpful to IT professionals at other institutions and companies.
    The team presented their work at ACSAC ’23: Annual Computer Security Applications Conference in December 2023.
    During the campaign, almost 10,000 faculty and staff at UC San Diego received four emails at about a weekly interval prompting them to change their single sign-on password. Users who still hadn’t changed their password even after receiving four emails then got a prompt to do so as they logged in.
    The emails were clearly effective, leading between 5 and 15% of users to update their passwords during each wave of emails. However, even after four such email prompts, a quarter of users had not completed the update procedure.
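    As a rough illustration of how such per-wave numbers can be tallied, the sketch below counts, from a hypothetical log of password-change timestamps, what fraction of users updated in each window between reminder emails. All dates and field names are invented for illustration; this is not the study's analysis code.

        from datetime import datetime

        # Hypothetical illustration (invented dates and field names, not the study's
        # code) of tallying per-wave compliance: count what fraction of users changed
        # their password in each window between reminder emails.

        email_waves = [datetime(2023, 1, d) for d in (2, 9, 16, 23)]   # assumed weekly send dates
        campaign_end = datetime(2023, 1, 30)

        # user id -> timestamp of completed password update (None = never updated)
        password_changes = {
            "u1": datetime(2023, 1, 4), "u2": datetime(2023, 1, 11),
            "u3": datetime(2023, 1, 25), "u4": None, "u5": datetime(2023, 1, 17),
        }

        windows = list(zip(email_waves, email_waves[1:] + [campaign_end]))
        total = len(password_changes)
        for i, (start, end) in enumerate(windows, start=1):
            updated = sum(1 for t in password_changes.values() if t is not None and start <= t < end)
            print(f"after email {i}: {updated / total:.0%} of users updated")
        never = sum(t is None for t in password_changes.values())
        print(f"never updated during the email campaign: {never / total:.0%}")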

    The finding contradicts a previous study that found 98% of participants changed their passwords after receiving multiple email messages. But that study had a much smaller sample size.
    Remarkably, 80% of the remaining users who hadn’t changed their passwords after the email campaign finally did so when they were prompted at log in.
    “The active single sign-on prompting was a big winner across the board,” said Ariana Mirian, the paper’s first author, who earned her Ph.D. in the UC San Diego Department of Computer Science and Engineering. “You managed to get people who are stubborn, and maybe not paying attention, to take action, and that’s huge.”
    Researchers also noted that despite concerns from the campus, the campaign did not generate a significant increase in tickets to the IT help desk. Ticket volume did increase three to four times, but tickets related to the password update only represented 8% of all requests.
    Not surprisingly, the users who struggled the most work in areas where they are not required to log in to their computers regularly, such as maintenance, recreation, and dining services.
    “Targeting such users earlier, or forgoing email reminders and using login intercepts from the start, or even using a different notification mechanism such as text messages, may be more effective,” the researchers write.
    The research was funded in part by the National Science Foundation, the UC San Diego CSE postdoctoral fellows program, the Irwin Mark and Joel Klein Jacobs Chair in Information and Computer Science, and operational support from the UC San Diego Center for Networked Systems.
    An Empirical Analysis of the Enterprise-Wide Mandatory Password Updates
    Ariana Mirian, Grant Ho, Stefan Savage and Geoffrey M. Voelker, Department of Computer Science and Engineering, University of California San Diego

  • Promising heart drugs ID’d by cutting-edge combo of machine learning, human learning

    University of Virginia scientists have developed a new approach to machine learning — a form of artificial intelligence — to identify drugs that help minimize harmful scarring after a heart attack or other injuries.
    The new machine-learning tool has already found a promising candidate to help prevent harmful heart scarring in a way distinct from previous drugs. The UVA researchers say their cutting-edge computer model has the potential to predict and explain the effects of drugs for other diseases as well.
    “Many common diseases such as heart disease, metabolic disease and cancer are complex and hard to treat,” said researcher Anders R. Nelson, PhD, a computational biologist and former student in the lab of UVA’s Jeffrey J. Saucerman, PhD. “Machine learning helps us reduce this complexity, identify the most important factors that contribute to disease and better understand how drugs can modify diseased cells.”
    “On its own, machine learning helps us to identify cell signatures produced by drugs,” said Saucerman, of UVA’s Department of Biomedical Engineering, a joint program of the School of Medicine and School of Engineering. “Bridging machine learning with human learning helped us not only predict drugs against fibrosis [scarring] but also explain how they work. This knowledge is needed to design clinical trials and identify potential side effects.”
    Combining Machine Learning, Human Learning
    Saucerman and his team combined a computer model based on decades of human knowledge with machine learning to better understand how drugs affect cells called fibroblasts. These cells help repair the heart after injury by producing collagen and contracting the wound. But they can also cause harmful scarring, called fibrosis, as part of the repair process. Saucerman and his team wanted to see if a selection of promising drugs would give doctors more options for preventing scarring and, ultimately, improving patient outcomes.
    Previous attempts to identify drugs targeting fibroblasts have focused only on selected aspects of fibroblast behavior, and how these drugs work often remains unclear. This knowledge gap has been a major challenge in developing targeted treatments for heart fibrosis. So Saucerman and his colleagues developed a new approach called “logic-based mechanistic machine learning” that not only predicts drugs but also predicts how they affect fibroblast behaviors.
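    The flavor of a logic-based mechanistic model can be conveyed with a toy example: signaling nodes take activities between 0 and 1, are combined with soft AND/OR logic, and a drug is represented as damping its target node. The nodes, wiring, and numbers below are hypothetical and far simpler than the published fibroblast network model.

        # Toy sketch of a logic-based network model in the spirit described above.
        # The nodes, wiring, and numbers are hypothetical and far simpler than the
        # published fibroblast model: node activities lie in [0, 1], are combined
        # with soft AND/OR logic, and a drug is modeled as damping its target node.

        def soft_or(a, b):
            return a + b - a * b

        def soft_and(a, b):
            return a * b

        def fibroblast_output(tgf_beta, mechanical_stress, drug_inhibition=0.0):
            """Toy 'collagen output' given two stimuli and a drug effect on one node."""
            smad = tgf_beta * (1.0 - drug_inhibition)     # drug dampens its target node
            rho = mechanical_stress
            contraction = soft_and(smad, rho)             # both signals required
            collagen = soft_or(smad, contraction)         # either route contributes
            return collagen

        untreated = fibroblast_output(tgf_beta=0.9, mechanical_stress=0.8)
        treated = fibroblast_output(tgf_beta=0.9, mechanical_stress=0.8, drug_inhibition=0.7)
        print(f"collagen output, untreated: {untreated:.2f}")   # ~0.97
        print(f"collagen output, with drug: {treated:.2f}")     # ~0.43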

    They began by looking at the effect of 13 promising drugs on human fibroblasts, then used that data to train the machine learning model to predict the drugs’ effects on the cells and how they behave. The model was able to predict a new explanation of how the drug pirfenidone, already approved by the federal Food and Drug Administration for idiopathic pulmonary fibrosis, suppresses contractile fibers inside the fibroblast that stiffen the heart. The model also predicted how another type of contractile fiber could be targeted by the experimental Src inhibitor WH4023, which they experimentally validated with human cardiac fibroblasts.
    Additional research is needed to verify the drugs work as intended in animal models and human patients, but the UVA researchers say their research suggests mechanistic machine learning represents a powerful tool for scientists seeking to discover biological cause-and-effect. The new findings, they say, speak to the great potential the technology holds to advance the development of new treatments — not just for heart injury but for many diseases.
    “We’re looking forward to testing whether pirfenidone and WH4023 also suppress the fibroblast contraction of scars in preclinical animal models,” Saucerman said. “We hope this provides an example of how machine learning and human learning can work together to not only discover but also understand how new drugs work.”
    The research was supported by the National Institutes of Health, grants HL137755, HL007284, HL160665, HL162925 and 1S10OD021723-01A1.