More stories

  • Finally solved! The great mystery of quantized vortex motion

    Liquid helium-4, which becomes a superfluid at cryogenic temperatures close to absolute zero (-273°C), hosts a special kind of vortex, called a quantized vortex, that originates from quantum mechanical effects. At relatively high temperatures within the superfluid range, a normal-fluid component coexists with the superfluid, and a quantized vortex in motion experiences mutual friction with this normal fluid. However, it has been difficult to explain precisely how a moving quantized vortex interacts with the normal fluid: several theoretical models have been proposed, but it has not been clear which one is correct.

    A research group led by Professor Makoto Tsubota and Specially Appointed Assistant Professor Satoshi Yui, of the Graduate School of Science and the Nambu Yoichiro Institute of Theoretical and Experimental Physics at Osaka Metropolitan University, respectively, working with colleagues from Florida State University and Keio University, numerically investigated the interaction between a quantized vortex and the normal fluid. By comparing simulations with the experimental results, the researchers identified the most consistent of several theoretical models: a model that accounts for changes in the normal fluid and incorporates a more theoretically accurate treatment of mutual friction agreed best with the experiments.
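    In vortex filament simulations of superfluid helium, the velocity of a vortex line is commonly written as the superfluid velocity plus a mutual-friction correction from the normal fluid. A standard (Schwarz-type) form is shown below purely as background; it is not necessarily the exact refinement singled out by this study:

        % Standard mutual-friction form used in vortex filament models,
        % given here as background to the models compared in the study.
        \[
          \dot{\mathbf{s}} = \mathbf{v}_s
            + \alpha\, \mathbf{s}' \times (\mathbf{v}_n - \mathbf{v}_s)
            - \alpha'\, \mathbf{s}' \times \bigl[ \mathbf{s}' \times (\mathbf{v}_n - \mathbf{v}_s) \bigr]
        \]
        % Here \mathbf{s} traces the vortex line, \mathbf{s}' is its local unit tangent,
        % \mathbf{v}_s and \mathbf{v}_n are the superfluid and normal-fluid velocities,
        % and \alpha, \alpha' are temperature-dependent mutual-friction coefficients.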
    “The subject of this study, the interaction between a quantized vortex and the normal fluid, has been a great mystery since I began my research in this field 40 years ago,” stated Professor Tsubota. “Computational advances have made it possible to handle this problem, and the brilliant visualization experiment by our collaborators at Florida State University has led to a breakthrough. As is often the case in science, subsequent developments in technology have made it possible to elucidate long-standing problems, and this study is a good example of this.”
    Their findings were published in Nature Communications.

  • Tiny video capsule shows promise as an alternative to endoscopy

    While ingestible video capsule endoscopes have been around for many years, the capsules have been limited by the fact that they could not be controlled by physicians. They moved passively, driven only by gravity and the natural movement of the body. Now, according to a first-of-its-kind research study at George Washington University, physicians can remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas. The new technology uses an external magnet and hand-held, video game-style joysticks to move the capsule in three dimensions within the stomach. This new technology comes closer to the capabilities of a traditional tube-based endoscopy.
    “A traditional endoscopy is an invasive procedure for patients, not to mention it is costly due to the need for anesthesia and time off work,” Andrew Meltzer, a professor of Emergency Medicine at the GW School of Medicine & Health Sciences, said. “If larger studies can prove this method is sufficiently sensitive to detect high-risk lesions, magnetically controlled capsules could be used as a quick and easy way to screen for health problems in the upper GI tract such as ulcers or stomach cancer.”
    More than 7 million traditional endoscopies of the stomach and upper part of the intestine are performed every year in the United States to help doctors investigate and treat stomach pain, nausea, bleeding and other symptoms of disease, including cancer. Despite the benefits of traditional endoscopies, studies suggest some patients have trouble accessing the procedure.
    In fact, Meltzer got interested in the magnetically controlled capsule endoscopy after seeing patients in the emergency room with stomach pain or suspected upper GI bleeding who faced barriers to getting a traditional endoscopy as an outpatient.
    “I would have patients who came to the ER with concerns for a bleeding ulcer and, even if they were clinically stable, I would have no way to evaluate them without admitting them to the hospital for an endoscopy. We could not do an endoscopy in the ER and many patients faced unacceptable barriers to getting an outpatient endoscopy, a crucial diagnostic tool to preventing life-threatening hemorrhage,” Meltzer said. “To help address this problem, I started looking for less invasive ways to visualize the upper gastrointestinal tract for patients with suspected internal bleeding.”
    The study is the first to test magnetically controlled capsule endoscopy in the United States. For patients who come to the ER or a doctor’s office with severe stomach pain, the ability to swallow a capsule and get a diagnosis on the spot — without a second appointment for a traditional endoscopy — is a real plus, not to mention potentially life-saving, says Meltzer. An external magnet allows the capsule to be painlessly driven to visualize all anatomic areas of the stomach and record video and photograph any possible bleeding, inflammatory or malignant lesions.
    While using the joystick requires additional time and training, software is being developed that will use artificial intelligence to self-drive the capsule to all parts of the stomach with the push of a button and record any potentially risky abnormalities. That would make it easier to use the system as a diagnostic tool or screening test. In addition, the videos can be easily transmitted for off-site review if a gastroenterologist is not on-site to over-read the images.
    Meltzer and colleagues conducted a study of 40 patients at a physician office building using the magnetically controlled capsule endoscopy. They found that the doctor could direct the capsule to all major parts of the stomach with a 95 percent rate of visualization. Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off-site.
    To see how the new method compared with a traditional endoscopy, participants in the study also received a follow up endoscopy. No high-risk lesions were missed with the new method and 80 percent of the patients preferred the capsule method to the traditional endoscopy. The team found no safety problems associated with the new method.
    Yet, Meltzer cautions that the study is a pilot and a much bigger trial with more patients must be conducted to make sure the method does not miss important lesions and can be used in place of an endoscopy. A major limitation of the capsule includes the inability to perform biopsies of lesions that are detected.
    The study, “Magnetically Controlled Capsule for Assessment of the Gastric Mucosa in Symptomatic Patients (MAGNET): A Prospective, Single-Arm, Single-Center, Comparative Study,” was published in iGIE, the open-access, online journal of the American Society for Gastrointestinal Endoscopy.
    The medical technology company AnX Robotica funded the research and is the creator of the capsule endoscopy system used in the study, called NaviCam®.

  • New method improves efficiency of ‘vision transformer’ AI systems

    Vision transformers (ViTs) are powerful artificial intelligence (AI) technologies that can identify or categorize objects in images — however, there are significant challenges related to both computing power requirements and decision-making transparency. Researchers have now developed a new methodology that addresses both challenges, while also improving the ViT’s ability to identify, classify and segment objects in images.
    Transformers are among the most powerful existing AI models. For example, ChatGPT is an AI built on a transformer architecture that is trained on language inputs. ViTs are transformer-based AIs trained on visual inputs. For example, ViTs could be used to detect and categorize objects in an image, such as identifying all of the cars or all of the pedestrians in an image.
    However, ViTs face two challenges.
    First, transformer models are very complex. Relative to the amount of data being plugged into the AI, transformer models require a significant amount of computational power and use a large amount of memory. This is particularly problematic for ViTs, because images contain so much data.
    Second, it is difficult for users to understand exactly how ViTs make decisions. For example, you might have trained a ViT to identify dogs in an image. But it’s not entirely clear how the ViT is determining what is a dog and what is not. Depending on the application, understanding the ViT’s decision-making process, also known as its model interpretability, can be very important.
    The new ViT methodology, called “Patch-to-Cluster attention” (PaCa), addresses both challenges.

    “We address the challenge related to computational and memory demands by using clustering techniques, which allow the transformer architecture to better identify and focus on objects in an image,” says Tianfu Wu, corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Clustering is when the AI lumps sections of the image together, based on similarities it finds in the image data. This significantly reduces computational demands on the system. Before clustering, computational demands for a ViT are quadratic. For example, if the system breaks an image down into 100 smaller units, it would need to compare all 100 units to each other — which would be 10,000 complex functions.
    “By clustering, we’re able to make this a linear process, where each smaller unit only needs to be compared to a predetermined number of clusters. Let’s say you tell the system to establish 10 clusters; that would only be 1,000 complex functions,” Wu says.
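    As a rough illustration of that scaling argument, here is a minimal NumPy sketch, not the PaCa implementation; the array names and sizes are invented for the example. Patch-to-cluster attention scores each of N patch tokens against M cluster tokens instead of against all N patches:

        import numpy as np

        def patch_to_cluster_attention(patches, clusters):
            """Toy cross-attention from N patches to M clusters (M << N).

            patches:  (N, d) array of patch embeddings
            clusters: (M, d) array of cluster embeddings
            Returns a (N, d) array of updated patch embeddings. The cost is
            O(N * M) similarity scores instead of the O(N^2) scores needed
            when every patch attends to every other patch.
            """
            scores = patches @ clusters.T                  # (N, M) similarities
            scores -= scores.max(axis=1, keepdims=True)    # numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=1, keepdims=True)  # softmax over clusters
            return weights @ clusters                      # mix cluster embeddings

        # 100 patches attending to 10 clusters -> 1,000 similarity scores,
        # versus 100 x 100 = 10,000 for full patch-to-patch attention.
        patches = np.random.randn(100, 64)
        clusters = np.random.randn(10, 64)
        print(patch_to_cluster_attention(patches, clusters).shape)  # (100, 64)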
    “Clustering also allows us to address model interpretability, because we can look at how it created the clusters in the first place. What features did it decide were important when lumping these sections of data together? And because the AI is only creating a small number of clusters, we can look at those pretty easily.”
    The researchers did comprehensive testing of PaCa, comparing it to two state-of-the-art ViTs called SWin and PVT.
    “We found that PaCa outperformed SWin and PVT in every way,” Wu says. “PaCa was better at classifying objects in images, better at identifying objects in images, and better at segmentation — essentially outlining the boundaries of objects in images. It was also more efficient, meaning that it was able to perform those tasks more quickly than the other ViTs.
    “The next step for us is to scale up PaCa by training on larger, foundational data sets.”
    The paper, “PaCa-ViT: Learning Patch-to-Cluster Attention in Vision Transformers,” will be presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, being held June 18-22 in Vancouver, Canada. First author of the paper is Ryan Grainger, a Ph.D. student at NC State. The paper was co-authored by Thomas Paniagua, a Ph.D. student at NC State; Xi Song, an independent researcher; and Naresh Cuntoor and Mun Wai Lee of BlueHalo.
    The work was done with support from the Office of the Director of National Intelligence, under contract number 2021-21040700003; the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451.

  • Reading between the cracks: Artificial intelligence can identify patterns in surface cracking to assess damage in reinforced concrete structures

    Recent structural collapses, including tragedies in Surfside, Florida, Pittsburgh, and New York City, have underscored the need for more frequent and thorough inspections of aging buildings and infrastructure across the country. But inspections are time-consuming and often inconsistent processes that depend heavily on the judgment of inspectors. Researchers at Drexel University and the State University of New York at Buffalo are trying to make the process more efficient and definitive by using artificial intelligence, combined with a classic mathematical method for quantifying web-like networks, to determine how damaged a concrete structure is based solely on its pattern of cracking.
    In the paper “A graph-based method for quantifying crack patterns on reinforced concrete shear walls,” which was recently published in the journal Computer-Aided Civil and Infrastructure Engineering, the researchers, led by Arvin Ebrahimkhanlou, PhD, an assistant professor in Drexel’s College of Engineering, and Pedram Bazrafshan, a doctoral student in the College, present a process that could help the country better understand how many of its hundreds of thousands of aging bridges, levees, roadways and buildings are in urgent need of repair.
    “Without an autonomous and objective process for assessing damage to the many reinforced concrete structures that make up our built environment, these tragic structural failures are sure to continue,” Ebrahimkhanlou said. “Our aging infrastructures are being used beyond their design lifespan, and because manual inspections are time-consuming and subjective, indications of structural damage may be missed or underestimated.”
    The current process for inspecting a concrete structure, such as a bridge or a parking deck, involves an inspector visually examining it for cracking, chipping, or water penetration, taking measurements of the cracks, and noting whether or not they have changed in the time between inspections — which may be years. If enough of these conditions are present and appear to be in an advanced state — according to a set of guidelines on a damage index — then the structure could be rated “unsafe.”
    In addition to the time it takes to go through this process for each inspection, there is widespread concern that the process leaves too much room for subjectivity to skew the final assessment.
    “The same crack in a reinforced concrete structure can appear menacing or mundane — depending on who is looking at it,” Bazrafshan said. “A crack can be an innocuous part of a building’s settling process or a telltale sign of structural damage; unfortunately, there is little agreement on precisely when one has progressed from the former to the latter.”
    The first step for Bazrafshan and Ebrahimkhanlou’s group was to eliminate this uncertainty by creating a method to precisely quantify the extent of cracking. To do it, they employed graph theory, a mathematical framework used to measure and study networks — most recently, social networks — by extracting graph features of the crack pattern, such as the average number of times cracks intersect one another.

    Ebrahimkhanlou originally developed the process for using graph features to create a kind of “fingerprint” for each set of cracks in a reinforced concrete structure and — by comparing the prints of newly inspected structures to those of structures with known safety ratings — produce a quick and accurate damage assessment.
    “Creating a mathematical representation of cracking patterns is a novel idea and the key contribution of our recent paper,” Ebrahimkhanlou said. “We find this to be a highly effective way to quantify changes in the patterns of cracking, which enables us to connect the visual appearance of a crack to the level of structural damage in a way that is quantifiable and can be consistently repeated regardless of who is doing the inspection.”
    The team used AI pixel-tracking algorithms to convert images of cracks to their corresponding mathematical representation: a graph.
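    A minimal sketch of that crack-to-graph idea, using networkx; this illustrates the general approach rather than the authors' code, and the toy graph and the choice of features are assumptions:

        import networkx as nx

        # Toy crack graph: nodes are crack endpoints and intersections, edges are
        # the crack segments between them. In the study, such a graph is produced
        # automatically from wall images by an AI pixel-tracking step.
        G = nx.Graph()
        G.add_edges_from([
            (0, 1), (1, 2), (1, 3),   # a branching crack
            (3, 4), (4, 5), (2, 4),   # segments that intersect other cracks
        ])

        features = {
            "num_nodes": G.number_of_nodes(),
            "num_edges": G.number_of_edges(),
            # Average degree: how many segments meet at a typical node, a proxy
            # for how interconnected the crack pattern is.
            "average_degree": sum(d for _, d in G.degree()) / G.number_of_nodes(),
            "num_components": nx.number_connected_components(G),
        }
        print(features)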
    “The crack-to-graph conversion and feature-extraction processes take just a minute or so per image, which is a significant improvement by comparison to the inspection process which could take hours or days to make all of the required measurements,” Bazrafshan said. “This is also a promising development for the possibility of automating the entire analysis process in the future.”
    To develop a feature framework for comparison, they had a machine learning program extract graph features from a set of images of reinforced concrete shear walls with different height-to-length ratios, which were created to test different behaviors the walls could exhibit in an earthquake.

    Focusing specifically on the group of images that showed moderate cracking — the kind that calls the safety of the structure into question — the team trained a second algorithm to correlate the extracted graph features with a tangible scale of the amount of damage imposed on the structure. For example, the more the cracks intersect one another — which corresponds to a higher "average degree" graph feature — the more serious the damage to the structure.
    The program assigned a weighted value to each of these features, depending on how closely they correlated with mechanical indicators of damage, to produce a quantitative profile against which the algorithm could measure new samples to determine the extent of their structural damage.
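    A toy version of that weighting step might look like the following; the feature values and weights are invented for illustration, whereas in the study the weights reflect how strongly each graph feature correlated with measured mechanical damage:

        # Hypothetical weights learned from walls with known damage levels.
        weights = {"average_degree": 0.5, "num_edges": 0.3, "num_components": 0.2}

        # Graph features extracted from a newly inspected wall (made-up values).
        features = {"average_degree": 2.4, "num_edges": 7, "num_components": 1}

        damage_score = sum(w * features[name] for name, w in weights.items())
        print(f"damage score: {damage_score:.2f}")  # higher means more extensive cracking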
    To test the assessment algorithm, the team used images of three large-scale walls that had been mechanically tested in a lab at the University at Buffalo to determine their condition. The team used images of one side of each wall as a training set and then evaluated the model with images of the opposite side to see how well it predicted each sample’s level of damage.
    In each case, the AI program was able to correctly assess the damage with greater than 90% accuracy, indicating that the program would be a highly effective means of rapid damage assessment.
    “This is just the first step in creating a very powerful assessment tool that leverages volumes of research and human knowledge to make faster and more accurate assessments of structures in the built environment,” Ebrahimkhanlou said. “Imposing order on a seemingly chaotic set of features is the essence of scientific discovery. We believe this innovation could go a long way toward identifying problems before they happen and making our infrastructures safer.”
    The group plans to continue its work by training and testing the program against larger and more diverse datasets, including other types of structures. And they are also working toward automating the process so that it could be integrated into structural monitoring systems, as well as the process of collecting photos and video of damaged structures following earthquakes and other natural disasters.

  • The ‘breath’ between atoms — a new building block for quantum technology

    University of Washington researchers have discovered they can detect atomic “breathing,” or the mechanical vibration between two layers of atoms, by observing the type of light those atoms emitted when stimulated by a laser. The sound of this atomic “breath” could help researchers encode and transmit quantum information.
    The researchers also developed a device that could serve as a new type of building block for quantum technologies, which are widely anticipated to have many future applications in fields such as computing, communications and sensor development.
    The researchers published these findings June 1 in Nature Nanotechnology.
    “This is a new, atomic-scale platform, using what the scientific community calls ‘optomechanics,’ in which light and mechanical motions are intrinsically coupled together,” said senior author Mo Li, a UW professor of both electrical and computer engineering and physics. “It provides a new type of involved quantum effect that can be utilized to control single photons running through integrated optical circuits for many applications.”
    Previously, the team had studied a quantum-level quasiparticle called an “exciton.” Information can be encoded into an exciton and then released in the form of a photon — a tiny particle of energy considered to be the quantum unit of light. Quantum properties of each photon emitted — such as the photon’s polarization, wavelength and/or emission timing — can function as a quantum bit of information, or “qubit,” for quantum computing and communication. And because this qubit is carried by a photon, it travels at the speed of light.
    “The bird’s-eye view of this research is that to feasibly have a quantum network, we need to have ways of reliably creating, operating on, storing and transmitting qubits,” said lead author Adina Ripin, a UW doctoral student of physics. “Photons are a natural choice for transmitting this quantum information because optical fibers enable us to transport photons long distances at high speeds, with low losses of energy or information.”
    The researchers were working with excitons in order to create a single photon emitter, or “quantum emitter,” which is a critical component for quantum technologies based on light and optics. To do this, the team placed two thin layers of tungsten and selenium atoms, known as tungsten diselenide, on top of each other.

    When the researchers applied a precise pulse of laser light, they knocked a tungsten diselenide atom’s electron away from the nucleus, which generated an exciton quasiparticle. Each exciton consisted of a negatively charged electron on one layer of the tungsten diselenide and a positively charged hole where the electron used to be on the other layer. And because opposite charges attract each other, the electron and the hole in each exciton were tightly bonded to each other. After a short moment, as the electron dropped back into the hole it previously occupied, the exciton emitted a single photon encoded with quantum information — producing the quantum emitter the team sought to create.
    But the team discovered that the tungsten diselenide atoms were emitting another type of quasiparticle, known as a phonon. Phonons are a product of atomic vibration, which is similar to breathing. Here, the two atomic layers of the tungsten diselenide acted like tiny drumheads vibrating relative to each other, which generated phonons. This is the first time phonons have ever been observed in a single photon emitter in this type of two-dimensional atomic system.
    When the researchers measured the spectrum of the emitted light, they noticed several equally spaced peaks. Every single photon emitted by an exciton was coupled with one or more phonons. This is somewhat akin to climbing a quantum energy ladder one rung at a time, and on the spectrum, these energy spikes were represented visually by the equally spaced peaks.
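    In other words, each emission peak sits a whole number of phonon quanta below the bare exciton line. Schematically, using the standard description of phonon sidebands rather than the paper's exact expression:

        % The n-th peak is offset from the zero-phonon exciton line by n phonon quanta.
        \[
          E_n = E_X - n\,\hbar\Omega, \qquad n = 0, 1, 2, \ldots
        \]
        % E_X is the zero-phonon exciton emission energy, \Omega is the frequency of the
        % interlayer "breathing" phonon, and n counts the phonons created with the photon.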
    “A phonon is the natural quantum vibration of the tungsten diselenide material, and it has the effect of vertically stretching the exciton electron-hole pair sitting in the two layers,” said Li, who is also a member of the steering committee for the UW’s QuantumX and a faculty member of the Institute for Nano-Engineered Systems. “This has a remarkably strong effect on the optical properties of the photon emitted by the exciton that has never been reported before.”
    The researchers were curious if they could harness the phonons for quantum technology. They applied electrical voltage and saw that they could vary the interaction energy of the associated phonons and emitted photons. These variations were measurable and controllable in ways relevant to encoding quantum information into a single photon emission. And this was all accomplished in one integrated system — a device that involved only a small number of atoms.
    Next the team plans to build a waveguide — fibers on a chip that catch single photon emissions and direct them where they need to go — and then scale up the system. Instead of controlling only one quantum emitter at a time, the team wants to be able to control multiple emitters and their associated phonon states. This will enable the quantum emitters to “talk” to each other, a step toward building a solid base for quantum circuitry.
    “Our overarching goal is to create an integrated system with quantum emitters that can use single photons running through optical circuits and the newly discovered phonons to do quantum computing and quantum sensing,” Li said. “This advance certainly will contribute to that effort, and it helps to further develop quantum computing which, in the future, will have many applications.”

  • Newborn baby inspires sensor design that simulates human touch

    As we move into a world where human-machine interactions are becoming more prominent, pressure sensors that are able to analyze and simulate human touch are likely to grow in demand.
    One challenge facing engineers is the difficulty in making the kind of cost-effective, highly sensitive sensor necessary for applications such as detecting subtle pulses, operating robotic limbs, and creating ultrahigh-resolution scales. However, a team of researchers has developed a sensor capable of performing all of those tasks.
    The researchers, from Penn State and Hebei University of Technology in China, wanted to create a sensor that was extremely sensitive and reliably linear over a broad range of applications, had high pressure resolution, and was able to work under large pressure preloads.
    “The sensor can detect a tiny pressure when large pressure is already applied,” said Huanyu “Larry” Cheng, James L. Henderson Jr. Memorial Associate Professor of Engineering Science and Mechanics at Penn State and co-author of a paper on the work published in Nature Communications. “An analogy I like to use is it’s like detecting a fly on top of an elephant. It can measure the slightest change in pressure, just like our skin does with touch.”
    Cheng was inspired to develop these sensors due to a very personal experience: The birth of his second daughter.  
    Cheng’s daughter lost 10% of her body weight soon after birth, so the doctor asked him to weigh the baby every two days to monitor any additional loss or weight gain. Cheng tried to do this by weighing himself on a regular home weight scale and then weighing himself holding his daughter to measure the baby’s weight.  
    “I noticed that when I put down my daughter in her blanket, when I was no longer holding her, you didn’t see the change in weight,” Cheng said. “So, we learned that trying to use a commercial scale doesn’t work, it didn’t detect the change in pressure.” 
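    A quick back-of-the-envelope calculation shows why weighing by difference defeats an ordinary scale (the numbers below are assumed for illustration, not figures from the study):

        # Illustrative numbers only: an adult holding a newborn on a bathroom scale.
        adult_kg = 75.0
        baby_kg = 3.2
        target_change_kg = 0.030   # a roughly 30 g weight change worth catching

        total_load_kg = adult_kg + baby_kg
        required_resolution = target_change_kg / total_load_kg
        print(f"needed relative resolution: {required_resolution:.2%} of full load")
        # That is about 0.04% of the load, while a typical bathroom scale resolves
        # ~0.1 kg (about 0.13% of this load), so a 30 g change is lost in the rounding:
        # the "fly on top of an elephant" problem.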

    After trying many different approaches, they found that using a pressure sensor consisting of gradient micro-pyramidal structures and an ultrathin ionic layer to give a capacitive response was the most promising.
    However, they faced a persistent issue: the high sensitivity of the microstructures would decrease as the pressure increased, and random microstructures templated from natural objects deformed uncontrollably, giving only a narrow linear range. In simple terms, when pressure was applied to the sensor, it changed the sensor’s shape, altering the contact area between the microstructures and throwing off the readings.
    To address these challenges, the scientists designed microstructure patterns that could increase the linear range without decreasing the sensitivity — essentially making the sensor flexible enough to keep working across the wide range of pressures that exists in the real world. Their study explored the use of a CO2 laser with a Gaussian beam to fabricate programmable structures, such as gradient pyramidal microstructures (GPM), for iontronic sensors, which are soft electronics that can mimic the perception functions of human skin. This process reduces cost and process complexity compared with photolithography, the method commonly used to prepare delicate microstructure patterns for sensors.
    Cheng credits Ruoxi Yang, a graduate student in his lab and first author of the study, as the driver of this solution. 

    “Yang is a very smart student who introduced the idea to solve this sensor issue, which is really something like a combination of many small pieces, smartly engineered together,” Cheng said. “We know the structure must be microscale and must have a delicate design. But it is challenging to design or optimize the structure, and she worked with the laser system we have in our lab to make this possible. She has been working very hard in the past few years and was able to explore all these different parameters and be able to quickly screen throughout this parameter space to find and improve the performance.” 
    This optimized sensor had rapid response and recovery times and excellent repeatability, which the team tested by detecting subtle pulses, operating interactive robotic hands, and creating ultrahigh-resolution smart weight scales and chairs. The scientists also found that the fabrication approaches and design toolkit from this work could be leveraged to easily tune the pressure sensor’s performance for different target applications and open opportunities to create other iontronic sensors, the class of sensors that rely on ionic materials such as the ultrathin ionic layer used here. Along with enabling a future scale that would make it easier for parents to weigh their baby, these sensors would have other uses as well.
    “We were also able to detect not only the pulse from the wrist but also from the other distal vascular structures like the eyebrow and the fingertip,” Cheng said. “In addition, we combine that with the control system to show that this is possible to use for the future of human robotic interactional collaboration. Also, we envision other healthcare uses, such as someone who has lost a limb and this sensor could be part of a system to help them control a robotic limb.” 
    Cheng noted other potential uses, such as sensors to measure a person’s pulse during high-stress work situations such as search-and-rescue after an earthquake or carrying out difficult, dangerous tasks in a construction site.  
    The research team used computer simulations and computer-aided design to help them explore ideas for these novel sensors, which Cheng notes is challenging work given all the possible sensor solutions. This electronic assistance will continue to push the research forward.  
    “I think in the future it is possible to further improve the model and be able to account for more complex systems and then we can certainly understand how to make even better sensors,” Cheng said.  
    Aside from Cheng and Yang, other authors on the study from Penn State include Ankan Dutta, Bowen Li, Naveen Tiwari, Wanqing Zhang, Zhenyuan Niu, Yuyan Gao, Daniel Erdely and Xin Xin, and from Hebei University, Tiejun Li.

  • Metal shortage could put the brakes on electrification

    As more and more electric cars travel on the roads of Europe, the use of the critical metals required for components such as electric motors and electronics is increasing. At current raw material production levels, there will not be enough of these metals in the future — not even if recycling increases. This is revealed by the findings of a major survey led by Chalmers University of Technology, Sweden, on behalf of the European Commission.
    Electrification and digitalisation are leading to a steady increase in the need for critical metals in the EU’s vehicle fleet. Moreover, only a small proportion of the metals are currently recycled from end-of-life vehicles. The metals that are highly sought after, such as dysprosium, neodymium, manganese and niobium, are of great economic importance to the EU, while their supply is limited and it takes time to scale up raw material production. Our increasing dependence on them is therefore problematic for several reasons.
    “The EU is heavily dependent on imports of these metals because extraction is concentrated in a few countries such as China, South Africa and Brazil. The lack of availability is both an economic and an environmental problem for the EU, and risks delaying the transition to electric cars and environmentally sustainable technologies. In addition, since many of these metals are scarce, we also risk making access to them difficult for future generations if we are unable to use what is already in circulation,” says Maria Ljunggren, Associate Professor in Sustainable Materials Management at Chalmers University of Technology.
    A serious situation, but Swedish deposit offers hope
    Ljunggren points out that the serious situation affecting Europe’s critical and strategic raw materials is underlined in the Critical Raw Materials Act recently put forward by the European Commission. The Act emphasises the need to enhance cooperation with reliable external trading partners and for member states to improve the recycling of both critical and strategic raw materials. It also stresses the importance of European countries exploring their own geological resources.
    In Sweden the state-owned mining company LKAB reported on significant deposits of rare earth metals in Kiruna at the start of the year. Successful exploration enabled the company to identify mineral resources of more than a million tonnes of oxides — which they now describe as the largest known deposit of its kind in Europe.

    “This is extremely interesting, especially the discovery of neodymium which, among other things, is used in magnets in electric motors. The hope is that it will help make us less dependent on imports in the long run,” she says.
    Substantial increase in the use of critical metals
    Together with the Swiss Federal Laboratories for Materials Science and Technology, EMPA, Ljunggren has surveyed the metals that are currently in use in Europe’s vehicle fleet. The assignment comes from the European Commission’s Joint Research Centre (JRC), and has resulted in an extensive database that shows the presence over time of metals in new vehicles, vehicles in use and vehicles that are recycled.
    The survey, which goes back as far as 2006, shows that the proportion of critical metals has increased significantly in vehicles, a development which the researchers believe will continue. Several of the rare earth elements are among the metals that have increased the most.
    “Neodymium and dysprosium usage has increased by around 400 and 1,700 percent respectively in new cars over the period, and this is even before electrification had taken off. Gold and silver, which are not listed as critical metals but have great economic value, have increased by around 80 percent,” says Ljunggren.

    The idea behind the survey and the database is to provide decision-makers, companies and organisations with an evidence base to support a more sustainable use of the EU’s critical metals. A major challenge is that these materials, which are found in very small concentrations in each car, are economically difficult to recycle.
    Recycling fails to meet requirements
    “If recycling is to increase, cars need to be designed to enable these metals to be recovered, while incentives and flexible processes for more recycling need to be put in place. But that’s not the current reality,” says Ljunggren, who stresses that a range of measures are needed to deal with the situation.
    “It is important to increase recycling. At the same time, it is clear that an increase in recycling alone cannot meet requirements in the foreseeable future, just because the need for critical metals in new cars is increasing so much. Therefore there needs to be a greater focus on how we can substitute other materials for these metals. But in the short term it will be necessary to increase extraction in mines if electrification is not to be held back,” she says.
    More about the survey and the database
    The survey of metals in the EU’s vehicle fleet has been carried out by Maria Ljunggren at Chalmers in collaboration with the Swiss Federal Laboratories for Materials Science and Technology, EMPA, on behalf of the European Commission’s Joint Research Centre (JRC). The results are set out in the Raw Materials in Vehicles database, which covers 60 vehicle types under 3.5 tonnes from all the EU member states. The survey covers eleven different metals in new vehicles, vehicles in use and vehicles that are recycled. It covers the period from 2006 to 2023, with the last three years being a forecast. The research is also described in the report Material composition trends in vehicles: critical raw materials and other relevant metals. Maria Ljunggren is also involved in an ongoing EU project on critical raw materials, FutuRaM (Future availability of raw materials), which will enhance knowledge about the potential supply of recycled critical raw materials by the year 2050.

  • Understanding the tantalizing benefits of tantalum for improved quantum processors

    Whether it’s baking a cake, building a house, or developing a quantum device, the quality of the end product significantly depends on its ingredients or base materials. Researchers working to improve the performance of superconducting qubits, the foundation of quantum computers, have been experimenting using different base materials in an effort to increase the coherent lifetimes of qubits. The coherence time is a measure of how long a qubit retains quantum information, and thus a primary measure of performance. Recently, scientists discovered that using tantalum in superconducting qubits makes them perform better, but no one has been able to determine why — until now.
    Scientists from the Center for Functional Nanomaterials (CFN), the National Synchrotron Light Source II (NSLS-II), the Co-design Center for Quantum Advantage (C2QA), and Princeton University investigated the fundamental reasons that these qubits perform better by decoding the chemical profile of tantalum. The results of this work, which were recently published in the journal Advanced Science, will provide key knowledge for designing even better qubits in the future. CFN and NSLS-II are U.S. Department of Energy (DOE) Office of Science User Facilities at DOE’s Brookhaven National Laboratory. C2QA is a Brookhaven-led national quantum information science research center, of which Princeton University is a key partner.
    Finding the right ingredient
    Tantalum is a unique and versatile metal. It’s dense, hard, and easy to work with. Tantalum also has a high melting point and is resistant to corrosion, making it useful in many commercial applications. In addition, tantalum is a superconductor, which means it has no electrical resistance when cooled to sufficiently low temperatures, and consequently can carry current without any energy loss.
    Tantalum-based superconducting qubits have demonstrated record-long lifetimes of more than half a millisecond. That is five times longer than the lifetimes of qubits made with niobium and aluminum, which are currently deployed in large-scale quantum processors.
    These properties make tantalum an excellent candidate material for building better qubits. Still, the goal of improving superconducting quantum computers has been hindered by a lack of understanding as to what is limiting qubit lifetimes, a process known as decoherence. Noise and microscopic sources of dielectric loss are generally thought to contribute; however, scientists are unsure exactly why and how.
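    One common way to make that statement concrete is the participation-ratio picture of dielectric loss, quoted here as standard background rather than a result of this paper:

        % Each lossy region i (e.g. a surface oxide) limits the relaxation time T_1 in
        % proportion to the fraction p_i of electric-field energy it stores and its
        % loss tangent tan(delta_i); omega is the qubit frequency.
        \[
          \frac{1}{T_1} \;\approx\; \omega \sum_i p_i \tan\delta_i
        \]
        % A thinner or cleaner surface oxide lowers its p_i \tan\delta_i contribution.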

    “The work in this paper is one of two parallel studies aiming to address a grand challenge in qubit fabrication,” explained Nathalie de Leon, an associate professor of electrical and computer engineering at Princeton University and the materials thrust leader for C2QA. “Nobody has proposed a microscopic, atomistic model for loss that explains all the observed behavior and then was able to show that their model limits a particular device. This requires measurement techniques that are precise and quantitative, as well as sophisticated data analysis.”
    Surprising results
    To get a better picture of the source of qubit decoherence, scientists at Princeton and CFN grew and chemically processed tantalum films on sapphire substrates. They then took these samples to the Spectroscopy Soft and Tender Beamlines (SST-1 and SST-2) at NSLS-II to study the tantalum oxide that formed on the surface using x-ray photoelectron spectroscopy (XPS). XPS uses x-rays to kick electrons out of the sample and provides clues about the chemical properties and electronic state of atoms near the sample surface. The scientists hypothesized that the thickness and chemical nature of this tantalum oxide layer played a role in determining the qubit coherence, as tantalum has a thinner oxide layer compared to the niobium more typically used in qubits.
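    The relation underlying XPS is the photoelectric equation, given here as standard background:

        % The measured kinetic energy of an ejected electron reveals how tightly
        % it was bound, and therefore the chemical state of the emitting atom.
        \[
          E_{\text{binding}} = h\nu - E_{\text{kinetic}} - \phi
        \]
        % h\nu is the x-ray photon energy and \phi is the spectrometer work function.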
    “We measured these materials at the beamlines in order to better understand what was happening,” explained Andrew Walter, a lead beamline scientist in NSLS-II’s soft x-ray scattering & spectroscopy program. “There was an assumption that the tantalum oxide layer was fairly uniform, but our measurements showed that it’s not uniform at all. It’s always more interesting when you uncover an answer you don’t expect, because that’s when you learn something.”
    The team found several different kinds of tantalum oxides at the surface of the tantalum, which has prompted a new set of questions on the path to creating better superconducting qubits. Can these interfaces be modified to improve overall device performance, and which modifications would provide the most benefit? What kinds of surface treatments can be used to minimize loss?

    Embodying the spirit of codesign
    “It was inspiring to see experts of very different backgrounds coming together to solve a common problem,” said Mingzhao Liu, a materials scientist at CFN and the materials subthrust leader in C2QA. “This was a highly collaborative effort, pooling together the facilities, resources, and expertise shared between all of our facilities. From a materials science standpoint, it was exciting to create these samples and be an integral part of this research.”
    Walter said, “Work like this speaks to the way C2QA was built. The electrical engineers from Princeton University contributed a lot to device management, design, data analysis, and testing. The materials group at CFN grew and processed samples and materials. My group at NSLS-II characterized these materials and their electronic properties.”
    Having these specialized groups come together not only made the study move smoothly and more efficiently, but it gave the scientists an understanding of their work in a larger context. Students and postdocs were able to get invaluable experience in several different areas and contribute to this research in meaningful ways.
    “Sometimes, when materials scientists work with physicists, they’ll hand off their materials and wait to hear back regarding results,” said de Leon, “but our team was working hand-in-hand, developing new methods along the way that could be broadly used at the beamline going forward.”