More stories

  • Manipulating the geometry of ‘electron universe’ in magnets

    Researchers at Tohoku University and the Japan Atomic Energy Agency have developed the fundamental experiments and theory needed to manipulate the geometry of the ‘electron universe,’ which describes the structure of electronic quantum states in a manner mathematically similar to that of the actual universe, within a magnetic material under ambient conditions.
    The investigated geometric property — i.e., the quantum metric — was detected as an electric signal distinct from ordinary electrical conduction. This breakthrough reveals the fundamental quantum science of electrons and paves the way for designing innovative spintronic devices utilizing the unconventional conduction emerging from the quantum metric.
    Details were published in the journal Nature Physics on April 22, 2024.
    Electric conduction, which is crucial for many devices, follows Ohm’s law: the current responds proportionally to the applied voltage. To realize new devices, however, scientists must find ways to go beyond this law. This is where quantum mechanics comes in. A unique quantum geometry known as the quantum metric can generate non-Ohmic conduction. The quantum metric is inherent to the material itself, a fundamental characteristic of the material’s quantum structure.
    The term ‘quantum metric’ draws its inspiration from the ‘metric’ concept in general relativity, which explains how the geometry of the universe distorts under the influence of intense gravitational forces, such as those around black holes. Similarly, in the pursuit of designing non-Ohmic conduction within materials, comprehending and harnessing the quantum metric becomes imperative. This metric delineates the geometry of the ‘electron universe,’ analogous to the physical universe. Specifically, the challenge lies in manipulating the quantum-metric structure within a single device and discerning its impact on electrical conduction at room temperature.
    The research team reported successful manipulation of the quantum-metric structure at room temperature in a thin-film heterostructure comprising an exotic magnet, Mn3Sn, and a heavy metal, Pt. When adjacent to Pt, Mn3Sn exhibits a distinctive magnetic texture that is drastically modulated by an applied magnetic field. The team detected and magnetically controlled a non-Ohmic conduction termed the second-order Hall effect, in which the voltage responds orthogonally and quadratically to the applied electric current. Through theoretical modeling, they confirmed that the observations can be exclusively described by the quantum metric.
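    In schematic terms (notation ours, not taken from the paper), Ohm’s law is a linear longitudinal response, while the second-order Hall effect adds a transverse voltage that grows quadratically with the drive current:

    $$ V_{\parallel} = R I \quad \text{(Ohm's law)}, \qquad V_{\perp} \propto I^{2} \quad \text{(second-order Hall effect)} $$

    Because the quadratic response is even in $I$, reversing the current leaves the sign of $V_{\perp}$ unchanged, which is what makes this kind of non-Ohmic conduction attractive for rectifiers and detectors.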
    “Our second-order Hall effect arises from the quantum-metric structure that couples with the specific magnetic texture at the Mn3Sn/Pt interface. Hence, we can flexibly manipulate the quantum metric by modifying the magnetic structure of the material through spintronic approaches and verify such manipulation in the magnetic control of the second-order Hall effect,” explained Jiahao Han, the lead author of this study.
    The main contributor to the theoretical analysis, Yasufumi Araki, added, “Theoretical predictions posit the quantum metric as a fundamental concept that connects the material properties measured in experiments to the geometric structures studied in mathematical physics. However, confirming its evidence in experiments has remained challenging. I hope that our experimental approach to accessing the quantum metric will advance such theoretical studies.”
    Principal investigator Shunsuke Fukami further added, “Until now, the quantum metric has been believed to be inherent and uncontrollable, much like the universe, but we now need to change this perception. Our findings, particularly the flexible control at room temperature, may offer new opportunities to develop functional devices such as rectifiers and detectors in the future.”

  • Unlocking spin current secrets: A new milestone in spintronics

    Spintronics is a field garnering immense attention for the range of potential advantages it offers over conventional electronics. These include reduced power consumption, high-speed operation, non-volatility, and the potential for new functionalities.
    Spintronics exploits the intrinsic spin of electrons, and fundamental to the field is controlling the flows of the spin degree of freedom, i.e., spin currents. Scientists are constantly looking at ways to create, remove, and control them for future applications.
    Detecting spin currents is no easy feat. It requires macroscopic voltage measurements, which capture the overall voltage changes across a material. A common stumbling block, however, has been a lack of understanding of how a spin current actually moves, or propagates, within the material itself.
    “Using neutron scattering and voltage measurements, we demonstrated that the magnetic properties of the material can predict how a spin current changes with temperature,” points out Yusuke Nambu, co-author of the paper and an associate professor at Tohoku University’s Institute for Materials Research (IMR).
    Nambu and his colleagues discovered that the spin current signal changes direction at a specific magnetic temperature and decreases at low temperatures. Additionally, they found that the spin direction, or magnon polarization, flips between temperatures above and below this critical magnetic temperature. This change in magnon polarization correlates with the spin current’s reversal, shedding light on its propagation direction.
    Furthermore, the material studied displayed magnetic behaviors with distinct gap energies. This suggests that below the temperature linked to this gap energy, spin current carriers are absent, leading to the observed decrease in the spin current signal at lower temperatures. Remarkably, the spin current’s temperature dependence follows an exponential decay, mirroring the neutron scattering results.
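    The low-temperature behaviour is what one would expect from a thermally activated population of spin-current carriers; a standard activated form (our sketch, not the paper’s fitted expression) is

    $$ J_{s}(T) \propto \exp\!\left(-\frac{\Delta}{k_{B} T}\right), $$

    where $\Delta$ is the gap energy and $k_{B}$ is Boltzmann’s constant: once $k_{B} T \ll \Delta$, the carriers freeze out and the spin-current signal decays exponentially, mirroring the neutron scattering results.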
    Nambu emphasizes that their findings underscore the significance of understanding microscopic details in spintronics research. “By clarifying the magnetic behaviors and their temperature variations, we can gain a comprehensive understanding of spin currents in insulating magnets, paving the way for predicting spin currents more accurately and potentially developing advanced materials with enhanced performance.”

  • Perfecting the view on a crystal’s imperfection

    Single-photon emitters (SPEs) are akin to microscopic lightbulbs that emit only one photon (a quantum of light) at a time. These tiny structures hold immense importance for the development of quantum technology, particularly in applications such as secure communications and high-resolution imaging. However, many materials that contain SPEs are impractical for use in mass manufacturing due to their high cost and the difficulty of integrating them into complex devices.
    In 2015, scientists discovered SPEs within a material called hexagonal boron nitride (hBN). Since then, hBN has gained widespread attention and application across various quantum fields and technologies, including sensors, imaging, cryptography, and computing, thanks to its layered structure and ease of manipulation.
    The emergence of SPEs within hBN stems from imperfections in the material’s crystal structure, but the precise mechanisms governing their development and function have remained elusive. Now, a new study published in Nature Materials reveals significant insights into the properties of hBN, offering a solution to discrepancies in previous research on the proposed origins of SPEs within the material.
    The study involves a collaborative effort spanning three major institutions: the Advanced Science Research Center at the CUNY Graduate Center (CUNY ASRC); the National Synchrotron Light Source II (NSLS-II) user facility at Brookhaven National Laboratory; and the National Institute for Materials Science. Gabriele Grosso, a professor with the CUNY ASRC’s Photonics Initiative and the CUNY Graduate Center’s Physics program, and Jonathan Pelliciari, a beamline scientist at NSLS-II, led the study.
    The collaboration was sparked by a conversation at the annual NSLS-II and Center for Functional Nanomaterials Users’ Meeting, where researchers from the CUNY ASRC and NSLS-II realized that their complementary expertise, skills, and resources could yield novel insights, and the idea for the hBN experiment was born. The work brought together physicists with diverse areas of expertise and instrumentation skillsets who rarely collaborate so closely.
    Using advanced techniques based on X-ray scattering and optical spectroscopy, the research team uncovered a fundamental energy excitation occurring at 285 millielectron volts. This excitation triggers the generation of harmonic electronic states that give rise to single photons — similar to how musical harmonics produce notes across multiple octaves.
    Intriguingly, these harmonics correlate with the energies of SPEs observed across numerous experiments conducted worldwide. The discovery connects previous observations and provides an explanation for the variability observed in earlier findings. Identification of this harmonic energy scale points to a common underlying origin and reconciles the diverse reports on hBN properties over the last decade.
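    For illustration, if the harmonic states sit at integer multiples of the 285 meV fundamental (a simplifying assumption on our part, echoing the musical analogy), the ladder of energies is easy to tabulate:

```python
# Harmonic ladder built on the 285 meV fundamental excitation in hBN.
# The integer-multiple form is an illustrative assumption, not a claim
# about the paper's exact level structure.
FUNDAMENTAL_EV = 0.285  # 285 millielectron volts, expressed in eV

for n in range(1, 9):
    print(f"harmonic {n}: {n * FUNDAMENTAL_EV:.3f} eV")
```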

    “Everyone was reporting different properties and different energies of the single photons that seemed to contradict each other,” said Grosso. “The beauty of our findings is that with a single energy scale and harmonics, we can organize and connect all of these findings that were thought to be completely disconnected. Using the music analogy, the single photon properties people reported were basically different notes on the same music sheet.”
    While the defects in hBN give rise to its distinctive quantum emissions, they also present a significant challenge in research efforts to understand them.
    “Defects are one of the most difficult physical phenomena to study, because they are very localized and hard to replicate,” explained Pelliciari. “Think of it this way; if you want to make a perfect circle, you can calculate a way to always replicate it. But if you want to replicate an imperfect circle, that’s much harder.”
    The implications of the team’s work extend far beyond hBN. The researchers say the findings are a stepping stone for studying defects in other materials containing SPEs. Understanding quantum emission in hBN holds the potential to drive advancements in quantum information science and technologies, facilitating secure communications and enabling powerful computation that can vastly expand and expedite research efforts.
    “These results are exciting because they connect measurements across a wide range of optical excitation energies, from single digits to hundreds of electron volts,” said Enrique Mejia, a Ph.D. student in Grosso’s lab and lead author of the work conducted at the CUNY ASRC. “We can clearly distinguish between samples with and without SPEs, and we can now explain how the observed harmonics are responsible for a wide range of single photon emitters.”
    This work was funded by LDRD, FWP DOE on quantum information science, DOE BES, and DOE ECA. The work at CUNY was supported by the National Science Foundation (NSF), the CUNY Graduate Center Physics Program, the CUNY Advanced Science Research Center, and the CUNY Research Foundation.

  • AI tool creates ‘synthetic’ images of cells for enhanced microscopy analysis

    Observing individual cells through microscopes can reveal a range of important cell biological phenomena that frequently play a role in human diseases, but the process of distinguishing single cells from each other and from their background is extremely time consuming, making it a task well suited to AI assistance.
    AI models learn to carry out such tasks from data annotated by humans, but producing the annotations that separate cells from their background, a process called “single-cell segmentation,” is time-consuming and laborious. As a result, only a limited amount of annotated data is available for AI training sets. UC Santa Cruz researchers have developed a method to solve this by building a microscopy image generation AI model that creates realistic images of single cells, which are then used as “synthetic data” to train a second AI model to better carry out single-cell segmentation.
    The software is described in a paper published in the journal iScience. The project was led by Assistant Professor of Biomolecular Engineering Ali Shariati and his graduate student Abolfazl Zargari. The model, called cGAN-Seg, is freely available on GitHub.
    “The images that come out of our model are ready to be used to train segmentation models,” Shariati said. “In a sense we are doing microscopy without a microscope, in that we are able to generate images that are very close to real images of cells in terms of the morphological details of the single cell. The beauty of it is that when they come out of the model, they are already annotated and labeled. The images show a ton of similarities to real images, which then allows us to generate new scenarios that have not been seen by our model during the training.”
    Images of individual cells seen through a microscope can help scientists learn about cell behavior and dynamics over time, improve disease detection, and find new medicines. Subcellular details such as texture can help researchers answer important questions, such as whether a cell is cancerous.
    Manually finding and labeling the boundaries of cells from their background is extremely difficult, however, especially in tissue samples where there are many cells in an image. It could take researchers several days to manually perform cell segmentation on just 100 microscopy images.
    Deep learning can speed up this process, but an initial data set of annotated images is needed to train the models — at least thousands of images are needed as a baseline to train an accurate deep learning model. Even if the researchers can find and annotate 1,000 images, those images may not contain the variation of features that appear across different experimental conditions.

    “You want to show your deep learning model works across different samples with different cell types and different image qualities,” Zargari said. “For example, if you train your model with high-quality images, it’s not going to be able to segment the low-quality cell images. We can rarely find such a good data set in the microscopy field.”
    To address this issue, the researchers created an image-to-image generative AI model that takes a limited set of annotated, labeled cell images and generates more, introducing more intricate and varied subcellular features and structures to create a diverse set of “synthetic” images. Notably, they can generate annotated images with a high density of cells, which are especially difficult to annotate by hand and are especially relevant for studying tissues. This technique works to process and generate images of different cell types as well as different imaging modalities, such as those taken using fluorescence or histological staining.
    Zargari, who led the development of the generative model, employed a commonly used AI algorithm called a “cycle generative adversarial network” for creating realistic images. The generative model is enhanced with so-called “augmentation functions” and a “style injecting network,” which helps the generator to create a wide variety of high quality synthetic images that show different possibilities for what the cells could look like. To the researchers’ knowledge, this is the first time style injecting techniques have been used in this context.
    Then, this diverse set of synthetic images created by the generator are used to train a model to accurately carry out cell segmentation on new, real images taken during experiments.
    “Using a limited data set, we can train a good generative model. Using that generative model, we are able to generate a more diverse and larger set of annotated, synthetic images. Using the generated synthetic images we can train a good segmentation model — that is the main idea,” Zargari said.
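    A minimal sketch of that three-stage idea in PyTorch (toy tensor shapes and deliberately tiny stand-in networks of our own devising; this is not the published cGAN-Seg code, which is available on GitHub):

```python
# Toy sketch of the cGAN-Seg workflow: (1) train a generator on a small
# annotated set, (2) synthesize extra annotated pairs, (3) train a
# segmentation model on the enlarged set.
import torch
import torch.nn as nn

# Stand-in generator: mask -> synthetic image. (The real model uses a
# CycleGAN-style generator with augmentation and style injection.)
generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
# Stand-in segmentation model: image -> mask logits.
segmenter = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))

# (1) A "limited annotated data set": 16 mask/image pairs, 64x64 pixels.
masks = (torch.rand(16, 1, 64, 64) > 0.7).float()
images = masks + 0.1 * torch.randn_like(masks)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
for _ in range(100):
    opt_g.zero_grad()
    # Crude reconstruction loss standing in for the full adversarial
    # and cycle-consistency objectives of a real CycleGAN.
    loss = nn.functional.mse_loss(generator(masks), images)
    loss.backward()
    opt_g.step()

# (2) Synthesize new annotated pairs from fresh masks; each mask is
# the free, already-annotated ground-truth label for its image.
new_masks = (torch.rand(64, 1, 64, 64) > 0.7).float()
with torch.no_grad():
    synthetic_images = generator(new_masks)

# (3) Train the segmentation model on the synthetic pairs.
opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-3)
for _ in range(100):
    opt_s.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        segmenter(synthetic_images), new_masks)
    loss.backward()
    opt_s.step()
```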
    The researchers compared the results of their model using synthetic training data to more traditional methods of training AI to carry out cell segmentation across different types of cells. They found that their model produces significantly improved segmentation compared to models trained with conventional, limited training data. This confirms to the researchers that providing a more diverse dataset during training of the segmentation model improves performance.
    Through these enhanced segmentation capabilities, the researchers will be able to better detect cells and study variability between individual cells, especially among stem cells. In the future, the researchers hope to use the technology they have developed to move beyond still images to generate videos, which could help them pinpoint which factors influence the fate of a cell early in its life and predict its future.
    “We are generating synthetic images that can also be turned into a time lapse movie, where we can generate the unseen future of cells,” Shariati said. “With that, we want to see if we are able to predict the future states of a cell, like if the cell is going to grow, migrate, differentiate or divide.”

  • New sensing checks for 3D printed products could overhaul manufacturing sector

    In the study, published today in the journal Waves in Random and Complex Media, researchers from the University of Bristol have derived a formula that can inform the design boundaries for a given component’s geometry and material microstructure.
    A commercially viable sensing technology, and an associated imaging algorithm, to assess the quality of such components does not currently exist. If the additive manufacturing (3D printing) of metallic components could satisfy industry safety and quality standards, there could be significant commercial advantages for the manufacturing sector.
    The key breakthrough is the use of ultrasonic array sensors, essentially the same as those used in medical imaging, for example to create images of babies in the womb. However, these new laser-based versions would not require the sensor to be in contact with the material.
    Author Professor Anthony Mulholland, head of the School of Engineering Mathematics and Technology, explained: “There is a potential sensing method using a laser-based ultrasonic array, and we are using mathematical modelling to inform the design of this equipment ahead of its in situ deployment.”
    The team built a mathematical model that incorporates the physics of ultrasonic waves propagating through a layered metallic material, as produced by additive manufacturing, and takes into account the variability between individual manufactured components.
    The mathematical formula takes as inputs the design parameters of the ultrasonic laser array and the properties of the particular material. Its output is a measure of how much information the sensor will produce for assessing the mechanical integrity of the component. The input parameters can then be varied to maximise this information content.
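    The published formula is not reproduced here, but the design loop it enables can be sketched generically: treat the information content as a function of the design parameters and maximise it. In the sketch below the objective is a hypothetical placeholder, not the formula from the paper:

```python
# Hypothetical sketch: maximise a sensor's information content over
# its design parameters. The quadratic objective is a placeholder for
# the paper's formula, and the parameter names are invented.
from scipy.optimize import minimize

def information_content(params):
    spacing, aperture = params  # fictitious design parameters
    # Placeholder objective, peaked at an arbitrary "optimum".
    return -((spacing - 0.4) ** 2 + (aperture - 2.0) ** 2)

# Maximising information = minimising its negative, within bounds.
result = minimize(lambda p: -information_content(p), x0=[1.0, 1.0],
                  bounds=[(0.1, 1.0), (0.5, 5.0)])
print("optimal design parameters:", result.x)
```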
    It is hoped the findings will accelerate the design and deployment of this proposed sensing solution.

    Professor Mulholland added: “We can then work with our industry partners to produce a means of assessing the mechanical integrity of these safety critical components at the manufacturing stage.
    “This could then lead to radically new designs (by taking full advantage of 3D printing), quicker and more cost effective production processes, and significant commercial and economic advantage to UK manufacturing.”
    Now the team plan to use the findings to help their experimental collaborators who are designing and building the laser-based ultrasonic arrays.
    These sensors will then be deployed in situ by robotic arms in a controlled additive manufacturing environment. The team will maximise the information content in the data produced by the sensor and create bespoke imaging algorithms to generate tomographic images of the interior of components supplied by their industry partners. Destructive testing will then be used to assess the quality of the tomographic images produced.
    Professor Mulholland concluded: “Opening up 3D printing in the manufacture of safety critical components, such as those found in the aerospace industry, would provide significant commercial advantage to UK industry.
    “The lack of a means of assessing the mechanical integrity of such components is the major blockage in taking this exciting opportunity forward. This study has built a mathematical model that simulates the use of a new laser-based sensor that could provide the solution to this problem, and it will accelerate the sensor’s design and deployment.”

  • 2D materials rotate light polarization

    German and Indian physicists have shown that ultra-thin two-dimensional materials such as tungsten diselenide can rotate the polarisation of visible light by several degrees at certain wavelengths under small magnetic fields suitable for use on chips.
    It has been known for centuries that light exhibits wave-like behaviour in certain situations. Some materials are able to rotate the polarisation, i.e. the direction of oscillation, of a light wave as it passes through the material. This property is utilised in a central component of optical communication networks known as an “optical isolator” or “optical diode,” which allows light to propagate in one direction but blocks it in the other. The scientists behind the new study, from the University of Münster, Germany, and the Indian Institute of Science Education and Research (IISER) in Pune, India, have published their findings in the journal Nature Communications.
    One of the problems with conventional optical isolators is that they are quite large with sizes ranging between several millimetres and several centimetres. As a result, researchers have not yet been able to create miniaturised integrated optical systems on a chip that are comparable to everyday silicon-based electronic technologies. Current integrated optical chips consist of only a few hundred elements on a chip. By comparison, a computer processor chip contains many billions of switching elements. The work of the German-Indian team is therefore a step forward in the development of miniaturised optical isolators. The 2D materials used by the researchers are only a few atomic layers thick and therefore a hundred thousand times thinner than a human hair.
    “In the future, two-dimensional materials could become the core of optical isolators and enable on-chip integration for today’s optical and future quantum optical computing and communication technologies,” says Prof Rudolf Bratschitsch from the University of Münster. Prof Ashish Arora from IISER adds: “Even the bulky magnets, which are also required for optical isolators, could be replaced by atomically thin 2D magnets.” This would drastically reduce the size of photonic integrated circuits.
    The team deciphered the mechanism responsible for the effect they found: Bound electron-hole pairs, so-called excitons, in 2D semiconductors rotate the polarisation of the light very strongly when the ultra-thin material is placed in a small magnetic field. According to Ashish Arora, “conducting such sensitive experiments on two-dimensional materials is not easy because the sample areas are very small.” The scientists had to develop a new measuring technique that is around 1,000 times faster than previous methods.
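    The effect described is a Faraday-type rotation. In the simplest textbook form (our sketch, not the exciton model of the paper) the rotation angle scales with the field and the path length through the material:

    $$ \theta = V(\lambda)\, B\, d, $$

    where $V(\lambda)$ is the wavelength-dependent Verdet constant, $B$ the applied magnetic field, and $d$ the material thickness. With $d$ only a few atomic layers, a rotation of several degrees implies an enormous rotation per unit thickness.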

  • Predicting cardiac arrhythmia 30 minutes before it happens

    Atrial fibrillation is the most common cardiac arrhythmia worldwide, with around 59 million people affected in 2019. This irregular heartbeat is associated with increased risks of heart failure, dementia and stroke, and it constitutes a significant burden on healthcare systems, making early detection and treatment a major goal. Researchers from the Luxembourg Centre for Systems Biomedicine (LCSB) of the University of Luxembourg have recently developed a deep-learning model capable of predicting the transition from a normal cardiac rhythm to atrial fibrillation. It gives early warnings on average 30 minutes before onset, with an accuracy of around 80%. These results, published in the scientific journal Patterns, pave the way for integration into wearable technologies, allowing early interventions and better patient outcomes.
    During atrial fibrillation, the heart’s upper chambers beat irregularly and out of sync with the ventricles. Reverting to a regular rhythm can require intensive interventions, from electrically shocking the heart back to normal sinus rhythm to ablating the specific area responsible for the faulty signals. Being able to predict an episode of atrial fibrillation early enough would allow patients to take preventive measures to keep their cardiac rhythm stable. However, current methods based on the analysis of heart rate and electrocardiogram (ECG) data can only detect atrial fibrillation right before its onset and do not provide an early warning.
    “In contrast, our work departs from this approach to a more prospective prediction model,” explains Prof. Jorge Goncalves, head of the Systems Control group at the LCSB. “We used heart rate data to train a deep learning model that can recognise different phases — sinus rhythm, pre-atrial fibrillation and atrial fibrillation — and calculate a ‘probability of danger’ that the patient will have an imminent episode.” As an episode approaches, this probability rises until it crosses a specific threshold, triggering an early warning.
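    Schematically, the warning logic reduces to thresholding a model’s output over successive windows of heart-rate data (toy numbers and a stand-in risk model of our own; this is not the WARN implementation):

```python
# Toy sketch of threshold-based early warning from a "probability of
# danger" computed over successive R-to-R interval windows. The model
# stub and the numbers are illustrative, not from WARN.
def danger_probability(rr_window):
    # Stand-in for the trained deep-learning model: crude heart-rate
    # variability rescaled into [0, 1] as a fake risk score.
    mean = sum(rr_window) / len(rr_window)
    var = sum((x - mean) ** 2 for x in rr_window) / len(rr_window)
    return min(1.0, 50 * var)

THRESHOLD = 0.7  # warning threshold on the danger probability

rr_stream = [[0.80, 0.81, 0.79, 0.80],   # steady sinus rhythm
             [0.78, 0.85, 0.74, 0.88],   # growing irregularity
             [0.60, 0.95, 0.55, 1.02]]   # pre-fibrillation-like
for t, window in enumerate(rr_stream):
    p = danger_probability(window)
    flag = " -> early warning" if p > THRESHOLD else ""
    print(f"window {t}: danger probability {p:.2f}{flag}")
```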
    This artificial intelligence model, called WARN (Warning of Atrial fibRillatioN), was trained and tested on 24-hour recordings collected from 350 patients at Tongji Hospital (Wuhan, China) and gave early warnings, on average 30 minutes before the start of atrial fibrillation, with an accuracy of around 80%. Compared to previous work on arrhythmia prediction, WARN is the first method to provide a warning this far from onset.
    “Another interesting aspect is that our model achieves high performance using only R-to-R intervals, essentially just heart rate data, which can be acquired from easy-to-wear and affordable pulse signal recorders such as smartwatches,” highlights Dr Marino Gavidia, first author of the publication, who worked on this project during his PhD within the Systems Control group and the Doctoral Training Unit CriTiCS. “These devices can be used by patients on a daily basis, so our results open possibilities for the development of real-time monitoring and early warnings from comfortable wearable devices,” adds Dr Arthur Montanari, an LCSB researcher involved in the project.
    Additionally, the deep-learning model could be implemented in smartphones to process the data from a smartwatch. Its low computational cost makes it ideal for integration into wearable technologies. The long-term objective is for patients to be able to continuously monitor their cardiac rhythm and receive early warnings far enough in advance to take antiarrhythmic medication or use targeted treatments that prevent the onset of atrial fibrillation. This in turn would reduce emergency interventions and improve patient outcomes.
    “Moving forward, we will focus on developing personalised models. The daily use of a simple smartwatch constantly provides new information on personal heart dynamics, enabling us to continuously refine and retrain our model for that patient to achieve enhanced performance with even earlier warnings,” concludes Prof. Goncalves. “Eventually, this approach could even lead to new clinical trials and innovative therapeutic interventions.”

  • AI weather forecasts captured Ciarán’s destructive path

    Artificial intelligence (AI) can quickly and accurately predict the path and intensity of major storms, a new study has demonstrated.
    The research, based on an analysis of Storm Ciarán in November 2023, suggests that weather forecasts using machine learning can produce predictions of similar accuracy to traditional forecasts faster, more cheaply, and with less computational power.
    Published in npj Climate and Atmospheric Science, the University of Reading study highlights the rapid progress and transformative potential of AI in weather prediction.
    Professor Andrew Charlton-Perez, who led the study, said: “AI is transforming weather forecasting before our eyes. Two years ago, modern machine learning techniques were rarely being applied to make weather forecasts. Now we have multiple models that can produce 10-day global forecasts in minutes.
    “There is a great deal we can learn about AI weather forecasts by stress-testing them on extreme events like Storm Ciarán. We can identify their strengths and weaknesses and guide the development of even better AI forecasting technology to help protect people and property. This is an exciting and important time for weather forecasting.”
    Promise and pitfalls
    To understand the effectiveness of AI-based weather models, scientists from the University of Reading compared AI and physics-based forecasts of Storm Ciarán, a deadly windstorm that hit northern and central Europe in November 2023, claiming 16 lives and leaving more than a million homes without power in France.
    The researchers used four AI models and compared their results with traditional physics-based models. The AI models, developed by tech giants like Google, Nvidia and Huawei, were able to predict the storm’s rapid intensification and track 48 hours in advance. To a large extent, the forecasts were ‘indistinguishable’ from the performance of conventional forecasting models, the researchers said. The AI models also accurately captured the large-scale atmospheric conditions that fuelled Ciarán’s explosive development, such as its position relative to the jet stream — a narrow corridor of strong high-level winds.
    The machine learning technology underestimated the storm’s damaging winds, however. All four AI systems underestimated Ciarán’s maximum wind speeds, which in reality gusted at speeds of up to 111 knots at Pointe du Raz, Brittany. The authors were able to show that this underestimation was linked to some of the features of the storm, including the temperature contrasts near its centre, that were not well predicted by the AI systems.
    To better protect people from extreme weather like Storm Ciarán, the researchers say further investigation of the use of AI in weather prediction is urgently needed. Continued development of machine learning models could mean artificial intelligence is routinely used in weather prediction in the near future, saving forecasters time and money.