More stories

  • Magneto-thermal imaging brings synchrotron capabilities to the lab

    Coming soon to a lab tabletop near you: a method of magneto-thermal imaging that offers nanoscale and picosecond resolution previously available only in synchrotron facilities.
    This innovation in spatial and temporal resolution will give researchers extraordinary views into the magnetic properties of a range of materials, from metals to insulators, all from the comfort of their labs, potentially boosting the development of magnetic storage devices.
    “Magnetic X-ray microscopy is a relatively rare bird,” said Greg Fuchs, associate professor of applied and engineering physics, who led the project. “The magnetic microscopies that can do this sort of spatial and temporal resolution are very few and far between. Normally, you have to pick either spatial or temporal. You can’t get them both. There’s only about four or five places in the world that have that capability. So having the ability to do it on a tabletop is really enabling spin dynamics at nanoscale for research.”
    His team’s paper, “Nanoscale Magnetization and Current Imaging Using Time-Resolved Scanning-Probe Magnetothermal Microscopy,” published June 8 in the American Chemical Society’s journal Nano Letters. The lead author is postdoctoral researcher Chi Zhang.
    The paper is the culmination of a nearly 10-year effort by the Fuchs group to explore magnetic imaging with magneto-thermal microscopy. Instead of blasting a material with light, electrons or X-rays, the researchers use a laser focused onto the scanning probe to apply heat to a microscopic swath of a sample and measure the resulting electrical voltage for local magnetic information.
    Fuchs and his team pioneered this approach and over the years have developed an understanding of how temperature gradients evolve in time and space.
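    To give a flavor of how such a point-by-point measurement is assembled into an image (a toy model only, not the group's actual analysis code), the Python sketch below raster-scans a simulated heat spot across a made-up magnetization map and records a voltage proportional to the local magnetization weighted by the heat-spot profile. The array sizes, coupling constant and noise level are invented for illustration.

    ```python
    import numpy as np

    # Toy model of magneto-thermal imaging: a focused heat spot is raster-scanned
    # across a sample, and the measured voltage is taken to be proportional to the
    # local out-of-plane magnetization weighted by the heat-spot profile.
    # All sizes, constants and noise levels are illustrative assumptions.

    rng = np.random.default_rng(0)

    # Hypothetical 64 x 64 magnetization map with two opposite domains (+1 / -1).
    mz = np.ones((64, 64))
    mz[:, 32:] = -1.0

    # Gaussian profile of the focused laser heat spot (17 x 17 pixels).
    y, x = np.mgrid[-8:9, -8:9]
    spot = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    spot /= spot.sum()

    def measure_voltage(cy, cx, coupling=1e-6, noise=1e-9):
        """Voltage at one scan position: overlap of the heat spot with local Mz."""
        patch = mz[cy - 8:cy + 9, cx - 8:cx + 9]
        return coupling * np.sum(patch * spot) + rng.normal(0.0, noise)

    # Raster scan, skipping the edges so the spot stays inside the map.
    image = np.array([[measure_voltage(i, j) for j in range(8, 56)]
                      for i in range(8, 56)])

    print(image.shape)                                # (48, 48) magnetic image
    print(image[:, :5].mean(), image[:, -5:].mean())  # opposite signs, one per domain
    ```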

  • Low-cost imaging technique shows how smartphone batteries could charge in minutes

    Researchers have developed a simple lab-based technique that allows them to look inside lithium-ion batteries and follow lithium ions moving in real time as the batteries charge and discharge, something which has not been possible until now.
    Using the low-cost technique, the researchers identified the speed-limiting processes which, if addressed, could enable the batteries in most smartphones and laptops to charge in as little as five minutes.
    The researchers, from the University of Cambridge, say their technique will not only help improve existing battery materials, but could accelerate the development of next-generation batteries, one of the biggest technological hurdles to be overcome in the transition to a fossil fuel-free world. The results are reported in the journal Nature.
    While lithium-ion batteries have undeniable advantages, such as relatively high energy densities and long lifetimes in comparison with other batteries and means of energy storage, they can also overheat or even explode, and are relatively expensive to produce. Additionally, their energy density is nowhere near that of petrol. So far, this makes them unsuitable for widespread use in two major clean technologies: electric cars and grid-scale storage for solar power.
    “A better battery is one that can store a lot more energy or one that can charge much faster — ideally both,” said co-author Dr Christoph Schnedermann, from Cambridge’s Cavendish Laboratory. “But to make better batteries out of new materials, and to improve the batteries we’re already using, we need to understand what’s going on inside them.”
    To improve lithium-ion batteries and help them charge faster, researchers need to follow and understand the processes occurring in functioning materials under realistic conditions in real time. Currently, this requires sophisticated synchrotron X-ray or electron microscopy techniques, which are time-consuming and expensive.

  • AI spots healthy stem cells quickly and accurately

    Stem cell therapy is at the cutting edge of regenerative medicine, but until now researchers and clinicians have had to painstakingly evaluate stem cell quality by looking at each cell individually under a microscope. Now, researchers from Japan have found a way to speed up this process, using the power of artificial intelligence (AI).
    In a study published in February in Stem Cells, researchers from Tokyo Medical and Dental University (TMDU) reported that their AI system, called DeepACT, can identify healthy, productive skin stem cells with the same accuracy that a human can.
    Stem cells are able to develop into several different kinds of mature cells, which means they can be used to grow new tissues in cases of injury or disease. Keratinocyte (skin) stem cells are used to treat inherited skin diseases and to grow sheets of skin that are used to repair large burns.
    “Keratinocyte stem cells are one of the few types of adult stem cells that grow well in the lab. The healthiest keratinocytes move more quickly than less healthy cells, so they can be identified by the eye using a microscope,” explains Takuya Hirose, one of the lead authors of the study. “However, this method is time-consuming, labor-intensive, and error-prone.”
    To address this, the researchers aimed to develop a system that would identify and track the movement of these stem cells automatically.
    “We trained this system through a process called ‘deep learning’ using a library of sample images,” says the co-lead author, Jun’ichi Kotoku. “Then we tested it on a new group of images and found that the results were very accurate compared with manual analysis.”
    In addition to detecting individual stem cells, the DeepACT system also calculates the ‘motion index’ of each colony, which indicates how fast the cells at the central region of the colony move compared with those at the marginal region. Colonies with the highest motion index were much more likely than those with a lower motion index to grow well, making them good candidates for generating sheets of new skin for transplantation to burn patients.
    “DeepACT is a powerful new way to perform accurate quality control of human keratinocyte stem cells and will make this process both more reliable and more efficient,” states Daisuke Nanba, senior author.
    Given that skin transplants can fail if they contain too many unhealthy or unproductive stem cells, being able to quickly and easily identify the most suitable cells would be a considerable clinical advantage. Automated quality control could also be valuable for industrial stem cell manufacturing, to help ensure a stable cell supply and lower production costs.
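    The summary above does not spell out how the motion index is computed, but a minimal sketch of one plausible definition, the ratio of mean cell speed in the colony core to mean speed at the margin, is shown below. The function name, the 50% radius cutoff and the example numbers are assumptions for illustration, not details from the paper.

    ```python
    import numpy as np

    def motion_index(positions, speeds, colony_center, colony_radius, core_frac=0.5):
        """Ratio of mean cell speed in the colony core to mean speed at the margin.

        positions     : (N, 2) array of tracked cell centroids
        speeds        : (N,)   per-cell speeds from the tracking step
        colony_center : (2,)   centroid of the colony
        colony_radius : scalar colony radius
        core_frac     : cells within this fraction of the radius count as 'central'
        (Hypothetical definition and parameter names, for illustration only.)
        """
        r = np.linalg.norm(positions - colony_center, axis=1)
        core = speeds[r <= core_frac * colony_radius]
        margin = speeds[r > core_frac * colony_radius]
        if core.size == 0 or margin.size == 0:
            return float("nan")
        return core.mean() / margin.mean()

    # Example: a colony whose central cells move roughly twice as fast as marginal ones.
    rng = np.random.default_rng(1)
    pos = rng.uniform(-100, 100, size=(200, 2))
    spd = np.where(np.linalg.norm(pos, axis=1) < 50, 12.0, 6.0) + rng.normal(0, 0.5, 200)
    print(motion_index(pos, spd, np.array([0.0, 0.0]), 100.0))  # ~2
    ```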
    Story Source:
    Materials provided by Tokyo Medical and Dental University.

  • AI to track cognitive deviation in aging brains

    Researchers have developed an artificial intelligence (AI)-based brain age prediction model to quantify deviations from a healthy brain-aging trajectory in patients with mild cognitive impairment, according to a study published in Radiology: Artificial Intelligence. The model has the potential to aid in early detection of cognitive impairment at an individual level.
    Amnestic mild cognitive impairment (aMCI) is a transition phase from normal aging to Alzheimer’s disease (AD). People with aMCI have memory deficits that are more serious than normal for their age and education, but not severe enough to affect daily function.
    For the study, Ni Shu, Ph.D., from the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University in Beijing, China, and colleagues used a machine learning approach to train a brain age prediction model on T1-weighted MR images of 974 healthy adults aged 49.3 to 95.4 years. The trained model was then applied to estimate the predicted age difference (predicted age minus actual age) of aMCI patients in the Beijing Aging Brain Rejuvenation Initiative (616 healthy controls and 80 aMCI patients) and Alzheimer’s Disease Neuroimaging Initiative (589 healthy controls and 144 aMCI patients) datasets.
    The researchers also examined the associations between the predicted age difference and cognitive impairment, genetic risk factors, pathological biomarkers of AD, and clinical progression in aMCI patients.
    The results showed that aMCI patients had brain-aging trajectories distinct from the typical normal aging trajectory, and the proposed brain age prediction model could quantify individual deviations from that trajectory in these patients. The predicted age difference was significantly associated with individual cognitive impairment of aMCI patients in several domains, including memory, attention and executive function.
    “The predictive model we generated was highly accurate at estimating chronological age in healthy participants based on only the appearance of MRI scans,” the researchers wrote. “In contrast, for aMCI, the model estimated brain age to be greater than 2.7 years older on average than the patient’s chronological age.”
    The model further showed that progressive aMCI patients exhibit more deviations from typical normal aging than stable aMCI patients, and the use of the predicted age difference score along with other AD-specific biomarkers could better predict the progression of aMCI. Apolipoprotein E (APOE) ε4 carriers showed larger predicted age differences than non-carriers, and amyloid-positive patients showed larger predicted age differences than amyloid-negative patients.
    Combining the predicted age difference with other biomarkers of AD showed the best performance in differentiating progressive aMCI from stable aMCI.
    “This work indicates that predicted age difference has the potential to be a robust, reliable and computerized biomarker for early diagnosis of cognitive impairment and monitoring response to treatment,” the authors concluded.
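    For readers unfamiliar with the ‘predicted age difference’ (brain-age gap) idea, the sketch below shows the general recipe: fit a regression model from image-derived features to chronological age in healthy controls, then apply it to patients and subtract actual age from predicted age. It is a generic illustration with synthetic data; the study itself trained directly on T1-weighted MR images rather than the hypothetical tabular features used here.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)

    # Hypothetical image-derived features (e.g., regional volumes) for healthy
    # training subjects; ages span roughly the study's 49.3-95.4 year range.
    n_train, n_feat = 974, 50
    X_train = rng.normal(size=(n_train, n_feat))
    age_train = 49.3 + (95.4 - 49.3) * rng.random(n_train)
    X_train[:, 0] += 0.05 * (age_train - age_train.mean())   # weak age signal

    # Train the brain-age model on healthy controls only.
    model = GradientBoostingRegressor().fit(X_train, age_train)

    # Apply it to a (synthetic) patient group and compute the brain-age gap.
    X_patients = rng.normal(size=(80, n_feat))
    age_patients = 60.0 + 25.0 * rng.random(80)
    predicted_age_difference = model.predict(X_patients) - age_patients

    # Positive values would suggest 'older-looking' brains than chronological age.
    print(predicted_age_difference.mean())
    ```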
    Story Source:
    Materials provided by the Radiological Society of North America.

  • River flow: New machine learning methods could improve environmental predictions

    Machine learning algorithms do a lot for us every day — send unwanted email to our spam folder, warn us if our car is about to back into something, and give us recommendations on what TV show to watch next. Now, we are increasingly using these same algorithms to make environmental predictions for us.
    A team of researchers from the University of Minnesota, University of Pittsburgh, and U.S. Geological Survey recently published a new study on predicting flow and temperature in river networks in the 2021 Society for Industrial and Applied Mathematics (SIAM) International Conference on Data Mining (SDM21) proceedings. The study was funded by the National Science Foundation (NSF).
    The research demonstrates a new machine learning method where the algorithm is “taught” the rules of the physical world in order to make better predictions and steer the algorithm toward physically meaningful relationships between inputs and outputs.
    The study presents a model that can make more accurate river and stream temperature predictions, even when little data is available, which is the case in most rivers and streams. The model can also better generalize to different time periods.
    “Water temperature in streams is a ‘master variable’ for many important aquatic systems, including the suitability of aquatic habitats, evaporation rates, greenhouse gas exchange, and efficiency of thermoelectric energy production,” said Xiaowei Jia, a lead author of the study and an assistant professor in the Department of Computer Science within the University of Pittsburgh’s School of Computing and Information. “Accurate prediction of water temperature and streamflow also aids in decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers.”
    A common criticism of machine learning is that the predictions aren’t rooted in physical meaning. That is, the algorithms are just finding correlations between inputs and outputs, and sometimes those correlations can be “spurious” or give false results. The model often won’t be able to handle a situation where the relationship between inputs and outputs changes.
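    One common way to “teach” a model physical rules, offered here as a rough illustration of the physics-guided idea described above rather than the authors’ exact formulation, is to add penalty terms to the training loss that fire whenever predictions violate known physics. In the PyTorch sketch below, the specific constraints (no sub-freezing stream water, a bounded rate of temperature change) and their weighting are assumptions chosen for illustration.

    ```python
    import torch

    def physics_guided_loss(pred_temp, obs_temp, obs_mask,
                            max_hourly_change=2.0, lambda_phys=0.1):
        """Supervised error on observed temperatures plus physics-based penalties.

        pred_temp : (batch, time) predicted stream temperatures in deg C
        obs_temp  : (batch, time) observed temperatures (missing entries masked out)
        obs_mask  : (batch, time) 1.0 where an observation exists, else 0.0
        The specific constraints and weights here are illustrative assumptions.
        """
        # Standard supervised term, computed only where observations exist.
        sup = ((pred_temp - obs_temp) ** 2 * obs_mask).sum() / obs_mask.sum().clamp(min=1)

        # Physics penalty 1: liquid stream water should not be predicted below 0 C.
        below_freezing = torch.relu(-pred_temp).mean()

        # Physics penalty 2: temperature should not change implausibly fast.
        rate = (pred_temp[:, 1:] - pred_temp[:, :-1]).abs()
        too_fast = torch.relu(rate - max_hourly_change).mean()

        return sup + lambda_phys * (below_freezing + too_fast)

    # Example usage with dummy tensors (4 stream segments, 24 hourly steps).
    pred = torch.randn(4, 24) * 3 + 10
    obs = torch.randn(4, 24) * 3 + 10
    mask = (torch.rand(4, 24) > 0.7).float()   # sparse observations
    print(physics_guided_loss(pred, obs, mask))
    ```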

  • Making our computers more secure

    Because corporations and governments rely on computers and the internet to run everything from the electric grid to healthcare and water systems, computer security is extremely important to all of us. It is increasingly being breached: numerous security hacks just this past month include the Colonial Pipeline breach and the JBS Foods ransomware attack, in which hackers took over the organizations’ computer systems and demanded payment to unlock and release them back to their owners. The White House is strongly urging companies to take ransomware threats seriously and update their systems to protect themselves. Yet these attacks continue to threaten all of us on an almost daily basis.
    Columbia Engineering researchers who are leading experts in computer security recently presented two major papers on making computer systems more secure at the International Symposium on Computer Architecture (ISCA), the premier forum for new ideas and research results in computer architecture. This new research, which has little to no effect on system performance, is already being used to create a processor for the Air Force Research Lab.
    “Memory safety has been a problem for nearly 40 years and numerous solutions have been proposed. We believe that memory safety continues to be a problem because it does not distribute the burden in a fair manner among software engineers and end-users,” said Simha Sethumadhavan, associate professor of computer science, whose research focuses on how computer architecture can be used to improve computer security. “With these two papers, we believe we have found the right balance of burdens.”
    Computer security has been a long-standing issue, with many proposed solutions workable in research settings but not in real-world situations. Sethumadhavan believes that the way to secure a system is to start with the hardware first and then, in turn, the software. The urgency of his research is underscored by the fact that he has significant grants from both the Office of Naval Research and the U.S. Air Force, and his PhD students have received a Qualcomm Innovation Fellowship to create practical security solutions.
    Sethumadhavan’s group noticed that most security issues occur within a computer’s memory, specifically around pointers. Pointers are used to manage memory, and their misuse can lead to memory corruption that opens the system to hackers who hijack the program. Current techniques to mitigate memory attacks use up a lot of energy and can break software. These methods also greatly affect a system’s performance: cellphone batteries drain quickly, apps run slowly, and computers crash.
    The team set out to address these issues and created a security solution that protects memory without affecting a system’s performance. They call their novel memory security solution ZeRØ: Zero-Overhead Resilient Operation Under Pointer Integrity Attacks.

  • Mining precious rare-earth elements from coal fly ash with a reusable ionic liquid

    Rare-earth elements are in many everyday products, such as smartphones, LED lights and batteries. However, only a few locations have deposits large enough to be worth mining, resulting in global supply chain tensions. So, there’s a push toward recycling them from non-traditional sources, such as fly ash, the waste from burning coal. Now, researchers in ACS’ Environmental Science & Technology report a simple method for recovering these elements from coal fly ash using an ionic liquid.
    While rare-earth elements aren’t as scarce as their name implies, major reserves are either in politically sensitive locations, or they are widely dispersed, which makes mining them challenging. So, to ensure their supply, some people have turned to processing other enriched resources. For instance, the ash byproduct from coal-fired power plants has similar elemental concentrations to raw ores. Yet, current methods to extract these precious materials from coal fly ash are hazardous and require several purification steps to get a usable product. A potential solution could be ionic liquids, which are considered to be environmentally benign and are reusable. One in particular, betainium bis(trifluoromethylsulfonyl)imide or [Hbet][Tf2N], selectively dissolves rare-earth oxides over other metal oxides. This ionic liquid also uniquely dissolves into water when heated and then separates into two phases when cooled. So, Ching-Hua Huang, Laura Stoy and colleagues at Georgia Tech wanted to see if it would efficiently and preferentially pull the desired elements out of coal fly ash and whether it could be effectively cleaned, creating a process that is safe and generates little waste.
    The researchers pretreated coal fly ash with an alkaline solution and dried it. Then, they heated the ash suspended in water with [Hbet][Tf2N], creating a single phase. When cooled, the solution separated into two phases. The ionic liquid extracted more than 77% of the rare-earth elements from fresh material, and it extracted an even higher percentage (97%) from weathered ash that had spent years in a storage pond. Finally, rare-earth elements were stripped from the ionic liquid with dilute acid. The researchers found that adding betaine during the leaching step increased the amounts of rare-earth elements extracted. The team tested the ionic liquid’s reusability by rinsing it with cold water to remove excess acid, finding no change in its extraction efficiency through three leaching-cleaning cycles. The researchers say that this low-waste approach produces a solution rich in rare-earth elements, with limited impurities, and could be used to recycle precious materials from the abundance of coal fly ash held in storage ponds.
    Story Source:
    Materials provided by the American Chemical Society.

  • Using virtual populations for clinical trials

    A study involving virtual rather than real patients was as effective as traditional clinical trials in evaluating a medical device used to treat brain aneurysms, according to new research.
    The findings are proof of concept for what are called in-silico trials, where instead of recruiting people to a real-life clinical trial, researchers build digital simulations of patient groups, loosely akin to the way virtual populations are built in The Sims computer game.
    In-silico trials could revolutionise the way clinical trials are conducted, reducing the time and costs of getting new medical devices and medicines developed, while reducing human and animal harm in testing.
    The virtual patient populations are developed from clinical databases to reflect age, sex and ethnicity but they also simulate the way disease affects the human body: for example, the interactions between anatomy, physics, physiology, and blood biochemistry. Those simulations are then used to model the impact of therapies and interventions.
    The international research, led by the University of Leeds and reported today (23 June) in the journal Nature Communications, investigated whether an in-silico trial could replicate the results of three real-life clinical trials that assessed the effectiveness of a device called a flow diverter, used in the treatment of brain aneurysms, a condition in which the wall of a blood vessel weakens and begins to bulge.
    Flow diverter reduces blood flow into the aneurysm
    A flow diverter is a small, flexible mesh tube which is guided to the site of the aneurysm by a doctor using a catheter. Once in place, the flow diverter directs blood along the blood vessel and reduces flow into the aneurysm, initiating a clotting process that eventually cuts the aneurysm off from blood circulation, thus healing it.
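    To make the idea of a virtual patient population more concrete, here is a heavily simplified sketch of the general pattern: sample patient attributes from distributions (in practice fitted to clinical databases) and run each virtual patient through a simulation of the intervention. Every distribution, variable and the toy outcome model below are invented stand-ins; the real trial used physics-based simulations of aneurysm anatomy, blood flow and clotting.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_virtual_patient():
        """One virtual patient drawn from simple illustrative distributions.

        In a real in-silico trial these distributions would be fitted to clinical
        databases; the variables and numbers here are invented for illustration.
        """
        return {
            "age": rng.normal(55.0, 12.0),
            "sex": rng.choice(["F", "M"], p=[0.7, 0.3]),
            "aneurysm_diameter_mm": rng.lognormal(mean=1.8, sigma=0.4),
            "neck_width_mm": rng.lognormal(mean=1.2, sigma=0.3),
        }

    def simulate_flow_diverter(patient):
        """Toy outcome model standing in for a physics-based flow/clotting simulation."""
        # Assume wider aneurysm necks are harder to occlude (illustrative only).
        p_occlusion = 1.0 / (1.0 + np.exp(0.8 * (patient["neck_width_mm"] - 4.0)))
        return rng.random() < p_occlusion

    cohort = [sample_virtual_patient() for _ in range(1000)]
    occluded = sum(simulate_flow_diverter(p) for p in cohort)
    print(f"virtual occlusion rate: {occluded / len(cohort):.1%}")
    ```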