More stories

  • AI to track cognitive deviation in aging brains

    Researchers have developed an artificial intelligence (AI)-based brain age prediction model to quantify deviations from a healthy brain-aging trajectory in patients with mild cognitive impairment, according to a study published in Radiology: Artificial Intelligence. The model has the potential to aid in early detection of cognitive impairment at an individual level.
    Amnestic mild cognitive impairment (aMCI) is a transition phase from normal aging to Alzheimer’s disease (AD). People with aMCI have memory deficits that are more serious than normal for their age and education, but not severe enough to affect daily function.
    For the study, Ni Shu, Ph.D., from the State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, in Beijing, China, and colleagues used a machine learning approach to train a brain age prediction model based on the T1-weighted MR images of 974 healthy adults aged 49.3 to 95.4 years. The trained model was applied to estimate the predicted age difference (predicted age minus actual age) of aMCI patients in the Beijing Aging Brain Rejuvenation Initiative (616 healthy controls and 80 aMCI patients) and the Alzheimer’s Disease Neuroimaging Initiative (589 healthy controls and 144 aMCI patients) datasets.
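    In schematic terms, the pipeline trains a regressor on healthy brains and then reads off the gap between predicted and actual age. Below is a minimal sketch, assuming scikit-learn and precomputed features from the T1-weighted images (such as regional gray-matter volumes); the feature set, model choice and all data are illustrative stand-ins, not the authors’ actual pipeline.
    ```python
    # Minimal brain-age sketch: fit a regressor on healthy adults, then
    # compute the predicted age difference (PAD) for a patient cohort.
    # All features and data here are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    X_healthy = rng.normal(size=(974, 100))        # 974 subjects x 100 MRI features
    y_healthy = rng.uniform(49.3, 95.4, size=974)  # chronological ages (years)

    model = GradientBoostingRegressor().fit(X_healthy, y_healthy)

    # PAD = predicted brain age minus chronological age.
    X_patients = rng.normal(size=(80, 100))
    age_patients = rng.uniform(55.0, 90.0, size=80)
    pad = model.predict(X_patients) - age_patients
    print(f"mean predicted age difference: {pad.mean():.1f} years")
    ```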
    The researchers also examined the associations between the predicted age difference and cognitive impairment, genetic risk factors, pathological biomarkers of AD, and clinical progression in aMCI patients.
    The results showed that aMCI patients had brain-aging trajectories distinct from the typical normal aging trajectory, and that the proposed model could quantify individual deviations from that trajectory. The predicted age difference was significantly associated with patients’ cognitive impairment in several domains, including memory, attention and executive function.
    “The predictive model we generated was highly accurate at estimating chronological age in healthy participants based on only the appearance of MRI scans,” the researchers wrote. “In contrast, for aMCI, the model estimated brain age to be greater than 2.7 years older on average than the patient’s chronological age.”
    The model further showed that patients with progressive aMCI deviated more from typical normal aging than those with stable aMCI. Apolipoprotein E (APOE) ε4 carriers showed larger predicted age differences than non-carriers, and amyloid-positive patients showed larger differences than amyloid-negative patients.
    Combining the predicted age difference with other AD-specific biomarkers gave the best performance in differentiating progressive from stable aMCI.
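    One plausible way to evaluate such a combination is sketched below with a simple cross-validated classifier; the features, labels and model are hypothetical, not the study’s actual analysis.
    ```python
    # Hypothetical comparison: does adding the predicted age difference (PAD)
    # to standard AD biomarkers improve progressive-vs-stable classification?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 144
    biomarkers = rng.normal(size=(n, 3))      # e.g., amyloid, tau, hippocampal volume
    pad = rng.normal(size=(n, 1))             # predicted age difference
    progressed = rng.integers(0, 2, size=n)   # 1 = progressive aMCI, 0 = stable

    auc_bio = cross_val_score(LogisticRegression(), biomarkers, progressed,
                              scoring="roc_auc", cv=5).mean()
    auc_all = cross_val_score(LogisticRegression(),
                              np.hstack([biomarkers, pad]), progressed,
                              scoring="roc_auc", cv=5).mean()
    print(f"AUC, biomarkers only: {auc_bio:.2f}; with PAD: {auc_all:.2f}")
    ```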
    “This work indicates that predicted age difference has the potential to be a robust, reliable and computerized biomarker for early diagnosis of cognitive impairment and monitoring response to treatment,” the authors concluded.
    Story Source:
    Materials provided by Radiological Society of North America. Note: Content may be edited for style and length.

  • River flow: New machine learning methods could improve environmental predictions

    Machine learning algorithms do a lot for us every day — send unwanted email to our spam folder, warn us if our car is about to back into something, and give us recommendations on what TV show to watch next. Now, we are increasingly using these same algorithms to make environmental predictions for us.
    A team of researchers from the University of Minnesota, University of Pittsburgh, and U.S. Geological Survey recently published a new study on predicting flow and temperature in river networks in the 2021 Society for Industrial and Applied Mathematics (SIAM) International Conference on Data Mining (SDM21) proceedings. The study was funded by the National Science Foundation (NSF).
    The research demonstrates a new machine learning method where the algorithm is “taught” the rules of the physical world in order to make better predictions and steer the algorithm toward physically meaningful relationships between inputs and outputs.
    The study presents a model that can make more accurate river and stream temperature predictions, even when little data is available, which is the case in most rivers and streams. The model can also better generalize to different time periods.
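    The paper’s exact formulation isn’t reproduced here, but a common physics-guided pattern is to penalize predictions that break known physical rules. A minimal sketch, assuming PyTorch, follows; the constraint used (liquid stream temperature shouldn’t fall below freezing) is an illustrative stand-in for the study’s actual physics.
    ```python
    # Physics-guided loss sketch: data-fit term plus a penalty for
    # physically implausible outputs (here, sub-freezing temperatures).
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

    def physics_guided_loss(x, y, lam=0.1):
        preds = model(x)
        data_loss = nn.functional.mse_loss(preds, y)
        physics_penalty = torch.relu(-preds).mean()  # penalize temps below 0 °C
        return data_loss + lam * physics_penalty

    # One hypothetical training step on synthetic data.
    x = torch.randn(64, 8)        # e.g., weather drivers and upstream conditions
    y = torch.rand(64, 1) * 25.0  # observed water temperatures (°C)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    opt.zero_grad()
    physics_guided_loss(x, y).backward()
    opt.step()
    ```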
    “Water temperature in streams is a ‘master variable’ for many important aquatic systems, including the suitability of aquatic habitats, evaporation rates, greenhouse gas exchange, and efficiency of thermoelectric energy production,” said Xiaowei Jia, a lead author of the study and assistant professor in the Department of Computer Science in the University of Pittsburgh’s School of Computing and Information. “Accurate prediction of water temperature and streamflow also aids in decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers.”
    A common criticism of machine learning is that the predictions aren’t rooted in physical meaning. That is, the algorithms are just finding correlations between inputs and outputs, and sometimes those correlations can be “spurious” or give false results. The model often won’t be able to handle a situation where the relationship between inputs and outputs changes.

  • Making our computers more secure

    Because corporations and governments rely on computers and the internet to run everything from the electric grid to healthcare and water systems, computer security is extremely important to all of us. It is increasingly being breached: numerous security hacks just this past month include the Colonial Pipeline breach and the JBS Foods ransomware attack, in which hackers took over the organizations’ computer systems and demanded payment to unlock and release them back to their owners. The White House is strongly urging companies to take ransomware threats seriously and update their systems to protect themselves. Yet these attacks continue to threaten all of us on an almost daily basis.
    Columbia Engineering researchers who are leading experts in computer security recently presented two major papers on making computer systems more secure at the International Symposium on Computer Architecture (ISCA), the premier forum for new ideas and research results in computer architecture. This new research, which has little to no effect on system performance, is already being used to create a processor for the Air Force Research Lab.
    “Memory safety has been a problem for nearly 40 years and numerous solutions have been proposed. We believe that memory safety continues to be a problem because it does not distribute the burden in a fair manner among software engineers and end-users,” said Simha Sethumadhavan, associate professor of computer science, whose research focuses on how computer architecture can be used to improve computer security. “With these two papers, we believe we have found the right balance of burdens.”
    Computer security has been a long-standing issue, with many proposed systems workable in research settings but not in real-world situations. Sethumadhavan believes that the way to secure a system is to first start with the hardware and then, in turn, the software. The urgency of his research is underscored by the fact that he has significant grants from both the Office of Naval Research and the U.S. Air Force, and his PhD students have received a Qualcomm Innovation Fellowship to create practical security solutions.
    Sethumadhavan’s group noticed that most security issues occur within a computer’s memory, specifically in pointers. Pointers are used for managing memory, and their corruption can open up the system to hackers who hijack the program. Current techniques to mitigate memory attacks use up a lot of energy and can break software. These methods also greatly affect a system’s performance: cellphone batteries drain quickly, apps run slowly, and computers crash.
    The team set out to address these issues and created a security solution that protects memory without affecting a system’s performance. They call their novel memory security solution ZeRØ: Zero-Overhead Resilient Operation Under Pointer Integrity Attacks.
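    The general pointer-integrity idea can be caricatured in a few lines: words that hold pointers carry a tag, ordinary data writes invalidate the tag, and a pointer load checks the tag before the value is trusted. The toy model below is purely illustrative and is not ZeRØ’s actual mechanism or hardware interface.
    ```python
    # Toy pointer-integrity model (hypothetical, not how ZeRØ works):
    # tagged pointer words detect corruption by ordinary data writes.
    MEMORY = [0] * 16
    POINTER_TAGS = set()  # word addresses currently holding pointers

    def store_pointer(addr, value):
        MEMORY[addr] = value
        POINTER_TAGS.add(addr)

    def store_data(addr, value):
        MEMORY[addr] = value
        POINTER_TAGS.discard(addr)  # a data write clobbers any pointer tag

    def load_pointer(addr):
        if addr not in POINTER_TAGS:
            raise RuntimeError(f"pointer integrity violation at word {addr}")
        return MEMORY[addr]

    store_pointer(8, 0x4000)   # program stores a code pointer at word 8
    for i in range(6, 9):      # buggy loop writes one word past its buffer...
        store_data(i, 0)
    try:
        load_pointer(8)        # ...so the tampered pointer is caught here
    except RuntimeError as e:
        print("caught:", e)
    ```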

  • Mining precious rare-earth elements from coal fly ash with a reusable ionic liquid

    Rare-earth elements are in many everyday products, such as smartphones, LED lights and batteries. However, only a few locations have deposits large enough to be worth mining, resulting in global supply chain tensions. So, there’s a push toward recycling them from non-traditional sources, such as fly ash, the waste left over from burning coal. Now, researchers in ACS’ Environmental Science & Technology report a simple method for recovering these elements from coal fly ash using an ionic liquid.
    While rare-earth elements aren’t as scarce as their name implies, major reserves are either in politically sensitive locations, or they are widely dispersed, which makes mining them challenging. So, to ensure their supply, some people have turned to processing other enriched resources. For instance, the ash byproduct from coal-fired power plants has similar elemental concentrations to raw ores. Yet, current methods to extract these precious materials from coal fly ash are hazardous and require several purification steps to get a usable product. A potential solution could be ionic liquids, which are considered to be environmentally benign and are reusable. One in particular, betainium bis(trifluoromethylsulfonyl)imide or [Hbet][Tf2N], selectively dissolves rare-earth oxides over other metal oxides. This ionic liquid also uniquely dissolves into water when heated and then separates into two phases when cooled. So, Ching-Hua Huang, Laura Stoy and colleagues at Georgia Tech wanted to see if it would efficiently and preferentially pull the desired elements out of coal fly ash and whether it could be effectively cleaned, creating a process that is safe and generates little waste.
    The researchers pretreated coal fly ash with an alkaline solution and dried it. Then, they heated the ash suspended in water with [Hbet][Tf2N], creating a single phase. When cooled, the solution separated into two phases. The ionic liquid extracted more than 77% of the rare-earth elements from fresh material, and an even higher percentage (97%) from weathered ash that had spent years in a storage pond. Finally, the rare-earth elements were stripped from the ionic liquid with dilute acid. The researchers found that adding betaine during the leaching step increased the amounts of rare-earth elements extracted. The team tested the ionic liquid’s reusability by rinsing it with cold water to remove excess acid, finding no change in its extraction efficiency through three leaching-cleaning cycles. The researchers say that this low-waste approach produces a solution rich in rare-earth elements, with limited impurities, and could be used to recycle precious materials from the abundance of coal fly ash held in storage ponds.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Using virtual populations for clinical trials

    A study involving virtual rather than real patients was as effective as traditional clinical trials in evaluating a medical device used to treat brain aneurysms, according to new research.
    The findings are proof of concept for what are called in-silico trials, where instead of recruiting people to a real-life clinical trial, researchers build digital simulations of patient groups, loosely akin to the way virtual populations are built in The Sims computer game.
    In-silico trials could revolutionise the way clinical trials are conducted, reducing the time and costs of getting new medical devices and medicines developed, while reducing human and animal harm in testing.
    The virtual patient populations are developed from clinical databases to reflect age, sex and ethnicity, but they also simulate the way disease affects the human body: for example, the interactions between anatomy, physics, physiology, and blood biochemistry. Those simulations are then used to model the impact of therapies and interventions.
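    A minimal sketch of the sampling side of that idea, assuming NumPy, follows; the demographic distributions and the outcome model below are invented for illustration (the actual study couples virtual anatomies to physics-based simulations of blood flow).
    ```python
    # Sample a virtual cohort, then simulate a per-patient treatment outcome.
    # All distributions and coefficients are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000

    age = rng.normal(55, 12, size=n).clip(18, 90)
    sex = rng.choice(["F", "M"], size=n, p=[0.7, 0.3])
    aneurysm_diameter_mm = rng.lognormal(mean=1.8, sigma=0.4, size=n)

    # Toy outcome model: occlusion after flow diversion gets less likely
    # as the aneurysm gets bigger (coefficients are invented).
    logit = 3.0 - 0.25 * aneurysm_diameter_mm
    occluded = rng.random(n) < 1 / (1 + np.exp(-logit))
    print(f"simulated occlusion rate: {occluded.mean():.1%}")
    ```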
    The international research, led by the University of Leeds and reported today (23 June) in the journal Nature Communications, investigated whether an in-silico trial could replicate the results of three real-life clinical trials that assessed the effectiveness of a device called a flow diverter, used to treat brain aneurysms, a condition in which the wall of a blood vessel weakens and begins to bulge.
    Flow diverter reduces blood flow into the aneurysm
    A flow diverter is a small, flexible mesh tube which is guided to the site of the aneurysm by a doctor using a catheter. Once in place, the flow diverter directs blood along the blood vessel and reduces flow into the aneurysm, initiating a clotting process that eventually cuts the aneurysm off from blood circulation, thus healing it.

  • Perovskite memory devices with ultra-fast switching speed

    A research team led by Professor Jang-Sik Lee of Pohang University of Science and Technology (POSTECH) has successfully developed a halide perovskite-based memory with an ultra-fast switching speed. The findings from this study were published in Nature Communications on June 10, 2021.
    Resistive switching memory is a promising contender for next-generation memory devices because of its simple structure and low power consumption. Various materials have previously been studied for resistive switching memory. Among them, halide perovskites are receiving much attention because of their low operating voltage and high on/off ratio. However, halide perovskite-based devices have so far suffered from slow switching speeds, which hinder their practical application in memory.
    To address this, the researchers at POSTECH (Prof. Jang-Sik Lee, Prof. Donghwa Lee, Youngjun Park, and Seong Hun Kim) developed ultra-fast switching memory devices using halide perovskites through a combination of first-principles calculations and experimental verification. From a total of 696 candidate halide perovskite compounds, Cs3Sb2I9 with a dimer structure was selected as the best candidate for memory applications. To verify the calculation results, memory devices using the dimer-structured Cs3Sb2I9 were fabricated. They operated with an ultra-fast switching speed of 20 ns, more than 100 times faster than devices using the layer-structured Cs3Sb2I9. In addition, many perovskites contain lead (Pb), which has raised environmental concerns; in this work, the use of a lead-free perovskite eliminates that problem.
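    Loosely, the screening step amounts to ranking candidates by a computed property and keeping the best few for fabrication. The sketch below is only a caricature: the descriptor, its values and the ranking are invented, whereas the study relied on detailed first-principles calculations.
    ```python
    # Toy screening: rank 696 hypothetical candidates by a computed
    # descriptor (an invented "migration barrier" in eV) and keep the top 3.
    import random

    random.seed(0)
    candidates = {f"candidate_{i:03d}": random.uniform(0.3, 1.5)
                  for i in range(695)}
    candidates["Cs3Sb2I9 (dimer)"] = 0.15  # illustrative value only

    # Lower barrier -> faster ion migration -> faster resistive switching.
    for name, barrier in sorted(candidates.items(), key=lambda kv: kv[1])[:3]:
        print(f"{name}: {barrier:.2f} eV")
    ```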
    “This study provides an important step toward the development of resistive switching memory that can be operated at an ultra-fast switching speed,” remarked Professor Lee on the significance of the research. He added, “This work offers an opportunity to design new materials for memory devices based on calculations and experimental verification.”
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • AR can improve the lives of older adults, so why are apps designed mainly for youngsters?

    Augmented reality (AR) is poised to revolutionise the way people complete essential everyday tasks, yet older adults — who have much to gain from the technology — will be excluded from using it unless more thought goes into designing software that makes sense to them.
    The danger of older adults falling through the gaps has been highlighted by research carried out by scientists at the University of Bath in the UK in collaboration with designers from the Bath-based charity Designability. A paper describing their work has received an honourable mention at this year’s Human Computer Interaction Conference (CHI2021) — the world’s largest conference of its kind.
    The study concludes that adults aged 50+ are more likely to be successful at completing AR-prompted tasks (such as ‘pick up the cube’ followed by ‘move the cube to the blue area’) when the steps are shown by a ‘ghosthand’ demonstrating the action rather than the more commonly used arrow or some other visual aid.
    According to the research team, many manufacturers of AR software are failing to factor the needs and preferences of older people into their application designs.
    “We can’t expect people to benefit from AR technology if they can’t follow the prompts shown to them,” said Dr Christof Lutteroth from the University’s Department of Computer Science.
    Thomas Williams, the Doctor of Engineering student (funded by the EPSRC) who conducted the research in the university’s Centre for Digital Entertainment, said: “A lot more thought needs to go into understanding what older adults need from augmented reality, so users in this group understand the prompts they’re given straight away.”
    He added: “AR technology has great potential for improving the lives of older adults but most AR designers give little or no thought to the kind of augmentations they use for this population.”

  • Cosmic filaments may be the biggest spinning objects in space

    Moons do it, stars do it, even whole galaxies do it. Now, two teams of scientists say cosmic filaments do it, too. These tendrils stretching hundreds of millions of light-years spin, twirling like giant corkscrews.

    Cosmic filaments are the universe’s largest known structures and contain most of the universe’s mass (SN: 1/20/14). These dense, slender strands of dark matter and galaxies connect the cosmic web, channeling matter toward galaxy clusters at each strand’s end (SN: 7/5/12).

    At the instant of the Big Bang, matter didn’t rotate; then, as stars and galaxies formed, they began to spin. Until now, galaxy clusters were the largest structures known to rotate. “Conventional thinking on the subject said that’s where spin ends. You can’t really generate torques on larger scales,” says Noam Libeskind, cosmologist at the Leibniz Institute for Astrophysics Potsdam in Germany.

    So the discovery that filaments spin — at a scale that makes galaxies look like specks of dust — presents a puzzle. “We don’t have a full theory of how every galaxy comes to rotate, or every filament comes to rotate,” says Mark Neyrinck, cosmologist at the University of the Basque Country in Bilbao, Spain.

    To test for rotation, Neyrinck and colleagues used a 3-D cosmological simulation to measure the velocities of dark matter clumps as the clumps moved around a filament. He and his colleagues describe their results in a paper posted in 2020 at arXiv.org and now in press with the Monthly Notices of the Royal Astronomical Society. Meanwhile, Libeskind and colleagues searched for rotation in the real universe, they report June 14 in Nature Astronomy. Using the Sloan Digital Sky Survey, the team mapped galaxies’ motions and measured their velocities perpendicular to filaments’ axes.
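    In simplified form, the real-universe test compares line-of-sight velocities of galaxies on opposite sides of a filament’s axis. A schematic sketch with synthetic data follows (idealized geometry; the published analyses handle projection and selection effects far more carefully).
    ```python
    # Schematic rotation test: with a filament along z viewed side-on,
    # a spin shows up as a mean velocity offset between the two sides.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500

    x = rng.normal(0, 1.0, size=n)   # galaxy offsets from the filament axis (Mpc)
    v_spin = 40.0                    # injected rotation signal (km/s)
    v_los = np.sign(x) * v_spin + rng.normal(0, 100, size=n)  # plus random motions

    dv = v_los[x > 0].mean() - v_los[x < 0].mean()
    print(f"side-to-side velocity difference: {dv:.0f} km/s (injected: 80)")
    ```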

    Video: A computer simulation shows how a cosmic filament twists galaxies and dark matter into a strand of the cosmic web. Filaments pull matter into rotation and toward clusters at their ends, visualized here with “test particles” shaped like comets.

    The two teams detected similar rotational velocities for filaments despite differing approaches, Neyrinck says, an “encouraging [indication] that we’re looking at the same thing.”

    Next, researchers want to tackle what makes these giant space structures spin, and how they get started. “What is that process?” Libeskind says. “Can we figure it out?”