More stories

  •

    Spintronics: Improving electronics with finer spin control

    Spintronics is an emerging technology for manufacturing electronic devices that exploit electron spin and its associated magnetic properties, rather than the electrical charge of the electron, to carry information. Antiferromagnetic materials are attracting attention in spintronics because they are expected to allow spin operations with higher stability. Unlike ferromagnetic materials, in which the atoms align along the same direction, as in a typical refrigerator magnet, the magnetic atoms inside antiferromagnets have antiparallel spin alignments that cancel out the net magnetization.
    Scientists have worked on controlling the alignment of magnetic atoms within antiferromagnetic materials to create magnetic switches. Conventionally, this has been done using a ‘field-cooling’ procedure, which heats and then cools a magnetic system containing an antiferromagnet while applying an external magnetic field. However, this process is poorly suited to many micro- or nano-structured spintronic devices because its spatial resolution is too coarse for such small-scale structures.
    “We discovered that we can control the antiferromagnetic state by simultaneously applying mechanical vibration and a magnetic field,” says Jung-Il Hong of DGIST’s Spin Nanotech Laboratory. “The process can replace the conventional heating and cooling approach, which is both inconvenient and harmful to the magnetic material. We hope our new procedure will facilitate the integration of antiferromagnetic materials into spintronics-based micro- and nano-devices.”
    Hong and his colleagues combined two layers: a cobalt-iron-boron ferromagnetic film on top of an iridium-manganese antiferromagnetic film, grown on piezoelectric ceramic substrates. Applying mechanical vibration and a magnetic field together allowed the scientists to set the alignment of the magnetic spins repeatedly along any desired direction.
    The team aims to continue searching for and developing new magnetic phases beyond conventionally classified magnetic materials. “Historically, new material discovery has led to the development of new technologies,” says Hong. “We want our research work to be a seed for new technologies.”
    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology). Note: Content may be edited for style and length.

  •

    Engineers harvest WiFi signals to power small electronics

    With the rise of the digital age, the number of WiFi sources transmitting information wirelessly between devices has grown exponentially. This has led to widespread use of the 2.4GHz radio frequency that WiFi uses, with excess signals available to be tapped for alternative uses.
    To harness this under-utilised source of energy, a research team from the National University of Singapore (NUS) and Japan’s Tohoku University (TU) has developed a technology that uses tiny smart devices known as spin-torque oscillators (STOs) to harvest and convert wireless radio frequencies into energy to power small electronics. In their study, the researchers successfully harvested energy from WiFi-band signals to power a light-emitting diode (LED) wirelessly, without using any battery.
    “We are surrounded by WiFi signals, but when we are not using them to access the Internet, they are inactive, and this is a huge waste. Our latest result is a step towards turning readily-available 2.4GHz radio waves into a green source of energy, hence reducing the need for batteries to power electronics that we use regularly. In this way, small electric gadgets and sensors can be powered wirelessly by using radio frequency waves as part of the Internet of Things. With the advent of smart homes and cities, our work could give rise to energy-efficient applications in communication, computing, and neuromorphic systems,” said Professor Yang Hyunsoo from the NUS Department of Electrical and Computer Engineering, who spearheaded the project.
    The research was carried out in collaboration with the research team of Professor Guo Yong Xin, who is also from the NUS Department of Electrical and Computer Engineering, as well as Professor Shunsuke Fukami and his team from TU. The results were published in Nature Communications on 18 May 2021.
    Converting WiFi signals into usable energy
    Spin-torque oscillators are an emerging class of devices that generate microwaves and have applications in wireless communication systems. However, the application of STOs has been hindered by their low output power and broad linewidth.

  •

    When one becomes two: Separating DNA for more accurate nanopore analysis

    A new software tool developed by Earlham Institute researchers will help bioinformaticians improve the quality and accuracy of their biological data, and avoid mis-assemblies. The fast, lightweight, user-friendly tool visualises genome assemblies and gene alignments from the latest next generation sequencing technologies.
    Called Alvis, the new visualisation tool examines mappings between DNA sequence data and reference genome databases. This allows bioinformaticians to more easily analyse their data generated from common genomics tasks and formats by producing efficient, ready-made vector images.
    First author Dr Samuel Martin, a post-doctoral scientist in the Leggett Group at the Earlham Institute, said: “Typically, alignment tools output plain text files containing lists of alignment data. This is great for computer parsing and for incorporation into a pipeline, but it can be difficult for humans to interpret.
    “Visualisation of alignment data can help us to understand the problem at hand. Because nanopore sequencing is a new technology, several new alignment formats have been implemented by tools specific to it.
    “We found that existing visualisation tools were not able to interpret these formats; Alvis can be used with all common alignment formats, and is easily extensible for future ones.”
    A key feature of the new command line tool is its unique ability to automatically highlight chimeric sequences — weak links in the DNA chain. This is where two sequences — from different parts of a genome or different species — are linked together by mistake to make one, affecting the data’s accuracy.
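    The sketch below is not Alvis itself, but a rough illustration of what detecting chimeras from alignment mappings can look like: it reads a PAF-format alignment file (a common output of long-read aligners such as minimap2) and flags reads whose alignments land on more than one reference sequence, or on widely separated regions of the same one. The input file name and distance threshold are hypothetical.

    ```python
    from collections import defaultdict

    def read_paf(path):
        """Collect (target, start, end) alignment records per read from a PAF file."""
        alignments = defaultdict(list)
        with open(path) as handle:
            for line in handle:
                fields = line.rstrip("\n").split("\t")
                qname, tname = fields[0], fields[5]
                tstart, tend = int(fields[7]), int(fields[8])
                alignments[qname].append((tname, tstart, tend))
        return alignments

    def find_chimeras(alignments, max_gap=100_000):
        """Flag reads hitting different references, or distant regions of one reference."""
        chimeras = []
        for qname, hits in alignments.items():
            targets = {target for target, _, _ in hits}
            spread = max(end for _, _, end in hits) - min(start for _, start, _ in hits)
            if len(targets) > 1 or (len(hits) > 1 and spread > max_gap):
                chimeras.append(qname)
        return chimeras

    if __name__ == "__main__":
        reads = read_paf("reads_vs_reference.paf")  # hypothetical input file
        print(f"{len(find_chimeras(reads))} putative chimeric reads")
    ```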

  •

    New material could create 'neurons' and 'synapses' for new computers

    Classical computers use binary values (0/1) to perform calculations. By contrast, our brain cells can use more values to operate, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing. Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons. Their results were published on 18 May in the journal Frontiers in Nanotechnology.
    Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.
    Thin films
    The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists are looking for ways to expand this, creating hardware that is more brain-like but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.
    In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium-ruthenate oxide, SRO) grown on a substrate of strontium titanate oxide. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.
    Magnetic anisotropy
    The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only some of the crystals in the domain have switched.
    By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’
    The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
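    As a loose illustration of the stochastic-computing idea mentioned above (not the Groningen group's implementation), a value between 0 and 1 can be encoded as a random bit stream whose density of 1s equals that value; multiplying two such values then reduces to AND-ing independent streams. The function names below are invented for the sketch.

    ```python
    import random

    def encode(value, length=10_000):
        """Encode a value in [0, 1] as a bit stream whose fraction of 1s equals the value."""
        return [1 if random.random() < value else 0 for _ in range(length)]

    def decode(stream):
        """Recover the encoded value as the fraction of 1s in the stream."""
        return sum(stream) / len(stream)

    def multiply(stream_a, stream_b):
        """AND-ing two independent stochastic streams multiplies the values they encode."""
        return [a & b for a, b in zip(stream_a, stream_b)]

    x, y = 0.6, 0.3
    print(decode(multiply(encode(x), encode(y))))  # close to 0.18, up to sampling noise
    ```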
    Story Source:
    Materials provided by University of Groningen. Note: Content may be edited for style and length.

  •

    Mathematical model predicts effect of bacterial mutations on antibiotic success

    Scientists have developed a mathematical model that predicts how the number and effects of bacterial mutations leading to drug resistance will influence the success of antibiotic treatments.
    Their model, described today in the journal eLife, provides new insights on the emergence of drug resistance in clinical settings and hints at how to design novel treatment strategies that help avoid this resistance occurring.
    Antibiotic resistance is a significant public health challenge, caused by changes in bacterial cells that allow them to survive drugs that are designed to kill them. Resistance often occurs through new mutations in bacteria that arise during the treatment of an infection. Understanding how this resistance emerges and spreads through bacterial populations is important to preventing treatment failure.
    “Mathematical models are a crucial tool for exploring the outcome of drug treatment and assessing the risk of the evolution of antibiotic resistance,” explains first author Claudia Igler, Postdoctoral Researcher at ETH Zurich, Switzerland. “These models usually consider a single mutation, which leads to full drug resistance, but multiple mutations that increase antibiotic resistance in bacteria can occur. So there are some mutations that lead to a high level of resistance individually, and some that provide a small level of resistance individually but can accumulate to provide high-level resistance.”
    For their study, Igler and her team gathered experimental evidence that drug resistance evolution follows these two patterns: a single mutation and multiple mutations. They then used this information to create an informed modelling framework which predicts the evolution of ‘single-step’ resistance versus ‘multi-step’ resistance in bacterial cells in response to drug type, pharmacokinetics (how the drug decays in the body), and treatment strategies. They investigated how the risk of treatment failure changes when taking into account multiple mutational steps, instead of a single one, and how many different bacterial lineages (bacteria with different mutations) would emerge during the treatment period.
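    The published eLife framework is considerably more detailed, but a toy discrete-time version conveys the idea: strains carrying 0, 1 or 2 resistance mutations each tolerate a higher drug concentration (MIC), the drug decays exponentially between doses, and a strain grows or dies depending on whether the current concentration is below or above its MIC. All parameter values below are invented for illustration.

    ```python
    import math

    # Toy multi-step resistance model (illustrative only, not the published framework).
    MIC = [1.0, 4.0, 16.0]       # hypothetical MICs (mg/L) for 0-, 1- and 2-mutation strains
    MUTATION_RATE = 1e-8         # fraction of each strain gaining the next mutation per hour
    GROWTH, KILL = 0.7, 1.5      # hourly growth rate below MIC, kill rate above MIC
    DOSE, HALF_LIFE, INTERVAL = 8.0, 6.0, 12   # dose (mg/L), half-life (h), dosing interval (h)

    def simulate(hours=72):
        pops = [1e7, 0.0, 0.0]                  # starting population of each strain
        conc = 0.0
        for t in range(hours):
            if t % INTERVAL == 0:
                conc += DOSE                    # instantaneous dose
            conc *= 0.5 ** (1 / HALF_LIFE)      # exponential pharmacokinetic decay
            pops = [n * math.exp(GROWTH if conc < MIC[i] else -KILL)
                    for i, n in enumerate(pops)]
            for i in range(len(pops) - 1):      # a small fraction mutates one step further
                mutants = pops[i] * MUTATION_RATE
                pops[i] -= mutants
                pops[i + 1] += mutants
        return pops

    print(simulate())  # treatment fails if any resistant strain ends well above ~1 cell
    ```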
    Using their model, the team found that the evolution of drug resistance is limited substantially if more than two mutations are required by the bacteria. Additionally, the extent of this limitation, and therefore the probability of treatment failure, depends strongly on the combination of the drug type and the route of administration, such as orally or via IV infusion.
    “Our work provides a crucial step in understanding the emergence of antibiotic resistance in clinically relevant treatment settings,” says senior author Roland Regoes, Group Leader at ETH Zurich. “Together, our findings highlight the importance of measuring the level of antibiotic resistance granted by single mutations to help inform effective antimicrobial treatment strategies.”
    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

  •

    Scientists map gene changes underlying brain and cognitive decline in aging

    Alzheimer’s disease shares some key similarities with healthy aging, according to a new mathematical model described today in eLife.
    The model provides unique insights into the multiscale biological alterations in the elderly and neurodegenerative brain, with important implications for identifying future treatment targets for Alzheimer’s disease.
    Researchers developed their mathematical model using a range of biological data — from ‘microscopic’ information on gene activity to ‘macroscopic’ information about the brain’s burden of toxic proteins (tau and amyloid), its neuronal function, cerebrovascular flow, metabolism and tissue structure from molecular PET and MRI scans.
    “In both aging and disease research, most studies incorporate brain measurements at either micro or macroscopic scale, failing to detect the direct causal relationships between several biological factors at multiple spatial resolutions,” explains first author Quadri Adewale, a PhD candidate at the Department of Neurology and Neurosurgery, McGill University, Canada. “We wanted to combine whole-brain gene activity measurements with clinical scan data in a comprehensive and personalised model, which we then validated in healthy aging and Alzheimer’s disease.”
    The study involved 460 people who had at least four different types of brain scan at four different time points as part of the Alzheimer’s Disease Neuroimaging Initiative cohort. Among the 460 participants, 151 were clinically identified as asymptomatic or healthy control (HC), 161 with early mild cognitive impairment (EMCI), 113 with late mild cognitive impairment (LMCI) and 35 with probable Alzheimer’s disease (AD).
    Data from these multimodal scans was combined with data on gene activity from the Allen Human Brain Atlas, which provides detail on whole-brain gene expression for 20,267 genes. The brain was then split into 138 different gray matter regions for the purposes of combining the gene data with the structural and functional data from the scans.
    The team then explored causal relationships between the spatial genetic patterns and information from their scans, and cross-referenced this to age-related changes in cognitive function. They found that the ability of the model to predict the extent of decline in cognitive function was highest for Alzheimer’s disease, followed in order by the less pronounced decline in cognition (LMCI, EMCI) and finally the healthy controls. This shows that the model can reproduce the individual multifactorial changes in the brain’s accumulation of toxic proteins, neuronal function and tissue structure seen over time in the clinical scans.
    Next, the team used the model to look for genes that cause cognitive decline over time during the normal process of healthy aging, using a subset of healthy control participants who remained clinically stable for nearly eight years. Cognitive changes included memory and executive functions such as flexible thinking. They found eight genes which contributed to the imaging dynamics seen in the scans and corresponded with cognitive changes in healthy individuals. Of note, the genes that changed in healthy aging are also known to affect two important proteins in the development of Alzheimer’s disease, called tau and amyloid beta.
    Next, they ran a similar analysis looking for genes that drive the progression of Alzheimer’s disease. Here, they identified 111 genes that were linked with the scan data and with associated cognitive changes in Alzheimer’s disease.
    Finally, they studied the functions of the 111 genes identified, and found that they belonged to 65 different biological processes — with most of them commonly linked to neurodegeneration and cognitive decline.
    “Our study provides unprecedented insight into the multiscale interactions among aging and Alzheimer’s disease-associated biological factors and the possible mechanistic roles of the identified genes,” concludes senior author Yasser Iturria-Medina, Assistant Professor at the Department of Neurology and Neurosurgery at McGill University. “We’ve shown that Alzheimer’s disease and healthy aging share complex biological mechanisms, even though Alzheimer’s disease is a separate entity with considerably more altered molecular and macroscopic pathways. This personalised model offers novel insights into the multiscale alterations in the elderly brain, with important implications for identifying targets for future treatments for Alzheimer’s disease progression.”
    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

  •

    Climate change disinformation is evolving. So are efforts to fight back

    Over the last four decades, a highly organized, well-funded campaign powered by the fossil fuel industry has sought to discredit the science that links global climate change to human emissions of carbon dioxide and other greenhouse gases. These disinformation efforts have sown confusion over data, questioned the integrity of climate scientists and denied the scientific consensus on the role of humans.

    Such disinformation efforts are outlined in internal documents from fossil fuel giants such as Shell and Exxon. As early as the 1980s, oil companies knew that burning fossil fuels was altering the climate, according to industry documents reviewed at a 2019 U.S. House of Representatives Committee on Oversight and Reform hearing. Yet these companies, aided by some scientists, set out to mislead the public, deny well-established science and forestall efforts to regulate emissions.

    But the effects of climate change on extreme events such as wildfires, heat waves and hurricanes have become hard to downplay (SN: 12/19/20 & SN: 1/2/21, p. 37). Not coincidentally, climate disinformation tactics have shifted from outright denial to distraction and delay (SN: 1/16/21, p. 28).

    As disinformation tactics evolve, researchers continue to test new ways to combat them. Debunking by fact-checking untrue statements is one way to combat climate disinformation. Another way, increasingly adopted by social media platforms, is to add warning labels flagging messages as possible disinformation, such as the labels Twitter and Facebook (which also owns Instagram) began adding in 2020 regarding the U.S. presidential election and the COVID-19 pandemic.

    At the same time, Facebook was sharply criticized for a change to its fact-checking policies that critics say enables the spread of climate disinformation. In 2019, the social media giant decided to exempt posts that it determines to be opinion or satire from fact-checking, creating a potentially large disinformation loophole.

    In response to mounting criticism, Facebook unveiled a pilot project in February for its users in the United Kingdom, with labels pointing out myths about climate change. The labels also point users to Facebook’s climate science information center.

    For this project, Facebook consulted several climate communication experts. Sander van der Linden, a social psychologist at the University of Cambridge, and cognitive scientist John Cook of George Mason University in Fairfax, Va., helped the company develop a new “myth-busting” unit that debunks common climate change myths — such as that scientists don’t agree that global warming is happening.

    Cook and van der Linden have also been testing ways to get out in front of disinformation, an approach known as prebunking, or inoculation theory. By helping people recognize common rhetorical techniques used to spread climate disinformation — such as logical fallacies, relying on fake “experts” and cherry-picking only the data that support one view — the two hope to build resilience against these tactics.

    This new line of defense may come with a bonus, van der Linden says. Training people in these techniques could build a more general resilience to disinformation, whether related to climate, vaccines or COVID-19.

    Science News asked Cook and van der Linden about debunking conspiracies, collaborating with Facebook and how prebunking is (and isn’t) like getting vaccinated. The conversations, held separately, have been edited for brevity and clarity.

    We’ve seen both misinformation and disinformation used in the climate change denial discussion. What’s the difference?

    van der Linden: Misinformation is any information that’s incorrect, whether due to error or fake news. Disinformation is deliberately intended to deceive. Then there’s propaganda: disinformation with a political agenda. But in practice, it’s difficult to disentangle them. Often, people use misinformation because it’s the broadest category.

    Has there been a change in the nature of climate change denialism in the last few decades?

    Cook: It is shifting. For example, we fed 21 years of [climate change] denial blog posts from the U.K. into a machine learning program. We found that the science denialism misinformation is gradually going down — and solution misinformation [targeting climate policy and renewable energy] is on the rise [as reported online in early March at SocArXiv.org].

    As the science becomes more apparent, it becomes more untenable to attack it. We see spikes in policy misinformation just before the government brings in new science policy, such as a carbon pricing bill. And there was a huge spike before the [2015] Paris climate agreement. That’s what we will see more of over time.

    How do you hope Facebook’s new climate change misinformation project will help?

    Cook: We need tech solutions, like flagging and tagging misinformation, as well as social media platforms downplaying it, so [the misinformation] doesn’t get put on as many people’s feeds. We can’t depend on social media. A look behind the curtain at Facebook showed me the challenge of getting corporations to adequately respond. There are a lot of internal tensions.

    van der Linden: I’ve worked with WhatsApp and Google, and it’s always the same story. They want to do the right thing, but don’t follow through because it hurts engagement on the platform.

    But going from not taking a stance on climate change to taking a stance, that’s a huge win. What Facebook has done is a step forward. They listened to our designs and suggestions and comments on their [pilot] test.

    We wanted more than a neutral [label directing people to Facebook’s information page on climate change], but they wanted to test the neutral post first. That’s all good. It’ll be a few months at least for the testing in the U.K. phase to roll out, but we don’t yet know how many other countries they will roll it out to and when. We all came on board with the idea that they’re going to do more, and more aggressively. I’ll be pleasantly surprised if it rolls out globally. That’s my criterion for success.

    Scientists have been countering climate change misinformation for years, through fact-checking and debunking. It’s a bit like whack-a-mole. You advocate for “inoculating” people against the techniques that help misinformation spread through communities. How can that help?

    van der Linden: Fact-checking and debunking is useful if you do it right. But there’s the issue of ideology, of resistance to fact-checking when it’s not in line with ideology. Wouldn’t life be so much easier if we could prevent [disinformation] in the first place? That’s the whole point of prebunking or inoculation. It’s a multilayer defense system. If you can get there first, that’s great. But that won’t always be possible, so you still have real-time fact-checking. This multilayer firewall is going to be the most useful thing.

    You’ve both developed online interactive tools, games really, to test the idea of inoculating people against disinformation tactics. Sander, you created an online interactive game called Bad News, in which players can invent conspiracies and act as fake news producers. A study of 15,000 participants reported in 2019 in Palgrave Communications showed that by playing at creating misinformation, people got better at recognizing it. But how long does this “inoculation” last?

    van der Linden: That’s an important difference in the viral analogy. Biological vaccines give more or less lifelong immunity, at least for some kinds of viruses. That’s not the case for a psychological vaccine. It wears off over time.

    In one study, we followed up with people [repeatedly] for about three months, during which time they didn’t replay the game. We found no decay of the inoculation effect, which was quite surprising. The inoculation remained stable for about two months. In [a shorter study focused on] climate change misinformation, the inoculation effect also remained stable, for at least one week.

    John, what about your game Cranky Uncle? At first, it focused on climate change denial, but you’ve expanded it to include other types of misinformation, on topics such as COVID-19, flat-earthism and vaccine misinformation. How well do techniques to inoculate against climate change denialism translate to other types of misinformation?

    Cook: The techniques used in climate denial are seen in all forms of misinformation. Working on deconstructing [that] misinformation introduced me to parallel argumentation, which is basically using analogies to combat flawed logic. That’s what late night comedians do: Make what is obviously a ridiculous argument. The other night, for example, Seth Meyers talked about how Texas blaming its [February] power outage on renewable energy was like New Jersey blaming its problems on Boston [clam chowder].

    My main tip is to arm yourself with awareness of misleading techniques. Think of it like a virus spreading: You don’t want to be a superspreader. Make sure that you’re wearing a mask, for starters. And when you see misinformation, call it out. That observational correction — it matters. It makes a difference.

  •

    Machine learning (AI) accurately predicts cardiac arrest risk

    A branch of artificial intelligence (AI), called machine learning, can accurately predict the risk of an out-of-hospital cardiac arrest — when the heart suddenly stops beating — using a combination of timing and weather data, finds research published online in the journal Heart.
    Machine learning is the study of computer algorithms, and is based on the idea that systems can learn from data and identify patterns to inform decisions with minimal intervention.
    The risk of a cardiac arrest was highest on Sundays, Mondays, public holidays and when temperatures dropped sharply within or between days, the findings show.
    This information could be used as an early warning system for citizens, to lower their risk and improve their chances of survival, and to improve the preparedness of emergency medical services, suggest the researchers.
    Out-of-hospital cardiac arrest is common around the world, but is generally associated with low rates of survival. Risk is affected by prevailing weather conditions.
    But meteorological data are extensive and complex, and machine learning has the potential to pick up associations not identified by conventional one-dimensional statistical approaches, say the Japanese researchers.
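    The Heart paper's exact pipeline is not reproduced here, but as a loose sketch of how timing and weather features could feed such a machine-learning model, the example below trains a gradient-boosting classifier on synthetic daily records whose risk pattern mimics the reported one (higher on Sundays, Mondays, holidays and after sharp temperature drops). All data and feature choices are invented.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_days = 2000

    # Synthetic daily features (invented): day of week (0 = Monday), holiday flag,
    # mean temperature, and the temperature drop relative to the previous day.
    day_of_week = rng.integers(0, 7, n_days)
    holiday = rng.random(n_days) < 0.03
    temperature = rng.normal(15, 8, n_days)
    temp_drop = rng.normal(0, 3, n_days)

    # Synthetic label loosely mimicking the reported pattern of elevated risk.
    risk = 0.05 + 0.03 * np.isin(day_of_week, [6, 0]) + 0.03 * holiday + 0.02 * (temp_drop > 4)
    high_risk_day = rng.random(n_days) < risk

    X = np.column_stack([day_of_week, holiday, temperature, temp_drop])
    X_train, X_test, y_train, y_test = train_test_split(X, high_risk_day, random_state=0)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    ```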