More stories

  •

    Scientists map gene changes underlying brain and cognitive decline in aging

    Alzheimer’s disease shares some key similarities with healthy aging, according to a new mathematical model described today in eLife.
    The model provides unique insights into the multiscale biological alterations in the elderly and neurodegenerative brain, with important implications for identifying future treatment targets for Alzheimer’s disease.
    Researchers developed their mathematical model using a range of biological data — from ‘microscopic’ information on gene activity to ‘macroscopic’ information about the brain’s burden of toxic proteins (tau and amyloid), its neuronal function, cerebrovascular flow, metabolism and tissue structure, obtained from molecular PET and MRI scans.
    “In both aging and disease research, most studies incorporate brain measurements at either micro or macroscopic scale, failing to detect the direct causal relationships between several biological factors at multiple spatial resolutions,” explains first author Quadri Adewale, a PhD candidate at the Department of Neurology and Neurosurgery, McGill University, Canada. “We wanted to combine whole-brain gene activity measurements with clinical scan data in a comprehensive and personalised model, which we then validated in healthy aging and Alzheimer’s disease.”
    The study involved 460 people who had at least four different types of brain scan at four different time points as part of the Alzheimer’s Disease Neuroimaging Initiative cohort. Among the 460 participants, 151 were clinically identified as asymptomatic or healthy control (HC), 161 with early mild cognitive impairment (EMCI), 113 with late mild cognitive impairment (LMCI) and 35 with probable Alzheimer’s disease (AD).
    Data from these multimodal scans was combined with data on gene activity from the Allen Human Brain Atlas, which provides detail on whole-brain gene expression for 20,267 genes. The brain was then split into 138 different gray matter regions for the purposes of combining the gene data with the structural and functional data from the scans.
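    To make the data combination concrete, the sketch below shows one simple way region-averaged gene expression could be joined to regional imaging measures; the file names, column names and the plain averaging step are assumptions for illustration only, not the authors' actual pipeline.

```python
# Illustrative sketch: average gene expression within brain regions and join
# it with regional imaging measures. File names, column names and the simple
# averaging step are hypothetical, not the authors' actual pipeline.
import pandas as pd

# One row per tissue sample: a region label plus one column per gene.
expression = pd.read_csv("allen_expression_samples.csv")   # hypothetical file
# One row per region: amyloid PET, tau PET, metabolism, blood flow, etc.
imaging = pd.read_csv("regional_imaging_features.csv")     # hypothetical file

# Average the expression of every gene within each gray matter region.
region_expression = expression.groupby("region").mean(numeric_only=True)

# Join the two views on the shared region labels.
multiscale = imaging.set_index("region").join(region_expression, how="inner")
print(multiscale.shape)  # (number of regions, imaging features + genes)
```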
    The team then explored causal relationships between the spatial genetic patterns and information from their scans, and cross-referenced these with age-related changes in cognitive function. They found that the model’s ability to predict the extent of cognitive decline was highest for Alzheimer’s disease, followed by the groups with less pronounced cognitive decline (LMCI, then EMCI) and finally the healthy controls. This shows that the model can reproduce the individual multifactorial changes in the brain’s accumulation of toxic proteins, neuronal function and tissue structure seen over time in the clinical scans.
    Next, the team used the model to look for genes that cause cognitive decline over time during the normal process of healthy aging, using a subset of healthy control participants who remained clinically stable for nearly eight years. Cognitive changes included memory and executive functions such as flexible thinking. They found eight genes which contributed to the imaging dynamics seen in the scans and corresponded with cognitive changes in healthy individuals. Of note, the genes that changed in healthy aging are also known to affect two important proteins in the development of Alzheimer’s disease, called tau and amyloid beta.
    Next, they ran a similar analysis looking for genes that drive the progression of Alzheimer’s disease. Here, they identified 111 genes that were linked with the scan data and with associated cognitive changes in Alzheimer’s disease.
    Finally, they studied the functions of the 111 genes identified, and found that they belonged to 65 different biological processes — with most of them commonly linked to neurodegeneration and cognitive decline.
    “Our study provides unprecedented insight into the multiscale interactions among aging and Alzheimer’s disease-associated biological factors and the possible mechanistic roles of the identified genes,” concludes senior author Yasser Iturria-Medina, Assistant Professor at the Department of Neurology and Neurosurgery at McGill University. “We’ve shown that Alzheimer’s disease and healthy aging share complex biological mechanisms, even though Alzheimer’s disease is a separate entity with considerably more altered molecular and macroscopic pathways. This personalised model offers novel insights into the multiscale alterations in the elderly brain, with important implications for identifying targets for future treatments for Alzheimer’s disease progression.”
    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

  •

    Climate change disinformation is evolving. So are efforts to fight back

    Over the last four decades, a highly organized, well-funded campaign powered by the fossil fuel industry has sought to discredit the science that links global climate change to human emissions of carbon dioxide and other greenhouse gases. These disinformation efforts have sown confusion over data, questioned the integrity of climate scientists and denied the scientific consensus on the role of humans.

    Such disinformation efforts are outlined in internal documents from fossil fuel giants such as Shell and Exxon. As early as the 1980s, oil companies knew that burning fossil fuels was altering the climate, according to industry documents reviewed at a 2019 U.S. House of Representatives Committee on Oversight and Reform hearing. Yet these companies, aided by some scientists, set out to mislead the public, deny well-established science and forestall efforts to regulate emissions.

    But the effects of climate change on extreme events such as wildfires, heat waves and hurricanes have become hard to downplay (SN: 12/19/20 & SN: 1/2/21, p. 37). Not coincidentally, climate disinformation tactics have shifted from outright denial to distraction and delay (SN: 1/16/21, p. 28).

    As disinformation tactics evolve, researchers continue to test new ways to combat them. Debunking by fact-checking untrue statements is one way to combat climate disinformation. Another way, increasingly adopted by social media platforms, is to add warning labels flagging messages as possible disinformation, such as the labels Twitter and Facebook (which also owns Instagram) began adding in 2020 regarding the U.S. presidential election and the COVID-19 pandemic.

    At the same time, Facebook was sharply criticized for a change to its fact-checking policies that critics say enables the spread of climate disinformation. In 2019, the social media giant decided to exempt posts that it determines to be opinion or satire from fact-checking, creating a potentially large disinformation loophole.

    In response to mounting criticism, Facebook unveiled a pilot project in February for its users in the United Kingdom, with labels pointing out myths about climate change. The labels also point users to Facebook’s climate science information center.

    For this project, Facebook consulted several climate communication experts. Sander van der Linden, a social psychologist at the University of Cambridge, and cognitive scientist John Cook of George Mason University in Fairfax, Va., helped the company develop a new “myth-busting” unit that debunks common climate change myths — such as that scientists don’t agree that global warming is happening.

    Cook and van der Linden have also been testing ways to get out in front of disinformation, an approach known as prebunking, or inoculation theory. By helping people recognize common rhetorical techniques used to spread climate disinformation — such as logical fallacies, relying on fake “experts” and cherry-picking only the data that support one view — the two hope to build resilience against these tactics.

    This new line of defense may come with a bonus, van der Linden says. Training people in these techniques could build a more general resilience to disinformation, whether related to climate, vaccines or COVID-19.

    Science News asked Cook and van der Linden about debunking conspiracies, collaborating with Facebook and how prebunking is (and isn’t) like getting vaccinated. The conversations, held separately, have been edited for brevity and clarity.

    We’ve seen both misinformation and disinformation used in the climate change denial discussion. What’s the difference?

    van der Linden: Misinformation is any information that’s incorrect, whether due to error or fake news. Disinformation is deliberately intended to deceive. Then there’s propaganda: disinformation with a political agenda. But in practice, it’s difficult to disentangle them. Often, people use misinformation because it’s the broadest category.

    Has there been a change in the nature of climate change denialism in the last few decades?

    Cook: It is shifting. For example, we fed 21 years of [climate change] denial blog posts from the U.K. into a machine learning program. We found that the science denialism misinformation is gradually going down — and solution misinformation [targeting climate policy and renewable energy] is on the rise [as reported online in early March at SocArXiv.org].

    As the science becomes more apparent, it becomes more untenable to attack it. We see spikes in policy misinformation just before the government brings in new science policy, such as a carbon pricing bill. And there was a huge spike before the [2015] Paris climate agreement. That’s what we will see more of over time.

    How do you hope Facebook’s new climate change misinformation project will help?

    Cook: We need tech solutions, like flagging and tagging misinformation, as well as social media platforms downplaying it, so [the misinformation] doesn’t get put on as many people’s feeds. We can’t depend on social media. A look behind the curtain at Facebook showed me the challenge of getting corporations to adequately respond. There are a lot of internal tensions.

    van der Linden: I’ve worked with WhatsApp and Google, and it’s always the same story. They want to do the right thing, but don’t follow through because it hurts engagement on the platform.

    But going from not taking a stance on climate change to taking a stance, that’s a huge win. What Facebook has done is a step forward. They listened to our designs and suggestions and comments on their [pilot] test.

    We wanted more than a neutral [label directing people to Facebook’s information page on climate change], but they wanted to test the neutral post first. That’s all good. It’ll be a few months at least for the testing in the U.K. phase to roll out, but we don’t yet know how many other countries they will roll it out to and when. We all came on board with the idea that they’re going to do more, and more aggressively. I’ll be pleasantly surprised if it rolls out globally. That’s my criteria for success.

    Scientists have been countering climate change misinformation for years, through fact-checking and debunking. It’s a bit like whack-a-mole. You advocate for “inoculating” people against the techniques that help misinformation spread through communities. How can that help?

    van der Linden: Fact-checking and debunking is useful if you do it right. But there’s the issue of ideology, of resistance to fact-checking when it’s not in line with ideology. Wouldn’t life be so much easier if we could prevent [disinformation] in the first place? That’s the whole point of prebunking or inoculation. It’s a multilayer defense system. If you can get there first, that’s great. But that won’t always be possible, so you still have real-time fact-checking. This multilayer firewall is going to be the most useful thing.

    You’ve both developed online interactive tools, games really, to test the idea of inoculating people against disinformation tactics. Sander, you created an online interactive game called Bad News, in which players can invent conspiracies and act as fake news producers. A study of 15,000 participants reported in 2019 in Palgrave Communications showed that by playing at creating misinformation, people got better at recognizing it. But how long does this “inoculation” last?

    van der Linden: That’s an important difference in the viral analogy. Biological vaccines give more or less lifelong immunity, at least for some kinds of viruses. That’s not the case for a psychological vaccine. It wears off over time.

    In one study, we followed up with people [repeatedly] for about three months, during which time they didn’t replay the game. We found no decay of the inoculation effect, which was quite surprising. The inoculation remained stable for about two months. In [a shorter study focused on] climate change misinformation, the inoculation effect also remained stable, for at least one week.

    John, what about your game Cranky Uncle? At first, it focused on climate change denial, but you’ve expanded it to include other types of misinformation, on topics such as COVID-19, flat-earthism and vaccine misinformation. How well do techniques to inoculate against climate change denialism translate to other types of misinformation?

    Cook: The techniques used in climate denial are seen in all forms of misinformation. Working on deconstructing [that] misinformation introduced me to parallel argumentation, which is basically using analogies to combat flawed logic. That’s what late night comedians do: Make what is obviously a ridiculous argument. The other night, for example, Seth Meyers talked about how Texas blaming its [February] power outage on renewable energy was like New Jersey blaming its problems on Boston [clam chowder].

    My main tip is to arm yourself with awareness of misleading techniques. Think of it like a virus spreading: You don’t want to be a superspreader. Make sure that you’re wearing a mask, for starters. And when you see misinformation, call it out. That observational correction — it matters. It makes a difference.

  •

    Machine learning (AI) accurately predicts cardiac arrest risk

    A branch of artificial intelligence (AI), called machine learning, can accurately predict the risk of an out-of-hospital cardiac arrest — when the heart suddenly stops beating — using a combination of timing and weather data, finds research published online in the journal Heart.
    Machine learning is the study of computer algorithms based on the idea that systems can learn from data and identify patterns to inform decisions with minimal human intervention.
    The risk of a cardiac arrest was highest on Sundays, Mondays, public holidays and when temperatures dropped sharply within or between days, the findings show.
    This information could be used as an early warning system for citizens, to lower their risk and improve their chances of survival, and to improve the preparedness of emergency medical services, suggest the researchers.
    Out-of-hospital cardiac arrest is common around the world, but is generally associated with low rates of survival. Risk is affected by prevailing weather conditions.
    But meteorological data are extensive and complex, and machine learning has the potential to pick up associations not identified by conventional one-dimensional statistical approaches, say the Japanese researchers.
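    The article does not include code, but a minimal sketch of the general approach (a machine-learning classifier trained on calendar and weather features) might look like the following; the file name, column names, target definition and choice of model are illustrative assumptions, not the study's actual method.

```python
# Minimal sketch: classify days as higher or lower risk from calendar and
# weather features. File name, column names, the target definition and the
# model choice are illustrative assumptions, not the study's actual method.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("daily_weather_and_arrests.csv")        # hypothetical daily dataset
df["temp_drop"] = -df["mean_temp"].diff().fillna(0.0)    # day-to-day temperature fall

features = ["weekday", "is_holiday", "mean_temp", "temp_drop", "humidity"]
X = df[features]
y = df["high_risk_day"]  # e.g. 1 if the arrest count exceeds a chosen threshold

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```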

  •

    A newfound quasicrystal formed in the first atomic bomb test

    In an instant, the bomb obliterated everything.

    The tower it sat on and the copper wires strung around it: vaporized. The desert sand below: melted.

    In the aftermath of the first test of an atomic bomb, in July 1945, all this debris fused together, leaving the ground of the New Mexico test site coated with a glassy substance now called trinitite. High temperatures and pressures helped forge an unusual structure within one piece of trinitite, in a grain of the material just 10 micrometers across — a bit longer than a red blood cell.

    That grain contains a rare form of matter called a quasicrystal, born the moment the nuclear age began, scientists report May 17 in Proceedings of the National Academy of Sciences.

    Normal crystals are made of atoms locked in a lattice that repeats in a regular pattern. Quasicrystals have a structure that is orderly like a normal crystal but that doesn’t repeat. This means quasicrystals can have properties that are forbidden for normal crystals. First discovered in the lab in the 1980s, quasicrystals also appear in nature in meteorites (SN: 12/8/16).

    Penrose tilings are an example of a structure that is ordered but does not repeat; quasicrystals are a three-dimensional version of this idea. (Image: Inductiveload/Wikimedia Commons)

    The newly discovered quasicrystal from the New Mexico test site is the oldest one known that was made by humans.

    Trinitite takes its moniker from the nuclear test, named Trinity, in which the material was created in abundance (SN: 4/8/21). “You can still buy lots of it on eBay,” says geophysicist Terry Wallace, a coauthor of the study and emeritus director of Los Alamos National Laboratory in New Mexico.

    But, he notes, the trinitite the team studied was a rarer variety, called red trinitite. Most trinitite has a greenish tinge, but red trinitite contains copper, remnants of the wires that stretched from the ground to the bomb. Quasicrystals tend to be found in materials that have experienced a violent impact and usually involve metals. Red trinitite fit both criteria.

    But first the team had to find some.

    “I was asking around for months looking for red trinitite,” says theoretical physicist Paul Steinhardt of Princeton University. But Steinhardt, who is known for trekking to Siberia to seek out quasicrystals, wasn’t deterred (SN: 2/19/19). Eventually he and his colleagues got some from an expert in trinitite who began collaborating with the team. Then, the painstaking work started, “looking through every little microscopic speck” of the trinitite sample, says Steinhardt. Finally, the researchers extracted the tiny grain. By scattering X-rays through it, the researchers revealed that the material had a type of symmetry found only in quasicrystals.

    The new quasicrystal, formed of silicon, copper, calcium and iron, is “brand new to science,” says mineralogist Chi Ma of Caltech, who was not involved with the study. “It’s a quite cool and exciting discovery,” he says.

    Future searches for quasicrystals could examine other materials that experienced a punishing blow, such as impact craters or fulgurites, fused structures formed when lightning strikes soil (SN: 3/16/21).

    The study shows that artifacts from the birth of the atomic age are still of scientific interest, says materials scientist Miriam Hiebert of the University of Maryland in College Park, who has analyzed materials from other pivotal moments in nuclear history (SN: 5/1/19). “Historic objects and materials are not just curiosities in collectors’ cabinets but can be of real scientific value,” she says.

  •

    Archaeologists teach computers to sort ancient pottery

    Archaeologists at Northern Arizona University are hoping a new technology they helped pioneer will change the way scientists study the broken pieces left behind by ancient societies.
    The team from NAU’s Department of Anthropology has succeeded in teaching computers to perform a complex task many scientists who study ancient societies have long dreamt of: rapidly and consistently sorting thousands of pottery designs into multiple stylistic categories. By using a form of machine learning known as Convolutional Neural Networks (CNNs), the archaeologists created a computerized method that roughly emulates the thought processes of the human mind in analyzing visual information.
    “Now, using digital photographs of pottery, computers can accomplish what used to involve hundreds of hours of tedious, painstaking and eye-straining work by archaeologists who physically sorted pieces of broken pottery into groups, in a fraction of the time and with greater consistency,” said Leszek Pawlowicz, adjunct faculty in the Department of Anthropology. He and anthropology professor Chris Downum began researching the feasibility of using a computer to accurately classify broken pieces of pottery, known as sherds, into known pottery types in 2016. Results of their research are reported in the June issue of the peer-reviewed publication Journal of Archaeological Science.
    “On many of the thousands of archaeological sites scattered across the American Southwest, archaeologists will often find broken fragments of pottery known as sherds. Many of these sherds will have designs that can be sorted into previously defined stylistic categories, called ‘types,’ that have been correlated with both the general time period they were manufactured and the locations where they were made,” Downum said. “These provide archaeologists with critical information about the time a site was occupied, the cultural group with which it was associated and other groups with whom they interacted.”
    The research relied on recent breakthroughs in the use of machine learning to classify images by type, specifically CNNs. CNNs are now a mainstay in computer image recognition, used for everything from screening X-ray images for medical conditions and matching images in search engines to guiding self-driving cars. Pawlowicz and Downum reasoned that if CNNs can be used to identify things like breeds of dogs and products a consumer might like, why not apply the approach to the analysis of ancient pottery?
    Until now, the process of recognizing diagnostic design features on pottery has been difficult and time-consuming. It could involve months or years of training to master and correctly apply the design categories to tiny pieces of a broken pot. Worse, the process was prone to human error because expert archaeologists often disagree over which type is represented by a sherd, and might find it difficult to express their decision-making process in words. An anonymous peer reviewer of the article called this “the dirty secret in archaeology that no one talks about enough.”
    Determined to create a more efficient process, Pawlowicz and Downum gathered thousands of pictures of pottery fragments with a specific set of identifying physical characteristics, known as Tusayan White Ware, common across much of northeast Arizona and nearby states. They then recruited four of the Southwest’s top pottery experts to identify the pottery design type for every sherd and create a ‘training set’ of sherds from which the machine can learn. Finally, they trained the machine to learn pottery types by focusing on the pottery specimens the archaeologists agreed on.
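    As a rough illustration of the kind of CNN classifier described above, the sketch below uses Keras and a hypothetical folder of sherd photographs sorted into type-labeled subfolders; the directory layout, image size and architecture are assumptions made for this sketch, not the network or parameters reported in the paper.

```python
# Illustrative CNN for classifying sherd photographs into pottery types.
# The directory layout, image size and architecture are assumptions made for
# this sketch, not the network or parameters reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers

# Expects sherd_images/<type_name>/*.jpg, one folder per pottery type.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sherd_images", image_size=(224, 224), batch_size=32)
num_types = len(train_ds.class_names)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_types, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```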

  •

    Algorithm to see inside materials with subatomic particles

    Researchers at the University of Kent’s School of Physical Sciences, in collaboration with the Science and Technology Facilities Council (STFC) and the Universities of Cardiff, Durham and Leeds, have developed an algorithm to train computers to analyse signals from subatomic particles embedded in advanced electronic materials.
    The particles, called muons, are produced in large particle accelerators and are implanted inside samples of materials in order to investigate their magnetic properties. Muons are uniquely useful as they couple magnetically to individual atoms inside the material and then emit a signal detectable by researchers to obtain information on that magnetism.
    This ability to examine magnetism on the atomic scale makes muon-based measurements one of the most powerful probes of magnetism in electronic materials, including “quantum materials” such as superconductors and other exotic forms of matter.
    As it is not possible to deduce what is going on in the material by simple examination of the signal, researchers normally compare their data to generic models. In contrast, the present team adapted a data-science technique called Principal Component Analysis (PCA), frequently employed in face recognition.
    The PCA technique involves a computer being fed many related but distinct images and then running an algorithm to identify a small number of “archetypal” images that can be combined to reproduce, with great accuracy, any of the original images. An algorithm trained in this way can then go on to perform tasks such as recognising whether a new image matches a previously seen one.
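    In general terms, that decomposition-and-reconstruction step works as in the sketch below, which uses scikit-learn and random numbers as a stand-in for measured muon signals; the array shapes and the component count are illustrative assumptions, not the values used in the study.

```python
# Sketch of the PCA idea: learn a few "archetypal" signal shapes from many
# measured signals, then reconstruct any individual signal from them.
# Random numbers stand in for real muon data; shapes are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
signals = rng.normal(size=(500, 2000))   # 500 signals, 2000 time bins each

pca = PCA(n_components=5)
scores = pca.fit_transform(signals)      # each signal summarized by 5 numbers
reconstructed = pca.inverse_transform(scores)

print("variance explained:", pca.explained_variance_ratio_.sum())
```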
    Researchers adapted the PCA technique to analyse the signals sent out by muons embedded in complex materials, training the algorithm for a variety of quantum materials using experimental data obtained at the ISIS Neutron and Muon source of the STFC Rutherford Appleton Laboratory.
    The results showed the new technique is as proficient as the standard method at detecting phase transitions, and in some cases could detect transitions beyond the capabilities of standard analyses.
    Dr Jorge Quintanilla, Senior Lecturer in Condensed Matter Theory at Kent and leader of the Physics of Quantum Materials research group said: ‘Our research results are exceptional, as this was achieved by an algorithm that knew nothing about the physics of the materials being investigated. This suggests that the new approach might have very broad application and, as such, we have made our algorithms available for use by the worldwide research community.’
    Story Source:
    Materials provided by University of Kent. Original written by Sam Wood. Note: Content may be edited for style and length.

  •

    Future sparkles for diamond-based quantum technology

    Marilyn Monroe famously sang that diamonds are a girl’s best friend, but they are also very popular with quantum scientists — with two new research breakthroughs poised to accelerate the development of synthetic diamond-based quantum technology, improve scalability, and dramatically reduce manufacturing costs.
    While silicon is traditionally used for computer and mobile phone hardware, diamond has unique properties that make it particularly useful as a base for emerging quantum technologies such as quantum supercomputers, secure communications and sensors.
    However, there are two key problems: cost, and the difficulty of fabricating the single-crystal diamond layer, which is thinner than one millionth of a metre.
    A research team from the ARC Centre of Excellence for Transformative Meta-Optics at the University of Technology Sydney (UTS), led by Professor Igor Aharonovich, has just published two research papers, in Nanoscale and Advanced Quantum Technologies, that address these challenges.
    “For diamond to be used in quantum applications, we need to precisely engineer ‘optical defects’ in the diamond devices — cavities and waveguides — to control, manipulate and readout information in the form of qubits — the quantum version of classical computer bits,” said Professor Aharonovich.
    “It’s akin to cutting holes or carving gullies in a super thin sheet of diamond, to ensure light travels and bounces in the desired direction,” he said.

  •

    Virtual reality warps your sense of time

    Psychology researchers at UC Santa Cruz have found that playing games in virtual reality creates an effect called “time compression,” where time goes by faster than you think. Grayson Mullen, who was a cognitive science undergraduate at the time, worked with Psychology Professor Nicolas Davidenko to design an experiment that tested how virtual reality’s effects on a game player’s sense of time differ from those of conventional monitors. The results are now published in the journal Timing & Time Perception.
    Mullen designed a maze game that could be played in both virtual reality and conventional formats, then the research team recruited 41 UC Santa Cruz undergraduate students to test the game. Participants played in both formats, with researchers randomizing which version of the game each student started with. Both versions were essentially the same, but the mazes in each varied slightly so that there was no repetition between formats.
    Participants were asked to stop playing the game whenever they felt like five minutes had passed. Since there were no clocks available, each person had to make this estimate based on their own perception of the passage of time.
    Prior studies of time perception in virtual reality have often asked participants about their experiences after the fact, but in this experiment, the research team wanted to integrate a time-keeping task into the virtual reality experience in order to capture what was happening in the moment. Researchers recorded the actual amount of time that had passed when each participant stopped playing the game, and this revealed a gap between participants’ perception of time and the reality.
    The study found that participants who played the virtual reality version of the game first played for an average of 72.6 seconds longer before feeling that five minutes had passed than students who started on a conventional monitor. In other words, students played for 28.5 percent more time than they realized in virtual reality, compared to conventional formats.
    This time compression effect was observed only among participants who played the game in virtual reality first. The paper concluded this was because participants based their judgement of time in the second round on whatever initial time estimates they made during the first round, regardless of format. But if the time compression observed in the first round is translatable to other types of virtual reality experiences and longer time intervals, it could be a big step forward in understanding how this effect works.