More stories

  •

    A virtual reality 'Shopping Task' could help test for cognitive decline in adults

    New research from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) at King’s College London suggests that a virtual reality test in which participants “go to the shops” could offer a potentially promising way of effectively assessing functional cognition, the thinking and processing skills needed to accomplish complex everyday activities.
    The research, published in the Journal of Medical Internet Research, uses a novel virtual reality shopping task called “VStore” to measure cognition, asking participants to complete tests designed to mirror the real world. Researchers hope that it will be able to test for age-related cognitive decline in the future.
    The trial recruited 142 healthy individuals aged 20-79 years. Each participant was asked to “go to the shops,” first verbally recalling a list of 12 items, and was then timed as they collected the items, selected the corresponding items on a virtual self-checkout machine, paid, and ordered coffee.
    Cognition tests, such as those used to measure the deficits present in several neuropsychiatric disorders including Alzheimer’s disease, schizophrenia, and depression, are traditionally time-consuming and onerous. VStore — the technology that the researchers used in this study — is designed to overcome these limitations and provide a more accurate, engaging, and cost-effective way to explore a person’s cognitive health.
    The immersive environment (a virtual shop) mirrored the complexity of everyday life and meant that participants were better able to engage brain structures that are associated with spatial navigation, such as the hippocampus and entorhinal cortex, both of which can be affected in the early stages of Alzheimer’s disease.
    Researchers were able to establish that VStore effectively engages a range of key neuropsychological functions simultaneously, suggesting that the functional tasks embedded in virtual reality may engage a greater range of cognitive domains than standard assessments.
    Prof Sukhi Shergill, the study’s lead author from King’s IoPPN and Kent and Medway Medical School (KMMS), said, “Virtual Reality appears to offer us significant advantages over more traditional pen-and-paper methods. The simple act of going to a shop to collect and pay for a list of items is something that we are all familiar with, but also actively engages multiple parts of the brain. Our study suggests that VStore may be suitable for evaluating functional cognition in the future. However, more work needs to be done before we can confirm this.”
    Lilla Porffy, the study’s first author from King’s IoPPN, said, “These are promising findings, adding to a growing body of evidence showing that virtual reality can be used to measure cognition and related everyday functioning effectively and accurately. The next steps will be to confirm these results and expand research into conditions characterised by cognitive complaints and functional difficulties, such as psychosis and Alzheimer’s disease.”
    This study was possible thanks to funding from the Medical Research Council and the National Institute for Health Research Maudsley Biomedical Research Centre. VStore was designed by Vitae VR.
    Story Source:
    Materials provided by King’s College London. Note: Content may be edited for style and length.

  •

    Physicist solves century-old problem of radiation reaction

    A Lancaster physicist has proposed a radical solution to the question of how a charged particle, such as an electron, responds to its own electromagnetic field.
    This question has challenged physicists for over 100 years, but mathematical physicist Dr Jonathan Gratus has suggested an alternative approach, published in the Journal of Physics A, with controversial implications.
    It is well established that if a point charge accelerates it produces electromagnetic radiation. This radiation has both energy and momentum, which must come from somewhere. It is usually assumed that they come from the energy and momentum of the charged particle, damping the motion.
    The history of attempts to calculate this radiation reaction (also known as radiation damping) dates back to Lorentz in 1892. Major contributions were then made by many well-known physicists including Planck, Abraham, von Laue, Born, Schott, Pauli, Dirac and Landau. Active research continues to this day, with many articles published every year.
    The challenge is that, according to Maxwell’s equations, the electric field at the exact location of the point particle is infinite. Hence the force on that point particle should also be infinite.
    Various methods have been used to renormalise away this infinity. This leads to the well-established Lorentz-Abraham-Dirac equation.
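    In the non-relativistic limit the damping term of this equation takes the Abraham–Lorentz form (a standard textbook simplification, not the paper’s own notation): $\mathbf{F}_{\mathrm{rad}} = \frac{q^{2}}{6\pi\varepsilon_{0}c^{3}}\,\dot{\mathbf{a}} = m\tau\,\dot{\mathbf{a}}$, with $\tau = \frac{q^{2}}{6\pi\varepsilon_{0}mc^{3}} \approx 6\times10^{-24}\,\mathrm{s}$ for an electron, i.e. a force proportional to the rate of change of the acceleration.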
    Unfortunately, this equation has well-known pathological solutions. For example, a particle obeying this equation may accelerate forever with no external force (a “runaway” solution in which the acceleration grows exponentially), or accelerate before any force is applied. There is also a quantum version of radiation damping. Ironically, this is one of the few phenomena where the quantum version occurs at lower energies than the classical one.
    Physicists are actively searching for this effect. This requires “colliding” very high-energy electrons with powerful laser beams, a challenge as the biggest particle accelerators are not situated near the most powerful lasers. However, firing lasers into plasmas can produce high-energy electrons, which can then interact with the laser beam. This only requires a powerful laser. Current results show that quantum radiation reaction does exist.
    The alternative approach is to consider many charged particles, where each particle responds to the fields of all the other charged particles, but not itself. This approach was hitherto dismissed, since it was assumed that this would not conserve energy and momentum.
    However, Dr Gratus shows that this assumption is false, with the energy and momentum of one particle’s radiation coming from the external fields used to accelerate it.
    He said: “The controversial implication of this result is that there need not be any classical radiation reaction at all. We may therefore consider the discovery of quantum radiation reaction as similar to the discovery of Pluto, which was found following predictions based on discrepancies in the motion of Neptune. Corrected calculations showed there were no discrepancies. Similarly, radiation reaction was predicted, found and then shown not to be needed.”
    Story Source:
    Materials provided by Lancaster University. Note: Content may be edited for style and length.

  •

    Engineers build a molecular framework to bridge experimental and computer sciences for peptide-based materials engineering

    Researchers in the Stephenson School of Biomedical Engineering, Gallogly College of Engineering, at the University of Oklahoma have developed a framework published in Science Advances that solves the challenge of bridging experimental and computer sciences to better predict peptide structures. Peptide-based materials have been used in energy, security and health fields for the past two decades.
    Handan Acar, Ph.D., the Peggy and Charles Stephenson Assistant Professor of Biomedical Engineering at OU, teamed up with Andrew White, Ph.D., an associate professor of chemical engineering at the University of Rochester, to introduce a new strategy to study fundamentals of molecular engineering. Seren Hamsici, a doctoral student in Acar’s lab, is the first author of the study.
    Proteins are responsible for the structure, function and regulation of the body’s organs and tissues. They are formed from amino acids and come together through what are called intermolecular interactions, which are essential to how proteins perform different roles in the body. When these protein interactions behave abnormally, medical issues can result, such as when proteins clump together to form plaques in the brain that lead to Alzheimer’s disease.
    “In the peptide-engineering field, the general approach is to take those natural proteins and make incremental changes to identify the properties of the end aggregated products, and then find an application for which the identified properties would be useful,” Acar said. “However, there are more than 500 natural and unnatural amino acids. Especially when you consider the size of the peptides, this approach is just not practical.”
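    To get a feel for the scale Acar describes (a back-of-the-envelope count for illustration, not a figure from the study), the sketch below counts the possible sequences for short peptides drawn from a pool of 500 candidate amino acids; exhaustive trial-and-error quickly becomes hopeless.

```python
# Rough combinatorial count: peptide sequences of length n drawn from an
# alphabet of 500 candidate amino acids (illustrative numbers only).
n_amino_acids = 500

for length in (5, 10, 20):
    n_sequences = n_amino_acids ** length
    print(f"peptide length {length:2d}: about {n_sequences:.1e} possible sequences")
```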
    Machine learning has great potential to counter this challenge, but Acar says the complex way peptides assemble and disassemble has prevented artificial intelligence methods from being effective so far.
    “Clearly, computational methods, such as machine learning, are necessary,” she said. “Yet, the peptide aggregation is very complex. It is currently not possible to identify the effects of individual amino acids with computational methods.”
    To counter those challenges, the research team came up with a new approach. They developed a framework that would help bridge materials science and engineering research with computational science to lay the groundwork for artificial intelligence and machine learning advancements.

  •

    Simulation models exercise, age effects on plaque formation in arteries

    Plaque formation in the arteries carrying blood to the head and neck is a serious medical problem, potentially leading to strokes and heart attacks. In Physics of Fluids, by AIP Publishing, engineers from China use fluid dynamics simulations to study the effect of exercise at various ages on plaque formation.
    It has been known for years that exercise and age affect the formation of plaques through a process known as atherosclerosis. What has not been fully understood, however, is how the geometrical features of the arteries affect plaque formation, although a dilated region in the inner carotid branch, the sinus, appears to be a vulnerable site.
    “It is commonly accepted that the disturbed flow induces atherosclerosis,” said author Xiaolei Yang.
    To study this, the authors considered two arterial geometries, one with a bulging outer artery and the other without, and modeled the effect of exercise and age on blood flow through the two model arteries.
    Two main arteries carrying blood to the head and neck, known as the carotid arteries, branch off from a single large artery at a position near the thyroid gland. One branch, the internal carotid artery, or ICA, carries blood inside the cranium to the brain, while the external carotid artery remains outside the cranium and brings blood to the neck, face, and scalp.
    Just above the bifurcation, the ICA bulges outward, forming a region known as a sinus that is sensitive to blood pressure changes and helps regulate blood flow and heart rate.
    “Our work investigated the patterns of disturbed blood flow in two different model carotids, one with high risk geometrical factors and the other without,” co-author Xinyi He said.
    She explained that high-risk factors include high flare and low proximal curvature in the sinus. Flare is defined as the ratio of the maximum cross section in the sinus bulb to its minimal value, while proximal curvature measures how much the artery curves above the bifurcation point.
    To model exercise, the authors digitized blood flow measurements from individuals in three different age groups: 32-34, 54-55, and 62-63. These digitized flowrates were used as input to their computational model.
    “Overall, the effects of exercise are different for different people. Particularly, we show that exercising decreases the reversed flow volume for the 62-63 age group with the low-risk carotid, which is probably related to the decrease of systolic time interval,” said Yang.
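    As a rough illustration of the kind of quantity being quoted (a minimal sketch assuming a simple one-dimensional flowrate waveform, not the authors’ solver or data), the reversed flow volume can be computed by integrating the flowrate over the parts of the cardiac cycle where the flow runs backwards.

```python
import numpy as np

def reversed_flow_volume(flowrate, dt):
    """Integrate the retrograde (negative) part of a flowrate waveform.

    flowrate : volumetric flowrate samples over one cardiac cycle (e.g. mL/s)
    dt       : sampling interval in seconds
    """
    retrograde = np.clip(flowrate, None, 0.0)  # keep only negative samples
    return -np.sum(retrograde) * dt            # report as a positive volume

# Toy waveform: mostly forward flow with a brief retrograde dip (illustrative only).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)   # one 1-second cycle
q = 8.0 * np.sin(2.0 * np.pi * t) + 2.0           # flowrate in mL/s
print(f"reversed flow volume: {reversed_flow_volume(q, t[1] - t[0]):.2f} mL")
```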
    Yang said this suggests that evaluating the effect of exercise on atherosclerosis requires consideration of patient-specific geometries and ages.
    “For the current findings to become helpful, the analysis should be coupled to physiological and chemical processes occurring at the cellular level,” Yang said, indicating this would be the subject of the group’s future work.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  •

    How big does your quantum computer need to be?

    Quantum computers are expected to be disruptive and potentially impact many industry sectors. So researchers in the United Kingdom and the Netherlands decided to explore two very different quantum problems: breaking the encryption of Bitcoin (a digital currency) and simulating the molecule responsible for biological nitrogen fixation.
    In AVS Quantum Science, from AIP Publishing, the researchers describe a tool they created to determine how big a quantum computer needs to be to solve problems like these and how long it will take.
    “The majority of existing work within this realm focuses on a particular hardware platform, superconducting devices, like those IBM and Google are working toward,” said Mark Webber, of the University of Sussex. “Different hardware platforms will vary greatly on key hardware specifications, such as the rate of operations and the quality of control on the qubits (quantum bits).”
    Many of the most promising quantum advantage use cases will require an error-corrected quantum computer. Error correction enables running longer algorithms by compensating for inherent errors inside the quantum computer, but it comes at the cost of more physical qubits.
    Pulling nitrogen out of the air to make ammonia for fertilizers is extremely energy-intensive, and improvements to the process could impact both world food scarcity and the climate crisis. Simulation of relevant molecules is currently beyond the abilities of even the world’s fastest supercomputers but should be within the reach of next-gen quantum computers.
    “Our tool automates the calculation of the error-correction overhead as a function of key hardware specifications,” Webber said. “To make the quantum algorithm run faster, we can perform more operations in parallel by adding more physical qubits. We introduce extra qubits as needed to reach the desired runtime, which is critically dependent on the rate of operations at the physical hardware level.”
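    The release does not spell out the tool’s internals, but the flavour of such a resource estimate can be sketched with the textbook surface-code scaling (a simplified error model and assumed numbers, not the authors’ tool): pick a code distance d large enough that logical errors are unlikely over the whole algorithm, then charge roughly 2d² physical qubits per logical qubit.

```python
def estimate_physical_qubits(logical_qubits, total_operations,
                             p_phys=1e-3, p_thresh=1e-2, scale=0.1):
    """Toy surface-code resource estimate (illustrative assumptions only).

    Chooses the smallest odd code distance d such that a per-operation logical
    error rate of roughly scale * (p_phys / p_thresh) ** ((d + 1) / 2) keeps
    the chance of any logical failure over the whole algorithm below ~1%,
    then charges about 2 * d**2 physical qubits per logical qubit.
    """
    target_per_op = 0.01 / total_operations
    d = 3
    while scale * (p_phys / p_thresh) ** ((d + 1) / 2) > target_per_op:
        d += 2
    return d, logical_qubits * 2 * d ** 2

# Hypothetical example: 2,000 logical qubits and 1e10 logical operations.
d, n_phys = estimate_physical_qubits(2_000, 1e10)
print(f"code distance ~{d}, physical qubits ~{n_phys:,}")
```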
    Most quantum computing hardware platforms are limited because only qubits right next to each other can interact directly. In other platforms, such as some trapped-ion designs, the qubits are not in fixed positions and can instead be physically moved around — meaning each qubit can interact directly with a wide set of other qubits.

  •

    Using the eye as a window into heart disease

    Scientists have developed an artificial intelligence (AI) system that can analyse eye scans taken during a routine visit to an optician or eye clinic and identify patients at a high risk of a heart attack.
    Doctors have recognised that changes to the tiny blood vessels in the retina are indicators of broader vascular disease, including problems with the heart.
    In the research, led by the University of Leeds, deep learning techniques were used to train the AI system to automatically read retinal scans and identify those people who, over the following year, were likely to have a heart attack.
    Deep learning is a complex series of algorithms that enable computers to identify patterns in data and to make predictions.
    Writing in the journal Nature Machine Intelligence, the researchers report that the AI system had an accuracy of between 70% and 80% and could be used as a second referral mechanism for in-depth cardiovascular investigation.
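    The release does not describe the network itself, but the general shape of such a system can be sketched as a standard image classifier fine-tuned on labelled retinal scans (a hypothetical outline with an assumed backbone and input size, not the authors’ architecture).

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: fine-tune an ImageNet-pretrained backbone to output a
# single risk logit (heart attack within the following year: yes/no).
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(scans, labels):
    """scans: (N, 3, 224, 224) preprocessed retinal images; labels: (N,) in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(scans).squeeze(1), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```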
    The use of deep learning in the analysis of retinal scans could revolutionise the way patients are regularly screened for signs of heart disease.

  •

    Southern Ocean storms cause outgassing of carbon dioxide

    Storms over the waters around Antarctica drive an outgassing of carbon dioxide into the atmosphere, according to a new international study with researchers from the University of Gothenburg. The research group used advanced ocean robots for the study, which provides a better understanding of climate change and can lead to better global climate models.
    The world’s southernmost ocean, the Southern Ocean that surrounds Antarctica, plays an important role in the global climate because its waters contain large amounts of carbon dioxide. A new international study, in which researchers from the University of Gothenburg participated, has examined the complex processes driving air-sea fluxes of gases, such as carbon dioxide.
    Storms bring carbon dioxide-rich waters to the surface
    The research group is now delivering new findings that shed light on the area’s important role in climate change.
    “We show how the intense storms that often occur in the region increase ocean mixing and bring carbon dioxide-rich waters from the deep to the surface. This drives an outgassing of carbon dioxide from the ocean to the atmosphere. There has been a lack of knowledge about these complex processes, so the study is an important key to understanding the Southern Ocean’s significance for the climate and the global carbon budget,” says Sebastiaan Swart, professor of oceanography at the University of Gothenburg and co-author of the study.
    Facilitates better climate models
    Half of all carbon dioxide bound in the world’s oceans is found in the Southern Ocean. At the same time, climate change is expected to result in more intense storms in the future. Therefore, it is vital to understand the storms’ impact on the outgassing of carbon dioxide into the atmosphere, the researchers point out.
    “This knowledge is necessary to be able to make more accurate predictions about future climate change. Currently, these environmental processes are not captured by global climate models,” says Marcel du Plessis at the University of Gothenburg, who also participated in the study.
    Pioneering ocean robotics
    Measuring the inaccessible and stormy waters around Antarctica for a long period of time is a real challenge, which the researchers tackled with the help of unique robot technology. For several months, autonomous ocean robots (drones and ocean gliders) collected data from the surface down to depths of one kilometer.
    “This pioneering technology gave us the opportunity to collect data with long endurance, which would not have been possible via a research vessel. Thanks to these ocean robots we can now fill important knowledge gaps and gain a better understanding of the importance of the ocean for the climate,” says Sebastiaan Swart.
    The contributions to the study from the University of Gothenburg were supported by the Knut and Alice Wallenberg Foundation through the Wallenberg Academy Fellows Program and the Swedish Research Council.
    Story Source:
    Materials provided by University of Gothenburg. Original written by Ulrika Ernström. Note: Content may be edited for style and length.

  •

    Studying the Big Bang with artificial intelligence

    It could hardly be more complicated: tiny particles whir around wildly with extremely high energy, countless interactions occur in the tangled mess of quantum particles, and this results in a state of matter known as “quark-gluon plasma.” Immediately after the Big Bang, the entire universe was in this state; today it is produced by high-energy atomic nucleus collisions, for example at CERN.
    Such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Therefore, using artificial intelligence or machine learning for this purpose seems like an obvious idea. Ordinary machine-learning algorithms, however, are not suitable for this task. The mathematical properties of particle physics require a very special structure of neural networks. At TU Wien (Vienna), it has now been shown how neural networks can be successfully used for these challenging tasks in particle physics.
    Neural networks
    “Simulating a quark-gluon plasma as realistically as possible requires an extremely large amount of computing time,” says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. “Even the largest supercomputers in the world are overwhelmed by this.” It would therefore be desirable not to calculate every detail precisely, but to recognise and predict certain properties of the plasma with the help of artificial intelligence.
    Therefore, neural networks are used, similar to those used for image recognition: Artificial “neurons” are linked together on the computer in a similar way to neurons in the brain — and this creates a network that can recognise, for example, whether or not a cat is visible in a certain picture.
    When applying this technique to the quark-gluon plasma, however, there is a serious problem: the quantum fields used to mathematically describe the particles and the forces between them can be represented in various different ways. “This is referred to as gauge symmetries,” says Ipp. “The basic principle behind this is something we are familiar with: if I calibrate a measuring device differently, for example if I use the Kelvin scale instead of the Celsius scale for my thermometer, I get completely different numbers, even though I am describing the same physical state. It’s similar with quantum theories — except that there the permitted changes are mathematically much more complicated.” Mathematical objects that look completely different at first glance may in fact describe the same physical state.
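    A small numerical example shows what this invariance means in practice (a toy two-dimensional U(1) lattice model written purely for illustration; the study deals with more general gauge fields). The individual link variables change completely under a random gauge transformation, while a gauge-invariant quantity such as the plaquette does not, even though both sets of numbers describe the same physical state.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                    # L x L lattice, periodic boundaries

# U(1) link variables: one complex phase per site and direction (0 = x, 1 = y).
U = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(2, L, L)))

def plaquette(U):
    """Gauge-invariant plaquette U_x(n) U_y(n+x) U_x(n+y)* U_y(n)*."""
    Ux, Uy = U[0], U[1]
    return (Ux * np.roll(Uy, -1, axis=0)
            * np.conj(np.roll(Ux, -1, axis=1)) * np.conj(Uy))

# Random gauge transformation g(n): U_mu(n) -> g(n) U_mu(n) g(n + mu)*.
g = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(L, L)))
U_t = np.stack([g * U[0] * np.conj(np.roll(g, -1, axis=0)),
                g * U[1] * np.conj(np.roll(g, -1, axis=1))])

print("link variables changed:", not np.allclose(U, U_t))                    # True
print("plaquette unchanged:   ", np.allclose(plaquette(U), plaquette(U_t)))  # True
```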
    Gauge symmetries built into the structure of the network
    “If you don’t take these gauge symmetries into account, you can’t meaningfully interpret the results of the computer simulations,” says Dr. David I. Müller. “Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. It is much better to start out by designing the structure of the neural network in such a way that the gauge symmetry is automatically taken into account — so that different representations of the same physical state also produce the same signals in the neural network,” says Müller. “That is exactly what we have now succeeded in doing: We have developed completely new network layers that automatically take gauge invariance into account.” In some test applications, it was shown that these networks can actually learn much better how to deal with the simulation data of the quark-gluon plasma.
    “With such neural networks, it becomes possible to make predictions about the system — for example, to estimate what the quark-gluon plasma will look like at a later point in time without really having to calculate every single intermediate step in time in detail,” says Andreas Ipp. “And at the same time, it is ensured that the system only produces results that do not contradict gauge symmetry — in other words, results which make sense at least in principle.”
    It will be some time before it is possible to fully simulate collisions of atomic nuclei at CERN with such methods, but the new type of neural network provides a completely new and promising tool for describing physical phenomena for which all other computational methods may never be powerful enough.
    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.