More stories

  • New viable means of storing information for quantum technologies?

    Quantum information could be behind the next technological revolution. By analogy with the bit in classical computing, the qubit is the basic unit of quantum computing. However, demonstrating the existence of this information storage unit and exploiting it remains difficult, which limits its use.
    In a study published on 3 August 2021 in Physical Review X, an international research team consisting of CNRS researcher Fabio Pistolesi¹ and two foreign researchers used theoretical calculations to show that it is possible to realize a new type of qubit in which information is stored in the oscillation amplitude of a carbon nanotube. These nanotubes can perform a large number of oscillations without the motion dying away, which reflects their weak interaction with the environment and makes them excellent candidate qubits. This property would enable greater reliability in quantum computation.
    A problem nevertheless persists in reading and writing the information stored in the first two energy levels² of these oscillators. The scientists showed that this information can be read by exploiting the coupling between electrons (negatively charged particles) and the flexural mode of the nanotube.
    This coupling changes the spacing between the first energy levels enough to make them addressable independently of the other levels, thereby making it possible to read the information they contain. These promising theoretical predictions have not yet been verified experimentally.
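    The mechanism described above is, in essence, an induced anharmonicity: if adjacent level spacings differ, the lowest transition can be driven without exciting higher levels. A minimal sketch, with made-up numbers and a generic anharmonic correction standing in for the electron-flexure coupling (not the paper's actual Hamiltonian):

```python
# Illustrative sketch only: energy levels of a weakly anharmonic
# oscillator, E_n = omega*(n + 1/2) - A*n*(n + 1)/2 (units of h-bar = 1),
# where the anharmonicity A stands in for the effect of the
# electron-flexure coupling. All numbers are arbitrary.

def level(n, omega=1.0, anharm=0.05):
    """Energy of level n for a weakly anharmonic oscillator."""
    return omega * (n + 0.5) - anharm * n * (n + 1) / 2

# Transition frequencies between adjacent levels.
f01 = level(1) - level(0)  # the qubit transition
f12 = level(2) - level(1)  # the next transition up

# A nonzero anharmonicity separates the two transitions, so the first
# two levels can be addressed without leaking into higher ones.
print(f"0->1 transition: {f01:.3f}")  # 0.950
print(f"1->2 transition: {f12:.3f}")  # 0.900
print(f"separation:      {f01 - f12:.3f}")  # 0.050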
    Notes
    1 — Researcher at the Laboratoire ondes et matières d’Aquitaine (CNRS/Université de Bordeaux). He worked with scientists from the University of Chicago (United States) and the Institute of Photonic Sciences in Barcelona (Spain).
    2 — An energy level is a quantity used to describe physical systems, with each level corresponding to a “state” of the system.
    Story Source:
    Materials provided by CNRS. Note: Content may be edited for style and length.

  • Using virtual reality to help students understand the brain's complex systems, researchers demonstrate effectiveness of 3D visualization as a learning tool

    Researchers from the Neuroimaging Center at NYU Abu Dhabi (NYUAD) and the Wisconsin Institute for Discovery at the University of Wisconsin-Madison have developed the UW Virtual Brain Project™, producing unique, interactive, 3D narrated diagrams to help students learn about the structure and function of perceptual systems in the human brain. A new study exploring how students responded to these lessons on desktop PCs and in virtual reality (VR) offers new insights into the benefits of VR as an educational tool.
    Led by Bas Rokers, Associate Professor and Director of NYUAD’s Neuroimaging Center, and Karen Schloss, Assistant Professor of Psychology and a Principal Investigator in the Virtual Environments Group at the Wisconsin Institute for Discovery at the University of Wisconsin-Madison, the researchers have published their findings in a new paper, “The UW Virtual Brain Project: An immersive approach to teaching functional neuroanatomy,” in the journal Translational Issues in Psychological Science from the American Psychological Association (APA). In their experiments, participants showed significant content-based learning on both devices, with no significant difference between PC and VR for content-based learning outcomes. However, VR far exceeded PC viewing for experience-based learning outcomes: VR was, in other words, more enjoyable and easier to use.
    “Students are enthusiastic about learning in VR,” said Rokers. “However, our findings indicate that learners can have similar access to learning about functional neuroanatomy through multiple platforms, which means that those who don’t have access to VR technology are not at an inherent disadvantage. The power of VR is its ability to transport learners to new environments they might not otherwise be able to explore. But, importantly, VR is not a substitute for real-world interactions with peers and instructors.”
    The 3D narrated videos are already in active use in classes that include neuroanatomy instruction, at both the University of Wisconsin-Madison and NYUAD.
    Story Source:
    Materials provided by New York University.

  • Decoding how salamanders walk

    Researchers at Tohoku University and the Swiss Federal Institute of Technology in Lausanne, with the support of the Human Frontier Science Program, have decoded the flexible motor control mechanisms underlying salamander walking.
    Their findings were published in the journal Frontiers in Neurorobotics on July 30, 2021.
    Four-footed animals can navigate complex, unpredictable, and unstructured environments. This impressive ability is thanks to their body-limb coordination.
    The salamander is an excellent specimen for studying body-limb coordination mechanisms. It is an amphibian that walks on four legs, swaying its body from left to right in a motion known as undulation.
    Its nervous system is simpler than those of mammals, and it changes its walking pattern according to the speed at which it is moving.
    To decode the salamander’s movement, researchers led by Professor Akio Ishiguro of the Research Institute of Electrical Communication at Tohoku University modeled the salamander’s nervous system mathematically and physically simulated the model.
    In building the model, the researchers hypothesized that the legs and the body are controlled to support each other's motions by sharing sensory information. They then reproduced the speed-dependent gait transitions of salamanders in computer simulations.
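    A toy sketch of the qualitative idea, with all thresholds and function shapes invented for illustration (this is not the published Tohoku/EPFL model): a single "drive" signal, standing for commanded speed, sets both how fast the pattern-generating oscillators run and which ones are active, so the gait switches automatically with speed.

```python
# Illustrative sketch only, not the published model. A common "drive"
# (commanded speed) sets oscillator frequencies; limb oscillators
# saturate above a threshold drive, so body undulation takes over,
# mimicking a speed-dependent gait transition.

LIMB_SATURATION = 1.5  # assumed threshold, arbitrary units

def limb_frequency(drive):
    """Limb oscillator frequency; zero once the drive saturates it."""
    return drive if drive <= LIMB_SATURATION else 0.0

def body_frequency(drive):
    """Body (undulation) oscillator frequency grows with the drive."""
    return 0.5 * drive

def gait(drive):
    """Gait label implied by which oscillators are active."""
    return "walking" if limb_frequency(drive) > 0 else "undulation"

for d in (0.8, 1.2, 2.0):
    print(d, gait(d), limb_frequency(d), body_frequency(d))
```

    Sweeping the drive upward reproduces, in caricature, the kind of automatic gait switching the study describes: no separate "gait selector" is needed, only the shared drive signal.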
    “We hope this finding provides insights into the essential mechanism behind the adaptive and versatile locomotion of animals,” said Ishiguro.
    The researchers are confident their discovery will aid the development of robots that can move with high agility and adaptability by flexibly changing body-limb coordination patterns.
    Story Source:
    Materials provided by Tohoku University.

  • Internet CBT for depression reviewed and analyzed

    Internet-based cognitive behavioral therapy (CBT) for depression is often just as effective as traditional CBT. This is clear from an international study involving scientists at the University of Gothenburg. However, some online treatments have components that can be harmful.
    Internet CBT (iCBT) as a method of delivering treatment is on the increase. Nevertheless, it has been unclear to date which parts of the treatment are most helpful against depression, which are less efficacious, and which are potentially detrimental to patients.
    In an international study, researchers at the University of Gothenburg participated in a systematic literature review and meta-analysis. The study was based on 76 randomized controlled trials (RCTs) in Sweden and elsewhere. In total, the RCTs included 17,521 patients, 71% of whom were women.
    The study, under the aegis of Kyoto University in Japan, is now published in The Lancet Psychiatry. One coauthor is Cecilia Björkelund, Senior Professor of Family Medicine at the University of Gothenburg’s Sahlgrenska Academy.
    “In mild or moderate depression, the effect of iCBT is as good as that of conventional CBT. For many, it’s a superb way of getting access to therapy without having to go to a therapist. We also saw that it was especially good for the elderly — a finding we didn’t entirely expect,” she says.
    Just as in traditional CBT, its online counterpart involves modifying patients’ thoughts, feelings and behaviors that are obstacles in their lives and impair their mood. During the treatment, which often lasts about ten weeks, they are given tasks and exercises to perform on their own.
    The factor that proved most significant for prognosis was the severity of depression at the start of treatment: milder depression yielded better results. Therapist support and text-message reminders increased the proportion of patients who completed the therapy.
    “If you’re going to use iCBT in health care, the programs have to be regulated just as well as drugs are, but that’s not the case today. With this study, we’re taking a real step forward. First, the study surveys what’s most effective. Second, it provides knowledge of how to design a program and adapt its composition to patients’ problems,” Björkelund says.
    However, iCBT requires continuous therapeutic contact. One reason is that the therapist needs to be able to see an improvement within three to four weeks and ensure that the trend is not heading in the wrong direction. Björkelund stresses that depression can be highly dangerous; in severe depression, internet-mediated therapy is inappropriate.
    The study shows the danger of using iCBT with programs that include relaxation therapy. Rather than being beneficial, this may have negative effects, exacerbating depressive symptoms and causing “relaxation-induced anxiety.”
    “For a depressed person, it isn’t advisable. Relaxation programs shouldn’t be used as part of depression treatment in health care,” Björkelund says.
    Story Source:
    Materials provided by University of Gothenburg.

  • New research infuses equity principles into the algorithm development process

    In the U.S., the place where one was born, one’s social and economic background, the neighborhoods in which one spends one’s formative years, and where one grows old together account for a quarter to 60% of deaths in any given year, partly because these forces play a significant role in the occurrence and outcomes of the five leading causes of death: heart disease, cancer, unintentional injuries, chronic lower respiratory diseases, and cerebrovascular diseases.
    While data on such “macro” factors is critical to tracking and predicting health outcomes for individuals and communities, analysts who apply machine-learning tools to health outcomes tend to rely on “micro” data constrained to purely clinical settings and driven by healthcare data and processes inside the hospital, leaving factors that could shed light on healthcare disparities in the dark.
    Researchers at the NYU Tandon School of Engineering and NYU School of Global Public Health (NYU GPH), in a new perspective, “Machine learning and algorithmic fairness in public and population health,” in Nature Machine Intelligence, aim to activate the machine learning community to account for “macro” factors and their impact on health. Thinking outside the clinical “box” and beyond the strict limits of individual factors, Rumi Chunara, associate professor of computer science and engineering at NYU Tandon and of biostatistics at the NYU GPH, found a new approach to incorporating the larger web of relevant data for predictive modeling for individual and community health outcomes.
    “Research of what causes and reduces equity shows that to avoid creating more disparities it is essential to consider upstream factors as well,” explained Chunara. She noted, on the one hand, the large body of work on AI and machine learning implementation in healthcare in areas like image analysis, radiography, and pathology, and on the other the strong awareness and advocacy focused on such areas as structural racism, police brutality, and healthcare disparities that came to light around the COVID-19 pandemic.
    “Our goal is to take that work and the explosion of data-rich machine learning in healthcare, and create a holistic view beyond the clinical setting, incorporating data about communities and the environment.”
    Chunara, along with her doctoral students Vishwali Mhasawade and Yuan Zhao, at NYU Tandon and NYU GPH, respectively, leveraged the Social Ecological Model, a framework for understanding how the health, habits and behavior of an individual are affected by factors such as public policies at the national and international level and the availability of health resources within a community and neighborhood. The team shows how principles of this model can be applied in algorithm development so that algorithms are designed and used more equitably.
    The researchers organized existing work into a taxonomy of the tasks for which machine learning and AI are used (prediction, interventions, identifying effects, and allocations) to show examples of how a multi-level perspective can be leveraged. The authors also show how the same framework applies to considerations of data privacy, governance, and best practices for shifting the healthcare burden away from individuals and toward improving equity.
    As an example of such approaches, members of the same team recently presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society a new approach to using “causal multi-level fairness,” the larger web of relevant data for assessing fairness of algorithms. This work builds on the field of “algorithmic fairness,” which, to date, is limited by its exclusive focus on individual-level attributes such as gender and race.
    In this work Mhasawade and Chunara formalized a novel approach to understanding fairness relationships using tools from causal inference, synthesizing a means by which an investigator could assess and account for effects of sensitive macro attributes and not merely individual factors. They developed the algorithm for their approach and provided the settings under which it is applicable. They also illustrated their method on data showing how predictions based merely on data points associated with labels like race, income and gender are of limited value if sensitive attributes are not accounted for, or are accounted for without proper context.
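    The motivation for a multi-level view can be shown with a small numeric sketch. The data, group labels, and "neighborhood resources" factor below are entirely hypothetical and this is not the authors' algorithm; it only illustrates how a disparity measured on an individual attribute alone can be driven by a macro-level factor that a single-level fairness check would miss.

```python
# Hypothetical data: (group, neighborhood_resources, predicted_positive).
records = [
    ("A", "high", 1), ("A", "high", 1), ("A", "high", 1), ("A", "low", 0),
    ("B", "high", 1), ("B", "low", 0), ("B", "low", 0), ("B", "low", 0),
]

def rate(rows):
    """Fraction of positive predictions in a subset of records."""
    return sum(r[2] for r in rows) / len(rows)

# Single-level comparison: the groups look very different.
naive = {g: rate([r for r in records if r[0] == g]) for g in ("A", "B")}
print("naive:", naive)  # {'A': 0.75, 'B': 0.25}

# Conditioning on the macro-level factor: within each resource level the
# groups are treated identically; the gap comes from unequal access to
# "high"-resource neighborhoods, not from the prediction rule itself.
for level in ("high", "low"):
    sub = [r for r in records if r[1] == level]
    by_group = {g: rate([r for r in sub if r[0] == g]) for g in ("A", "B")}
    print(level + ":", by_group)
```

    In this toy setup the within-level rates are equal (1.0 in "high", 0.0 in "low"), so the apparent group disparity is entirely attributable to the macro-level factor, which is the kind of distinction a causal, multi-level analysis is designed to surface.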
    “As in healthcare, algorithmic fairness tends to be focused on labels — men and women, Black versus white, etc. — without considering multiple layers of influence from a causal perspective to decide what is fair and unfair in predictions,” said Chunara. “Our work presents a framework for thinking not only about equity in algorithms but also what types of data we use in them.”

  • Artificial Intelligence learns better when distracted

    How should you train your AI system? This question is pertinent, because many deep learning systems are still black boxes. Computer scientists from the Netherlands and Spain have now determined how a deep learning system well suited for image recognition learns to recognize its surroundings. They were able to simplify the learning process by forcing the system’s focus toward secondary characteristics.
    Convolutional Neural Networks (CNNs) are a form of bio-inspired deep learning in artificial intelligence. The interaction of thousands of ‘neurons’ mimics the way our brain learns to recognize images. ‘These CNNs are successful, but we don’t fully understand how they work’, says Estefanía Talavera Martinez, lecturer and researcher at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen in the Netherlands.
    Food
    She has made use of CNNs herself to analyse images made by wearable cameras in the study of human behaviour. Among other things, Talavera Martinez has been studying our interactions with food, so she wanted the system to recognize the different settings in which people encounter food. ‘I noticed that the system made errors in the classification of some pictures and needed to know why this happened.’
    By using heat maps, she analysed which parts of the images were used by the CNNs to identify the setting. ‘This led to the hypothesis that the system was not looking at enough details’, she explains. For example, if an AI system has taught itself to use mugs to identify a kitchen, it will wrongly classify living rooms, offices and other places where mugs are used. The solution that was developed by Talavera Martinez and her colleagues David Morales (Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada) and Beatriz Remeseiro (Department of Computer Science, Universidad de Oviedo), both in Spain, is to distract the system from its primary targets.
    Blurred
    They trained CNNs using a standard image set of planes or cars and identified through heat maps which parts of the images were used for classification. Then, these parts were blurred in the image set, which was then used for a second round of training. ‘This forces the system to look elsewhere for identifiers. And by using this extra information, it becomes more fine-grained in its classification.’
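    The two-stage idea can be sketched in a few lines. Assumptions worth flagging: the per-pixel heat map is taken as already computed (e.g. by a class-activation method), and "blurring" is implemented as a simple box average; the authors' exact procedure may differ.

```python
import numpy as np

def blur_hot_regions(image, heatmap, percentile=95, ksize=5):
    """Blur the image wherever the heat map is in its top percentile,
    forcing a second training round to rely on other regions."""
    mask = heatmap >= np.percentile(heatmap, percentile)
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # Box average over a ksize x ksize neighborhood.
            blurred[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    out = image.astype(float).copy()
    out[mask] = blurred[mask]
    return out

# Tiny demo: a bright "identifier" patch the model fixated on is
# smoothed away, while the rest of the image is left untouched.
img = np.zeros((8, 8))
img[2:4, 2:4] = 1.0
heat = img.copy()  # pretend the model attends to the bright patch
masked = blur_hot_regions(img, heat)
print(masked[3, 3] < 1.0, masked[0, 0] == 0.0)  # True True
```

    Retraining on images processed this way is what forces the network to find additional, finer-grained identifiers.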
    The approach worked well on the standard image sets, and was also successful on the images Talavera Martinez had collected herself from the wearable cameras. ‘Our training regime gives us results similar to other approaches, but is much simpler and requires less computing time.’ Previous attempts to increase fine-grained classification included combining different sets of CNNs. The approach developed by Talavera Martinez and her colleagues is much more lightweight. ‘This study gave us a better idea of how these CNNs learn, and that has helped us to improve the training program.’
    Story Source:
    Materials provided by University of Groningen.

  • New information storage and processing device

    A team of scientists has developed a means to create a new type of memory, marking a notable breakthrough in the increasingly sophisticated field of artificial intelligence.
    “Quantum materials hold great promise for improving the capacities of today’s computers,” explains Andrew Kent, a New York University physicist and one of the senior investigators. “The work draws upon their properties in establishing a new structure for computation.”
    The creation, designed in partnership with researchers from the University of California, San Diego (UCSD) and the University of Paris-Saclay, is reported in Scientific Reports.
    “Since conventional computing has reached its limits, new computational methods and devices are being developed,” adds Ivan Schuller, a UCSD physicist and one of the paper’s authors. “These have the potential of revolutionizing computing and in ways that may one day rival the human brain.”
    In recent years, scientists have sought to make advances in what is known as “neuromorphic computing” — a process that seeks to mimic the functionality of the human brain. Because of its human-like characteristics, it may offer more efficient and innovative ways to process data, using approaches not achievable with existing computational methods.
    In the Scientific Reports work, the researchers created a new device that marks major progress in this area.
    To do so, they built a nanoconstriction spintronic resonator to manipulate known physical properties in innovative ways.
    Resonators are capable of generating and storing waves of well-defined frequencies — akin to the sound box of a string instrument. Here, the scientists constructed a new type of resonator, capable of storing and processing information much as synapses and neurons do in the brain. The one described in Scientific Reports combines the unique properties of quantum materials with those of spintronic magnetic devices.
    Spintronic devices are electronics that use an electron’s spin in addition to its electrical charge to process information, reducing energy use while increasing storage and processing capacity relative to more traditional approaches. One widely used such device, the “spin torque oscillator,” operates at a specific frequency. Combining it with a quantum material allows this frequency to be tuned, broadening its applicability considerably.
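    The frequency-tuning idea can be caricatured with a simple harmonic resonator whose effective stiffness is shifted by an external control knob. All parameters below are invented and this is only a qualitative analogy, not the device physics of the paper, where the quantum material's switchable properties provide the tuning.

```python
import math

def resonator_frequency(k, m=1.0):
    """Natural frequency f = sqrt(k/m) / (2*pi) of a simple harmonic
    resonator (arbitrary units)."""
    return math.sqrt(k / m) / (2 * math.pi)

def tuned_stiffness(k0, control, alpha=0.5):
    """Hypothetical control knob: the coupled material shifts the
    effective stiffness as the control parameter varies."""
    return k0 * (1.0 + alpha * control)

k0 = 1.0
for control in (0.0, 0.5, 1.0):
    f = resonator_frequency(tuned_stiffness(k0, control))
    print(control, round(f, 4))
```

    The point of the analogy: a fixed-frequency oscillator stores one "note," while a tunable one can address a range of frequencies, which is what broadens the device's applicability.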
    “This is a fundamental advance that has applications in computing, particularly in neuromorphic computing, where such resonators can serve as connections among computing components,” observes Kent.
    Story Source:
    Materials provided by New York University.

  • Adapting roots to a hotter planet could ease pressure on food supply

    The shoots of plants get all the glory, with their fruit and flowers and visible structure. But it’s the portion that lies below the soil — the branching, reaching arms of roots and hairs pulling up water and nutrients — that interests Alexander Bucksch, a plant physiologist and computer scientist and associate professor of Plant Biology at the University of Georgia.
    The health and growth of the root system has deep implications for our future.
    Our ability to grow enough food to support the population despite a changing climate, and to fix carbon from the atmosphere in the soil, is critical to our survival and that of other species. The solutions, Bucksch believes, lie in the qualities of roots.
    “When there is a problem in the world, humans can move. But what does the plant do?” he asked. “It says, ‘Let’s alter our genome to survive.’ It evolves.”
    Until recently, farmers and plant breeders didn’t have a good way to gather information about the root system of plants, or make decisions about the optimal seeds to grow deep roots.
    In a paper published this month in Plant Physiology, Bucksch and colleagues introduce DIRT/3D (Digital Imaging of Root Traits), an image-based 3D root phenotyping platform that can measure 18 architecture traits from mature field-grown maize root crowns excavated using the Shovelomics technique.