More stories

  • Solving algorithm 'amnesia' reveals clues to how we learn

    A discovery about how algorithms can learn and retain information more efficiently offers potential insight into the brain’s ability to absorb new knowledge. The findings by researchers at the University of California, Irvine School of Biological Sciences could aid in combating cognitive impairments and improving technology. Their study appears in Proceedings of the National Academy of Sciences.
    The scientists focused on artificial neural networks, known as ANNs, which are algorithms designed to emulate the behavior of brain neurons. Like human minds, ANNs can absorb and classify vast quantities of information. Unlike our brains, however, ANNs tend to forget what they already know when fresh knowledge is introduced too fast, a phenomenon known as catastrophic forgetting.
    Researchers have long theorized that our ability to learn new concepts stems from the interplay between the brain’s hippocampus and the neocortex. The hippocampus captures fresh information and replays it during rest and sleep. The neocortex grabs the new material and reviews its existing knowledge so it can interleave, or layer, the fresh material into similar categories developed from the past.
    However, there has been some question about this process, given the excessive amount of time it would take the brain to sort through the whole trove of information it has gathered during a lifetime. This pitfall could explain why ANNs lose long-term knowledge when absorbing new data too quickly.
    Traditionally, the solution used in deep machine learning has been to retrain the network on the entire set of past data, whether or not it was closely related to the new information, a very time-consuming process. The UCI scientists decided to examine the issue in greater depth and made a notable discovery.
    “We found that when ANNs interleaved a much smaller subset of old information, including mainly items that were similar to the new knowledge they were acquiring, they learned it without forgetting what they already knew,” said graduate student Rajat Saxena, the paper’s first author. Saxena spearheaded the project with assistance from Justin Shobe, an assistant project scientist. Both are members of the laboratory of Bruce McNaughton, Distinguished Professor of neurobiology and behavior.
    “It allowed ANNs to take in fresh information very efficiently, without having to review everything they had previously acquired,” Saxena said. “These findings suggest a brain mechanism for why experts at something can learn new things in that area much faster than non-experts. If the brain already has a cognitive framework related to the new information, the new material can be absorbed more quickly because changes are needed only in the part of the brain’s network that encodes the expert knowledge.”
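    The study’s own code isn’t reproduced here, but the selection rule Saxena describes is easy to sketch. Below is a minimal Python illustration of similarity-based interleaved replay; the function name and the cosine-similarity heuristic are assumptions for illustration, not the paper’s implementation.
    ```python
    import numpy as np

    def similarity_weighted_replay(old_x, old_y, new_x, new_y, subset_frac=0.1):
        """Mix new training items with the small fraction of old items that
        most resemble them (a sketch of similarity-based interleaving)."""
        # Summarize the new material, then score each stored old item by
        # cosine similarity to that summary.
        centroid = new_x.mean(axis=0)
        sims = (old_x @ centroid) / (
            np.linalg.norm(old_x, axis=1) * np.linalg.norm(centroid) + 1e-12)
        # Keep only the most similar small fraction of the old data...
        k = max(1, int(subset_frac * len(old_x)))
        keep = np.argsort(sims)[-k:]
        # ...and shuffle it together with the new items for training.
        mix_x = np.concatenate([new_x, old_x[keep]])
        mix_y = np.concatenate([new_y, old_y[keep]])
        order = np.random.permutation(len(mix_x))
        return mix_x[order], mix_y[order]
    ```
    The point of such a heuristic is that the replayed subset stays small, so the network never has to revisit its entire training history.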
    The discovery holds potential for tackling cognitive issues, according to McNaughton. “Understanding the mechanisms behind learning is essential for making progress,” he said. “It gives us insights into what’s going on when brains don’t work the way they are supposed to. We could develop training strategies for people with memory problems from aging or those with brain damage. It could also lead to the ability to manipulate brain circuits so people can overcome these deficits.”
    The findings offer possibilities as well for making algorithms in machines such as medical diagnostic equipment, autonomous cars and many others more precise and efficient.
    Funding for the research was provided by a Defense Advanced Research Projects Agency Grant in support of basic research of potential benefit to humankind and by the National Institutes of Health.
    Story Source:
    Materials provided by University of California, Irvine. Note: Content may be edited for style and length.

  • A four-stroke engine for atoms

    If you switch a bit in the memory of a computer and then switch it back again, you have restored the original state. There are only two states, which can be called “0” and “1.”
    However, an amazing effect has now been discovered at TU Wien (Vienna): In a crystal based on oxides of gadolinium and manganese, an atomic switch was found that has to be switched back and forth not just once but twice before the original state is reached again. During this double switching-on and switching-off process, the spin of the gadolinium atoms performs one full rotation. This is reminiscent of a crankshaft, in which an up-and-down movement is converted into a circular movement.
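    A toy model makes the counting clear (this illustrates only the bookkeeping, not the paper’s physics): if each toggle advances the spin a quarter turn, the electrically visible state repeats every two toggles, but the full internal state repeats only every four.
    ```python
    # Toy "four-stroke" switch: the readout sees only on/off, while the spin
    # advances a quarter turn per toggle. The numbers are illustrative.
    spin_deg = 0
    for toggle in range(1, 5):
        spin_deg = (spin_deg + 90) % 360   # spin advances a quarter turn
        visible = toggle % 2               # what a two-state readout shows
        print(f"toggle {toggle}: readout {visible}, spin {spin_deg} deg")
    # After toggle 2 the readout looks restored, but the spin sits at 180 deg;
    # only after toggle 4 has the spin completed a full rotation back to 0.
    ```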
    This new phenomenon opens up interesting possibilities in materials physics; information could even be stored in such systems. The strange atomic switch has now been presented in the scientific journal Nature.
    Coupling of electrical and magnetic properties
    Normally, a distinction is made between the electrical and magnetic properties of materials. Electrical properties are based on the fact that charge carriers move — for example, electrons that travel through a metal or ions whose positions shift.
    Magnetic properties, on the other hand, are closely related to the spin of atoms — the particle’s intrinsic angular momentum, which can point in a very specific direction, much like the Earth’s axis of rotation points in a very specific direction.

  • Physicists see electron whirlpools

    Though they are discrete particles, water molecules flow collectively as liquids, producing streams, waves, whirlpools, and other classic fluid phenomena.
    Not so with electricity. While an electric current is also a construct of distinct particles — in this case, electrons — the particles are so small that any collective behavior among them is drowned out by larger influences as electrons pass through ordinary metals. But in certain materials and under specific conditions, those larger influences fade away, and electrons can influence each other directly. In these instances, electrons can flow collectively like a fluid.
    Now, physicists at MIT and the Weizmann Institute of Science have observed electrons flowing in vortices, or whirlpools — a hallmark of fluid flow that theorists predicted electrons should exhibit, but that has never been seen until now.
    “Electron vortices are expected in theory, but there’s been no direct proof, and seeing is believing,” says Leonid Levitov, professor of physics at MIT. “Now we’ve seen it, and it’s a clear signature of being in this new regime, where electrons behave as a fluid, not as individual particles.”
    The observations, reported in the journal Nature, could inform the design of more efficient electronics.
    “We know when electrons go in a fluid state, [energy] dissipation drops, and that’s of interest in trying to design low-power electronics,” Levitov says. “This new observation is another step in that direction.”
    Levitov is a co-author of the new paper, along with Eli Zeldov and others at the Weizmann Institute of Science in Israel and the University of Colorado at Denver.

  • Scientists demonstrate machine learning tool to efficiently process complex solar data

    Big data has become a big challenge for space scientists analyzing vast datasets from increasingly powerful space instrumentation. To address this, a Southwest Research Institute team has developed a machine learning tool to efficiently label large, complex datasets to allow deep learning models to sift through and identify potentially hazardous solar events. The new labeling tool can be applied or adapted to address other challenges involving vast datasets.
    As space instrument packages collect increasingly complex data in ever-increasing volumes, it is becoming more challenging for scientists to process and analyze relevant trends. Machine learning (ML) is becoming a critical tool for processing large, complex datasets, where algorithms learn from existing data to make decisions or predictions that can factor in more information simultaneously than humans can. However, to take advantage of ML techniques, humans need to label all the data first — often a monumental endeavor.
    “Labeling data with meaningful annotations is a crucial step of supervised ML. However, labeling datasets is tedious and time consuming,” said Dr. Subhamoy Chatterjee, a postdoctoral researcher at SwRI specializing in solar astronomy and instrumentation and lead author of a paper about these findings published in the journal Nature Astronomy. “New research shows how convolutional neural networks (CNNs), trained on crudely labeled astronomical videos, can be leveraged to improve the quality and breadth of data labeling and reduce the need for human intervention.”
    Deep learning techniques can automate the processing and interpretation of large amounts of complex data by extracting and learning complex patterns. The SwRI team used videos of the solar magnetic field to identify areas where strong, complex magnetic fields emerge on the solar surface, which are the main precursors of space weather events.
    “We trained CNNs using crude labels, manually verifying only our disagreements with the machine,” said co-author Dr. Andrés Muñoz-Jaramillo, an SwRI solar physicist with expertise in machine learning. “We then retrained the algorithm with the corrected data and repeated this process until we were all in agreement. While flux emergence labeling is typically done manually, this iterative interaction between the human and ML algorithm reduces manual verification by 50%.”
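    A schematic of that loop, with hypothetical stand-ins for the CNN training, inference and manual-review steps (this is a sketch of the described workflow, not the authors’ actual pipeline):
    ```python
    def iterative_labeling(videos, crude_labels, train, predict, human_review):
        """Human-in-the-loop labeling: train on crude labels, have a human
        check only the disagreements, fold the corrections back in, and
        repeat until the model and the labels fully agree."""
        labels = list(crude_labels)
        while True:
            model = train(videos, labels)
            preds = predict(model, videos)
            disputed = [i for i, (p, l) in enumerate(zip(preds, labels))
                        if p != l]
            if not disputed:
                return model, labels      # model and labels are in agreement
            for i in disputed:            # human verifies only disagreements
                labels[i] = human_review(videos[i], preds[i], labels[i])
    ```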
    Iterative labeling approaches such as active learning can save significant time, reducing the cost of making big data ML-ready. Furthermore, by gradually masking the videos and looking for the moment when the ML algorithm changes its classification, SwRI scientists further leveraged the trained ML algorithm to provide an even richer and more useful database.
    “We created an end-to-end, deep-learning approach for classifying videos of magnetic patch evolution without explicitly supplying segmented images, tracking algorithms or other handcrafted features,” said SwRI’s Dr. Derek Lamb, a co-author specializing in evolution of magnetic fields on the surface of the Sun. “This database will be critical in the development of new methodologies for forecasting the emergence of the complex regions conducive to space weather events, potentially increasing the lead time we have to prepare for space weather.”
    Story Source:
    Materials provided by Southwest Research Institute. Note: Content may be edited for style and length.

  • Physicists work to shrink microchips with first one-dimensional helium model system

    Physicists at Indiana University and the University of Tennessee have cracked the code to making microchips smaller, and the key is helium.
    Microchips are everywhere, running computers and cars, and even helping people find lost pets. As microchips grow smaller, faster and capable of doing more things, the wires that conduct electricity to them must follow suit. But there’s a physical limit to how small they can become — unless they are designed differently.
    “In a traditional system, as you put more transistors on, the wires get smaller,” said Paul Sokol, a professor in the IU Bloomington College of Arts and Sciences’ Department of Physics. “But under newly designed systems, it’s like confining the electrons in a one-dimensional tube, and that behavior is quite different from a regular wire.”
    To study the behavior of particles under these circumstances, Sokol collaborated with a physics professor at the University of Tennessee, Adrian Del Maestro, to create a model system of electrons packed into a one-dimensional tube.
    Their findings were recently published in Nature Communications.
    The pair used helium to create a model system for their study because its interactions with electrons are well known, and it can be made extremely pure, Sokol said. However, there were issues with using helium in a one-dimensional space, the first being that no one had ever done it before.

  • How to build better ice towers for drinking water and irrigation

    There’s a better way to build a glacier.

    During winter in India’s mountainous Ladakh region, some farmers use pipes and sprinklers to construct building-sized cones of ice. These towering, humanmade glaciers, called ice stupas, slowly release water as they melt during the dry spring months for communities to drink or irrigate crops. But the pipes often freeze when conditions get too cold, stifling construction.

    Now, preliminary results show that an automated system can erect an ice stupa while avoiding frozen pipes, using local weather data to control when and how much water is spouted. What’s more, the new system uses roughly a tenth the amount of water that the conventional method uses, researchers reported June 23 at the Frontiers in Hydrology meeting in San Juan, Puerto Rico.

    “This is one of the technological steps forward that we need to get this innovative idea to the point where it’s realistic as a solution,” says glaciologist Duncan Quincey of the University of Leeds in England, who was not involved in the research. Automation could help communities build larger, longer-lasting ice stupas that provide more water during dry periods, he says.

    Ice stupas emerged in 2014 as a means for communities to cope with shrinking alpine glaciers due to human-caused climate change (SN: 5/29/19). Typically, high-mountain communities in India, Kyrgyzstan and Chile pipe glacial meltwater into gravity-driven fountains that sprinkle continuously in the winter. Cold air freezes the drizzle, creating frozen cones that can store millions of liters of water.

    The process is simple, though inefficient. More than 70 percent of the spouted water may flow away instead of freezing, says glaciologist Suryanarayanan Balasubramanian of the University of Fribourg in Switzerland.

    So Balasubramanian and his team outfitted an ice stupa’s fountain with a computer that automatically adjusted the spout’s flow rate based on local temperatures, humidity and wind speed. Then the scientists tested the system by building two ice stupas in Guttannen, Switzerland — one using a continuously spraying fountain and one using the automated system.
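
    The team’s controller isn’t published here; below is a minimal sketch of the idea in Python, with an entirely invented flow-rate rule, thresholds and coefficients.

    ```python
    def target_flow_rate(temp_c, humidity_frac, wind_mps, max_flow_lps=2.0):
        """Hypothetical controller: spray only when the air can freeze the
        droplets efficiently, and scale the flow to conditions so water is
        not wasted. All constants here are invented; the study's system is
        tuned to local weather data."""
        if temp_c > -1.0:
            return 0.0  # too warm for the spray to freeze: don't waste water
        # Crude "freezing power": colder, drier, windier air removes heat
        # from the droplets faster, so more water can be frozen per second.
        freezing_power = (-temp_c) * (1.0 - humidity_frac) * (1.0 + wind_mps / 5.0)
        return min(max_flow_lps, 0.05 * freezing_power)
    ```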

    After four months, the team found that the continuously sprinkling fountain had spouted about 1,100 cubic meters of water and amassed 53 cubic meters of ice, with pipes freezing once. The automated system sprayed only around 150 cubic meters of water but formed 61 cubic meters of ice, without any frozen pipes. In other words, roughly 40 percent of the automated system’s water froze into ice, versus about 5 percent for the conventional fountain.

    The researchers are now trying to simplify their prototype to make it more affordable for high-mountain communities around the world. “We eventually want to reduce the cost so that it is within two months of salary of the farmers in Ladakh,” Balasubramanian says. “Around $200 to $400.”

  • COVID-19 virus spike protein flexibility improved by human cell's own modifications

    When the coronavirus causing COVID-19 infects human cells, the cell’s protein-processing machinery makes modifications to the spike protein that render it more flexible and mobile, which could increase its ability to infect other cells and to evade antibodies, a new study from the University of Illinois Urbana-Champaign found.
    The researchers created an atomic-level computational model of the spike protein and ran multiple simulations to examine the protein’s dynamics and how the cell’s modifications affected those dynamics. This is the first study to present such a detailed picture of the protein that plays a key role in COVID-19 infection and immunity, the researchers said.
    Biochemistry professor Emad Tajkhorshid, postdoctoral researcher Karan Kapoor and graduate student Tianle Chen published their findings in the Proceedings of the National Academy of Sciences.
    “The dynamics of a spike are very important — how much it moves and how flexible it is to search for and bind to receptors on the host cell,” said Tajkhorshid, who also is a member of the Beckman Institute for Advanced Science and Technology. “In order to have a realistic representation, you have to look at the protein at the atomic level. We hope that the results of our simulations can be used for developing new treatments. Instead of using one static structure of the protein to search for drug-binding pockets, we want to reproduce its movements and use all of the relevant shapes it adopts to provide a more complete platform for screening drug candidates instead of just one structure.”
    The spike protein of SARS-CoV-2, the virus that causes COVID-19, is the protein that juts out from the surface of the virus and binds to receptors on the surface of human cells to infect them. It also is the target of antibodies in those who have been vaccinated or recovered from infection.
    Many studies have looked at the spike protein and its amino acid sequence, but knowledge of its structure has largely relied on static images, Tajkhorshid said. The atomistic simulations give researchers a glimpse of dynamics that affect how the protein interacts with receptors on cells it seeks to infect and with antibodies that seek to bind to it.

  • Using big data to better understand cancerous mutations

    Artificial intelligence and machine learning are among the latest tools being used by cancer researchers to aid in detection and treatment of the disease.
    One of the scientists working in this new frontier of cancer research is University of Colorado Cancer Center member Ryan Layer, PhD, who recently published a study detailing his research that uses big data to find cancerous mutations in cells.
    “Identifying the genetic changes that cause healthy cells to become malignant can help doctors select therapies that specifically target the tumor,” says Layer, an assistant professor of computer science at CU Boulder. “For example, about 25% of breast cancers are HER2-positive, meaning the cells in this type of tumor have mutations that cause them to produce more of a protein called HER2 that helps them grow. Treatments that specifically target HER2 have dramatically increased survival rates for this type of breast cancer.”
    Scientists can evaluate cell DNA to identify mutations, Layer says, but the challenge is that the human genome is massive, and mutations are a normal part of evolution.
    “The human genome is long enough to fill a 1.2 million-page book, and any two people can have about 3 million genetic differences,” he says. “Finding one cancer-driving mutation in a tumor is like finding a needle in a stack of needles.”
    Scanning the data
    The ideal method of determining what type of cancer mutation a patient has is to compare two samples from the same patient, one from the tumor and one from healthy tissue. Such tests can be complicated and costly, however, so Layer hit upon another idea — using massive public DNA databases to look for common cell mutations that tend to be benign, so that researchers can identify rarer mutations that have the potential to be cancerous.
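    A cartoon of that filtering step in Python (the data structures and threshold are hypothetical, not Layer’s actual method): drop every tumor variant that is common in public population databases, and what remains is a short list of rare variants worth examining as potential drivers.
    ```python
    from collections import namedtuple

    # A variant is identified by its chromosome, position and base change.
    Variant = namedtuple("Variant", "chrom pos ref alt")

    def candidate_driver_mutations(tumor_variants, population_freqs,
                                   common_threshold=0.01):
        """Keep only tumor variants that are rare in healthy populations.
        `population_freqs` maps a Variant to its allele frequency in a
        public database; frequent variants are treated as likely benign,
        and variants never seen before default to frequency 0.0."""
        return [v for v in tumor_variants
                if population_freqs.get(v, 0.0) < common_threshold]
    ```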