More stories

  • BioAFMviewer software for simulated atomic force microscopy of biomolecules

    Atomic force microscopy (AFM) makes it possible to obtain images and movies showing proteins at work, albeit with limited resolution. The BioAFMviewer software opens up the opportunity to use the enormous amount of available high-resolution protein data to better understand such experiments. Within an interactive interface with rich functionality, the BioAFMviewer computationally emulates tip-scanning of any biomolecular structure to generate simulated AFM graphics and movies. These greatly help in the interpretation of, for example, high-speed AFM observations.
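    Simulated AFM images of this kind are typically generated by scanning a model tip across the atomic structure and recording the height at which the tip would first touch the molecule. The short Python sketch below illustrates that general principle with a simple spherical tip; it is a minimal illustration of the idea, not BioAFMviewer's actual implementation, and the function and parameter names are hypothetical.

    ```python
    import numpy as np

    def simulate_afm_scan(atoms, radii, tip_radius=1.0, grid_step=0.2):
        """Crude simulated AFM topograph: for each lateral scan position, find the
        height at which a spherical tip first contacts any atom of the molecule.
        atoms: (N, 3) array of x, y, z coordinates (nm); radii: (N,) atomic radii (nm)."""
        xs = np.arange(atoms[:, 0].min() - 2, atoms[:, 0].max() + 2, grid_step)
        ys = np.arange(atoms[:, 1].min() - 2, atoms[:, 1].max() + 2, grid_step)
        image = np.zeros((len(ys), len(xs)))                           # substrate level = 0
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                d2 = (atoms[:, 0] - x) ** 2 + (atoms[:, 1] - y) ** 2   # squared lateral distance
                contact = (radii + tip_radius) ** 2 - d2               # sphere-sphere contact test
                reachable = contact > 0
                if reachable.any():
                    # Height of the tip centre when it touches each reachable atom
                    z_touch = atoms[reachable, 2] + np.sqrt(contact[reachable])
                    image[j, i] = z_touch.max() - tip_radius           # apparent surface height
        return xs, ys, image

    # Hypothetical usage: xs, ys, img = simulate_afm_scan(np.array([[0.0, 0.0, 1.0]]), np.array([0.3]))
    ```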

  • High-five or thumbs-up? New device detects which hand gesture you want to make

    Berkeley — Imagine typing on a computer without a keyboard, playing a video game without a controller or driving a car without a wheel.
    That’s one of the goals of a new device developed by engineers at the University of California, Berkeley, that can recognize hand gestures based on electrical signals detected in the forearm. The system, which couples wearable biosensors with artificial intelligence (AI), could one day be used to control prosthetics or to interact with almost any type of electronic device.
    “Prosthetics are one important application of this technology, but besides that, it also offers a very intuitive way of communicating with computers,” said Ali Moin, who helped design the device as a doctoral student in UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “Reading hand gestures is one way of improving human-computer interaction. And, while there are other ways of doing that, by, for instance, using cameras and computer vision, this is a good solution that also maintains an individual’s privacy.”
    Moin is co-first author of a new paper describing the device, which appeared online Dec. 21 in the journal Nature Electronics.
    To create the hand gesture recognition system, the team collaborated with Ana Arias, a professor of electrical engineering at UC Berkeley, to design a flexible armband that can read the electrical signals at 64 different points on the forearm. The electrical signals are then fed into an electrical chip, which is programmed with an AI algorithm capable of associating these signal patterns in the forearm with specific hand gestures.
    The team succeeded in teaching the algorithm to recognize 21 individual hand gestures, including a thumbs-up, a fist, a flat hand, holding up individual fingers and counting numbers.

    “When you want your hand muscles to contract, your brain sends electrical signals through neurons in your neck and shoulders to muscle fibers in your arms and hands,” Moin said. “Essentially, what the electrodes in the cuff are sensing is this electrical field. It’s not that precise, in the sense that we can’t pinpoint which exact fibers were triggered, but with the high density of electrodes, it can still learn to recognize certain patterns.”
    Like other AI software, the algorithm has to first “learn” how electrical signals in the arm correspond with individual hand gestures. To do this, each user has to wear the cuff while making the hand gestures one by one.
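    As a rough illustration of what that per-user calibration data might look like, the sketch below slices a 64-channel recording into windows and reduces each window to per-channel root-mean-square values, a common first step for muscle-signal classifiers. The window length, array shapes and helper names are assumptions made for illustration, not details taken from the paper.

    ```python
    import numpy as np

    N_CHANNELS = 64          # electrodes in the armband
    WINDOW = 200             # samples per analysis window (assumed)

    def rms_features(emg_window):
        """Per-channel root-mean-square of one (WINDOW, N_CHANNELS) window -> (N_CHANNELS,) features."""
        return np.sqrt(np.mean(emg_window ** 2, axis=0))

    def build_training_set(recordings, labels):
        """recordings: list of (WINDOW, N_CHANNELS) arrays captured while the user holds
        each gesture; labels: the gesture index (0..20) for each recording."""
        X = np.stack([rms_features(r) for r in recordings])
        y = np.asarray(labels)
        return X, y
    ```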
    However, the new device uses a type of advanced AI called a hyperdimensional computing algorithm, which is capable of updating itself with new information.
    For instance, if the electrical signals associated with a specific hand gesture change because a user’s arm gets sweaty, or they raise their arm above their head, the algorithm can incorporate this new information into its model.
    “In gesture recognition, your signals are going to change over time, and that can affect the performance of your model,” Moin said. “We were able to greatly improve the classification accuracy by updating the model on the device.”
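    Hyperdimensional computing represents each input as a very long, random-looking vector and classifies it by comparing it with an accumulated “prototype” vector per gesture; updating the model then simply means folding newly encoded examples into the matching prototype. The minimal sketch below shows that general idea for 64-channel features and 21 gesture classes; it is a generic hyperdimensional classifier written for illustration, not the specific algorithm from the Nature Electronics paper.

    ```python
    import numpy as np

    N_CHANNELS = 64                 # feature dimension (one value per electrode)
    D = 10_000                      # hypervector dimensionality
    rng = np.random.default_rng(0)
    projection = rng.choice([-1.0, 1.0], size=(N_CHANNELS, D))   # fixed random encoder

    def encode(features):
        """Project a feature vector into a bipolar hypervector."""
        return np.sign(features @ projection)

    class HDClassifier:
        def __init__(self, n_classes=21):
            self.prototypes = np.zeros((n_classes, D))

        def train(self, X, y):
            for features, label in zip(X, y):
                self.prototypes[label] += encode(features)

        def predict(self, features):
            hv = encode(features)
            sims = self.prototypes @ hv / (np.linalg.norm(self.prototypes, axis=1) + 1e-9)
            return int(np.argmax(sims))

        def update(self, features, label):
            """On-device adaptation: fold a new example (e.g. from a sweaty or raised arm) into its class prototype."""
            self.prototypes[label] += encode(features)
    ```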
    Another advantage of the new device is that all of the computing occurs locally on the chip: No personal data are transmitted to a nearby computer or device. Not only does this speed up the computing time, but it also ensures that personal biological data remain private.
    “When Amazon or Apple creates their algorithms, they run a bunch of software in the cloud that creates the model, and then the model gets downloaded onto your device,” said Jan Rabaey, the Donald O. Pedersen Distinguished Professor of Electrical Engineering at UC Berkeley and senior author of the paper. “The problem is that then you’re stuck with that particular model. In our approach, we implemented a process where the learning is done on the device itself. And it is extremely quick: You only have to do it one time, and it starts doing the job. But if you do it more times, it can get better. So, it is continuously learning, which is how humans do it.”
    While the device is not ready to be a commercial product yet, Rabaey said that it could likely get there with a few tweaks.
    “Most of these technologies already exist elsewhere, but what’s unique about this device is that it integrates the biosensing, signal processing and interpretation, and artificial intelligence into one system that is relatively small and flexible and has a low power budget,” Rabaey said.

  • Traditional model for disease spread may not work in COVID-19

    A mathematical model that can help project the contagiousness and spread of infectious diseases like the seasonal flu may not be the best way to predict the continuing spread of the novel coronavirus, especially during lockdowns that alter the normal mix of the population, researchers report.
    Called the R-naught, or basic reproductive number, the model predicts the average number of susceptible people who will be infected by one infectious person. It’s calculated using three main factors — the infectious period of the disease, how the disease spreads and how many people an infected individual will likely come into contact with.
    Historically, if the R-naught is larger than one, infections can become rampant and an epidemic or more widespread pandemic is likely. The COVID-19 pandemic had an early R-naught between two and three.
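    A common back-of-the-envelope version of that calculation simply multiplies the three factors together. The sketch below uses this simplified textbook formulation (transmission probability per contact × contacts per day × days of infectiousness); the specific numbers are hypothetical, and this is not the exact model discussed in the letter.

    ```python
    def basic_reproductive_number(p_transmission, contacts_per_day, infectious_days):
        """Textbook-style approximation: R0 = p * c * d."""
        return p_transmission * contacts_per_day * infectious_days

    # Hypothetical example: 3% transmission chance per contact, 10 contacts/day, 8 infectious days
    print(basic_reproductive_number(0.03, 10, 8))   # ~2.4, within the early COVID-19 range of two to three
    ```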
    In a letter published in Infection Control and Hospital Epidemiology, corresponding author Dr. Arni S.R. Srinivasa Rao, a mathematical modeler at the Medical College of Georgia at Augusta University, argues that while it’s never possible to track down every single case of an infectious disease, the lockdowns that have become necessary to help mitigate the COVID-19 pandemic have complicated predicting the disease’s spread.
    Rao and his co-authors instead suggest a more dynamic, moment-in-time approach using a model called the geometric mean. That model uses today’s numbers to project tomorrow’s. The current number of infections — in Augusta today, for example — is divided by the number of predicted infections for tomorrow to develop a more accurate and current reproductive rate.
    While this geometric method can’t predict long-term trends, it can more accurately predict likely numbers for the short term.
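    As a rough numerical illustration of such a moment-in-time rate, the sketch below takes a short series of daily case counts, forms the day-over-day ratios, and summarizes them with a geometric mean. This is a simplified, hypothetical illustration of a ratio-based approach, not the exact formulation used by Rao and his co-authors.

    ```python
    import numpy as np

    def geometric_mean_growth(daily_cases):
        """Geometric mean of the consecutive day-over-day ratios of new cases."""
        cases = np.asarray(daily_cases, dtype=float)
        ratios = cases[1:] / cases[:-1]              # growth factor from each day to the next
        return ratios.prod() ** (1.0 / len(ratios))

    # Hypothetical week of daily new cases
    print(geometric_mean_growth([120, 130, 150, 155, 170, 190, 205]))   # ~1.09 per day
    ```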
    “The R-naught model can’t be changed to account for contact rates that can change from day to day when lockdowns are imposed,” Rao explains. “In the initial days of the pandemic, we depended on these traditional methods to predict the spread, but lockdowns change the way people have contact with each other.”
    A uniform R-naught is also not possible since the COVID-19 pandemic has varied widely in different areas of the country and world. Places have different rates of infection, on different timelines — hotspots like New York and California would have higher R-naughts. The R-naught also did not predict the current third wave of the COVID-19 pandemic.
    “Different factors continuously alter ground-level basic reproductive numbers, which is why we need a better model,” Rao says. Better models have implications for mitigating the spread of COVID-19 and for future planning, the authors say.
    “Mathematical models must be used with care and their accuracy must be carefully monitored and quantified,” the authors write. “Any alternative course of action could lead to wrong interpretation and mismanagement of the disease with disastrous consequences.”
    Rao’s co-authors include Dr. Steven Krantz, a professor of mathematics and statistics at Washington University, and Dr. Michael Bonsall, a professor in the Mathematical Ecology Research Group at the University of Oxford.

    Story Source:
    Materials provided by Medical College of Georgia at Augusta University. Original written by Jennifer Hilliard Scott.

  • Citizens versus the internet: Confronting digital challenges with cognitive tools

    Access to the Internet is essential for economic development, education, global communications, and countless other applications. For all its benefits, however, the Internet has a darker side. It has emerged as a conduit for spreading misinformation, stoking tensions, and promoting extremist ideologies. Yet there is hope.
    In the latest issue of Psychological Science in the Public Interest, a team of researchers recommends ways that psychological and behavioral sciences can help decrease the negative consequences of Internet use. These recommendations emphasize helping people gain greater control over their digital environments.
    “Psychological science can help to inform policy interventions in the digital world,” said Anastasia Kozyreva, a researcher at the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Germany and an author of the paper. “It is crucial that psychological and behavioral sciences are employed to ensure users are not manipulated for financial gain and are empowered to detect and resist manipulation.”
    Specifically, the psychological and cognitive sciences can complement interventions by other fields, such as law and ethics, which develop guidelines and regulations; education, which can provide curricula for digital information literacy; and technology, which can provide automated detection of harmful materials and help implement more ethically designed online choice architectures.
    Although there is no silver bullet that could solve all the problems of the digital world, Kozyreva and her colleagues describe three approaches to help mitigate the negative consequences. The first is to design Internet infrastructures that “nudge” people’s behavior toward more positive outcomes, such as systems with privacy-respecting default settings. The second is relying more on “technocognition,” that is, technological solutions informed by psychological principles, such as creating obstacles to sharing offensive material online.
    The final approach is improving people’s cognitive and motivational competencies through “boosts”: tactics that enhance people’s agency in their digital environments and improve reasoning and resilience to manipulation. Examples of boosts include preventively inoculating users against the most common manipulative practices and providing easy-to-use rules for digital literacy.
    “These cognitive tools are designed to foster the civility of online discourse and protect reason and human autonomy against manipulative choice architectures, attention-grabbing techniques, and the spread of false information,” said Kozyreva.

    Story Source:
    Materials provided by Association for Psychological Science.

  • Big step with small whirls

    Many of us may still be familiar with the simple physical principles of magnetism from school. However, this general picture of north and south poles quickly becomes much more complex when one looks at what happens at the atomic level. The magnetic interactions between atoms at such minute scales can create unique states, such as skyrmions.
    Skyrmions have very special properties and can exist in certain material systems, such as a “stack” of different sub-nanometer-thick metal layers. Modern computer technology based on skyrmions — which are only a few nanometers in size — promises to enable an extremely compact and ultrafast way of storing and processing data. As an example, one concept for data storage with skyrmions could be that the bits “1” and “0” are represented by the presence and absence of a given skyrmion. This concept could thus be used in “racetrack” memories (see info box). However, it is a prerequisite that the distance between the skyrmion for the value “1” and the skyrmion gap for the value “0” remains constant during data transport; otherwise, large errors could occur.
    As a better alternative, skyrmions of different sizes can be used to represent “0” and “1.” These could then be transported like pearls on a string, without the distances between the pearls playing a big role. The existence of two different types of skyrmions (skyrmion and skyrmion bobber) has so far only been predicted theoretically and has only been shown experimentally in a specially grown monocrystalline material. In those experiments, however, the skyrmions exist only at extremely low temperatures. These limitations make the material unsuitable for practical applications.
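    The difference between the two coding schemes can be made concrete with a toy read-out example: with presence/absence coding, the read-out must know the exact spacing between bit positions to recognize a missing skyrmion as a “0,” whereas with two distinguishable skyrmion types every detected object directly carries its own bit. The sketch below is purely conceptual and is not taken from the paper.

    ```python
    # Presence/absence coding: a "0" is an empty slot, so decoding depends on knowing
    # the exact pitch between slots; if the spacing drifts during transport, bits are misread.
    def decode_presence(positions, pitch, track_length):
        bits, slot = [], 0.0
        while slot < track_length:
            bits.append(1 if any(abs(p - slot) < pitch / 2 for p in positions) else 0)
            slot += pitch
        return bits

    # Two-type coding: every detected skyrmion carries its own bit,
    # so the spacing between them no longer matters ("pearls on a string").
    def decode_two_types(skyrmion_types):
        return [1 if t == "large" else 0 for t in skyrmion_types]

    print(decode_presence([0.0, 2.1, 3.0], pitch=1.0, track_length=4))   # [1, 0, 1, 1]
    print(decode_two_types(["large", "small", "large", "large"]))        # [1, 0, 1, 1]
    ```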
    Experience with ferromagnetic multilayer systems and magnetic force microscopy
    The research group led by Hans Josef Hug at Empa has now succeeded in solving this problem: “We have produced a multilayer system consisting of various sub-nanometer-thick ferromagnetic, noble metal and rare-earth metal layers, in which two different skyrmion states can coexist at room temperature,” says Hug. His team had been studying skyrmion properties in ultra-thin ferromagnetic multilayer systems using the magnetic force microscope that they developed at Empa. For their latest experiments, they fabricated material layers made from the following metals: iridium (Ir), iron (Fe), cobalt (Co), platinum (Pt) and the rare-earth metals terbium (Tb) and gadolinium (Gd).
    Between the two ferromagnetic multilayers that generate skyrmions — in which the combination of Ir/Fe/Co/Pt layers is overlaid five times — the researchers inserted a ferrimagnetic multilayer consisting of a TbGd alloy layer and a Co layer. The special feature of this layer is that it cannot generate skyrmions on its own. The outer two layers, on the other hand, generate skyrmions in large numbers.

    The researchers adjusted the mixing ratio of the two metals Tb and Gd and the thicknesses of the TbGd and Co layers in the central layer in such a way that its magnetic properties can be influenced by the outer layers: the ferromagnetic layers “force” skyrmions into the central ferrimagnetic layer. This results in a multilayer system where two different types of skyrmions exist.
    Experimental and theoretical evidence
    The two types of skyrmions can easily be distinguished from each other with the magnetic force microscope due to their different sizes and intensities. The larger skyrmion, which also creates a stronger magnetic field, penetrates the entire multilayer system, i.e. also the middle ferrimagnetic multilayer. The smaller, weaker skyrmion, on the other hand, only exists in the two outer multilayers. This is the key significance of the latest results with regard to a possible use of skyrmions in data processing: if binary data — 0 and 1 — are to be stored and read, they must be clearly distinguishable, which would be possible here by means of the two different types of skyrmions.
    Using the magnetic force microscope, individual parts of these multilayers were compared with each other. This allowed Hug’s team to determine in which layers the different skyrmions occur. Furthermore, micromagnetic computer simulations confirmed the experimental results. These simulations were carried out in collaboration with theoreticians from the universities of Vienna and Messina.
    Empa researcher Andrada-Oana Mandru, the first author of the study, is hopeful that a major challenge towards practical applications has been overcome: “The multilayers we have developed using sputtering technology can in principle also be produced on an industrial scale,” she said. In addition, similar systems could possibly be used in the future to build three-dimensional data storage devices with even greater storage density. The team recently published their work in the renowned journal Nature Communications.
    Racetrack Memory
    The concept of such a memory was devised in 2004 at IBM. It consists of writing information in one place by means of magnetic domains — i.e. magnetically aligned areas — and then moving them quickly within the device by means of currents. One bit corresponds to one such magnetic domain. This task could be performed by a skyrmion, for example. The carriers of these magnetic information units are nanowires, which are more than a thousand times thinner than a human hair and thus promise an extremely compact form of data storage. The transport of data along the wires also works extremely fast, about 100,000 times faster than in a conventional flash memory, and with a much lower energy consumption.

  • Developing smarter, faster machine intelligence with light

    Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech venture startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the order of petabytes, per second. This innovation, which harnesses the massive parallelism of light, heralds a new era of optical signal processing for machine learning, with numerous applications including self-driving cars, 5G networks, data centers, biomedical diagnostics, data security and more.

    Global demand for machine learning hardware is dramatically outpacing current computing power supplies. State-of-the-art electronic hardware, such as graphics processing units and tensor processing unit accelerators, helps mitigate this, but is intrinsically challenged by serial, iterative data processing and by delays from wiring and circuit constraints. Optical alternatives to electronic hardware could help speed up machine learning by processing information in a non-iterative way. However, photonic-based machine learning is typically limited by the number of components that can be placed on photonic integrated circuits, which limits interconnectivity, while free-space spatial light modulators are restricted to slow programming speeds.
    To achieve a breakthrough in this optical machine learning system, the researchers replaced spatial light modulators with digital mirror-based technology, thus developing a system over 100 times faster. The non-iterative timing of this processor, in combination with rapid programmability and massive parallelization, enables this optical machine learning system to outperform even the top-of-the-line graphics processing units by over one order of magnitude, with room for further optimization beyond the initial prototype.
    Unlike the current paradigm in electronic machine learning hardware, which processes information sequentially, this processor uses Fourier optics, a concept of frequency filtering that allows the required convolutions of the neural network to be performed as much simpler element-wise multiplications using the digital mirror technology.
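    The mathematical fact behind this is the convolution theorem: a convolution in the spatial domain becomes an element-wise multiplication in the Fourier domain. The short NumPy sketch below verifies this digitally for a small 2D example; it illustrates the principle the optical system exploits, not the device’s actual processing pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    image = rng.random((16, 16))
    kernel = rng.random((16, 16))            # kernel defined on the same grid as the image

    # Circular convolution computed directly in the spatial domain (slow reference)
    n = image.shape[0]
    direct = np.zeros_like(image)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    direct[i, j] += image[k, l] * kernel[(i - k) % n, (j - l) % n]

    # Convolution theorem: multiply element-wise in the Fourier domain instead
    via_fft = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

    print(np.allclose(direct, via_fft))      # True: both methods give the same result
    ```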
    “Optics allows for processing large-scale matrices in a single time-step, which allows for new scaling vectors of performing convolutions optically. This can have significant potential for machine learning applications as demonstrated here,” said Puneet Gupta, professor and vice chair of computer engineering at UCLA.

    Story Source:
    Materials provided by George Washington University.

    Journal Reference:
    Mario Miscuglio, Zibo Hu, Shurui Li, Jonathan K. George, Roberto Capanna, Hamed Dalir, Philippe M. Bardet, Puneet Gupta, Volker J. Sorger. Massively parallel amplitude-only Fourier neural network. Optica, 2020; 7 (12): 1812 DOI: 10.1364/OPTICA.408659
