More stories

  • Research team makes considerable advance in brain-inspired computing

    While AI is often perceived by the public as a matter of software, researchers in Han Wang’s Emerging Nanoscale Materials and Device Lab at the USC Ming Hsieh Department of Electrical and Computer Engineering and the Mork Family Department of Chemical Engineering and Materials Science focus on improving AI and machine learning performance through hardware. The lab, whose work concentrates on neuromorphic, or brain-inspired, computing, has new research that introduces hardware improvements by harnessing a quality known as “randomness” or “stochasticity.” Their research, now published in Nature Communications, contradicts the perception of randomness as a quality that will negatively impact computation results and demonstrates the use of finely controlled stochastic features in semiconductor devices to improve optimization performance.
    In the brain, randomness plays an important role in human thought and computation. It arises from the billions of neurons that spike in response to input stimuli, generating many signals that may or may not be relevant. Decision-making is perhaps the best-studied example of how our brain makes use of randomness: it allows the brain to take a detour from past experience and explore a new solution, which is especially valuable in a challenging and unpredictable situation.
    “Neurons exhibit stochastic behavior, which can help certain computational functions,” said USC PhD student Jiahui Ma and lead author Xiaodong Yan, who contributed equally as first authors. The team wanted to emulate neurons as closely as possible and designed a circuit to solve combinatorial optimization problems, one of the most important classes of tasks for computers to complete.
    The thinking is that for computers to do this efficiently, they need to behave more like the human brain (on super steroids) in terms of how they process stimuli and information, as well as make decisions.
    In much simpler terms, we need computers to converge on the best solution among all possibilities. The researchers say, “The randomness introduced in the new device demonstrated in this work can prevent it from getting stuck at a not-so-viable solution, and instead continue to search until it finds a close-to-optimal result.” This is particularly important for optimization problems, says corresponding author Professor Wang: “If one can dynamically tune the randomness features, the machine for performing optimization can work more efficiently as we desire.”
    The researchers achieve this dynamic “tuning” by creating a specialized device, a hetero-memristor. Unlike transistors, which are the logic switches inside a regular computer chip, the hetero-memristor combines memory and computation. Memristors have been developed before, normally with a two-terminal structure. The Viterbi team’s innovation is in adding a third electrical terminal and modulating its voltage to activate the neuron-like device and dynamically tune the stochastic features of its output, much like heating a pot of water and adjusting the temperature to control the activity of the water molecules, hence enabling so-called simulated “cooling.” This provides a level of control that earlier memristors do not have.
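    In software terms, the “cooling” the device enables is the same idea as simulated annealing, where a temperature parameter controls how willing the search is to accept worse solutions. The sketch below is only an illustrative analogy of that principle, a generic simulated-annealing loop for a toy Ising-style problem, not a model of the team’s circuit or device; all names and parameter values are invented for the example.

```python
import math
import random

def simulated_annealing(couplings, n, steps=20000, t_start=5.0, t_end=0.01):
    """Toy simulated annealing for an Ising-style optimization problem.

    couplings: dict mapping (i, j) index pairs to interaction strengths.
    The temperature schedule stands in for the dynamically tuned
    stochasticity described in the article (illustrative only).
    """
    spins = [random.choice([-1, 1]) for _ in range(n)]

    def energy(s):
        return sum(w * s[i] * s[j] for (i, j), w in couplings.items())

    current = energy(spins)
    for step in range(steps):
        # Exponential cooling: lots of randomness early, very little late.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = random.randrange(n)
        spins[i] *= -1                      # propose flipping one spin
        candidate = energy(spins)
        delta = candidate - current
        # Accept worse solutions with probability exp(-delta / t):
        # this controlled randomness keeps the search from getting stuck.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        else:
            spins[i] *= -1                  # reject: undo the flip
    return spins, current

if __name__ == "__main__":
    random.seed(0)
    n = 20
    couplings = {(i, j): random.uniform(-1, 1)
                 for i in range(n) for j in range(i + 1, n)}
    best_spins, best_energy = simulated_annealing(couplings, n)
    print("final energy:", round(best_energy, 3))
```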
    The researchers say, “This method emulates the stochastic properties of neuron activity.” Indeed, neuron activity is perceived to be random, yet it may follow a certain probability pattern. The hetero-memristors they developed introduce such probability-governed randomness into a neuromorphic computing circuit through reconfigurable tuning of the device’s intrinsic stochastic properties.
    The result is a more sophisticated building block for creating computers that can tackle complex optimization problems, potentially with greater efficiency and lower power consumption.
    The full research team includes Xiaodong Yan, Jiahui Ma, Tong Wu, Aoyang Zhang, Jiangbin Wu, Matthew Chin, Zhihan Zhang, Madan Dubey, Wei Wu, Mike Shuo-Wei Chen, Jing Guo, and Han Wang.
    Research was done in collaboration with the Army Research Laboratory, the University of Florida and Georgia Tech.
    Story Source:
    Materials provided by University of Southern California. Original written by Amy Blumenthal. Note: Content may be edited for style and length.

  • Big data privacy for machine learning just got 100 times cheaper

    Rice University computer scientists have discovered an inexpensive way for tech companies to implement a rigorous form of personal data privacy when using or sharing large databases for machine learning.
    “There are many cases where machine learning could benefit society if data privacy could be ensured,” said Anshumali Shrivastava, an associate professor of computer science at Rice. “There’s huge potential for improving medical treatments or finding patterns of discrimination, for example, if we could train machine learning systems to search for patterns in large databases of medical or financial records. Today, that’s essentially impossible because data privacy methods do not scale.”
    Shrivastava and Rice graduate student Ben Coleman hope to change that with a new method they’ll present this week at CCS 2021, the Association for Computing Machinery’s annual flagship conference on computer and communications security. Using a technique called locality sensitive hashing, Shrivastava and Coleman found they could create a small summary of an enormous database of sensitive records. Dubbed RACE, their method draws its name from these summaries, or “repeated array of count estimators” sketches.
    Coleman said RACE sketches are both safe to make publicly available and useful for algorithms that use kernel sums, one of the basic building blocks of machine learning, and for machine-learning programs that perform common tasks like classification, ranking and regression analysis. He said RACE could allow companies to both reap the benefits of large-scale, distributed machine learning and uphold a rigorous form of data privacy called differential privacy.
    Differential privacy, which is used by more than one tech giant, is based on the idea of adding random noise to obscure individual information.
    “There are elegant and powerful techniques to meet differential privacy standards today, but none of them scale,” Coleman said. “The computational overhead and the memory requirements grow exponentially as data becomes more dimensional.”
    Data is increasingly high-dimensional, meaning it contains both many observations and many individual features about each observation.
    RACE sketching scales for high-dimensional data, he said. The sketches are small and the computational and memory requirements for constructing them are also easy to distribute.
    “Engineers today must either sacrifice their budget or the privacy of their users if they wish to use kernel sums,” Shrivastava said. “RACE changes the economics of releasing high-dimensional information with differential privacy. It’s simple, fast and 100 times less expensive to run than existing methods.”
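    As a rough illustration of the pieces described above (not the actual RACE implementation), the sketch below hashes high-dimensional vectors into a small array of counters using signed random projections, a common locality sensitive hashing scheme, adds Laplace noise in the spirit of differential privacy, and answers an approximate kernel-sum-style density query from the noisy summary alone. Every function name and parameter here is hypothetical and chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_bucket(x, projections):
    """Signed-random-projection LSH: the sign pattern of x against a few
    random hyperplanes selects one of 2**k buckets."""
    bits = (projections @ x > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def build_noisy_sketch(data, projections, epsilon=1.0):
    """Count how many records land in each LSH bucket, then add Laplace
    noise so the released counts obscure any individual record
    (a toy stand-in for a differentially private count sketch)."""
    n_buckets = 2 ** projections.shape[0]
    counts = np.zeros(n_buckets)
    for x in data:
        counts[lsh_bucket(x, projections)] += 1
    noise = rng.laplace(scale=1.0 / epsilon, size=n_buckets)
    return counts + noise

def density_query(q, sketch, projections, n_records):
    """Estimate what fraction of the (hidden) data collides with q,
    a crude kernel-sum-style similarity query on the public sketch."""
    return sketch[lsh_bucket(q, projections)] / n_records

if __name__ == "__main__":
    dim, n_records = 50, 10_000
    data = rng.normal(size=(n_records, dim))        # stand-in "sensitive" records
    projections = rng.normal(size=(8, dim))         # 8 hyperplanes -> 256 buckets
    sketch = build_noisy_sketch(data, projections)  # safe to release
    query = rng.normal(size=dim)
    print("estimated collision density:",
          density_query(query, sketch, projections, n_records))
```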
    This is the latest innovation from Shrivastava and his students, who have developed numerous algorithmic strategies to make machine learning and data science faster and more scalable. They and their collaborators have: found a more efficient way for social media companies to keep misinformation from spreading online, discovered how to train large-scale deep learning systems up to 10 times faster for “extreme classification” problems, found a way to more accurately and efficiently estimate the number of identified victims killed in the Syrian civil war, showed it’s possible to train deep neural networks as much as 15 times faster on general purpose CPUs (central processing units) than GPUs (graphics processing units), and slashed the amount of time required for searching large metagenomic databases.
    The research was supported by the Office of Naval Research’s Basic Research Challenge program, the National Science Foundation, the Air Force Office of Scientific Research and Adobe Inc.
    Story Source:
    Materials provided by Rice University. Original written by Jade Boyd. Note: Content may be edited for style and length.

  • Researchers train computers to predict the next designer drugs

    UBC researchers have trained computers to predict the next designer drugs before they are even on the market, technology that could save lives.
    Law enforcement agencies are in a race to identify and regulate new versions of dangerous psychoactive drugs such as bath salts and synthetic opioids, even as clandestine chemists work to synthesize and distribute new molecules with the same psychoactive effects as classical drugs of abuse.
    Identifying these so-called “legal highs” within seized pills or powders can take months, during which time thousands of people may have already used a new designer drug.
    But new research is already helping law enforcement agencies around the world to cut identification time down from months to days, crucial in the race to identify and regulate new versions of dangerous psychoactive drugs.
    “The vast majority of these designer drugs have never been tested in humans and are completely unregulated. They are a major public health concern to emergency departments across the world,” says UBC medical student Dr. Michael Skinnider, who completed the research as a doctoral student at UBC’s Michael Smith Laboratories.
    A Minority Report for new designer drugs
    Dr. Skinnider and his colleagues used a database of known psychoactive substances contributed by forensic laboratories around the world to train an artificial intelligence algorithm on the structures of these drugs. The algorithm they used, known as a deep neural network, is inspired by the structure and function of the human brain.
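    The story does not spell out the model’s architecture, but a common way to train a deep neural network on molecular structures is to treat SMILES strings as text and learn a character-level language model that can then propose new, chemically plausible molecules. The PyTorch sketch below is a hypothetical, minimal illustration of that general idea rather than the UBC team’s model; the two SMILES strings are placeholder data standing in for a forensic database.

```python
import torch
import torch.nn as nn

# Placeholder training data: real work would use thousands of known
# psychoactive-substance structures from forensic databases.
smiles = ["CC(N)Cc1ccccc1", "CNC(C)Cc1ccccc1"]
chars = sorted(set("".join(smiles)) | {"^", "$"})      # ^ = start, $ = end
stoi = {c: i for i, c in enumerate(chars)}

class SmilesLM(nn.Module):
    """Character-level language model over SMILES strings."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)

model = SmilesLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Next-character prediction: the model learns which atoms and bonds
# tend to follow one another in known structures.
for epoch in range(200):
    for s in smiles:
        seq = torch.tensor([stoi[c] for c in "^" + s + "$"]).unsqueeze(0)
        logits = model(seq[:, :-1])
        loss = loss_fn(logits.squeeze(0), seq[0, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

# Sample a new candidate structure character by character.
with torch.no_grad():
    out = [stoi["^"]]
    for _ in range(80):
        logits = model(torch.tensor(out).unsqueeze(0))[0, -1]
        nxt = torch.multinomial(torch.softmax(logits, dim=0), 1).item()
        if chars[nxt] == "$":
            break
        out.append(nxt)
    print("sampled candidate:", "".join(chars[i] for i in out[1:]))
```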

  • Ultra-large single-crystal WS2 monolayer

    As silicon-based semiconducting technology approaches the limits of its performance, new materials that may fully or partially replace silicon are highly desired. Recently, the emergence of graphene and other two-dimensional (2D) materials has offered a new platform for building next-generation semiconducting technology. Among them, transition metal dichalcogenides (TMDs) such as MoS2, WS2, MoSe2 and WSe2 stand out as the most appealing 2D semiconductors.
    A prerequisite for building ultra-large-scale, high-performance semiconducting circuits is that the base material must be a wafer-scale single crystal, just like the silicon wafers used today. Although great effort has been dedicated to growing wafer-scale single crystals of TMDs, success had been very limited until now.
    Distinguished Professor Feng Ding and his research team from the Center for Multidimensional Carbon Materials (CMCM), within the Institute for Basic Science (IBS) at UNIST, in cooperation with researchers at Peking University (PKU), the Beijing Institute of Technology, and Fudan University, recently reported the direct growth of 2-inch single-crystal WS2 monolayer films. Besides WS2, the research team also demonstrated the wafer-scale growth of single-crystal MoS2, WSe2, and MoSe2.
    The key to epitaxially growing a large single crystal is ensuring that all of the small single crystals grown on the substrate are uniformly aligned. Because TMDs have a non-centrosymmetric structure (the mirror image of a TMD with respect to one of its edges has the opposite alignment), this symmetry must be broken by carefully designing the substrate. Based on theoretical calculations, the authors proposed a mechanism of “dual-coupling-guided epitaxy growth” to inform the experimental design. The WS2-sapphire plane interaction acts as the first driving force, leading to two preferred antiparallel orientations of the WS2 islands. The coupling between WS2 and the sapphire step edges is the second driving force, which breaks the degeneracy of the two antiparallel orientations. All of the TMD single crystals grown on a substrate with step edges are then unidirectionally aligned, and finally the coalescence of these small single crystals yields a large single crystal the same size as the substrate.
    “This dual-coupling epitaxy growth mechanism is new for controllable materials growth. In principle, it allows us to grow all 2D materials into large-area single crystals if a proper substrate can be found,” says Dr. Ting Cheng, the co-first author of the study. “We have considered how to choose proper substrates theoretically. First, the substrate should have low symmetry and, secondly, more step edges are preferred,” emphasizes Professor Feng Ding, the corresponding author of the study.
    “This is a major step forward in the area of 2D-material-based devices. With the successful growth of wafer-scale single-crystal 2D TMDs on insulators, beyond graphene and hBN grown on transition metal substrates, our study provides the keystone required for 2D semiconductors in high-end electronic and optical device applications,” explains Professor Feng Ding.
    Story Source:
    Materials provided by Institute for Basic Science. Note: Content may be edited for style and length.

  • AI helps design the perfect chickpea

    A massive international research effort has led to the development of a genetic model for the ‘ultimate’ chickpea, with the potential to lift crop yields by up to 12 per cent.
    The research consortium genetically mapped thousands of chickpea varieties, and the UQ team then used this information to identify the most valuable gene combinations using artificial intelligence (AI).
    Professor Ben Hayes led the UQ component of the project with Professor Kai Voss-Fels and Associate Professor Lee Hickey, developing a ‘haplotype’ genomic prediction crop breeding strategy for enhanced seed-weight performance.
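    The article does not detail the prediction model, but haplotype-based genomic prediction is commonly framed as a regression from marker or haplotype-block genotypes to a trait such as seed weight. The sketch below is a hypothetical toy version using ridge regression on synthetic haplotype data, meant only to illustrate predicting a trait from genome-wide markers and ranking candidate gene combinations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 500 chickpea lines scored at 200 haplotype blocks
# (0, 1 or 2 copies of the reference haplotype). Real studies use
# thousands of varieties and genome-wide markers. Trait values are
# expressed as deviations from the population mean.
n_lines, n_blocks = 500, 200
X = rng.integers(0, 3, size=(n_lines, n_blocks)).astype(float)
true_effects = rng.normal(scale=0.1, size=n_blocks)
seed_weight = X @ true_effects + rng.normal(scale=0.5, size=n_lines)

# Ridge regression (a simple stand-in for genomic prediction models):
# shrinkage spreads the signal over many small haplotype effects.
lam = 10.0
XtX = X.T @ X + lam * np.eye(n_blocks)
effects_hat = np.linalg.solve(XtX, X.T @ seed_weight)

# Predict the trait for unseen lines and rank candidate gene combinations.
X_new = rng.integers(0, 3, size=(5, n_blocks)).astype(float)
predictions = X_new @ effects_hat
print("predicted seed-weight deviations:", np.round(predictions, 2))
```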
    “Most crop species only have a few varieties sequenced, so it was a massive undertaking by the international team to analyse more than 3000 cultivated and wild varieties,” Professor Hayes said.
    The landmark international study was led by Dr Rajeev Varshney from the International Crops Research Institute for the Semi-Arid Tropics in Hyderabad, India. The study confirmed chickpea’s origin in the Fertile Crescent and provides a complete picture of genetic variation within chickpea.
    “We identified 1,582 novel genes and established the pan-genome of chickpea, which will serve as a foundation for breeding superior chickpea varieties with enhanced yield, higher resistance to drought, heat and diseases,” Dr Varshney said.

  • Machine learning refines earthquake detection capabilities

    Researchers at Los Alamos National Laboratory are applying machine learning algorithms to help interpret massive amounts of ground deformation data collected with Interferometric Synthetic Aperture Radar (InSAR) satellites; the new algorithms will improve earthquake detection.
    “Applying machine learning to InSAR data gives us a new way to understand the physics behind tectonic faults and earthquakes,” said Bertrand Rouet-Leduc, a geophysicist in Los Alamos’ Geophysics group. “That’s crucial to understanding the full spectrum of earthquake behavior.”
    New satellites, such as the Sentinel 1 Satellite Constellation and the upcoming NISAR Satellite, are opening a new window into tectonic processes by allowing researchers to observe length and time scales that were not possible in the past. However, existing algorithms are not suited for the vast amount of InSAR data flowing in from these new satellites, and even more data will be available in the near future.
    In order to process all of this data, the team at Los Alamos developed the first tool based on machine learning algorithms to extract ground deformation from InSAR data, which enables the detection of ground deformation automatically — without human intervention — at a global scale. Equipped with autonomous detection of deformation on faults, this tool can help close the gap in existing detection capabilities and form the foundations for a systematic exploration of the properties of active faults.
    Systematically characterizing slip behavior on active faults is key to unraveling the physics of tectonic faulting, and will help researchers understand the interplay between slow earthquakes, which gently release stress, and fast earthquakes, which quickly release stress and can cause significant damage to surrounding communities.
    The team’s new methodology enables the detection of ground deformation automatically at a global scale, with a much finer temporal resolution than existing approaches, and a detection threshold of a few millimeters. Previous detection thresholds were in the centimeter range.
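    The article does not describe the algorithm itself, but the detection task can be pictured as pulling a millimeter-scale deformation trend out of a noisy InSAR displacement time series. The snippet below is a deliberately simple, hypothetical illustration (least-squares trend fitting on synthetic data, flagged against a few-millimeter threshold), not the Los Alamos machine learning tool.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic InSAR line-of-sight displacement time series (millimeters),
# sampled every 12 days for two years, with atmospheric-style noise.
t_days = np.arange(0, 730, 12)
true_rate_mm_per_yr = 3.0                        # slow slip on a fault
signal = true_rate_mm_per_yr * t_days / 365.0
noise = rng.normal(scale=4.0, size=t_days.size)  # several-mm scatter
series = signal + noise

# Least-squares fit of a linear deformation trend.
A = np.vstack([t_days / 365.0, np.ones_like(t_days, dtype=float)]).T
rate, offset = np.linalg.lstsq(A, series, rcond=None)[0]

# Flag the pixel if the cumulative deformation exceeds a few millimeters.
cumulative_mm = rate * (t_days[-1] / 365.0)
detected = abs(cumulative_mm) > 3.0
print(f"estimated rate: {rate:.2f} mm/yr, detected: {detected}")
```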
    In preliminary results of the approach, applied to data over the North Anatolian Fault, the method reaches two-millimeter detection, revealing slow earthquakes twice as extensive as previously recognized.
    This work was funded through Los Alamos National Laboratory’s Laboratory Directed Research and Development Office.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • First quantum simulation of baryons

    A team of researchers led by an Institute for Quantum Computing (IQC) faculty member performed the first-ever simulation of baryons — fundamental quantum particles — on a quantum computer.
    With their results, the team has taken a step towards more complex quantum simulations that will allow scientists to study neutron stars, learn more about the earliest moments of the universe, and realize the revolutionary potential of quantum computers.
    “This is an important step forward — it is the first simulation of baryons on a quantum computer ever,” Christine Muschik, an IQC faculty member, said. “Instead of smashing particles in an accelerator, a quantum computer may one day allow us to simulate these interactions that we use to study the origins of the universe and so much more.”
    Muschik, also a physics and astronomy professor at the University of Waterloo and associate faculty member at the Perimeter Institute, leads the Quantum Interactions Group, which studies the quantum simulation of lattice gauge theories. These theories are descriptions of the physics of reality, including the Standard Model of particle physics. The more inclusive a gauge theory is of fields, forces, particles, spatial dimensions and other parameters, the more complex it is — and the more difficult it is for a classical supercomputer to model.
    Non-Abelian gauge theories are particularly interesting candidates for simulations because they are responsible for the stability of matter as we know it. Classical computers can simulate the non-Abelian matter described in these theories, but there are important situations, such as matter with high densities, that are inaccessible to regular computers. And while the ability to describe and simulate non-Abelian matter is fundamental to describing our universe, it had never before been simulated on a quantum computer.
    Working with Randy Lewis from York University, Muschik’s team at IQC developed a resource-efficient quantum algorithm that allowed them to simulate a system within a simple non-Abelian gauge theory on IBM’s cloud quantum computer paired with a classical computer.
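    For readers unfamiliar with how such a quantum-classical pairing works, the generic pattern is a variational quantum eigensolver: the quantum processor prepares a parameterized state and measures the energy of the Hamiltonian, while a classical optimizer updates the parameters. The numpy sketch below simulates that loop for an invented two-qubit Hamiltonian; it is a generic illustration, not the team’s resource-efficient algorithm or their gauge-theory Hamiltonian.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and an invented two-qubit Hamiltonian standing in for a
# (far more involved) lattice gauge theory Hamiltonian.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

def ry(theta):
    """Single-qubit Y rotation (real-valued)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def ansatz(params):
    """Hardware-efficient style circuit: RY on qubit 0, CNOT, RY on both qubits."""
    a, b, c = params
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
    state = np.kron(ry(a), I2) @ np.array([1.0, 0.0, 0.0, 0.0])
    state = cnot @ state
    return np.kron(ry(b), ry(c)) @ state

def energy(params):
    """The expectation value a quantum device would estimate by repeated measurement."""
    psi = ansatz(params)
    return float(psi @ H @ psi)

# The classical optimizer closes the hybrid loop described above.
result = minimize(energy, x0=[0.5, 0.5, 0.5], method="Nelder-Mead")
print(f"variational energy: {result.fun:.4f}")
print(f"exact ground state: {np.linalg.eigvalsh(H)[0]:.4f}")
```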
    With this landmark step, the researchers are blazing a trail towards the quantum simulation of gauge theories far beyond the capabilities and resources of even the most powerful supercomputers in the world.
    “What’s exciting about these results for us is that the theory can be made so much more complicated,” Jinglei Zhang, a postdoctoral fellow at IQC and the University of Waterloo Department of Physics and Astronomy, said. “We can consider simulating matter at higher densities, which is beyond the capability of classical computers.”
    As scientists develop more powerful quantum computers and quantum algorithms, they will be able to simulate the physics of these more complex non-Abelian gauge theories and study fascinating phenomena beyond the reach of our best supercomputers.
    This breakthrough demonstration is an important step towards a new era of understanding the universe based on quantum simulation.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • A personalized exosuit for real-world walking

    People rarely walk at a constant speed and a single incline. We change speed when rushing to the next appointment, catching a crosswalk signal, or going for a casual stroll in the park. Slopes change all the time too, whether we’re going for a hike or up a ramp into a building. In addition to environmental variability, how we walk is influenced by sex, height, age, and muscle strength, and sometimes by neural or muscular disorders such as stroke or Parkinson’s disease.
    This human and task variability is a major challenge in designing wearable robotics to assist or augment walking in real-world conditions. To date, customizing wearable robotic assistance to an individual’s walking requires hours of manual or automatic tuning — a tedious task for healthy individuals and often impossible for older adults or clinical patients.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new approach in which robotic exosuit assistance can be calibrated to an individual and adapt to a variety of real-world walking tasks in a matter of seconds. The bioinspired system uses ultrasound measurements of muscle dynamics to develop a personalized and activity-specific assistance profile for users of the exosuit.
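    The paper in Science Robotics describes a much richer pipeline, but the core idea, turning a muscle-dynamics measurement into an individualized assistance profile, can be caricatured in a few lines. The sketch below is hypothetical: it takes a synthetic stand-in for an ultrasound-derived calf muscle signal over one gait cycle, finds when that muscle is most active, and centers a smooth exosuit force profile on that timing. The signal shape, timing, and force values are invented for illustration.

```python
import numpy as np

# Synthetic stand-in for an ultrasound-derived calf muscle signal over
# one gait cycle (0-100%); real systems estimate this from B-mode images.
gait_pct = np.linspace(0, 100, 101)
muscle_activity = np.exp(-((gait_pct - 47.0) ** 2) / (2 * 6.0 ** 2))  # peak near push-off

# Personalize the assistance: center a smooth force profile on the
# individual's measured peak muscle activity and scale it to a peak force.
peak_timing = gait_pct[np.argmax(muscle_activity)]
peak_force_newtons = 300.0
width = 10.0   # percent of gait cycle
assistance = peak_force_newtons * np.exp(-((gait_pct - peak_timing) ** 2) / (2 * width ** 2))

print(f"assistance peaks at {peak_timing:.0f}% of the gait cycle, "
      f"{assistance.max():.0f} N commanded to the exosuit actuator")
```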
    “Our muscle-based approach enables relatively rapid generation of individualized assistance profiles that provide real benefit to the person walking,” said Robert D. Howe, the Abbott and James Lawrence Professor of Engineering, and co-author of the paper.
    The research is published in Science Robotics.
    Previous bioinspired attempts at developing individualized assistance profiles for robotic exosuits focused on the dynamic movements of the limbs of the wearer. The SEAS researchers took a different approach. The research was a collaboration between Howe’s Harvard Biorobotics Laboratory, which has extensive experience in ultrasound imaging and real-time image processing, and the Harvard Biodesign Lab, run by Conor J. Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS, which develops soft wearable robots for augmenting and restoring human performance.