More stories

  • Solving complex learning tasks in brain-inspired computers

    Developing a machine that processes information as efficiently as the human brain has been a long-standing research goal on the way towards true artificial intelligence. An interdisciplinary research team at Heidelberg University and the University of Bern (Switzerland) led by Dr Mihai Petrovici is tackling this problem with the help of biologically inspired artificial neural networks. Spiking neural networks, which mimic the structure and function of a natural nervous system, represent promising candidates because they are powerful, fast, and energy-efficient. One key challenge is how to train such complex systems. The German-Swiss research team has now developed and successfully implemented an algorithm that achieves such training.
    The nerve cells (or neurons) in the brain transmit information using short electrical pulses known as spikes. These spikes are triggered when a certain stimulus threshold is exceeded. Both the frequency with which a single neuron produces such spikes and the temporal sequence of the individual spikes are critical for the exchange of information. “The main difference between biological spiking networks and artificial neural networks is that, because they use spike-based information processing, they can solve complex tasks such as image recognition and classification with extreme energy efficiency,” states Julian Göltz, a doctoral candidate in Dr Petrovici’s research group.
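    To make the threshold mechanism concrete, here is a minimal leaky integrate-and-fire simulation, a standard textbook abstraction of spike generation rather than the team's actual neuron model; all constants are illustrative.

    ```python
    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron: a textbook abstraction
    # of spike generation, not the research team's actual model.
    def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
        """Integrate the input current and emit a spike whenever the
        membrane potential exceeds the threshold, then reset."""
        v = v_reset
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Leaky integration: the potential decays toward rest and is
            # driven up by the input current.
            v += dt * (-v / tau + i_in)
            if v >= v_thresh:                # stimulus threshold exceeded
                spike_times.append(step * dt)
                v = v_reset                  # reset after the spike
        return spike_times

    # A constant suprathreshold input yields a regular spike train whose
    # frequency grows with the input strength.
    print(simulate_lif(np.full(1000, 60.0)))
    ```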
    Both the human brain and the architecturally similar artificial spiking neural networks can only perform at their full potential if the individual neurons are properly connected to one another. But how can brain-inspired — that is, neuromorphic — systems be adjusted to process spiking input correctly? “This question is fundamental for the development of powerful artificial networks based on biological models,” stresses Laura Kriener, also a member of Dr Petrovici’s research team. Special algorithms are required to guarantee that the neurons in a spiking neural network fire at the correct time. These algorithms adjust the connections between the neurons so that the network can perform the required task, such as classifying images with high precision.
    The team under the direction of Dr Petrovici developed just such an algorithm. “Using this approach, we can train spiking neural networks to code and transmit information exclusively in single spikes. They thereby produce the desired results especially quickly and efficiently,” explains Julian Göltz. Moreover, the researchers succeeded in implementing a neural network trained with this algorithm on a physical platform — the BrainScaleS-2 neuromorphic hardware platform developed at Heidelberg University.
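    The "single spike" coding idea can be pictured as follows: each input value is encoded as the time of one spike, with stronger inputs firing earlier, and the network's answer is read off whichever output neuron fires first. The toy encoding below illustrates this reading scheme only; it is not the published training algorithm.

    ```python
    import numpy as np

    # Illustrative time-to-first-spike coding: larger values spike earlier.
    def encode_first_spike_times(values, t_max=1.0):
        """Map normalised inputs in [0, 1] to spike times: larger -> earlier."""
        return t_max * (1.0 - np.asarray(values, dtype=float))

    def classify_by_first_spike(output_spike_times):
        """The index of the earliest-firing output neuron is the prediction."""
        return int(np.argmin(output_spike_times))

    times = encode_first_spike_times([0.9, 0.2, 0.5])
    print(times, classify_by_first_spike(times))   # neuron 0 fires first
    ```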
    According to the researchers, the BrainScaleS system processes information up to a thousand times faster than the human brain and needs far less energy than conventional computer systems. It is part of the European Human Brain Project, which integrates technologies like neuromorphic computing into an open platform called EBRAINS. “However, our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand from the scientific community to transfer so-called Deep Learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” emphasises Dr Petrovici.
    The research was funded by the Manfred Stärk Foundation and the Human Brain Project — one of three European flagship initiatives in Future and Emerging Technologies supported under the European Union’s Horizon 2020 Framework Programme. The research results were published in the journal “Nature Machine Intelligence.”
    Story Source:
    Materials provided by Heidelberg University. Note: Content may be edited for style and length.

  • Technology’s impact on worker well-being

    In the traditional narrative of the evolving 21st century workplace, technological substitution of human employees is treated as a serious concern. But technological complementarity — the use of automation and artificial intelligence to complement workers, rather than replace them — is viewed optimistically as a good thing, improving productivity and wages for those who remain employed.
    That’s the story two graduate researchers from the Georgia Institute of Technology and Georgia State University kept reading, from policymakers and other scholars, as they began their own study of technology’s impact on the workplace. But there was another, more nuanced story that ultimately informed their research.
    “We saw these images on the internet of worker strikes that were happening all around the world in different cities and realized that there was more going on, something beyond the usual optimistic discourse around this topic,” said Daniel Schiff, a Ph.D. candidate in the School of Public Policy at Georgia Tech.
    The photos and stories of unhappy workers protesting conditions in their modern, technologically enhanced workplaces inspired Schiff and Luísa Nazareno, a graduate researcher in the Andrew Young School of Policy Studies at Georgia State, to dig a little deeper. The result is a new study, “The impact of automation and artificial intelligence on worker well-being,” in the journal Technology in Society.
    The paper presents a more surprising, dynamic, and complex picture of the recent history and likely future of automation and AI in the workplace. Schiff and Nazareno have incorporated multiple disciplines — economics, sociology, psychology, policy, even ethics — to reframe the conversation around automation and AI in the workplace and perhaps help decision makers and researchers think a bit deeper and more broadly about the human component.
    “The well-being of the worker has implications for all of society, for families, even for productivity,” Nazareno said. “If we are really interested in productivity, worker well-being is something that must be taken into account.”

  • New study solves energy storage and supply puzzle

    Curtin University research has found a simple and affordable method to determine which chemicals and types of metals are best used to store and supply energy, in a breakthrough for battery-powered devices and technologies reliant on the fast and reliable supply of electricity, including smartphones and tablets.
    Lead author Associate Professor Simone Ciampi from Curtin’s School of Molecular and Life Sciences said this easy, low-cost method of determining how to produce and retain the highest energy charge in a capacitor could be of great benefit to all scientists, engineers and start-ups looking to solve the energy storage challenges of the future.
    “All electronic devices require an energy source. While a battery needs to be recharged over time, a capacitor can be charged instantaneously because it stores energy by separating charged ions, found in ionic liquids,” Associate Professor Ciampi said.
    “There are thousands of types of ionic liquids, a type of ‘liquid salt,’ and until now, it was difficult to know which would be best suited for use in a capacitor. What our team has done is devise a quick and easy test, able to be performed in a basic lab, which can measure both the ability to store charge when a solid electrode touches a given ionic liquid (a simple capacitor) and the stability of the device when it’s charged.
    “The study has also been able to unveil a model that can predict which ionic liquid is likely to be the best performing for fast charging and long-lasting energy storage.”
    Research co-author and PhD student Mattia Belotti, also from Curtin’s School of Molecular and Life Sciences, said the test simply required a relatively basic and affordable piece of equipment called a potentiostat.
    “The simplicity of this test means anyone can apply it without the need for expensive equipment. Using this method, our research found that charging the device for 60 seconds produced a full charge, which did not ‘leak’ and begin to diminish for at least four days,” Mr Belotti said.
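    As a rough illustration of these figures, a textbook RC model relates charging time and self-discharge to capacitance and resistance. All component values below are hypothetical stand-ins for quantities the team measures with a potentiostat; this is not the study's actual analysis.

    ```python
    import math

    C = 1.0          # farads (hypothetical)
    R_charge = 10.0  # ohms, charging-path resistance (hypothetical)

    # Charging: V(t) = V_source * (1 - exp(-t / (R*C))).
    # After 60 s here, t/(R*C) = 6, so the capacitor is > 99.7 % charged.
    print(1 - math.exp(-60 / (R_charge * C)))

    # Self-discharge: V(t) = V0 * exp(-t / (R_leak * C)). Holding nearly all
    # of the charge for four days requires a very large leakage resistance,
    # the kind of stability a well-matched ionic liquid provides.
    four_days = 4 * 24 * 3600
    R_leak = 3.5e7   # ohms (hypothetical)
    print(math.exp(-four_days / (R_leak * C)))   # ~0.99, i.e. ~1 % lost
    ```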
    “The next step will be to use this new screening method to find ionic liquid/electrode combinations with an even longer duration in the charged state and larger energy density.”
    Funded by the Australian Research Council, the study was led by Curtin University and conducted in collaboration with the Australian National University and Monash University.
    Other Curtin authors include Mr Xin Lyu, Dr Nadim Darwish, Associate Professor Debbie Silvester and Dr Ching Goh, all from the School of Molecular and Life Sciences.
    Story Source:
    Materials provided by Curtin University. Note: Content may be edited for style and length.

  • Computer scientists develop method for identifying disease biomarkers with high accuracy

    Researchers are developing a deep learning network capable of detecting disease biomarkers with a much higher degree of accuracy than existing techniques.
    Experts at the University of Waterloo’s Cheriton School of Computer Science have created a deep neural network that achieves 98 per cent detection of peptide features in a dataset. That means scientists and medical practitioners have a greater chance of discovering possible diseases through tissue sample analysis.
    There are multiple existing techniques for detecting diseases by analyzing the protein structure of bio-samples. Computer programs increasingly play a part in this process by examining the large amount of data produced in such tests to pinpoint specific markers of disease.
    “But existing programs are often inaccurate or can be limited by human error in their underlying functions,” said Fatema Tuz Zohora, a PhD researcher in the Cheriton School of Computer Science.
    “What we’ve done in our research is to create a deep neural network that achieves 98 per cent detection of peptide features in a dataset. We’re working to make disease detection more accurate to provide healthcare practitioners with the best tools.”
    Peptides are the chains of amino acids that make up proteins in human tissue. It is these small chains that often display the specific markers of disease. Having better testing means it will be possible to detect diseases earlier and with greater accuracy.
    Zohora’s team calls their new deep learning network PointIso. It is a form of machine learning or artificial intelligence that was trained on an enormous database of existing sequences from bio-samples.
    “Other methods for disease biomarker detection usually have lots of parameters which have to be manually set by field experts,” Zohora said. “But our deep neural network learns the parameters itself, which is more accurate, and makes the disease biomarker discovery approach automated.”
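    In the spirit of that description, the sketch below frames feature detection as learned pattern recognition: a small convolutional network maps a two-dimensional intensity map (for example, mass-to-charge versus retention time) to per-pixel feature probabilities, with every parameter learned from data rather than hand-set. The architecture and shapes are illustrative; this is not PointIso itself.

    ```python
    import torch
    import torch.nn as nn

    # Tiny fully convolutional detector: input is one channel of intensity,
    # output is a per-pixel logit for "peptide feature here".
    detector = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=1),
    )

    # All filter weights are learned from labelled examples, rather than
    # hand-tuned thresholds set by field experts.
    intensity_map = torch.randn(1, 1, 128, 128)        # one toy input map
    feature_prob = torch.sigmoid(detector(intensity_map))
    print(feature_prob.shape)                          # torch.Size([1, 1, 128, 128])
    ```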
    The new program is also unique in that it is not trained to only look for one kind of disease but to identify the biomarkers associated with a range of diseases, including heart disease, cancer and even COVID-19.
    “It’s applicable for any kind of disease biomarker discovery,” Zohora said. “And because it is essentially a pattern recognition model, it can be used for detection of any small objects within a large amount of data. There are so many applications for medicine and science; it’s exciting to see the possibilities opening up through this research and how it can help people.”
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • A mathematical model to help optimize vaccine development

    When it comes to the design of a novel vaccine against viral infection, vaccine developers have to make several major decisions. One of them is the choice of what type of immune response they wish to induce.
    In a recent Forum article in Trends in Immunology, a group of researchers at UPF and the Marchuk Institute of Numerical Mathematics in Moscow, Russia, led by Andreas Meyerhans and Gennady Bocharov, presents a theoretical analysis that might help with this issue. The researchers have used a mathematical model to better understand the immune response to vaccines. This could help improve vaccine design and simplify the associated technical challenges.
    Viruses are intracellular parasites that need host cells to multiply. Thus, for a virus to infect a human, it has to gain access to some of the body’s cells that will enable it to multiply. Progeny viruses will be assembled within the infected cells and, upon release, will infect other target cells in the surroundings. Without any immune response to counteract the virus, it will continue to spread and may cause organ damage.
    Vaccines are the most cost-effective way to provide a host with virus-specific immunity that will then help it to keep an infectious virus below pathogenic levels. To do so, vaccines may induce antibodies that help to neutralize assembled free viruses and virus-specific cytotoxic T cells that will kill infected cells and thus reduce the number of virus-producing cells.
    While both arms of the immune response are considered of major importance for vaccine efficacy, the question is how do they cooperate? Are their actions simply additive or more than additive? The researchers have now addressed these fundamental questions by examining the contribution of antibodies and cytotoxic T cells using a model based on virus infection dynamics. They show that these two primary control factors of virus infection are cooperating multiplicatively rather than additively. While this relationship might appear rather abstract, it has very practical consequences for vaccine development.
    For example, if, to be efficient, a virus vaccine needs to increase the basic immune response by a factor of 10,000, this may be achieved in two ways. Either antibodies or cytotoxic T cells are increased by a factor of 10,000, or each of these responses is increased by only a factor of 100. The latter might be easier to obtain in practical terms and thus provide vaccine developers with different options for their design.
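    The arithmetic behind this observation is simple, assuming the multiplicative relationship the authors report:

    ```python
    # Illustrative arithmetic only: under the multiplicative model, the
    # combined control factor is the PRODUCT of the antibody and cytotoxic
    # T cell contributions, not their sum.
    target_gain = 10_000

    # Option 1: boost a single arm of the immune response.
    antibody_gain, t_cell_gain = 10_000, 1
    assert antibody_gain * t_cell_gain == target_gain

    # Option 2: boost both arms moderately; the product hits the same target.
    antibody_gain, t_cell_gain = 100, 100
    assert antibody_gain * t_cell_gain == target_gain

    # Under a purely additive model, the same moderate boosts fall far short:
    print(100 + 100)   # 200, nowhere near 10,000
    ```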
    Although these considerations are based only on theoretical grounds and require experimental validation, the first data in this direction are emerging. “We hope that our conceptual work will positively help with vaccine design,” says Bocharov. And Meyerhans, the last author of the study, adds that “our considerations may help to simplify the technical challenges for novel vaccines and thus be of some practical use for healthcare.”
    Story Source:
    Materials provided by Universitat Pompeu Fabra – Barcelona. Note: Content may be edited for style and length.

  • All about Eve, sophisticated AI

    No two human beings are the same, a biologic singularity encoded in the unique arrangement of the molecules that make up our individual DNA.
    Variation is a cardinal feature of biology, the driver of diversity, and the engine of evolution, but it has a dark side. Alterations in DNA sequences and the resulting proteins that build our cells can sometimes lead to profound disruptions in physiologic function and cause disease.
    But which gene alterations are normal or at least inconsequential, and which ones portend disease?
    The answer is clear for a handful of well-known genetic mutations, yet despite dramatic leaps in genome sequencing technology over the past 20 years, our ability to interpret the meaning of millions of genetic variations identified through such sequencing still lags behind.
    To make sense of it all, researchers at Harvard Medical School and Oxford University have designed an AI tool called EVE (Evolutionary model of Variant Effect), which uses a sophisticated type of machine learning to detect patterns of genetic variation across hundreds of thousands of nonhuman species and then uses them to make predictions about the meaning of variations in human genes.
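    Schematically, such models score a variant by how much less probable the mutated sequence is than the wild type under a model fitted to homologous sequences from many species. The stand-in position-frequency model below illustrates the scoring logic only; EVE itself uses a far more sophisticated deep generative model.

    ```python
    import math
    from collections import Counter

    def position_log_probs(alignment):
        """Per-position amino-acid log-frequencies from aligned homologues."""
        n = len(alignment)
        return [
            {aa: math.log(count / n) for aa, count in Counter(col).items()}
            for col in zip(*alignment)
        ]

    def variant_score(log_probs, pos, wt_aa, mut_aa, floor=math.log(1e-3)):
        """Log-likelihood ratio: strongly negative => likely pathogenic."""
        col = log_probs[pos]
        return col.get(mut_aa, floor) - col.get(wt_aa, floor)

    homologues = ["MKAV", "MKAV", "MKGV", "MKAV", "MRAV"]   # toy alignment
    lp = position_log_probs(homologues)
    # Alanine at position 2 is conserved across species; mutating it to an
    # amino acid never seen there scores strongly negative.
    print(variant_score(lp, 2, wt_aa="A", mut_aa="W"))
    ```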
    In an analysis published Oct. 27 in Nature, the researchers used EVE to assess 36 million protein sequences and 3,219 disease-associated genes across multiple species.

  • How robots can rule roads

    An ethical framework developed by government, road users and other stakeholders must steer the introduction of new road rules for connected and automated vehicles (CAVs), international experts say.
    They warn that strictly forbidding CAVs of various kinds from breaking existing traffic rules may hamper road safety, contrary to what most people may claim. However, this requires close scrutiny so these high-tech vehicles can meet their potential to reduce road casualties.
    “While they promise to minimise road safety risk, CAVs like hybrid AI systems can still create collision risk due to technological and human-system interaction issues, the complexity of traffic, interaction with other road users and vulnerable road users,” says UK transport consultant Professor Nick Reed, from Reed Mobility, in a new paper in Ethics and Information Technology.
    “Ethical goal functions for CAVs would enable developers to optimise driving behaviours for safety under conditions of uncertainty while allowing for differentiation of products according to brand values.”
    This point is important, the researchers say, because it does not require all vehicle brands to drive in exactly the same manner, leaving room for brand differentiation.
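    One way to picture such an ethical goal function is as a weighted expected-risk score over candidate manoeuvres, with the weights open to brand-specific tuning. The names and numbers below are hypothetical illustrations, not the paper's actual formulation.

    ```python
    # Hypothetical goal function: pick the manoeuvre minimising weighted
    # expected harm. Each candidate is
    # (name, probability_of_collision, expected_harm, discomfort).
    def goal_function(candidates, w_safety=1.0, w_comfort=0.1):
        def expected_cost(c):
            name, p_collision, harm, discomfort = c
            return w_safety * p_collision * harm + w_comfort * discomfort
        return min(candidates, key=expected_cost)

    behaviours = [
        ("brake_hard",  0.01, 10.0, 0.8),
        ("swerve",      0.03, 10.0, 0.5),
        ("keep_course", 0.10, 10.0, 0.0),
    ]
    # Braking hard is uncomfortable but minimises expected harm here; a
    # brand that weights comfort higher could rank the options differently.
    print(goal_function(behaviours)[0])   # brake_hard
    ```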
    Around the world, transport services are already putting CAVs, including driverless cars, on the road to deliver new services and freight options to improve road safety, alleviate congestion and increase driving comfort and transport system productivity.

  • A novel solution to a combinatorial optimization problem in bicycle sharing systems

    Traffic congestion has been worsening since the 1950s in large cities owing to the sheer number of cars sold each year. Unfortunately, the figurative price tag attached to excessive traffic includes higher carbon dioxide emissions, more collectively wasted time, and exacerbated health problems. Many municipalities have tackled the problem of traffic by implementing bicycle sharing systems, in which people can borrow bikes from strategically placed ports and ride wherever they want, as long as they eventually return the bikes to a port, although not necessarily the one where the bike was originally obtained.
    As one may notice, this flexibility creates a new problem of its own. Whenever someone borrows a bike and does not make a round trip with it, the destination port gains a bike and the origin port loses one. As time passes, the distribution of bikes across ports becomes unbalanced, causing both an excessive accumulation of bikes at certain ports and a dearth of bikes at others. This issue is generally addressed by periodically sending out a fleet of vehicles capable of transporting multiple bikes in order to restore ports to their ‘ideal’ number of bikes.
    Much research has been dedicated to this bicycle rebalancing problem using a fleet of vehicles. Finding the optimal routing paths for the vehicles is in and of itself a highly complex mathematical problem in the field of combinatorial optimization. One must make sure that the optimization algorithms used can reach a good-enough solution in a reasonable time for a realistically large number of ports and vehicles. Many methods, however, fail to find feasible solutions when multiple constraints are considered simultaneously, such as time, capacity, and loading/unloading constraints for the vehicles.
    But what if the optimization strategy were allowed to bend the constraints a little to make the best out of difficult situations? In a recent study published in MDPI’s Applied Sciences, a team of scientists suggested an innovative twist to the routing problem of bicycle sharing systems using this concept. Led by Professor Tohru Ikeguchi of Tokyo University of Science, the team, comprising PhD student Honami Tsushima from Tokyo University of Science and Associate Professor Takafumi Matsuura from Nippon Institute of Technology, Japan, proposed a new formulation of the routing problem in which the constraints imposed on the routings can be violated. This enables the optimization algorithm to explore what is known as the space of “infeasible solutions.” Prof. Ikeguchi explains their reasoning: “In real life, if a job can be completed through a few minutes of overtime, we would work beyond the time limit. Similarly, if we are only carrying four bikes and need to supply five, we would still supply the four we have.”
    Following this line of thought, the researchers formulated a “soft constraints” variant of the routing problem in bicycle rebalancing. Under this approach, solutions that violate constraints are not excluded outright; instead, they are treated as valid routes that incur dynamically adjusted penalties and are taken into account when assessing possible routings. This enabled the team to devise an algorithm that can use the space of infeasible solutions to speed up the search for optimal or near-optimal solutions.
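    A minimal sketch of the soft-constraint idea: rather than rejecting a route that violates a limit, add a penalty to its cost so the search can pass through infeasible territory. The penalty weights below are fixed for simplicity, whereas the authors adjust them dynamically during the search; the route and violation representation is illustrative.

    ```python
    # Soft-constraint objective: travel cost plus weighted constraint
    # violations. A feasible route (no violations) keeps its plain cost.
    def penalized_cost(route_cost, time_overrun, load_shortfall,
                       w_time=50.0, w_load=30.0):
        return route_cost + w_time * time_overrun + w_load * load_shortfall

    feasible   = penalized_cost(route_cost=120.0, time_overrun=0.0, load_shortfall=0.0)
    infeasible = penalized_cost(route_cost=95.0,  time_overrun=0.2, load_shortfall=0.0)
    # The slightly late route scores better (105 < 120), so the search can
    # use it as a stepping stone toward an optimal feasible solution.
    print(feasible, infeasible)
    ```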
    The researchers evaluated the performance of their method through numerical experiments with benchmark problems including up to 50 ports and three vehicles. The results show that their strategy could find optimal or near-optimal solutions in all cases, and that the algorithm could search both the feasible and infeasible solution spaces efficiently. This paints a brighter future for people in cities with congested traffic in which bicycle sharing systems could become an attractive solution. As Prof. Ikeguchi remarks, “It is likely that bike sharing systems will spread worldwide in the future, and we believe that the routing problem in bicycle rebalancing is an important issue to be solved in modern societies.”
    Hopefully, further efforts to improve bicycle sharing systems will alleviate traffic congestion and make people’s lives in big cities healthier and more enjoyable.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.