More stories

  • Robotic AI learns to be spontaneous

    Autonomous functions for robots, such as spontaneity, are highly sought after. Many control mechanisms for autonomous robots are inspired by the functions of animals, including humans. Roboticists often design robot behaviors using predefined modules and control methodologies, which makes them task-specific, limiting their flexibility. Researchers offer an alternative machine learning-based method for designing spontaneous behaviors by capitalizing on complex temporal patterns, like neural activities of animal brains. They hope to see their design implemented in robotic platforms to improve their autonomous capabilities.
    Robots and their control software can be regarded as a dynamical system, a mathematical model that describes how internal states evolve over time. One class of dynamical systems, high-dimensional chaos, has attracted many researchers because it offers a powerful way to model animal brains. However, it is generally hard to gain control over high-dimensional chaos, owing to the complexity of the system parameters and its sensitivity to initial conditions, a phenomenon popularized by the term “butterfly effect.” Researchers from the Intelligent Systems and Informatics Laboratory and the Next Generation Artificial Intelligence Research Center at the University of Tokyo explore novel ways of exploiting the dynamics of high-dimensional chaos to implement humanlike cognitive functions.
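    As a generic illustration of that sensitivity (a textbook example, not the study's own model), the following minimal sketch integrates the chaotic Lorenz system from two nearly identical starting points and watches the trajectories diverge:

    ```python
    # Minimal illustration of the "butterfly effect": two trajectories of the
    # chaotic Lorenz system that start almost identically soon diverge.
    # This is a generic textbook example, not the model used in the study.
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Advance the Lorenz system by one Euler step."""
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return state + dt * np.array([dx, dy, dz])

    a = np.array([1.0, 1.0, 1.0])        # reference trajectory
    b = a + np.array([1e-8, 0.0, 0.0])   # perturbed by one part in 10^8

    for step in range(3000):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(f"step {step:4d}  separation = {np.linalg.norm(a - b):.3e}")
    ```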
    “There is an aspect of high-dimensional chaos called chaotic itinerancy (CI) which can explain brain activity during memory recall and association,” said doctoral student Katsuma Inoue. “In robotics, CI has been a key tool for implementing spontaneous behavioral patterns. In this study, we propose a recipe for implementing CI in a simple and systematic fashion only using complicated time-series patterns generated by high-dimensional chaos. We felt our approach holds potential for more robust and versatile applications when it comes to designing cognitive architectures. It allows us to design spontaneous behaviors without any predefined explicit structures in the controller, which would otherwise serve as a hindrance.”
    Reservoir computing (RC) is a machine learning technique that builds on dynamical systems theory and provides the basis of the team’s approach. RC is used to control a type of neural network called a recurrent neural network (RNN). Unlike other machine learning approaches that tune all neural connections within a neural network, RC only tweaks some parameters while keeping all other connections of an RNN fixed, which makes it possible to train the system faster. When the researchers applied principles of RC to a chaotic RNN, it exhibited the kind of spontaneous behavioral patterns they were hoping for. For some time, this has proven a challenging task in the field of robotics and artificial intelligence. Furthermore, the training for the network takes place prior to execution and in a short amount of time.
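    A minimal sketch of the reservoir-computing idea described above: a fixed, random recurrent network whose internal connections are never trained, with only a linear readout fitted to the task. The network size, spectral radius and toy target signal below are illustrative assumptions, not details from the study.

    ```python
    # Minimal echo state network: the recurrent "reservoir" weights stay fixed;
    # only the linear readout is trained (here by ridge regression).
    # Sizes, spectral radius and the toy task are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_res, T = 300, 2000

    # Fixed random reservoir, rescaled so its spectral radius is ~0.95.
    W = rng.standard_normal((n_res, n_res))
    W *= 0.95 / np.abs(np.linalg.eigvals(W)).max()
    W_in = rng.uniform(-0.5, 0.5, size=n_res)

    # Toy task: predict the next value of a quasi-periodic signal.
    u = np.sin(0.2 * np.arange(T + 1)) + 0.1 * np.sin(0.311 * np.arange(T + 1))

    states = np.zeros((T, n_res))
    x = np.zeros(n_res)
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])  # reservoir update (never trained)
        states[t] = x

    # Train only the readout with ridge regression (discard a warm-up period).
    warm, ridge = 100, 1e-6
    X, y = states[warm:], u[warm + 1 : T + 1]
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

    pred = X @ W_out
    print("training MSE:", np.mean((pred - y) ** 2))
    ```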
    “Animal brains yield high-dimensional chaos in their activities, but how and why they utilize chaos remains unexplained. Our proposed model could offer insight into how chaos contributes to information processing in our brains,” said Associate Professor Kohei Nakajima. “Also, our recipe would have a broader impact outside the field of neuroscience since it can potentially be applied to other chaotic systems too. For example, next-generation neuromorphic devices inspired by biological neurons potentially exhibit high-dimensional chaos and would be excellent candidates for implementing our recipe. I hope we will see artificial implementations of brain functions before too long.”

    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Turning heat into electric power with efficient organic thermoelectric material

    Thermoelectric materials can turn a temperature difference into electricity. Organic thermoelectric materials could be used to power wearable electronics or sensors; however, the power output is still very low. An international team led by Jan Anton Koster, Professor of Semiconductor Physics at the University of Groningen, has now produced an n-type organic semiconductor with superior properties that brings these applications a big step closer. Their results were published in the journal Nature Communications on 10 November.
    The thermoelectric generator is the only human-made power source outside our solar system: both Voyager space probes, which were launched in 1977 and are now in interstellar space, are powered by generators that convert heat (in this case, provided by a radioactive source) into an electric current. ‘The great thing about such generators is that they are solid-state devices, without any moving parts,’ explains Koster.
    Conductivity
    However, the inorganic thermoelectric material used in the Voyager’s generators is not suitable for more mundane applications. These inorganic materials contain toxic or very rare elements. Furthermore, they are usually rigid and brittle. ‘That is why interest in organic thermoelectric materials is increasing,’ says Koster. Yet, these materials have their own problems. The optimal thermoelectric material is both a phonon glass, with a very low thermal conductivity (so that it can maintain a temperature difference), and an electron crystal, with a high electrical conductivity (to transport the generated current). Koster: ‘The problem with organic semiconductors is that they usually have a low electrical conductivity.’
    Nevertheless, over a decade of experience in developing organic photovoltaic materials at the University of Groningen has led the team on a path to a better organic thermoelectric material. They focused their attention on an n-type semiconductor, which carries a negative charge. For a thermoelectric generator, both n-type and p-type (carrying positive charge) semiconductors are needed, although the efficiency of organic p-type semiconductors is already quite good.
    Buckyballs
    The team used fullerenes (buckyballs, made up of sixty carbon atoms) with a double-triethylene glycol-type side-chain added to them. To increase the electrical conductivity, an n-dopant was added. ‘The fullerenes already have a low thermal conductivity, but adding the side chains makes it even lower, so the material is a very good phonon glass,’ says Koster. ‘Furthermore, these chains also incorporate the dopant and create a very ordered structure during annealing.’ The latter makes the material an electric crystal, with an electrical conductivity similar to that of pure fullerenes.
    ‘We have now made the first organic phonon glass electric crystal,’ Koster says. ‘But the most exciting part for me is its thermoelectric properties.’ These are expressed by the ZT value. The T refers to the temperature at which the material operates, while Z incorporates the other material properties. The new material increases the highest ZT value in its class from 0.2 to over 0.3, a sizeable improvement.
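    The press release does not spell out the formula, but the standard thermoelectric figure of merit is ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature. A small sketch with made-up, order-of-magnitude values:

    ```python
    # Standard thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
    # The numbers below are illustrative placeholders, not values from the paper.
    def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, temperature_K):
        """Return the dimensionless figure of merit ZT."""
        return seebeck_V_per_K ** 2 * sigma_S_per_m * temperature_K / kappa_W_per_mK

    # Hypothetical organic-semiconductor-like values at room temperature:
    zt = figure_of_merit(seebeck_V_per_K=2.0e-4,   # 200 microvolts per kelvin
                         sigma_S_per_m=1.0e3,      # 10 S/cm
                         kappa_W_per_mK=0.1,
                         temperature_K=300.0)
    print(f"ZT = {zt:.2f}")
    ```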
    Sensors
    ‘A ZT value of 1 is considered a commercially viable efficiency, but we believe that our material could already be used in applications that require a low output,’ says Koster. To power sensors, for example, a few microwatts of power are required and these could be produced by a couple of square centimetres of the new material. ‘Our collaborators in Milan are already creating thermoelectric generators using fullerenes with a single side chain, which have a lower ZT value than we now have.’
    The fullerenes, side chain and dopant are all readily available and the production of the new material can likely be scaled up without too many problems, according to Koster. He is extremely happy with the results of this study. ‘The paper has twenty authors from nine different research groups. We used our combined knowledge of synthetic organic chemistry, organic semiconductors, molecular dynamics, thermal conductivity and X-ray structural studies to get this result. And we already have some ideas on how to further increase the efficiency.’

    Story Source:
    Materials provided by University of Groningen. Note: Content may be edited for style and length.

  • Sorting out viruses with machine learning

    The ongoing global pandemic has created an urgent need for rapid tests that can diagnose the presence of the SARS-CoV-2 virus, the pathogen that causes COVID-19, and distinguish it from other respiratory viruses. Now, researchers from Japan have demonstrated a new system for single-virion identification of common respiratory pathogens using a machine learning algorithm trained on changes in current across silicon nanopores. This work may lead to fast and accurate screening tests for diseases like COVID-19 and influenza.
    In a study published this month in ACS Sensors, scientists at Osaka University have introduced a new system that uses silicon nanopores sensitive enough to detect even a single virus particle when coupled with a machine learning algorithm.
    In this method, a silicon nitride layer just 50 nm thick suspended on a silicon wafer has tiny nanopores added, which are themselves only 300 nm in diameter. When a voltage difference is applied to the solution on either side of the wafer, ions travel through the nanopores in a process called electrophoresis.
    The motion of the ions can be monitored by the current they generate, and when a viral particle enters a nanopore, it blocks some of the ions from passing through, leading to a transient dip in current. Each dip reflects the physical properties of the particle, such as volume, surface charge, and shape, so they can be used to identify the kind of virus.
    The natural variation in the physical properties of virus particles had previously hindered implementation of this approach. However, using machine learning, the team built a classification algorithm, trained with signals from known viruses, that determines the identity of new samples. “By combining single-particle nanopore sensing with artificial intelligence, we were able to achieve highly accurate identification of multiple viral species,” explains senior author Makusu Tsutsui.
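    The press release does not describe the authors' exact algorithm; purely as an illustration, a common approach is to extract simple features from each current dip (depth, duration, area) and train an off-the-shelf classifier on labelled pulses:

    ```python
    # Illustrative sketch (not the authors' pipeline): classify virus species from
    # simple features of each resistive-pulse ("current dip") event.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    def dip_features(trace, baseline):
        """Depth, duration (in samples) and area of a single current dip."""
        dip = baseline - trace
        return np.array([dip.max(), (dip > 0.1 * dip.max()).sum(), dip.sum()])

    # Synthetic stand-in data: each virus class gives dips with slightly different
    # depth/width statistics (in a real experiment these come from the nanopore).
    def synth_dip(depth, width, n=200):
        t = np.arange(n)
        return 1.0 - depth * np.exp(-((t - n / 2) ** 2) / (2 * width ** 2)) \
               + 0.01 * rng.standard_normal(n)

    classes = {"influenza_A": (0.30, 12), "influenza_B": (0.25, 18), "adenovirus": (0.40, 10)}
    X, y = [], []
    for label, (depth, width) in classes.items():
        for _ in range(300):
            trace = synth_dip(depth * rng.normal(1, 0.1), width * rng.normal(1, 0.1))
            X.append(dip_features(trace, baseline=1.0))
            y.append(label)

    X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```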
    The computer can discriminate the differences in electrical current waveforms that cannot be identified by human eyes, which enables highly accurate virus classification. In addition to coronavirus, the system was tested with similar pathogens — respiratory syncytial virus, adenovirus, influenza A, and influenza B.
    The team believes that coronaviruses are especially well suited for this technique since their spiky outer proteins may even allow different strains to be classified separately. “This work will help with the development of a virus test kit that outperforms conventional viral inspection methods,” says last author Tomoji Kawai.
    Compared with other rapid viral tests like polymerase chain reaction or antibody-based screens, the new method is much faster and does not require costly reagents, which may lead to improved diagnostic tests for emerging viral particles that cause infectious diseases such as COVID-19.

    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  • AI speeds up development of new high-entropy alloys

    Developing new materials takes a great deal of time, money and effort. Recently, a POSTECH research team has taken a step closer to creating new materials by applying AI to the development of high-entropy alloys (HEAs), which have been dubbed the “alloy of alloys.”
    A joint research team led by Professor Seungchul Lee, Ph.D. candidate Soo Young Lee, Professor Hyungyu Jin and Ph.D. candidate Seokyeong Byeon of the Department of Mechanical Engineering along with Professor Hyoung Seop Kim of the Department of Materials Science and Engineering have together developed a technique for phase prediction of HEAs using AI. The findings from the study were published in the latest issue of Materials and Design, an international journal on materials science.
    Metal materials are conventionally made by mixing the principal element for the desired property with two or three auxiliary elements. In contrast, HEAs are made with equal or similar proportions of five or more elements, without a principal element. The number of alloys that can be made this way is theoretically infinite, and they can have exceptional mechanical, thermal, physical, and chemical properties. Alloys resistant to corrosion or to extremely low temperatures, as well as high-strength alloys, have already been discovered.
    However, until now, designing new high-entropy alloy materials was based on trial and error, thus requiring much time and budget. It was even more difficult to determine in advance the phase and the mechanical and thermal properties of the high-entropy alloy being developed.
    To address this, the joint research team focused on developing HEA phase-prediction models with enhanced accuracy and explainability using deep learning. They applied deep learning from three perspectives: model optimization, data generation and parameter analysis. In particular, the focus was on building a data-augmentation model based on a conditional generative adversarial network. This allowed the AI models to reflect samples of HEAs that have not yet been discovered, improving phase-prediction accuracy compared with conventional methods.
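    A compact sketch of the conditional-GAN augmentation idea mentioned above: a generator produces synthetic alloy descriptors conditioned on a phase label, while a discriminator learns to tell real from synthetic samples. The feature and phase dimensions, network sizes and placeholder data are assumptions, not the team's actual model:

    ```python
    # Sketch of conditional-GAN data augmentation (illustrative dimensions only):
    # the generator produces synthetic alloy descriptors conditioned on a phase
    # label; the discriminator learns to tell real from synthetic samples.
    import torch
    import torch.nn as nn

    n_feat, n_phase, n_latent = 10, 3, 16   # assumed descriptor/phase/latent sizes

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_latent + n_phase, 64), nn.ReLU(),
                nn.Linear(64, n_feat))
        def forward(self, z, phase_onehot):
            return self.net(torch.cat([z, phase_onehot], dim=1))

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_feat + n_phase, 64), nn.ReLU(),
                nn.Linear(64, 1))
        def forward(self, x, phase_onehot):
            return self.net(torch.cat([x, phase_onehot], dim=1))

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    # Placeholder "real" data: random descriptors with random phase labels.
    real_x = torch.randn(256, n_feat)
    real_phase = torch.nn.functional.one_hot(
        torch.randint(0, n_phase, (256,)), n_phase).float()

    for step in range(200):
        # --- discriminator update ---
        z = torch.randn(256, n_latent)
        fake_x = G(z, real_phase).detach()
        loss_d = bce(D(real_x, real_phase), torch.ones(256, 1)) + \
                 bce(D(fake_x, real_phase), torch.zeros(256, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # --- generator update ---
        z = torch.randn(256, n_latent)
        loss_g = bce(D(G(z, real_phase), real_phase), torch.ones(256, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Synthetic samples conditioned on a chosen phase can now augment training data.
    ```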
    In addition, the research team developed an explainable-AI-based HEA phase-prediction model to provide interpretability to deep learning models, which otherwise act as black boxes, while also providing guidance on the key design parameters for creating HEAs with particular phases.
    “This research is the result of drastically improving the limitations of existing research by incorporating AI into HEAs that have recently been drawing much attention,” remarked Professor Seungchul Lee. He added, “It is significant that the joint research team’s multidisciplinary collaboration has produced the results that can accelerate AI-based fabrication of new materials.”
    Professor Hyungyu Jin also added, “The results of the study are expected to greatly reduce the time and cost required for the existing new material development process, and to be actively used to develop new high-entropy alloys in the future.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Survey of COVID-19 research provides fresh overview

    Researchers at Karolinska Institutet in Sweden have explored all COVID-19 research published during the initial phase of the pandemic. The results, which were achieved by using a machine learning-based approach and published in the Journal of Medical Internet Research, will make it easier to direct future research to where it is most needed.
    In the wake of the rapid spread of COVID-19, research on the disease has escalated dramatically. Over 60,000 COVID-19-related articles have been indexed to date in the medical database PubMed. This body of research is too large to be assessed by traditional methods, such as systematic and scoping reviews, which makes it difficult to gain a comprehensive overview of the science.
    “Despite COVID-19 being a novel disease, several systematic reviews have already been published,” says Andreas Älgå, medical doctor and researcher at the Department of Clinical Science and Education, Sodersjukhuset at Karolinska Institutet. “However, such reviews are extremely time- and resource-consuming, generally lag far behind the latest published evidence, and only focus on a specific aspect of the pandemic.”
    To obtain a fuller overview, Andreas Älgå and his colleagues employed a machine learning technique that enabled them to map the key areas of a research field and track their development over time. The present study included 16,670 scientific papers on COVID-19 published from 14 February to 1 June 2020, divided into 14 different topics.
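    The article does not name the specific technique; one widely used way to map a literature like this is topic modelling, for example latent Dirichlet allocation over article abstracts, sketched here on placeholder text:

    ```python
    # Illustrative sketch: discover research topics in a set of abstracts with
    # latent Dirichlet allocation (a common choice; not necessarily the exact
    # method used in the study).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = [
        "hospital surge capacity and health care response to covid-19",
        "clinical manifestations and symptoms in hospitalized patients",
        "psychosocial impact of lockdown on mental health",
        "face masks and other protective measures against transmission",
        # ... the real study analysed 16,670 PubMed-indexed articles
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(abstracts)

    # Two topics for this toy corpus; the study divided its corpus into 14.
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-5:][::-1]]
        print(f"topic {k}: {', '.join(top)}")
    ```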
    The study shows that the most common research topics were health care response, clinical manifestations, and psychosocial impact. Some topics, like health care response, declined over time, while others, such as clinical manifestations and protective measures, showed a growing trend of publications.
    Protective measures, immunology, and clinical manifestations were the research topics published in journals with the highest average scientific ranking. The countries that accounted for the majority of publications (the USA, China, Italy and the UK) were also amongst the ones hardest hit by the pandemic.
    “Our results indicate how the scientific community has reacted to the current pandemic, what issues were prioritised during the early phase and where in the world the research was conducted,” says fellow-researcher Martin Nordberg, medical doctor and researcher at the Department of Clinical Science and Education, Sodersjukhuset.
    The researchers have also developed a website where regular updates on the evolution of the COVID-19 evidence base can be found (http://www.c19research.org).
    “We hope that our results, including the website, could help researchers and policy makers to form a structured view of the research on COVID-19 and direct future research efforts accordingly,” says Dr Älgå.

    Story Source:
    Materials provided by Karolinska Institutet. Note: Content may be edited for style and length.

  • Machine learning models to predict critical illness and mortality in COVID-19 patients

    Mount Sinai researchers have developed machine learning models that predict the likelihood of critical events and mortality in COVID-19 patients within clinically relevant time windows. The new models outlined in the study — one of the first to use machine learning for risk prediction in COVID-19 patients among a large and diverse population, and published November 6 in the Journal of Medical Internet Research — could aid clinical practitioners at Mount Sinai and across the world in the care and management of COVID-19 patients.
    “From the initial outburst of COVID-19 in New York City, we saw that COVID-19 presentation and disease course are heterogeneous and we have built machine learning models using patient data to predict outcomes,” said Benjamin Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, member of the Hasso Plattner Institute for Digital Health at Mount Sinai and Mount Sinai Clinical Intelligence Center (MSCIC), and one of the study’s principal investigators. “Now in the early stages of a second wave, we are much better prepared than before. We are currently assessing how these models can aid clinical practitioners in managing care of their patients in practice.”
    In the retrospective study, which used electronic health records from more than 4,000 adult patients admitted to five Mount Sinai Health System hospitals from March to May, researchers and clinicians from the MSCIC analyzed characteristics of COVID-19 patients, including past medical history, comorbidities, vital signs, and laboratory test results at admission, to predict critical events such as intubation and mortality within various clinically relevant time windows, forecasting patients' short- and medium-term risk over the course of hospitalization.
    The researchers used the models to predict a critical event or mortality at time windows of 3, 5, 7, and 10 days from admission. At the one-week mark — which performed best overall, correctly flagging the most critical events while returning the fewest false positives — acute kidney injury, fast breathing, high blood sugar, and elevated lactate dehydrogenase (LDH), indicating tissue damage or disease, were the strongest drivers in predicting critical illness. Older age, blood level imbalance, and C-reactive protein levels indicating inflammation were the strongest drivers in predicting mortality.
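    As an illustration of the general setup only (not the Mount Sinai models, features or data), one could train a gradient-boosted classifier on admission-time variables to predict a critical event within a chosen window:

    ```python
    # Illustrative sketch (not the Mount Sinai models): predict whether a critical
    # event occurs within 7 days of admission from admission-time variables.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 4000  # roughly the cohort size mentioned in the study

    # Synthetic stand-in features: age, respiratory rate, glucose, LDH, CRP.
    X = np.column_stack([
        rng.normal(60, 15, n),     # age (years)
        rng.normal(20, 5, n),      # respiratory rate (breaths/min)
        rng.normal(130, 40, n),    # blood glucose (mg/dL)
        rng.normal(350, 120, n),   # LDH (U/L)
        rng.normal(80, 60, n),     # C-reactive protein (mg/L)
    ])
    # Synthetic outcome loosely tied to the features, for demonstration only.
    risk = 0.03 * (X[:, 0] - 60) + 0.1 * (X[:, 1] - 20) + 0.005 * (X[:, 3] - 350)
    y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("held-out AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```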
    “We have created high-performing predictive models using machine learning to improve the care of our patients at Mount Sinai,” said Girish Nadkarni, MD, Assistant Professor of Medicine (Nephrology) at the Icahn School of Medicine, Clinical Director of the Hasso Plattner Institute for Digital Health at Mount Sinai, and Co-Chair of MSCIC. “More importantly, we have created a method that identifies important health markers that drive likelihood estimates for acute care prognosis and can be used by health institutions across the world to improve care decisions, at both the physician and hospital level, and more effectively manage patients with COVID-19.”

    Story Source:
    Materials provided by The Mount Sinai Hospital / Mount Sinai School of Medicine. Note: Content may be edited for style and length.

  • Black hole or no black hole: On the outcome of neutron star collisions

    A new study led by GSI scientists and international colleagues investigates black-hole formation in neutron star mergers. Computer simulations show that the properties of dense nuclear matter play a crucial role, which directly links the astrophysical merger event to heavy-ion collision experiments at GSI and FAIR. These properties will be studied more precisely at the future FAIR facility. The results have now been published in Physical Review Letters. With the 2020 Nobel Prize in Physics awarded for the theoretical description of black holes and for the discovery of a supermassive compact object at the center of our galaxy, the topic is currently receiving a great deal of attention.
    But under which conditions does a black hole actually form? This is the central question of a study led by the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt within an international collaboration. Using computer simulations, the scientists focus on a particular black-hole formation process: the merging of two neutron stars.
    Neutron stars consist of highly compressed, dense matter. A mass of about one and a half solar masses is squeezed into a sphere just a few kilometers across, which corresponds to densities similar to, or even higher than, those in the interior of atomic nuclei. If two neutron stars merge, the matter is compressed even further during the collision, bringing the merger remnant to the brink of collapse into a black hole. Black holes are the most compact objects in the universe; not even light can escape them, so they cannot be observed directly.
    “The critical parameter is the total mass of the neutron stars. If it exceeds a certain threshold, the collapse to a black hole is inevitable,” summarizes Dr. Andreas Bauswein from the GSI theory department. However, the exact threshold mass depends on the properties of highly dense nuclear matter. These properties of high-density matter are still not completely understood in detail, which is why research labs like GSI collide atomic nuclei — a process resembling a neutron star merger, but on a much smaller scale. In fact, heavy-ion collisions produce conditions very similar to those in neutron star mergers. Based on theoretical developments and heavy-ion experiments, it is possible to compute models of neutron star matter, so-called equations of state.
    Employing a large number of these equations of state, the new study calculated the threshold mass for black-hole formation. If neutron star matter or nuclear matter is easily compressible — if the equation of state is “soft” — even the merger of relatively light neutron stars leads to the formation of a black hole. If nuclear matter is “stiffer” and less compressible, the remnant is stabilized against gravitational collapse and a massive, rotating neutron star remnant forms in the collision. Hence, the threshold mass for collapse itself provides information about the properties of high-density matter. The new study furthermore revealed that the threshold for collapse may even clarify whether nucleons dissolve into their constituents, the quarks, during the collision.
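    A toy sketch of the decision described above: compare the binary's total mass with the threshold mass implied by an assumed equation of state. The threshold values are placeholders, not results from the paper:

    ```python
    # Toy illustration of the collapse criterion described above: if the total
    # binary mass exceeds the threshold mass implied by an equation of state,
    # a black hole forms promptly. Threshold values are illustrative only.
    M_SUN = 1.0  # work in units of solar masses

    # Hypothetical threshold masses for a "soft" and a "stiff" equation of state.
    thresholds = {"soft EoS": 2.8 * M_SUN, "stiff EoS": 3.3 * M_SUN}

    def outcome(m1, m2, m_threshold):
        total = m1 + m2
        return "prompt black-hole formation" if total > m_threshold \
               else "massive neutron-star remnant (delayed or no collapse)"

    for eos, m_thr in thresholds.items():
        print(eos, "->", outcome(1.5, 1.5, m_thr))
    ```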
    “We are very excited about these results because we expect that future observations can reveal the threshold mass,” adds Professor Nikolaos Stergioulas of the Department of Physics at the Aristotle University of Thessaloniki in Greece. Just a few years ago, a neutron star merger was observed for the first time by measuring gravitational waves from the collision. Telescopes also found the “electromagnetic counterpart” and detected light from the merger event. If a black hole forms directly during the collision, the optical emission of the merger is relatively dim, so the observational data indicate whether a black hole was created. At the same time, the gravitational-wave signal carries information about the total mass of the system: the more massive the stars, the stronger the signal, which thus allows the threshold mass to be determined.
    While gravitational-wave detectors and telescopes wait for the next neutron star merger, the course is being set in Darmstadt for even more detailed knowledge. The new accelerator facility FAIR, currently under construction at GSI, will create conditions even more similar to those in neutron star mergers. Ultimately, only the combination of astronomical observations, computer simulations and heavy-ion experiments can settle the questions about the fundamental building blocks of matter and their properties, and in doing so clarify how the collapse to a black hole occurs.

    Story Source:
    Materials provided by GSI Helmholtzzentrum für Schwerionenforschung GmbH. Note: Content may be edited for style and length.

  • Computer model can predict how COVID-19 spreads in cities

    A team of researchers has created a computer model that accurately predicted the spread of COVID-19 in 10 major cities this spring by analyzing three factors that drive infection risk: where people go in the course of a day, how long they linger and how many other people are visiting the same place at the same time.
    “We built a computer model to analyze how people of different demographic backgrounds, and from different neighborhoods, visit different types of places that are more or less crowded. Based on all of this, we could predict the likelihood of new infections occurring at any given place or time,” said Jure Leskovec, the Stanford computer scientist who led the effort, which involved researchers from Northwestern University.
    The study, published today in the journal Nature, merges demographic data, epidemiological estimates and anonymous cellphone location information, and appears to confirm that most COVID-19 transmissions occur at “superspreader” sites, like full-service restaurants, fitness centers and cafes, where people remain in close quarters for extended periods. The researchers say their model’s specificity could serve as a tool for officials to help minimize the spread of COVID-19 as they reopen businesses by revealing the tradeoffs between new infections and lost sales if establishments open, say, at 20 percent or 50 percent of capacity.
    Study co-author David Grusky, a professor of sociology at Stanford’s School of Humanities and Sciences, said this predictive capability is particularly valuable because it provides useful new insights into the factors behind the disproportionate infection rates of minority and low-income people. “In the past, these disparities have been assumed to be driven by preexisting conditions and unequal access to health care, whereas our model suggests that mobility patterns also help drive these disproportionate risks,” he said.
    Grusky, who also directs the Stanford Center on Poverty and Inequality, said the model shows how reopening businesses with lower occupancy caps tend to benefit disadvantaged groups the most. “Because the places that employ minority and low-income people are often smaller and more crowded, occupancy caps on reopened stores can lower the risks they face,” Grusky said. “We have a responsibility to build reopening plans that eliminate — or at least reduce — the disparities that current practices are creating.”
    Leskovec said the model “offers the strongest evidence yet” that stay-at-home policies enacted this spring reduced the number of trips outside the home and slowed the rate of new infections.

    Following footsteps
    The study traced the movements of 98 million Americans in 10 of the nation’s largest metropolitan areas through half a million different establishments, from restaurants and fitness centers to pet stores and new car dealerships.
    The team included Stanford PhD students Serina Chang, Pang Wei Koh and Emma Pierson, who graduated this summer, and Northwestern University researchers Jaline Gerardin and Beth Redbird, who assembled study data for the 10 metropolitan areas. In population order, these cities include: New York, Los Angeles, Chicago, Dallas, Washington, D.C., Houston, Atlanta, Miami, Philadelphia and San Francisco.
    SafeGraph, a company that aggregates anonymized location data from mobile applications, provided the researchers data showing which of 553,000 public locations such as hardware stores and religious establishments people visited each day; for how long; and, crucially, what the square footage of each establishment was so that researchers could determine the hourly occupancy density.
    The researchers analyzed data from March 8 to May 9 in two distinct phases. In phase one, they fed their model mobility data and designed their system to calculate a crucial epidemiological variable: the transmission rate of the virus under a variety of different circumstances in the 10 metropolitan areas. In real life, it is impossible to know in advance when and where an infectious person and a susceptible person will come into contact to create a potential new infection. But in their model, the researchers developed and refined a series of equations to compute the probability of infectious events at different places and times. The equations were able to solve for the unknown variables because the researchers fed the computer one important known fact: how many COVID-19 infections were reported to health officials in each city each day.
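    A heavily simplified sketch of the kind of per-venue calculation such a model performs (the functional form and numbers are illustrative, not the paper's equations): the chance that a susceptible visitor is infected grows with the density of infectious visitors, the dwell time and the fitted transmission rate:

    ```python
    # Simplified sketch of a per-venue infection calculation (illustrative form,
    # not the exact equations from the Nature paper).
    import math

    def expected_new_infections(n_susceptible_visitors,
                                n_infectious_visitors,
                                venue_area_m2,
                                dwell_time_hours,
                                transmission_rate):
        """Expected new infections at one venue during one time block."""
        infectious_density = n_infectious_visitors / venue_area_m2
        # Per-visitor infection probability from an exponential dose-response form.
        p_infection = 1.0 - math.exp(-transmission_rate * infectious_density * dwell_time_hours)
        return n_susceptible_visitors * p_infection

    # Hypothetical comparison: a crowded small restaurant vs. a roomy store.
    print("restaurant:", expected_new_infections(40, 2, 150.0, 1.5, 5.0))
    print("large store:", expected_new_infections(40, 2, 2000.0, 0.5, 5.0))
    ```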

    The researchers refined the model until it was able to determine the transmission rate of the virus in each city. The rate varied from city to city depending on factors ranging from how often people ventured out of the house to which types of locations they visited.
    Once the researchers obtained transmission rates for the 10 metropolitan areas, they tested the model during phase two by asking it to multiply the rate for each city against their database of mobility patterns to predict new COVID-19 infections. The predictions tracked closely with the actual reports from health officials, giving the researchers confidence in the model’s reliability.
    Predicting infections
    By combining their model with demographic data available from a database of 57,000 census block groups — neighborhoods of 600 to 3,000 people — the researchers show how minority and low-income people leave home more often because their jobs require it, and shop at smaller, more crowded establishments than people with higher incomes, who can work from home, use home delivery to avoid shopping and patronize roomier businesses when they do go out. For instance, the study revealed that it’s roughly twice as risky for non-white populations to buy groceries compared to whites. “By merging mobility, demographic and epidemiological datasets, we were able to use our model to analyze the effectiveness and equity of different reopening policies,” Chang said.
    The team has made its tools and data publicly available so other researchers can replicate and build on the findings.
    “In principle, anyone can use this model to understand the consequences of different stay-at-home and business closure policy decisions,” said Leskovec, whose team is now working to develop the model into a user-friendly tool for policymakers and public health officials.
    Jure Leskovec is an associate professor of computer science at Stanford Engineering, a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute. David Grusky is Edward Ames Edmonds Professor in the School of Humanities and Sciences, and a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
    This research was supported by the National Science Foundation, the Stanford Data Science Initiative, the Wu Tsai Neurosciences Institute and the Chan Zuckerberg Biohub.