More stories

  • Ankle exoskeleton enables faster walking

    Being unable to walk quickly can be frustrating and problematic, but it is a common issue, especially as people age. Noting the pervasiveness of slower-than-desired walking, engineers at Stanford University have tested how well a prototype exoskeleton system they have developed — which attaches around the shin and into a running shoe — increased the self-selected walking speed of people in an experimental setting.
    The exoskeleton is externally powered by motors and controlled by an algorithm. When the researchers optimized it for speed, participants walked, on average, 42 percent faster than when they were wearing normal shoes and no exoskeleton. The results of this study were published April 20 in IEEE Transactions on Neural Systems and Rehabilitation Engineering.
    “We were hoping that we could increase walking speed with exoskeleton assistance, but we were really surprised to find such a large improvement,” said Steve Collins, associate professor of mechanical engineering at Stanford and senior author of the paper. “Forty percent is huge.”
    For this initial set of experiments, the participants were young, healthy adults. Given their impressive results, the researchers plan to run future tests with older adults and to look at other ways the exoskeleton design can be improved. They also hope to eventually create an exoskeleton that can work outside the lab, though that goal is still a ways off.
    “My research mission is to understand the science of biomechanics and motor control behind human locomotion and apply that to enhance the physical performance of humans in daily life,” said Seungmoon Song, a postdoctoral fellow in mechanical engineering and lead author of the paper. “I think exoskeletons are very promising tools that could achieve that enhancement in physical quality of life.”
    Walking in the loop
    The ankle exoskeleton system tested in this research is an experimental emulator that serves as a testbed for trying out different designs. It has a frame that fastens around the upper shin and into an integrated running shoe that the participant wears. It is attached to large motors that sit beside the walking surface and pull a tether that runs up the length of the back of the exoskeleton. Controlled by an algorithm, the tether tugs the wearer’s heel upward, helping them point their toe down as they push off the ground.
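    The story describes the controller only at a high level. As a rough illustration of how optimizing exoskeleton assistance for walking speed might be set up, here is a minimal human-in-the-loop search over a parameterized assistance profile; the four-parameter profile, the simple evolutionary search, and the simulated speed measurement are illustrative assumptions, not the authors' published method.

        # Hypothetical sketch of human-in-the-loop optimization for an ankle-exoskeleton
        # emulator. The assistance-profile parameterization, the search strategy, and the
        # simulated "treadmill trial" below are illustrative assumptions only.
        import numpy as np

        rng = np.random.default_rng(0)

        def measure_self_selected_speed(params):
            """Stand-in for a treadmill trial: apply the assistance profile defined by
            `params` (e.g., peak torque, peak timing, rise and fall durations) and return
            the participant's self-selected walking speed in m/s. Simulated here with a
            smooth synthetic objective plus measurement noise."""
            target = np.array([0.8, 0.53, 0.25, 0.10])  # arbitrary "best" profile for the toy objective
            return 1.25 + 0.5 * np.exp(-np.sum((params - target) ** 2)) + rng.normal(0, 0.02)

        def optimize_assistance(n_generations=10, pop_size=8, sigma=0.1):
            """Simple evolutionary search over the four profile parameters."""
            mean = np.array([0.5, 0.5, 0.2, 0.1])  # initial guess
            for gen in range(n_generations):
                pop = np.clip(mean + sigma * rng.standard_normal((pop_size, 4)), 0.0, 1.0)
                speeds = np.array([measure_self_selected_speed(p) for p in pop])
                elite = pop[np.argsort(speeds)[-pop_size // 2:]]  # keep the fastest half
                mean = elite.mean(axis=0)                         # recombine toward them
                print(f"generation {gen}: best speed {speeds.max():.2f} m/s")
            return mean

        best_profile = optimize_assistance()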

  • Quantum steering for more precise measurements

    Quantum systems consisting of several particles can be used to measure magnetic or electric fields more precisely. A young physicist at the University of Basel has now proposed a new scheme for such measurements that uses a particular kind of correlation between quantum particles.
    In quantum information, the fictitious agents Alice and Bob are often used to illustrate complex communication tasks. In one such process, Alice can use entangled quantum particles such as photons to transmit or “teleport” a quantum state — unknown even to herself — to Bob, something that is not feasible using traditional communications.
    However, it has been unclear whether Alice and Bob can use similar quantum states for tasks other than communication. A young physicist at the University of Basel has now shown how particular types of quantum states can be used to perform measurements with higher precision than standard measurement strategies allow. The results have been published in the scientific journal Nature Communications.
    Quantum steering at a distance
    Together with researchers in Great Britain and France, Dr. Matteo Fadel, who works at the Physics Department of the University of Basel, has investigated how high-precision measurement tasks can be tackled with the help of so-called quantum steering.
    Quantum steering refers to the fact that, for certain quantum states of two-particle systems, a measurement on the first particle allows one to make more precise predictions about the outcomes of measurements on the second particle than quantum mechanics would permit on the basis of measurements on the second particle alone. It is as if the measurement on the first particle had “steered” the state of the second one.
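    As general background on how such conditional predictions are made quantitative (and not necessarily the exact criterion used in the Nature Communications paper), the Reid criterion for continuous-variable steering says that Alice "steers" Bob when the product of the variances of Bob's quadratures, as inferred from Alice's outcomes, drops below the bound that any non-steerable state must obey:

        % Reid-type steering criterion, sketched in LaTeX (general background only;
        % assumptions: units with \hbar = 1 and quadratures obeying [X_B, P_B] = i).
        % \Delta^2_{inf} is the variance of Bob's quadrature inferred from Alice's result.
        \[
          \Delta^2_{\mathrm{inf}} X_B \, \Delta^2_{\mathrm{inf}} P_B \;<\; \tfrac{1}{4},
          \qquad
          \Delta^2_{\mathrm{inf}} X_B \equiv \operatorname{Var}\!\bigl( X_B - X_{\mathrm{est}}(\text{Alice's outcome}) \bigr).
        \]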

  • Machine learning model generates realistic seismic waveforms

    A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth.
    “To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma,” said Youzuo Lin, a computational scientist in Los Alamos National Laboratory’s Geophysics group and principal investigator of the project. “Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms.”
    Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets. In recent years, automatic detection methods based on machine learning have improved the accuracy and efficiency of data collection; however, the accuracy of those methods relies on access to a large amount of high-quality, labeled training data, often tens of thousands of records or more.
    To resolve this data dilemma, the research team developed SeismoGen based on a generative adversarial network (GAN), which is a type of deep generative model that can generate high-quality synthetic samples in multiple domains. In other words, deep generative models learn the statistical structure of a dataset and use it to create new data that could pass as real.
    Once trained, the SeismoGen model is capable of producing realistic seismic waveforms across multiple label classes. When applied to real Earth seismic datasets in Oklahoma, the team saw that data augmentation with SeismoGen-generated synthetic waveforms could be used to improve earthquake detection algorithms in instances when only small amounts of labeled training data are available.
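    As a concrete illustration of the GAN idea described above, here is a minimal generator/discriminator training loop for 1-D waveform-like signals in PyTorch. It is not the SeismoGen architecture, which the story does not detail; the layer sizes, the toy "waveform" data, and the absence of label conditioning are illustrative assumptions.

        # Minimal GAN sketch for 1-D waveform-like signals (illustrative only; not SeismoGen).
        import math
        import torch
        import torch.nn as nn

        WAVE_LEN, NOISE_DIM, BATCH = 128, 32, 64

        class Generator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                    nn.Linear(256, WAVE_LEN), nn.Tanh(),  # outputs a normalized waveform
                )
            def forward(self, z):
                return self.net(z)

        class Discriminator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(WAVE_LEN, 256), nn.LeakyReLU(0.2),
                    nn.Linear(256, 1),  # real-vs-synthetic logit
                )
            def forward(self, x):
                return self.net(x)

        def toy_real_batch(n):
            """Stand-in for labeled field waveforms: decaying sinusoids plus noise."""
            t = torch.linspace(0, 1, WAVE_LEN)
            freq = 5 + 10 * torch.rand(n, 1)
            return torch.sin(2 * math.pi * freq * t) * torch.exp(-3 * t) + 0.05 * torch.randn(n, WAVE_LEN)

        G, D = Generator(), Discriminator()
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        for step in range(200):
            real = toy_real_batch(BATCH)
            fake = G(torch.randn(BATCH, NOISE_DIM))
            # Discriminator step: score real waveforms as 1 and generated ones as 0.
            loss_d = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake.detach()), torch.zeros(BATCH, 1))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()
            # Generator step: try to make the discriminator call synthetic waveforms real.
            loss_g = bce(D(fake), torch.ones(BATCH, 1))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()

    Once trained, samples drawn from the generator could in principle be added to a small labeled training set, which mirrors the data-augmentation use described above.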
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.

  • Artificial intelligence model predicts which key of the immune system opens the locks of coronavirus

    With an artificial intelligence (AI) method developed by researchers at Aalto University and the University of Helsinki, researchers can now link immune cells to their targets and, for example, determine which white blood cells recognize SARS-CoV-2. The tool has broad applications in understanding the function of the immune system in infections, autoimmune disorders, and cancer.
    The human immune defense is based on the ability of white blood cells to accurately identify disease-causing pathogens and to initiate a defense reaction against them. The immune defense is able to recall the pathogens it has encountered previously, which is what the effectiveness of vaccines, for example, is based on. The immune defense is thus the most accurate patient record system, carrying a history of all the pathogens an individual has faced. This information, however, has previously been difficult to obtain from patient samples.
    The learning immune system can be roughly divided into two parts, of which B cells are responsible for producing antibodies against pathogens, while T cells are responsible for destroying their targets. The measurement of antibodies by traditional laboratory methods is relatively simple, which is why antibodies already have several uses in healthcare.
    “Although it is known that the role of T cells in the defense response against for example viruses and cancer is essential, identifying the targets of T cells has been difficult despite extensive research,” says Satu Mustjoki, Professor of Translational Hematology.
    AI helps to identify new key-lock pairs
    T cells identify their targets by a lock-and-key principle, where the key is the T cell receptor on the surface of the T cell and the lock is the protein presented on the surface of an infected cell. An individual is estimated to carry more distinct T cell keys than there are stars in the Milky Way, making the mapping of T cell targets with laboratory techniques cumbersome.
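    To make the key-and-lock framing concrete, here is a toy sketch of a classifier that scores T cell receptor sequences against presented peptides from simple sequence features. The example pairs, the k-mer features, and the logistic-regression model are placeholders for illustration; the story does not describe the architecture of the Aalto and Helsinki researchers' actual tool.

        # Toy TCR-peptide recognition classifier (illustrative placeholders only).
        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical labeled examples: (TCR CDR3 "key", presented peptide "lock", recognized?)
        pairs = [
            ("CASSLGQAYEQYF", "YLQPRTFLL", 1),
            ("CASSIRSSYEQYF", "YLQPRTFLL", 1),
            ("CASSPDRGGYTF",  "GILGFVFTL", 0),
            ("CASSQETQYF",    "NLVPMVATV", 0),
        ]
        texts  = [f"{tcr} {pep}" for tcr, pep, _ in pairs]  # concatenate key and lock
        labels = np.array([y for _, _, y in pairs])

        # Character 3-mers as crude sequence features; a real model would use far richer
        # representations and vastly more training data.
        model = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(3, 3)),
            LogisticRegression(max_iter=1000),
        )
        model.fit(texts, labels)

        # Probability that a new receptor recognizes a given peptide (toy output).
        print(model.predict_proba(["CASSLGQGAYEQYF YLQPRTFLL"])[:, 1])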

  • Scientists glimpse signs of a puzzling state of matter in a superconductor

    Unconventional superconductors contain a number of exotic phases of matter that are thought to play a role, for better or worse, in their ability to conduct electricity with 100% efficiency at much higher temperatures than scientists had thought possible — although still far short of the temperatures that would allow their wide deployment in perfectly efficient power lines, maglev trains and so on.
    Now scientists at the Department of Energy’s SLAC National Accelerator Laboratory have glimpsed the signature of one of those phases, known as pair-density waves or PDW, and confirmed that it’s intertwined with another phase known as charge density wave (CDW) stripes — wavelike patterns of higher and lower electron density in the material.
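    As general background on the terminology (standard notation from the cuprate literature, not a description of the SLAC measurement itself), the two phases can be written as order parameters, and their coupling shows why they tend to appear intertwined:

        % A pair-density wave (PDW) is a superconducting order parameter that oscillates
        % in space at wavevector Q; a charge density wave (CDW) is a modulation of the
        % charge density itself at wavevector K:
        \[
          \Delta_{\mathrm{PDW}}(\mathbf{r}) = \Delta_{\mathbf{Q}}\, e^{i\mathbf{Q}\cdot\mathbf{r}}
                                            + \Delta_{-\mathbf{Q}}\, e^{-i\mathbf{Q}\cdot\mathbf{r}},
          \qquad
          \rho_{\mathrm{CDW}}(\mathbf{r}) = \rho_{0} + \rho_{\mathbf{K}} \cos(\mathbf{K}\cdot\mathbf{r} + \phi).
        \]
        % Because the combination \Delta_{\mathbf{Q}} \Delta_{-\mathbf{Q}}^{*} carries net
        % wavevector 2Q, a PDW generically induces a charge modulation at K = 2Q, which is
        % one reason the two orders show up together ("intertwined") in scattering data.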
    Observing and understanding PDW and its correlations with other phases may be essential for understanding how superconductivity emerges in these materials, allowing electrons to pair up and travel with no resistance, said Jun-Sik Lee, a SLAC staff scientist who led the research at the lab’s Stanford Synchrotron Radiation Lightsource (SSRL).
    Even indirect evidence of the PDW phase intertwined with charge stripes, he said, is an important step on the long road toward understanding the mechanism behind unconventional superconductivity, which has eluded scientists over more than 30 years of research.
    Lee added that the method his team used to make this observation, which involved dramatically increasing the sensitivity of a standard X-ray technique known as resonant soft X-ray scattering (RSXS) so it could see the extremely faint signals given off by these phenomena, has potential for directly sighting both the PDW signature and its correlations with other phases in future experiments. That’s what they plan to work on next.
    The scientists described their findings today in Physical Review Letters.

  • Mechanical engineers develop new high-performance artificial muscle technology

    In the field of robotics, researchers are continually looking for the fastest, strongest, most efficient and lowest-cost ways to actuate, or enable, robots to make the movements needed to carry out their intended functions.
    The quest for new and better actuation technologies and ‘soft’ robotics is often based on principles of biomimetics, in which machine components are designed to mimic the movement of human muscles — and ideally, to outperform them. Despite the performance of actuators like electric motors and hydraulic pistons, their rigid form limits how they can be deployed. As robots transition to more biological forms and as people ask for more biomimetic prostheses, actuators need to evolve.
    Associate professor (and alum) Michael Shafer and professor Heidi Feigenbaum of Northern Arizona University’s Department of Mechanical Engineering, along with graduate student researcher Diego Higueras-Ruiz, published a paper in Science Robotics presenting a new, high-performance artificial muscle technology they developed in NAU’s Dynamic Active Systems Laboratory. The paper, titled “Cavatappi artificial muscles from drawing, twisting, and coiling polymer tubes,” details how the new technology enables more human-like motion thanks to its flexibility and adaptability, while also outperforming human skeletal muscle in several metrics.
    “We call these new linear actuators cavatappi artificial muscles based on their resemblance to the Italian pasta,” Shafer said.
    Because of their coiled, or helical, structure, the actuators can generate more power, making them an ideal technology for bioengineering and robotics applications. In the team’s initial work, they demonstrated that cavatappi artificial muscles exhibit specific work and power metrics ten and five times higher than human skeletal muscles, respectively, and as they continue development, they expect to produce even higher levels of performance.
    “The cavatappi artificial muscles are based on twisted polymer actuators (TPAs), which were pretty revolutionary when they first came out because they were powerful, lightweight and cheap. But they were very inefficient and slow to actuate because you had to heat and cool them. Additionally, their efficiency is only about two percent,” Shafer said. “For the cavatappi, we get around this by using pressurized fluid to actuate, so we think these devices are far more likely to be adopted. These devices respond about as fast as we can pump the fluid. The big advantage is their efficiency. We have demonstrated contractile efficiency of up to about 45 percent, which is a very high number in the field of soft actuation.”
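    For readers wondering how a contractile-efficiency number like the roughly 45 percent quoted above is defined for a fluid-driven actuator, it is typically the mechanical work delivered by the contracting muscle divided by the hydraulic work supplied by the pressurized fluid. A sketch with entirely made-up numbers (not measurements from the NAU paper):

        # Toy contractile-efficiency calculation for a fluid-driven actuator.
        # Every number below is invented for illustration.
        force_N         = 20.0    # hypothetical average contraction force (N)
        stroke_m        = 0.010   # hypothetical contraction distance (10 mm)
        pressure_Pa     = 0.5e6   # hypothetical supply pressure (0.5 MPa)
        fluid_volume_m3 = 0.9e-6  # hypothetical fluid volume pushed in (0.9 mL)

        work_out_J = force_N * stroke_m             # mechanical work out: W = F * d
        work_in_J  = pressure_Pa * fluid_volume_m3  # hydraulic work in:   W = P * dV

        efficiency = work_out_J / work_in_J
        print(f"contractile efficiency = {efficiency:.0%}")  # about 44% with these toy numbers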
    The engineers think this technology could be used in soft robotics applications, conventional robotic actuators (for example, for walking robots), or even potentially in assistive technologies like exoskeletons or prostheses.
    “We expect that future work will include the use of cavatappi artificial muscles in many applications due to their simplicity, low-cost, lightweight, flexibility, efficiency and strain energy recovery properties, among other benefits,” Shafer said.
    Technology available for licensing and partnering opportunities
    Working with the NAU Innovations team, the inventors have taken steps to protect their intellectual property. The technology has entered the protection and early commercialization stage and is available for licensing and partnering opportunities. For more information, please contact NAU Innovations.
    Shafer joined NAU in 2013. His other research interests are related to energy harvesting, wildlife telemetry systems and unmanned aerial systems. Feigenbaum joined NAU in 2007, and her other research interests include ratcheting in metals and smart materials. The graduate student on this project, Diego Higueras-Ruiz, received his MS in Mechanical Engineering from NAU in 2018 and will be completing his PhD in Bioengineering in Fall 2021. This work has been supported through a grant from NAU’s Research and Development Preliminary Studies program.

  • AI algorithms can influence people's voting and dating decisions in experiments

    In a new series of experiments, artificial intelligence (A.I.) algorithms were able to influence people’s preferences for fictitious political candidates or potential romantic partners, depending on whether recommendations were explicit or covert. Ujué Agudo and Helena Matute of Universidad de Deusto in Bilbao, Spain, present these findings in the open-access journal PLOS ONE on April 21, 2021.
    From Facebook to Google search results, many people encounter A.I. algorithms every day. Private companies are conducting extensive research on the data of their users, generating insights into human behavior that are not publicly available. Academic social science research lags behind private research, and public knowledge on how A.I. algorithms might shape people’s decisions is lacking.
    To shed light on this question, Agudo and Matute conducted a series of experiments that tested the influence of A.I. algorithms in different contexts. They recruited participants to interact with algorithms that presented photos of fictitious political candidates or online dating candidates, and asked the participants to indicate whom they would vote for or message. The algorithms promoted some candidates over others, either explicitly (e.g., “90% compatibility”) or covertly, such as by showing their photos more often than others’.
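    As a sketch of how the two manipulation conditions described above could be scripted in an experiment, the explicit condition tags a favored candidate with a salient recommendation, while the covert condition simply shows that candidate more often. The candidate names, the 90% tag, and the exposure ratio below are illustrative placeholders, not the authors' materials.

        # Sketch of explicit vs. covert algorithmic promotion in a choice experiment.
        import random

        candidates = ["candidate_A", "candidate_B", "candidate_C", "candidate_D"]
        promoted = "candidate_B"  # the candidate the algorithm favors

        def explicit_condition():
            """Explicit promotion: every candidate is shown once, but the promoted one
            carries a salient recommendation label."""
            trials = []
            for c in random.sample(candidates, len(candidates)):
                label = "90% compatibility" if c == promoted else ""
                trials.append((c, label))
            return trials

        def covert_condition(exposure_ratio=3, n_trials=24):
            """Covert promotion: no labels, but the promoted candidate's photo appears
            exposure_ratio times as often as each of the others."""
            pool = [c for c in candidates if c != promoted] + [promoted] * exposure_ratio
            return [(random.choice(pool), "") for _ in range(n_trials)]

        print(explicit_condition())
        print(covert_condition()[:8])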
    Overall, the experiments showed that the algorithms had a significant influence on participants’ decisions of whom to vote for or message. For political decisions, explicit manipulation significantly influenced decisions, while covert manipulation was not effective. The opposite effect was seen for dating decisions.
    The researchers speculate these results might reflect people’s preference for human explicit advice when it comes to subjective matters such as dating, while people might prefer algorithmic advice on rational political decisions.
    In light of their findings, the authors express support for initiatives that seek to boost the trustworthiness of A.I., such as the European Commission’s Ethics Guidelines for Trustworthy AI and DARPA’s explainable AI (XAI) program. Still, they caution that more publicly available research is needed to understand human vulnerability to algorithms.
    Meanwhile, the researchers call for efforts to educate the public on the risks of blind trust in recommendations from algorithms. They also highlight the need for discussions around ownership of the data that drives these algorithms.
    The authors add: “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence.”
    Story Source:
    Materials provided by PLOS.

  • Pepper the robot talks to itself to improve its interactions with people

    Ever wondered why your virtual home assistant doesn’t understand your questions? Or why your navigation app took you down a side street instead of the highway? In a study published April 21st in the journal iScience, Italian researchers designed a robot that “thinks out loud” so that users can hear its thought process and better understand the robot’s motivations and decisions.
    “If you were able to hear what the robots are thinking, then the robot might be more trustworthy,” says co-author Antonio Chella, describing first author Arianna Pipitone’s idea that launched the study at the University of Palermo. “The robots will be easier to understand for laypeople, and you don’t need to be a technician or engineer. In a sense, we can communicate and collaborate with the robot better.”
    Inner speech is common in people and can be used to gain clarity, seek moral guidance, and evaluate situations in order to make better decisions. To explore how inner speech might impact a robot’s actions, the researchers built a robot called Pepper that speaks to itself. They then asked people to set the dinner table with Pepper according to etiquette rules to study how Pepper’s self-dialogue skills influence human-robot interactions.
    The scientists found that, with the help of inner speech, Pepper is better at solving dilemmas. In one experiment, the user asked Pepper to place the napkin at the wrong spot, contradicting the etiquette rule. Pepper started asking itself a series of self-directed questions and concluded that the user might be confused. To be sure, Pepper confirmed the user’s request, which led to further inner speech.
    “Ehm, this situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants,” Pepper said to itself, placing the napkin at the requested spot. Through Pepper’s inner voice, the user can trace its thoughts to learn that Pepper was facing a dilemma and solved it by prioritizing the human’s request. The researchers suggest that the transparency could help establish human-robot trust.
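    The behavior in this example can be pictured as a small control loop in which the robot voices self-directed questions before acting, checks the request against its etiquette rules, and defers to the user when the two conflict. The rule table and phrasing below are invented for illustration; this is not the actual Pepper implementation.

        # Illustrative inner-speech loop for a table-setting robot (not the real Pepper code).
        ETIQUETTE = {"napkin": "left of the plate", "fork": "left of the plate", "knife": "right of the plate"}

        def inner_speech(thought):
            print(f"[robot, to itself] {thought}")

        def ask_user(question):
            print(f"[robot, to user] {question}")
            return True  # stand-in for the user's confirmation

        def place_item(item, requested_spot):
            expected = ETIQUETTE.get(item)
            if requested_spot == expected:
                inner_speech(f"The {item} goes {expected}; that matches the rule. Placing it.")
                return requested_spot
            # The request contradicts a rule: reason about it out loud before acting.
            inner_speech(f"The rule says the {item} goes {expected}, but I was asked to put it "
                         f"{requested_spot}. Is the user confused?")
            if ask_user(f"Are you sure you want the {item} {requested_spot}?"):
                inner_speech("This situation upsets me. I would never break the rules, "
                             "but I cannot upset the user, so I am doing what they want.")
                return requested_spot
            inner_speech("Good, we agree on the rule after all.")
            return expected

        place_item("napkin", "right of the plate")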
    Comparing Pepper’s performance with and without inner speech, Pipitone and Chella discovered that the robot had a higher task-completion rate when engaging in self-dialogue. Thanks to inner speech, Pepper exceeded the international standard functional and moral requirements for collaborative robots, guidelines that machines, from humanoid AI to mechanical arms on the manufacturing line, are expected to follow.
    “People were very surprised by the robot’s ability,” says Pipitone. “The approach makes the robot different from typical machines because it has the ability to reason, to think. Inner speech enables alternative solutions for the robots and humans to collaborate and get out of stalemate situations.”
    Although hearing the inner voice of robots enriches the human-robot interaction, some people might find it inefficient because the robot spends more time completing tasks when it talks to itself. The robot’s inner speech is also limited to the knowledge that researchers gave it. Still, Pipitone and Chella say their work provides a framework to further explore how self-dialogue can help robots focus, plan, and learn.
    “In some sense, we are creating a generational robot that likes to chat,” says Chella. The authors say that, from navigation apps and the camera on your phone to medical robots in the operating room, machines and computers alike can benefit from this chatty feature. “Inner speech could be useful in all the cases where we trust the computer or a robot for the evaluation of a situation,” Chella says.
    Story Source:
    Materials provided by Cell Press.