More stories

  • Exoskeletons have a problem: They can strain the brain

    Exoskeletons — wearable devices used by workers on assembly lines or in warehouses to alleviate stress on their lower backs — may compete for valuable resources in the brain while people work, canceling out the physical benefits of wearing them, a new study suggests.
    The study, published recently in the journal Applied Ergonomics, found that when people wore exoskeletons while performing tasks that required them to think about their actions, their brains worked overtime and their bodies competed with the exoskeletons rather than working in harmony with them. The study indicates that exoskeletons may place enough burden on the brain that potential benefits to the body are negated.
    “It’s almost like dancing with a really bad partner,” said William Marras, senior author of the study, professor of integrated systems engineering and director of The Ohio State University Spine Research Institute.
    “The exoskeleton is trying to anticipate your moves, but it’s not going well, so you fight with the exoskeleton, and that causes this change in your brain which changes the muscle recruitment — and could cause higher forces on your lower back, potentially leading to pain and possible injuries.”
    For the study, researchers asked 12 people — six men and six women — to repeatedly lift a medicine ball in two 30-minute sessions. For one of the sessions, the participants wore an exoskeleton. For the other, they did not.
    The exoskeleton, which is attached to the user’s chest and legs, is designed to help control posture and motion during lifting to protect the lower back and reduce the possibility of injury.

  • New simulator helps robots sharpen their cutting skills

    Researchers from the University of Southern California (USC) Department of Computer Science and NVIDIA have unveiled a new simulator for robotic cutting that can accurately reproduce the forces acting on a knife as it slices through common foodstuffs, such as fruit and vegetables. The system could also simulate cutting through human tissue, offering potential applications in surgical robotics. The paper was presented at the Robotics: Science and Systems (RSS) Conference 2021 on July 16, where it received the Best Student Paper Award.
    In the past, researchers have had trouble creating intelligent robots that replicate cutting. One challenge: in the real world, no two objects are the same, and current robotic cutting systems struggle with variation. To overcome this, the team devised a unique approach to simulate cutting by introducing springs between the two halves of the object being cut, represented by a mesh. These springs are weakened over time in proportion to the force exerted by the knife on the mesh.
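The spring-weakening idea can be sketched in a few lines. The following is a minimal illustration of the concept as described above, not the authors' differentiable simulator; the function name, the linear damage law, and all parameter values are assumptions:

```python
# Minimal sketch of the paper's core idea: springs connect the two halves of
# the object being cut, and each spring is weakened over time in proportion
# to the force the knife exerts on the mesh near it.

def weaken_springs(stiffness, knife_forces, damage_rate, dt):
    """Return updated spring stiffnesses after one timestep.

    stiffness    -- current stiffness of each spring (N/m)
    knife_forces -- force the knife exerts on the mesh near each spring (N)
    damage_rate  -- hypothetical damage coefficient (1/(N*s))
    dt           -- timestep (s)
    """
    new_stiffness = []
    for k, f in zip(stiffness, knife_forces):
        k_next = k - damage_rate * f * k * dt   # damage proportional to force
        new_stiffness.append(max(k_next, 0.0))  # a fully damaged spring is cut
    return new_stiffness

# Toy usage: three springs, the middle one directly under the blade.
springs = [100.0, 100.0, 100.0]
for _ in range(50):
    springs = weaken_springs(springs, knife_forces=[0.0, 5.0, 1.0],
                             damage_rate=0.1, dt=0.1)
# The spring under the blade weakens fastest; untouched springs keep their
# stiffness, so the mesh separates only along the knife's path.
```

In the actual simulator this update is differentiable, which is what allows the damage parameters to be tuned automatically from real force measurements.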
    “What makes ours a special kind of simulator is that it is ‘differentiable,’ which means that it can help us automatically tune these simulation parameters from real-world measurements,” said lead author Eric Heiden, a PhD student in computer science at USC. “That’s important because closing this reality gap is a significant challenge for roboticists today. Without this, robots may never break out of simulation into the real world.”
    To transfer skills from simulation to reality, the simulator must be able to model a real system. In one of the experiments, the researchers used a dataset of force profiles from a physical robot to produce highly accurate predictions of how the knife would move in real life. In addition to applications in the food processing industry, where robots could take over dangerous tasks like repetitive cutting, the simulator could improve force haptic feedback accuracy in surgical robots, helping to guide surgeons and prevent injury.
    “Here, it is important to have an accurate model of the cutting process and to be able to realistically reproduce the forces acting on the cutting tool as different kinds of tissue are being cut,” said Heiden. “With our approach, we are able to automatically tune our simulator to match such different types of material and achieve highly accurate simulations of the force profile.” In ongoing research, the team is applying the system to real-world robots.
    Co-authors are Miles Macklin, Yashraj S. Narang, Dieter Fox, Animesh Garg, and Fabio Ramos, all of NVIDIA.
    Video: https://www.youtube.com/watch?v=bN4yqHhfAfQ
    The full paper (open access) and blog post are available here: https://diff-cutting-sim.github.io/
    Story Source:
    Materials provided by University of Southern California. Original written by Caitlin Dawson. Note: Content may be edited for style and length.

  • New quantum research gives insights into how quantum light can be mastered

    A team of scientists at Los Alamos National Laboratory proposes that modulated quantum metasurfaces can control all properties of photonic qubits, a breakthrough that could impact the fields of quantum information, communications, sensing and imaging, as well as energy and momentum harvesting. The results of the study appear in Physical Review Letters, published by the American Physical Society.
    “People have studied classical metasurfaces for a long time,” says Diego Dalvit, who works in the Condensed Matter and Complex Systems group at the Laboratory’s Theoretical Division. “But we came up with this new idea, which was to modulate in time and space the optical properties of a quantum metasurface that allow us to manipulate, on-demand, all degrees of freedom of a single photon, which is the most elementary unit of light.”
    Metasurfaces are ultrathin structures that can manipulate light in ways not usually seen in nature. In this case, the team developed a metasurface that looked like an array of rotated crosses, which they can then manipulate with lasers or electrical pulses. They then proposed to shoot a single photon through the metasurface, where the photon splits into a superposition of many colors, paths, and spinning states that are all intertwined, generating so-called quantum entanglement — meaning the single photon is capable of inheriting all these different properties at once.
    “When the metasurface is modulated with laser or electrical pulses, one can control the frequency of the refracted single photon, alter its angle of trajectory, the direction of its electric field, as well as its twist,” says Abul Azad from the Center for Integrated Nanotechnologies at the Laboratory’s Materials Physics and Applications Division.
    By manipulating these properties, the technology could be used to encode information in photons traveling within a quantum network, linking everything from banks and quantum computers to Earth and satellites. Encoding photons is particularly desirable in the field of cryptography because “eavesdroppers” are unable to view a photon without changing its quantum state, which would alert the sender and receiver that the information has been compromised.
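The tamper-evidence property described here is the basis of quantum key distribution protocols such as BB84, which is not part of this study; a toy classical simulation, with all names hypothetical, shows why an intercepting eavesdropper raises the error rate that sender and receiver observe:

```python
import random

def measure(bit, prep_basis, meas_basis, rng):
    """Measuring in the preparation basis returns the encoded bit;
    measuring in the other basis returns a random result."""
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def error_rate(n_photons, eavesdrop, seed=0):
    """Fraction of mismatches among bits where sender and receiver
    happened to use the same basis (the sifted key)."""
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)
        basis_a = rng.randint(0, 1)           # sender's random basis
        photon_bit, photon_basis = bit, basis_a
        if eavesdrop:                         # eavesdropper measures and resends
            basis_e = rng.randint(0, 1)
            photon_bit = measure(photon_bit, photon_basis, basis_e, rng)
            photon_basis = basis_e
        basis_b = rng.randint(0, 1)           # receiver's random basis
        received = measure(photon_bit, photon_basis, basis_b, rng)
        if basis_b == basis_a:                # compare only matching-basis bits
            matched += 1
            errors += received != bit
    return errors / matched

no_eve = error_rate(4000, eavesdrop=False)   # 0.0: sifted bits agree perfectly
with_eve = error_rate(4000, eavesdrop=True)  # ≈ 0.25: intrusion is visible
```

Because the eavesdropper cannot know the sender's basis, roughly a quarter of the compared bits end up flipped, which is exactly the kind of detectable disturbance the story describes.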
    The researchers are also working on how to pull photons from a vacuum by modulating the quantum metasurface.
    “The quantum vacuum is not empty but full of fleeting virtual photons. With the modulated quantum metasurface one is able to efficiently extract and convert virtual photons into real photon pairs,” says Wilton Kort-Kamp, who works in the Theoretical Division at the Lab’s Condensed Matter and Complex Systems group.
    Harnessing photons that exist in the vacuum and shooting them in one direction should create propulsion in the opposite direction. Similarly, stirring the vacuum should create rotational motion from the twisted photons. Structured quantum light could then one day be used to generate mechanical thrust, using only tiny amounts of energy to drive the metasurface.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • New framework applies machine learning to atomistic modeling

    Northwestern University researchers have developed a new framework using machine learning that improves the accuracy of interatomic potentials — the guiding rules describing how atoms interact — in new materials design. The findings could lead to more accurate predictions of how new materials transfer heat, deform, and fail at the atomic scale.
    Designing new nanomaterials is an important aspect of developing next-generation devices used in electronics, sensors, energy harvesting and storage, optical detectors, and structural materials. To design these materials, researchers create interatomic potentials through atomistic modeling, a computational approach that predicts how these materials behave by accounting for their properties at the smallest level. The process to establish materials’ interatomic potential — called parameterization — has required significant chemical and physical intuition, leading to less accurate predictions in new materials design.
    The researchers’ platform minimizes user intervention by employing multi-objective genetic algorithm optimization and statistical analysis techniques to screen promising interatomic potentials and parameter sets.
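A multi-objective genetic algorithm for parameterization can be sketched generically. The toy below is not the study's framework, which targets far richer potentials; it fits the two parameters of a Lennard-Jones pair potential to reference energies and forces, treats the two fitting errors as separate objectives, and screens the final population for nondominated parameter sets:

```python
import random

def lj_energy(r, eps, sig):
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def lj_force(r, eps, sig):
    sr6 = (sig / r) ** 6
    return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r

# Synthetic reference data from a "true" potential (eps=1.0, sig=1.0).
R = [0.95, 1.0, 1.1, 1.3, 1.6]
E_REF = [lj_energy(r, 1.0, 1.0) for r in R]
F_REF = [lj_force(r, 1.0, 1.0) for r in R]

def objectives(params):
    """Two objectives: squared error on energies, squared error on forces."""
    eps, sig = params
    e = sum((lj_energy(r, eps, sig) - y) ** 2 for r, y in zip(R, E_REF))
    f = sum((lj_force(r, eps, sig) - y) ** 2 for r, y in zip(R, F_REF))
    return e, f

def dominates(a, b):
    """True if objective vector a is at least as good everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(pop_size=30, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0)) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                      # crossover: average
            children.append(tuple(max(0.1, (x + y) / 2 + rng.gauss(0, 0.05))
                                  for x, y in zip(a, b)))  # gaussian mutation
        # Crude scalarized elitism; a full multi-objective GA (e.g. NSGA-II)
        # would rank by Pareto dominance instead of summing objectives.
        pop = sorted(pop + children, key=lambda p: sum(objectives(p)))[:pop_size]
    # Screen the final population for nondominated parameter sets.
    return [p for p in pop
            if not any(dominates(objectives(q), objectives(p))
                       for q in pop if q != p)]

front = evolve()  # candidate (eps, sig) sets trading off the two objectives
```

The screening step at the end mirrors the idea in the study: instead of hand-tuning one parameter set, the algorithm surfaces a family of promising candidates for the analyst to assess.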
    “The computational algorithms we developed provide analysts with a methodology to assess and avoid traditional shortcomings,” said Horacio Espinosa, James N. and Nancy J. Farley Professor in Manufacturing and Entrepreneurship and professor of mechanical engineering and (by courtesy) biomedical engineering and civil and environmental engineering, who led the research. “They also provide the means to tailor the parameterization to applications of interest.”
    The findings were published in a study titled “Parametrization of Interatomic Potentials for Accurate Large Deformation Pathways Using Multi-Objective Genetic Algorithms and Statistical Analyses: A Case Study on Two-Dimensional Materials” on July 21 in npj Computational Materials (Nature Partner Journals).
    Xu Zhang and Hoang Nguyen, both students in Northwestern Engineering’s Theoretical and Applied Mechanics (TAM) graduate program, were co-first authors of the study. Other co-authors included Jeffrey T. Paci of the University of Victoria, Canada, Subramanian Sankaranarayanan of Argonne National Laboratory, and Jose Mendoza of Michigan State University.

  • New algorithm flies drones faster than human racing pilots can

    To be useful, drones need to be quick. Because of their limited battery life they must complete whatever task they have — searching for survivors on a disaster site, inspecting a building, delivering cargo — in the shortest possible time. And they may have to do it by going through a series of waypoints like windows, rooms, or specific locations to inspect, adopting the best trajectory and the right acceleration or deceleration at each segment.
    Algorithm outperforms professional pilots
    The best human drone pilots are very good at doing this and have so far always outperformed autonomous systems in drone racing. Now, a research group at the University of Zurich (UZH) has created an algorithm that can find the quickest trajectory to guide a quadrotor — a drone with four propellers — through a series of waypoints on a circuit. “Our drone beat the fastest lap of two world-class human pilots on an experimental race track,” says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and the Rescue Robotics Grand Challenge of the NCCR Robotics, which funded the research.
    “The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones’ limitations,” says Scaramuzza. Previous works relied on simplifications of either the quadrotor system or the description of the flight path, and thus they were sub-optimal. “The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that,” adds Philipp Foehn, PhD student and first author of the paper.
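Time-optimal control with hard actuator limits typically keeps some actuator saturated at all times ("bang-bang" control). As a much-simplified, hypothetical illustration of that principle, and not the UZH algorithm itself, the fastest rest-to-rest motion in one dimension with a bounded acceleration is to accelerate flat-out for half the distance and brake flat-out for the rest:

```python
import math

def min_time_1d(distance, a_max):
    """Fastest rest-to-rest 1-D motion with |acceleration| <= a_max:
    accelerate at +a_max for half the distance, brake at -a_max after.
    Each half takes sqrt(distance / a_max), so the total time is
    t = 2 * sqrt(distance / a_max)."""
    return 2.0 * math.sqrt(distance / a_max)

# A vehicle that can sustain 10 m/s^2 crosses a 20 m gap, rest to rest, in:
t = min_time_1d(20.0, 10.0)  # 2 * sqrt(2) ≈ 2.83 s
```

A real quadrotor couples translation and rotation through four individual rotor-thrust limits, which is what makes computing a truly time-optimal multi-waypoint trajectory, as the UZH group does, far harder than this one-dimensional case.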
    External cameras provide position information in real-time
    The researchers had the algorithm and two human pilots fly the same quadrotor through a race circuit. They employed external cameras to precisely capture the motion of the drones and — in the case of the autonomous drone — to give real-time information to the algorithm on where the drone was at any moment. To ensure a fair comparison, the human pilots were given the opportunity to train on the circuit before the race. But the algorithm won: all its laps were faster than the human ones, and the performance was more consistent. This is not surprising, because once the algorithm has found the best trajectory it can reproduce it faithfully many times, unlike human pilots.
    Before it can be used commercially, the algorithm will need to become less computationally demanding, as it now takes up to an hour for a computer to calculate the time-optimal trajectory for the drone. Also, at the moment, the drone relies on external cameras to compute where it is at any moment. In future work, the scientists want to use onboard cameras. But the demonstration that an autonomous drone can in principle fly faster than human pilots is promising. “This algorithm can have huge applications in package delivery with drones, inspection, search and rescue, and more,” says Scaramuzza.
    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.

  • Rounding errors could make certain stopwatches pick wrong race winners, researchers show

    As the Summer Olympics draw near, the world will shift its focus to photo finishes and races determined by mere fractions of a second. Obtaining such split-second measurements relies on faultlessly rounding a raw time recorded by a stopwatch or electronic timing system to a submitted time.
    Researchers at the University of Surrey found that certain stopwatches commit rounding errors when converting raw times to final submitted times. In the American Journal of Physics, published by AIP Publishing, David Faux and Janet Godolphin outline a series of computer simulations based on procedures for converting raw race times for display.
    Faux was inspired when he encountered the issue firsthand while volunteering at a swim meet. While helping input times into the computer, he noticed that a large portion of the times were rounded to either the closest half-second or full second.
    “Later, when the frequencies of the digit pairs were plotted, a distinct pattern emerged,” he said. “We discovered that the distribution of digit pairs was statistically inconsistent with the hypothesis that each digit pair was equally likely, as one would expect from stopwatches.”
    Stopwatches and electronic timing systems use quartz oscillators to measure time intervals in increments of 0.0001 seconds. These raw times are then processed for display to 0.01 seconds, for example to the public at a sporting venue.
    Faux and Godolphin set to work simulating roughly 3 million race times corresponding to swimmers of all ages and abilities. As expected, the raw times indicated each fraction of a second had the same chance of being a race time. For example, there was a 1% chance that a race time ended in 0.55 seconds, and the same chance that it ended in 0.60 seconds.
    When they processed raw times through the standard display routine, the uniform distribution disappeared. Most times were correctly displayed.
    Where rounding errors occurred, they usually resulted in changes of one one-hundredth of a second. One raw time of 28.3194 seconds, for example, was converted to a displayed time of 28.31.
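The kind of error the study describes can be reproduced with a short simulation. This is a hedged sketch that assumes the display routine truncates the raw 0.0001-second count to hundredths rather than rounding it; the actual conversion algorithm used by timing systems is not public:

```python
import random

def display_truncate(raw):
    """Hypothetical display routine: truncate a raw time (0.0001 s resolution)
    to hundredths of a second, discarding the remaining digits."""
    ticks = round(raw * 10000)      # raw quartz count, 0.0001 s per tick
    return (ticks // 100) / 100.0   # drop the last two digits

def display_round(raw):
    """Correct conversion: round to the nearest hundredth of a second."""
    ticks = round(raw * 10000)
    return round(ticks / 100) / 100.0

# The story's example: 28.3194 s truncates to 28.31, but rounds to 28.32.
example_truncated = display_truncate(28.3194)  # 28.31
example_rounded = display_round(28.3194)       # 28.32

# Simulate many uniformly distributed raw times and count disagreements.
rng = random.Random(0)
raws = [round(rng.uniform(20.0, 40.0), 4) for _ in range(10000)]
errors = sum(1 for r in raws if display_truncate(r) != display_round(r))
# Truncation disagrees with correct rounding whenever the discarded digits
# are 50-99 out of 100, i.e. for roughly half of all raw times, and the
# discrepancy is always one one-hundredth of a second.
```

Under this assumed truncation rule the displayed-time distribution is biased low, which is exactly the sort of systematic anomaly the authors looked for in the recorded race times.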
    “The question we really need to answer is whether rounding errors are uncorrected in electronic timing systems used in sporting events worldwide,” Faux said. “We have so far been unable to unearth the actual algorithm that is used to translate a count of quartz oscillations to a display.”
    The researchers collected more than 30,000 race times from swimming competitions and will investigate if anomalous timing patterns appear in the collection, which would suggest the potential for rounding errors in major sporting events.
    The article “The floating point: Rounding error in timing devices” is authored by David A. Faux and Janet Godolphin. The article appears in American Journal of Physics.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Wearable brain-machine interface turns intentions into actions

    A new wearable brain-machine interface (BMI) system could improve the quality of life for people with motor dysfunction or paralysis, even those struggling with locked-in syndrome — when a person is fully conscious but unable to move or communicate.
    A multi-institutional, international team of researchers led by the lab of Woon-Hong Yeo at the Georgia Institute of Technology combined wireless soft scalp electronics and virtual reality in a BMI system that allows the user to imagine an action and wirelessly control a wheelchair or robotic arm.
    The team, which included researchers from the University of Kent (United Kingdom) and Yonsei University (Republic of Korea), describes the new motor imagery-based BMI system this month in the journal Advanced Science.
    “The major advantage of this system to the user, compared to what currently exists, is that it is soft and comfortable to wear, and doesn’t have any wires,” said Yeo, associate professor in the George W. Woodruff School of Mechanical Engineering.
    BMI systems are a rehabilitation technology that analyzes a person’s brain signals and translates that neural activity into commands, turning intentions into actions. The most common non-invasive method for acquiring those signals is electroencephalography (EEG), which typically requires a cumbersome electrode skull cap and a tangled web of wires.
    These devices generally rely on gels and pastes to maintain skin contact, require extensive set-up times, and are inconvenient and uncomfortable to use. They also often suffer from poor signal acquisition due to material degradation or motion artifacts, the ancillary “noise” that may be caused by something like teeth grinding or eye blinking. This noise shows up in the brain data and must be filtered out.
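Artifact removal can be illustrated generically. The sketch below is not the team's processing pipeline; it is a common baseline approach, shown with hypothetical names and values, that combines a first-order high-pass filter to remove slow drift with amplitude thresholding to flag blink-like spikes:

```python
import math

def highpass(samples, alpha=0.95):
    """First-order high-pass: subtract an exponential moving average
    (the slow drift) from the signal. alpha closer to 1 -> lower cutoff."""
    out, baseline = [], samples[0]
    for x in samples:
        baseline = alpha * baseline + (1 - alpha) * x
        out.append(x - baseline)
    return out

def flag_artifacts(samples, threshold):
    """Return indices whose amplitude exceeds a threshold, a crude stand-in
    for rejecting blink or teeth-grinding artifacts."""
    return [i for i, x in enumerate(samples) if abs(x) > threshold]

# Toy signal: a small oscillation riding on a slow drift, plus one
# blink-like spike injected at sample 100.
signal = [0.5 * math.sin(0.3 * i) + 0.01 * i for i in range(200)]
signal[100] += 8.0
clean = highpass(signal)
spikes = flag_artifacts(clean, threshold=4.0)  # flags only the artifact
```

Real EEG pipelines use more sophisticated tools, such as band-pass filters and independent component analysis, but the principle is the same: separate the slow, large-amplitude artifact energy from the brain rhythms of interest.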

  • How intricate Venus’s-flower-baskets manipulate the flow of seawater

    A Venus’s-flower-basket isn’t all show. This stunning deep-sea sponge can also alter the flow of seawater in surprising ways.

    A lacy, barrel-shaped chamber forms the sponge’s glassy skeleton. Flow simulations reveal how this intricate structure alters the way water moves around and through the sponge, helping it endure unforgiving ocean currents and perhaps feed and reproduce, researchers report online July 21 in Nature.

    Previous studies have found that the gridlike construction of a Venus’s-flower-basket (Euplectella aspergillum) is strong and flexible. “But no one has ever tried to see if these beautiful structures have fluid-dynamic properties,” says mechanical engineer Giacomo Falcucci of Tor Vergata University of Rome.

    Harnessing supercomputers, Falcucci and colleagues simulated how water flows around and through the sponge’s body, with and without different skeletal components such as the sponge’s myriad pores. If the sponge were a solid cylinder, water flowing past would form a turbulent wake immediately downstream that could jostle the creature, Falcucci says. Instead water flows through and around the highly porous Venus’s-flower-basket and forms a gentle zone of water that flanks the sponge and displaces turbulence downstream, the team found. That way, the sponge’s body endures less stress.

    Ridges that spiral around the outside of the sponge’s skeleton also somehow cause water to slow and swirl inside the structure, the simulations showed. As a result, food and reproductive cells that drift into the sponge would become trapped for up to twice as long as in the same sponge without ridges. That lingering could help the filter feeders catch more plankton. And because Venus’s-flower-baskets can reproduce sexually, it could also enhance the chances that free-floating sperm encounter eggs, the researchers say.

    It’s amazing that such beauty could be so functional, Falcucci says. The sponge’s flow-altering abilities, he says, might help inspire taller, more wind-resistant skyscrapers.

    This simulation shows how water flows around and through a Venus’s-flower-basket (gray). Ridges that spiral across the outside of the sponge cause water inside to slow and swirl, forming particle-trapping vortices. And the sponge’s shape creates a gentle zone of slower water immediately downstream, buffering the creature against turbulence. Vertical cross sections contrast the flow in the calm zone (nearer the sponge) with the turbulent zone (downstream). (G. Falcucci et al/Nature 2021)