More stories

  • Microfluidic-based soft robotic prosthetics promise relief for diabetic amputees

    Every 30 seconds, a leg is amputated somewhere in the world due to diabetes. These patients often suffer from neuropathy, a loss of sensation in the lower extremities, and are therefore unable to detect damage caused by an ill-fitting prosthesis, which can ultimately lead to further amputation.
    In Biomicrofluidics, by AIP Publishing, Canadian scientists reveal their development of a new type of prosthetic using microfluidics-enabled soft robotics that promises to greatly reduce skin ulcerations and pain in patients who have had an amputation between the ankle and knee.
    More than 80% of lower-limb amputations in the world are the result of diabetic foot ulcers, and the lower limb is known to swell at unpredictable times, resulting in volume changes of 10% or more.
    Typically, the prosthesis used after amputation includes fabric and silicone liners that can be added or removed to improve fit. The amputee needs to manually change the liners, but neuropathy leading to poor sensation makes this difficult and can lead to more damage to the remaining limb.
    “Rather than creating a new type of prosthetic socket, the typical silicone/fabric limb liner is replaced with a single layer of liner with integrated soft fluidic actuators as an interfacing layer,” said author Carolyn Ren, from the University of Waterloo. “These actuators are designed to be inflated to varying pressures based on the anatomy of the residual limb to reduce pain and prevent pressure ulcerations.”
    The scientists started with a recently developed device using pneumatic actuators to adjust the pressure of the prosthetic socket. This initial device was quite heavy, limiting its use in real-world situations.
    To address this problem, the group developed a way to miniaturize the actuators. They designed a microfluidic chip with 10 integrated pneumatic valves to control each actuator. The full system is controlled by a miniature air pump and two solenoid valves that provide air to the microfluidic chip. The control box is small and light enough to be worn as part of the prosthesis.
    Medical personnel with extensive experience in prosthetic devices were part of the team and provided a detailed map of desired pressures for the prosthetic socket. The group carried out extensive measurements of the contact pressure provided by each actuator and compared these to the desired pressure for a working prosthesis.
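    As a rough illustration of how such a system might regulate fit, the sketch below shows a simple per-actuator control loop; the pressure targets, tolerance, and hardware interface are assumptions made for this example, not details from the study.
    ```python
    # Illustrative per-actuator pressure regulation for a liner with 10 soft
    # actuators, each addressed by one valve on a microfluidic chip. All names,
    # numbers, and interfaces are assumptions for this example.

    TARGET_KPA = {f"actuator_{i}": 20.0 for i in range(10)}  # clinician-specified pressure map
    TOLERANCE_KPA = 1.0

    def control_step(read_pressure, set_valve):
        """One pass over all actuators.

        read_pressure(name) -> measured contact pressure in kPa (hypothetical sensor)
        set_valve(name, state), state in {"inflate", "vent", "hold"} (hypothetical driver)
        """
        for name, target in TARGET_KPA.items():
            measured = read_pressure(name)
            if measured < target - TOLERANCE_KPA:
                set_valve(name, "inflate")   # route air from the pump into this actuator
            elif measured > target + TOLERANCE_KPA:
                set_valve(name, "vent")      # release air to relieve local pressure
            else:
                set_valve(name, "hold")      # within tolerance; keep the valve closed
    ```
    In a device like the one described above, the targets would come from the clinicians’ pressure map; the pressure-sensing call is flagged as hypothetical because, as noted below, integrated sensors are still planned work.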
    All 10 actuators were found to produce pressures in the desired range, suggesting the new device will work well in the field. Future research will test the approach on a more accurate biological model.
    The group plans additional research to integrate pressure sensors directly into the prosthetic liner, perhaps using newly available knitted soft fabric that incorporates pressure sensing material.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Electrospinning promises major improvements in wearable technology

    Wearable technology has exploded in recent years. Spurred by advances in flexible sensors, transistors, and energy storage and harvesting devices, wearables encompass miniaturized electronic devices worn directly on the human skin for sensing a range of biophysical and biochemical signals or, as with smart watches, for providing convenient human-machine interfaces.
    Engineering wearables for optimal skin conformity, breathability, and biocompatibility without compromising the tunability of their mechanical, electrical, and chemical properties is no small task. The emergence of electrospinning — the fabrication of nanofibers with tunable properties from a polymer base — is an exciting development in the field.
    In APL Bioengineering, by AIP Publishing, researchers from Tufts University examined some of the latest advances in wearable electronic devices and systems being developed using electrospinning.
    “We show how the scientific community has realized many remarkable things using electrospun nanomaterials,” said author Sameer Sonkusale. “They have applied them for physical activity monitoring, motion tracking, measuring biopotentials, chemical and biological sensing, and even batteries, transistors, and antennas, among others.”
    Sonkusale and his colleagues showcase the many advantages electrospun materials have over conventional bulk materials.
    Their high surface-to-volume ratio endows them with enhanced porosity and breathability, which is important for long-term wearability. Also, with the appropriate blend of polymers, they can achieve superior biocompatibility.
    Conductive electrospun nanofibers provide high surface area electrodes, enabling both flexibility and performance improvements, including rapid charging and high energy storage capacities.
    “Also, their nanoscale features mean they adhere well to the skin without need for chemical adhesives, which is important if you are interested in measuring biopotentials, like heart activity using electrocardiography or brain activity using electroencephalography,” said Sonkusale.
    Electrospinning is considerably less expensive and more user-friendly than photolithography for realizing nanoscale transistor morphologies with superior electronic transport.
    The researchers are confident electrospinning will further establish its claim as a versatile, feasible, and inexpensive technique for the fabrication of wearable devices in the coming years. They note there are areas for improvement to be considered, including broadening the choice for materials and improving the ease of integration with human physiology.
    They suggest the aesthetics of wearables may be improved by making them smaller and, perhaps, with the incorporation of transparent materials, “almost invisible.”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Humans in the loop help robots find their way

    Just like us, robots can’t see through walls. Sometimes they need a little help to get where they’re going.
    Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks.
    The strategy called Bayesian Learning IN the Dark — BLIND, for short — is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.
    The peer-reviewed study led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in late May.
    The algorithm developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion,” according to the study.
    To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have “high degrees of freedom” — that is, a lot of moving parts.
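    One schematic way to picture this kind of human-in-the-loop pipeline (a loose interpretation, not the authors’ published algorithm) is a planner that keeps a belief about which path segments are safe and asks the operator about the uncertain ones before anything is executed:
    ```python
    # Schematic human-in-the-loop planning loop in the spirit of BLIND. The
    # belief model, feedback interface, and planner here are placeholders for
    # illustration, not the published implementation.

    class SegmentBelief:
        """Tracks a probability that each path segment is safe to execute."""
        def __init__(self):
            self.p_safe = {}

        def confidence(self, seg):
            return self.p_safe.get(seg, 0.5)        # unknown segments start at 0.5

        def update(self, seg, human_says_safe):
            # Crude stand-in for a Bayesian update from the operator's feedback.
            self.p_safe[seg] = 0.99 if human_says_safe else 0.01

    def plan_with_human(plan, ask_human, goal, rounds=10, threshold=0.9):
        """Plan, query the operator about uncertain segments, update, replan."""
        belief = SegmentBelief()
        for _ in range(rounds):
            path = plan(goal, belief)                        # e.g. a sampling-based planner
            uncertain = [s for s in path if belief.confidence(s) < threshold]
            if not uncertain:
                return path                                  # every segment believed safe
            for seg in uncertain:
                belief.update(seg, ask_human(seg))           # operator labels the segment
        return None                                          # no confidently safe path found
    ```
    The property mirrored here is the one the study highlights: no motion is executed until the combined human-robot belief marks the whole path as safe.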

  • Processing photons in picoseconds

    Light has long been used to transmit information in many of our everyday electronic devices. Because light is made of quantum particles called photons, it will also play an important role in information processing in the coming generation of quantum devices. But first, researchers need to gain control of individual photons. Writing in Optica, Columbia Engineers propose using a time lens.
    “Just like an ordinary magnifying glass can zoom in on some spatial phenomena you wouldn’t otherwise be able to see, a time lens lets you resolve details on a temporal scale,” said Chaitali Joshi, the first author of the work and former PhD student in Alexander Gaeta’s lab. A laser is a focused beam of many, many, many photons oscillating through space at a particular frequency; the team’s time lens lets them pick out individual particles of light faster than ever before.
    The experimental set-up consists of two laser beams that “mix” with a signal photon to create another packet of light at a different frequency. With their time lens, Joshi and her colleagues were able to identify single photons from a larger beam with picosecond resolution. That’s 10⁻¹² of a second, and about 70 times faster than has been observed with other single-photon detectors, said Joshi, now a postdoc at Caltech. Such a time lens allows for temporally resolving individual photons with a precision that can’t be achieved with current photon detectors.
    In addition to seeing single photons, the team can also manipulate their spectra (i.e., their composite colors), reshaping the path along which they traveled. This is an important step for building quantum information networks. “In such a network, all the nodes need to be able to talk to each other. When those nodes are photons, you need their spectral and temporal bandwidths to match, which we can achieve with a time lens,” said Joshi.
    With future tweaks, the Gaeta lab hopes to further reduce the time resolution by more than a factor of three and will continue exploring how they can control individual photons. “With our time lens, the bandwidth is tunable to whatever application you have in mind: you just need to adjust the magnification,” said Joshi. Potential applications include information processing, quantum key distribution, quantum sensing, and more.
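    For context, temporal imaging is usually described with an analogue of the thin-lens equation; the relations below are the standard space-time-duality formulas in generic notation, not expressions taken from the paper.
    ```latex
    % Standard temporal-imaging relations (space-time duality); generic notation.
    % D_1 and D_2 are the input and output group-delay dispersions, and D_f is
    % the focal group-delay dispersion of the time lens.
    \[
      \frac{1}{D_1} + \frac{1}{D_2} = \frac{1}{D_f},
      \qquad
      M = -\frac{D_2}{D_1}.
    \]
    % A feature of duration \(\tau\) at the input emerges with duration |M|\tau at
    % the output, so choosing the dispersions sets the temporal "magnification."
    ```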
    For now, the work is done with optical fibers, though the lab hopes to one day incorporate time lenses into integrated photonic chips, like electronic chips, and scale the system to many devices on a chip to allow for processing many photons simultaneously.
    Story Source:
    Materials provided by Columbia University School of Engineering and Applied Science. Original written by Ellen Neff. Note: Content may be edited for style and length.

  • Making dark semiconductors shine

    Whether or not a solid can emit light, for instance as a light-emitting diode (LED), depends on the energy levels of the electrons in its crystalline lattice. An international team of researchers led by University of Oldenburg physicists Dr. Hangyong Shan and Prof. Dr. Christian Schneider has succeeded in manipulating the energy levels in an ultra-thin sample of the semiconductor tungsten diselenide in such a way that this material, which normally has a low luminescence yield, began to glow. The team has now published an article on its research in the journal Nature Communications.
    According to the researchers, their findings constitute a first step towards controlling the properties of matter through light fields. “The idea has been discussed for years, but had not yet been convincingly implemented,” said Schneider. The light effect could be used to optimize the optical properties of semiconductors and thus contribute to the development of innovative LEDs, solar cells, optical components and other applications. In particular the optical properties of organic semiconductors — plastics with semiconducting properties that are used in flexible displays and solar cells or as sensors in textiles — could be enhanced in this way.
    Tungsten diselenide belongs to an unusual class of semiconductors consisting of a transition metal and one of the three elements sulphur, selenium or tellurium. For their experiments the researchers used a sample that consisted of a single crystalline layer of tungsten and selenium atoms with a sandwich-like structure. In physics, such materials, which are only a few atoms thick, are also known as two-dimensional (2D) materials. They often have unusual properties because the charge carriers they contain behave in a completely different manner to those in thicker solids; such materials are sometimes referred to as “quantum materials.”
    The team led by Shan and Schneider placed the tungsten diselenide sample between two specially prepared mirrors and used a laser to excite the material. With this method they were able to create a coupling between light particles (photons) and excited electrons. “In our study, we demonstrate that via this coupling the structure of the electronic transitions can be rearranged such that a dark material effectively behaves like a bright one,” Schneider explained. “The effect in our experiment is so strong that the lower state of tungsten diselenide becomes optically active.” The team was also able to show that the experimental results matched the predictions of a theoretical model to a high degree.
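    As general background rather than the paper’s specific model, exciton-photon strong coupling is often summarized by a two-level coupled-oscillator picture, in which the mixed light-matter (polariton) energies are:
    ```latex
    % Generic two-level coupled-oscillator model for strong coupling; E_X and E_C
    % are the exciton and cavity-photon energies, g the coupling strength.
    \[
      E_{\pm} = \frac{E_X + E_C}{2} \pm \sqrt{g^{2} + \frac{(E_X - E_C)^{2}}{4}}.
    \]
    % When g is large compared with the losses, the eigenstates are mixtures of
    % light and matter; the study reports a coupling strong enough that an
    % otherwise weakly emitting state becomes optically active.
    ```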
    The current study is the result of a collaboration between the researchers at the Carl von Ossietzky University of Oldenburg (Germany) and colleagues from Reykjavik University (Iceland), the University of Würzburg (Germany), Friedrich Schiller University (Germany), Arizona State University (USA) and the National Institute for Materials Science in Tsukuba (Japan). Parts of the theory were developed by colleagues at ITMO University in St. Petersburg (Russia) before the universities terminated their collaboration.
    Story Source:
    Materials provided by University of Oldenburg. Note: Content may be edited for style and length.

  • Who's really in control?

    Humans have long been known to sympathize with the machines or computer representations they operate. Whether driving a car or directing a video game avatar, people are more likely to identify with something they perceive themselves to be in control of. However, it remains unknown how the attitudes expressed in a robot’s autonomous behavior affect its operator. Now, researchers from Japan have found that when a person controls only a part of the body of a semi-autonomous robot, they are influenced by the robot’s expressed “attitudes.”
    Researchers at the Department of Systems Innovation at Osaka University tested the psychological impact of remotely operating certain semi-autonomous robots on humans. These “telepresence” robots are designed to transmit the human voice and mannerisms as a way of alleviating labor shortages and minimizing commuting costs. For example, a human operator may control the voice, while the body movements are handled automatically by a computer. “Semi-autonomous robots have shown potential for practical applications in which a robot’s autonomous actions and human teleoperation are jointly used to accomplish difficult tasks. A system that combines the ‘intentions’ of different agents, such as an algorithm and a human user, that are collectively used to operate a single robot is called collaborative control,” first author Tomonori Kubota says.
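    A toy way to picture collaborative control (purely illustrative, not the Osaka system) is to blend the operator’s command with the autonomous controller’s command for each channel of the robot, as sketched below.
    ```python
    # Toy illustration of collaborative control: merge a human command with an
    # autonomous controller's command for the same robot. The channel split and
    # weighting are assumptions for this example, not the study's system.

    def blend_commands(human_cmd, auto_cmd, human_channels, alpha=1.0):
        """Return one command per channel (e.g. 'voice', 'hand', 'smile').

        Channels listed in human_channels follow the operator (optionally softened
        toward the autonomous suggestion via alpha); all others stay autonomous.
        """
        blended = {}
        for channel in set(human_cmd) | set(auto_cmd):
            h = human_cmd.get(channel, 0.0)
            a = auto_cmd.get(channel, 0.0)
            blended[channel] = alpha * h + (1.0 - alpha) * a if channel in human_channels else a
        return blended

    # Example: the operator drives the hand, the robot handles its own smile.
    commands = blend_commands({"hand": 0.8}, {"hand": 0.1, "smile": 1.0}, {"hand"})
    ```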
    In the experiment, the team investigated whether the attitude of the teleoperator would align more with that expressed by the semi-autonomous robot when they controlled a part of the robot’s body. Beforehand, experimental participants were asked to rank a set of 10 paintings. They were then assigned to one of three conditions for controlling a human-like robot. Either they operated the robot’s hand movement, ability to smile, or did not control the robot at all. They were then shown the android speaking to another participant who was actually collaborating with the experimenters. The android recommended the painting that had been ranked sixth, and the experimenters recorded how much this influenced the robot operator’s subsequent ranking of that painting. “This study reveals that when a person operates a part of the body of an android robot that autonomously interacts with a human, the person’s attitudes come to closely align with the robot’s attitudes,” senior author Hiroshi Ishiguro says.
    This research indicates that in future implementations of “human-robot collaboration,” designers need to be mindful of the ways an operator’s role can subconsciously shift their attitudes.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  • Supernumerary virtual robotic arms can feel like part of our body

    Research teams at the University of Tokyo, Keio University and Toyohashi University of Technology in Japan have developed a virtual robotic limb system which can be operated by users’ feet in a virtual environment as extra, or supernumerary, limbs. After training, users reported feeling like the virtual robotic arms had become part of their own body. This study focused on the perceptual changes of the participants, an understanding of which can contribute to designing real physical supernumerary robotic limb systems that people can use naturally and freely, just like their own bodies.
    What would you do with an extra arm, or if like Spider-Man’s nemesis Doctor Octopus, you could have an extra four? Research into extra, or supernumerary, robotic limbs looks at how we might adapt, mentally and physically, to having additional limbs added to our bodies.
    Doctoral student Ken Arai from the Research Center for Advanced Science and Technology (RCAST) at the University of Tokyo became interested in this research as a way to explore the limits of human “plasticity” — in other words, our brain’s ability to alter and adapt to external and internal changes. One example of plasticity is the way that we can learn to use new tools and sometimes even come to see them as extensions of ourselves, referred to as “tool embodiment,” whether it’s an artist’s paintbrush or hairdresser’s scissors.
    To explore these concepts in action, teams at the University of Tokyo, Keio University and Toyohashi University of Technology in Japan collaborated to create a virtual robotic limb system. They then asked participants to perform tasks in virtual reality (VR) using the virtual limbs.
    “We investigated whether virtual robotic arms, as supernumerary limbs, could be perceived as part of one’s own body, and whether perceptual changes would occur regarding the proximal space around the robotic arm,” said Arai.
    Participants wore a head-mounted display to give them a first-person view of their own arms represented in VR, as well as the additional virtual robotic arms. They then had to perform tasks using only the virtual robotic arms, which were controlled by moving their toes. Tactile devices returned sensations from the virtual robotic arms to the tops and soles of their feet when they touched an object, like a virtual ball.
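    The interaction loop described here takes toe motion in and returns arm motion and tactile feedback out; the sketch below only illustrates that data flow, and every device name and function in it is hypothetical.
    ```python
    # Rough sketch of the interaction loop: tracked toe motion drives the virtual
    # robotic arms, and contact events trigger vibrotactile feedback to the feet.
    # Every function and device named here is hypothetical, for illustration only.

    def scale(pose, gain):
        """Amplify toe displacements so small foot motions give larger arm motions."""
        return {axis: value * gain for axis, value in pose.items()}

    def vr_frame(read_toe_pose, set_arm_target, get_contacts, pulse_haptics, gain=2.0):
        """Run one frame of the (hypothetical) supernumerary-arm VR loop."""
        for side in ("left", "right"):
            toe = read_toe_pose(side)                  # tracked toe pose for this foot
            set_arm_target(side, scale(toe, gain))     # map foot motion onto the extra arm
            for strength in get_contacts(side):        # e.g. touching the virtual ball
                pulse_haptics(side, strength)          # feedback to the top/sole of the foot
    ```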
    Once the participants learned how to use the virtual system, they reported feeling like the virtual robotic arms had become their own extra arms and not just extensions of their real arms or feet. “The scores of subjective evaluation statistically became significantly higher for ‘sense of body ownership,’ ‘sense of agency’ and ‘sense of self-location,’ which are important measures of embodiment, where the supernumerary robotic limb is able to become part of the body,” said Arai.
    The team also found that the participants’ “peripersonal space” (the area around our bodies which we perceive as being our personal space) extended to include the area around the virtual robotic arms. As Arai explained, “We succeeded in capturing the positive association between the perceptual change in visuo-tactile integration around the supernumerary robotic limbs (peripersonal space), and the score change of subjective evaluation of feeling the number of one’s arms increased (supernumerary limb sensation).”
    Next, the team wants to look at the potential for cooperative behavior between participants’ own arms in virtual reality and the virtual robotic arms. “Investigating the mechanisms and dynamics of the supernumerary limb sensation reported here from the standpoint of cognitive neuroscience will be important in exploring human plasticity limits and the design of supernumerary robotic limb systems,” said Arai. The hope is that understanding the perceptual changes and cognitive effort required to operate a supernumerary robotic limb system in VR will aid the design of real-life systems that people can use naturally, just like their own bodies.
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • The heat is on: Traces of fire uncovered dating back at least 800,000 years

    They say that where there’s smoke, there’s fire, and Weizmann Institute of Science researchers are working hard to investigate that claim, or at least elucidate what constitutes “smoke.” In an article published today in PNAS, the scientists reveal an advanced, innovative method that they have developed and used to detect nonvisual traces of fire dating back at least 800,000 years — one of the earliest known pieces of evidence for the use of fire. The newly developed technique may provide a push toward a more scientific, data-driven type of archaeology, but — perhaps more importantly — it could help us better understand the origins of the human story, our most basic traditions and our experimental and innovative nature.
    The controlled use of fire by ancient hominins — a group that includes humans and some of our extinct family members — is hypothesized to date back at least a million years, to around the time that archaeologists believe Homo habilis began its transition to Homo erectus. That is no coincidence, as the working theory, called the “cooking hypothesis,” is that the use of fire was instrumental in our evolution, not only for allowing hominins to stay warm, craft advanced tools and ward off predators but also for acquiring the ability to cook. Cooking meat not only eliminates pathogens but increases efficient protein digestion and nutritional value, paving the way for the growth of the brain. The only problem with this hypothesis is a lack of data: since finding archaeological evidence of pyrotechnology primarily relies on visual identification of modifications resulting from the combustion of objects (mainly, a color change), traditional methods have managed to find widespread evidence of fire use no older than 200,000 years. While there is some evidence of fire dating back to 500,000 years ago, it remains sparse, with only five archaeological sites around the world providing reliable evidence of ancient fire.
    “We may have just found the sixth site,” says Dr. Filipe Natalio of Weizmann’s Plant and Environmental Sciences Department, whose previous collaboration with Dr. Ido Azuri, of Weizmann’s Life Core Facilities Department, and colleagues provided the basis for this project. Together they pioneered the application of AI and spectroscopy in archaeology to find indications of controlled burning of stone tools dating back to between 200,000 and 420,000 years ago in Israel. Now they’re back, joined by PhD student Zane Stepka, Dr. Liora Kolska Horwitz from the Hebrew University of Jerusalem and Prof. Michael Chazan from the University of Toronto, Canada. The team upped the ante by taking a “fishing expedition” — casting far out into the water and seeing what they could reel back in. “When we started this project,” says Natalio, “the archaeologists who’ve been analyzing the findings from Evron Quarry told us we wouldn’t find anything. We should have made a bet.”
    Evron Quarry, located in the Western Galilee, is an open-air archaeological site that was first discovered in the mid-1970s. During a series of excavations that took place at that time and were led by Prof. Avraham Ronen, archaeologists dug down 14 meters and uncovered a large array of animal fossils and Paleolithic tools dating back to between 800,000 and 1 million years ago, making it one of the oldest sites in Israel. None of the finds from the site or the soil in which they were found had any visual evidence of heat: ash and charcoal degrade over time, eliminating the chances of finding visual evidence of burning. Thus, if the Weizmann scientists wanted to find evidence of fire, they had to search farther afield.
    The “fishing” expedition began with the development of a more advanced AI model than they had previously used. “We tested a variety of methods, among them traditional data analysis methods, machine learning modeling and more advanced deep learning models,” says Azuri, who headed the development of the models. “The deep learning models that prevailed had a specific architecture that outperformed the others and successfully gave us the confidence we needed to further use this tool in an archaeological context having no visual signs of fire use.” The advantage of AI is that it can find hidden patterns across a multitude of scales. By pinpointing the chemical composition of materials down to the molecular level, the output of the model can estimate the temperature to which the stone tools were heated, ultimately providing information about past human behaviors.
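    To make the idea concrete, such a model maps a measured spectrum to an estimated heating temperature. The sketch below is a generic stand-in built with scikit-learn on placeholder data, not the authors’ deep-learning architecture.
    ```python
    # Generic illustration: learn a mapping from a measured spectrum (a vector of
    # intensities) to the temperature a flint sample was heated to. This is NOT
    # the authors' model; the arrays are synthetic placeholders.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512))        # 200 spectra x 512 spectral bins (synthetic)
    y = rng.uniform(20, 600, size=200)     # known heating temperatures in °C (synthetic)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(
        StandardScaler(),                  # normalise each spectral bin
        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("Predicted temperatures (°C):", model.predict(X_test[:3]).round(1))
    ```
    With real labeled spectra from experimentally heated reference samples, the same interface would let a model output an estimated heating temperature for each artifact, which is the kind of inference described above for tools with no visible signs of burning.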
    With an accurate AI method in hand, the team could start fishing for molecular signals from the stone tools used by the inhabitants of the Evron Quarry almost a million years ago. To this end, the team assessed the heat exposure of 26 flint tools found at the site almost half a century ago. The results revealed that the tools had been heated to a wide range of temperatures — some exceeding 600°C. In addition, using a different spectroscopic technique, they analyzed 87 faunal remains and discovered that the tusk of an extinct elephant also exhibited structural changes resulting from heating. While cautious in their claim, the presence of hidden heat suggests that our ancient ancestors, not unlike the scientists themselves, were experimentalists.
    According to the research team, by looking at the archaeology from a different perspective, using new tools, we may find much more than we initially thought. The methods they’ve developed could be applied, for example, at other Lower Paleolithic sites to identify nonvisual evidence of fire use. Furthermore, this method could perhaps offer a renewed spatiotemporal perspective on the origins and controlled use of fire, helping us to better understand how hominins’ pyrotechnology-related behaviors evolved and drove other behaviors. “Especially in the case of early fire,” says Stepka, “if we use this method at archaeological sites that are one or two million years old, we might learn something new.”
    By all accounts, the fishing expedition was a resounding success. “It was not only a demonstration of exploration and being rewarded in terms of the knowledge gained,” says Natalio, “but of the potential that lies in combining different disciplines: Ido has a background in quantum chemistry, Zane is a scientific archaeologist, and Liora and Michael are prehistorians. By working together, we have learned from each other. For me, it’s a demonstration of how scientific research across the humanities and science should work.”
    Dr. Natalio’s research is supported by the Yeda-Sela Center for Basic Research.