More stories

  • Robot trained to read braille at twice the speed of humans

    Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers.
    The research team, from the University of Cambridge, used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90% accuracy.
    Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips. The results are reported in the journal IEEE Robotics and Automation Letters.
    Human fingertips are remarkably sensitive and help us gather information about the world around us. Our fingertips can detect tiny changes in the texture of a material or help us know how much force to use when grasping an object: for example, picking up an egg without breaking it or a bowling ball without dropping it.
    Reproducing that level of sensitivity in a robotic hand, in an energy-efficient way, is a big engineering challenge. In Professor Fumiya Iida’s lab in Cambridge’s Department of Engineering, researchers are developing solutions to this and other skills that humans find easy, but robots find difficult.
    “The softness of human fingertips is one of the reasons we’re able to grip things with the right amount of pressure,” said Parth Potdar from Cambridge’s Department of Engineering and an undergraduate at Pembroke College, the paper’s first author. “For robotics, softness is a useful characteristic, but you also need lots of sensor information, and it’s tricky to have both at once, especially when dealing with flexible or deformable surfaces.”
    Braille is an ideal test for a robot ‘fingertip’ as reading it requires high sensitivity, since the dots in each representative letter pattern are so close together. The researchers used an off-the-shelf sensor to develop a robotic braille reader that more accurately replicates human reading behaviour.

    “There are existing robotic braille readers, but they only read one letter at a time, which is not how humans read,” said co-author David Hardman, also from the Department of Engineering. “Existing robotic braille readers work in a static way: they touch one letter pattern, read it, pull up from the surface, move over, lower onto the next letter pattern, and so on. We want something that’s more realistic and far more efficient.”
    The robotic sensor the researchers used has a camera in its ‘fingertip’, and reads by using a combination of the information from the camera and the sensors. “This is a hard problem for roboticists as there’s a lot of image processing that needs to be done to remove motion blur, which is time and energy-consuming,” said Potdar.
    The team developed machine learning algorithms so the robotic reader would be able to ‘deblur’ the images before the sensor attempted to recognise the letters. They trained the algorithm on a set of sharp images of braille with fake blur applied. After the algorithm had learned to deblur the letters, they used a computer vision model to detect and classify each character.
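The exact training pipeline isn't given here, but the idea of pairing sharp braille images with synthetically blurred copies can be sketched as follows. This is a minimal illustration: the horizontal motion-blur model and kernel length are assumptions for demonstration, not the authors' parameters.

```python
import numpy as np

def motion_blur_kernel(length: int) -> np.ndarray:
    """Horizontal motion-blur kernel: a normalized row of ones,
    approximating the smear caused by sliding the sensor sideways."""
    k = np.zeros((1, length))
    k[0, :] = 1.0 / length
    return k

def apply_blur(sharp: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve a grayscale image with the kernel (edge-padded borders)."""
    pad = kernel.shape[1] // 2
    padded = np.pad(sharp, ((0, 0), (pad, pad)), mode="edge")
    out = np.zeros_like(sharp, dtype=float)
    for x in range(sharp.shape[1]):
        out[:, x] = (padded[:, x:x + kernel.shape[1]] * kernel).sum(axis=1)
    return out

def make_training_pairs(sharp_images, blur_length=5):
    """Yield (blurred, sharp) pairs: a deblurring model's input and target."""
    k = motion_blur_kernel(blur_length)
    return [(apply_blur(img, k), img) for img in sharp_images]
```

A deblurring network trained on such pairs learns to invert the blur, after which a standard character classifier can run on the restored images.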
    Once the algorithms were incorporated, the researchers tested their reader by sliding it quickly along rows of braille characters. The robotic braille reader could read at 315 words per minute with 87% accuracy, which is twice as fast and about as accurate as a human braille reader.
    “Considering that we used fake blur to train the algorithm, it was surprising how accurate it was at reading braille,” said Hardman. “We found a nice trade-off between speed and accuracy, which is also the case with human readers.”
    “Braille reading speed is a great way to measure the dynamic performance of tactile sensing systems, so our findings could be applicable beyond braille, for applications like detecting surface textures or slippage in robotic manipulation,” said Potdar.
    In future, the researchers are hoping to scale the technology to the size of a humanoid hand or skin. The research was supported in part by the Samsung Global Research Outreach Program.

  • How does a ‘reverse sprinkler’ work? Researchers solve decades-old physics puzzle

    For decades scientists have been trying to solve Feynman’s Sprinkler Problem: How does a sprinkler running in reverse — in which the water flows into the device rather than out of it — work? Through a series of experiments, a team of mathematicians has figured out how flowing fluids exert forces and move structures, thereby revealing the answer to this long-standing mystery.
    “Our study solves the problem by combining precision lab experiments with mathematical modeling that explains how a reverse sprinkler operates,” explains Leif Ristroph, an associate professor at New York University’s Courant Institute of Mathematical Sciences and the senior author of the paper, which appears in the journal Physical Review Letters. “We found that the reverse sprinkler spins in the ‘reverse’ or opposite direction when taking in water as it does when ejecting it, and the cause is subtle and surprising.”
    “The regular or ‘forward’ sprinkler is similar to a rocket, since it propels itself by shooting out jets,” adds Ristroph. “But the reverse sprinkler is mysterious since the water being sucked in doesn’t look at all like jets. We discovered that the secret is hidden inside the sprinkler, where there are indeed jets that explain the observed motions.”
    The research answers one of the oldest and most difficult problems in the physics of fluids. And while Ristroph recognizes there is modest utility in understanding the workings of a reverse sprinkler — “There is no need to ‘unwater’ lawns,” he says — the findings teach us about the underlying physics and whether we can improve the methods needed to engineer devices that use flowing fluids to control motions and forces.
    “We now have a much better understanding about situations in which fluid flow through structures can induce motion,” notes Brennan Sprinkle, an assistant professor at Colorado School of Mines and one of the paper’s co-authors. “We think these methods we used in our experiments will be useful for many practical applications involving devices that respond to flowing air or water.”
    The Feynman sprinkler problem is typically framed as a thought experiment about a type of lawn sprinkler that spins when fluid, such as water, is expelled out of its S-shaped tubes or “arms.” The question asks what happens if fluid is sucked in through the arms: Does the device rotate, in what direction, and why?
    The problem is associated with pioneers in physics, from Ernst Mach, who posed the problem in the 1880s, to the Nobel laureate Richard Feynman, who worked on and popularized it from the 1960s through the 1980s. It has since spawned numerous studies that debate the outcome and the underlying physics — and to this day it is presented as an open problem in physics and in fluid mechanics textbooks.

    In setting out to solve the reverse sprinkler problem, Ristroph, Sprinkle, and their co-authors, Kaizhe Wang, an NYU doctoral student at the time of the study, and Mingxuan Zuo, an NYU graduate student, custom manufactured sprinkler devices and immersed them in water in an apparatus that pushes in or pulls out water at controllable rates. To let the device spin freely in response to the flow, the researchers built a new type of ultra-low-friction rotary bearing. They also designed the sprinkler in a way that enabled them to observe and measure how the water flows outside, inside, and through it.
    “This has never been done before and was key to solving the problem,” Ristroph explains.
    To better observe the reverse sprinkler process, the researchers added dyes and microparticles to the water, illuminated them with lasers, and captured the flows using high-speed cameras.
    The results showed that a reverse sprinkler rotates much more slowly than does a conventional one — about 50 times slower — but the mechanisms are fundamentally similar. A conventional forward sprinkler acts like a rotating version of a rocket powered by water jetting out of the arms. A reverse sprinkler acts as an “inside-out rocket,” with its jets shooting inside the chamber where the arms meet. The researchers found that the two internal jets collide but they do not meet exactly head on, and their math model showed how this subtle effect produces forces that rotate the sprinkler in reverse.
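The paper's quantitative model is not reproduced here, but the basic mechanism (two opposing internal jets whose axes miss each other by a small lateral offset, leaving a net couple) can be illustrated with a back-of-the-envelope momentum-flux estimate. Every number below is invented for illustration; none come from the paper.

```python
# Illustrative momentum-flux estimate of the torque from two internal
# jets that collide slightly off-axis (all values are made up).
rho = 1000.0      # water density, kg/m^3
q = 2e-6          # volumetric flow rate per arm, m^3/s
a = 3e-6          # jet cross-sectional area, m^2
d = 1e-3          # lateral offset between the two jet axes, m

v = q / a                 # jet speed, m/s
mdot = rho * q            # mass flow rate per jet, kg/s
# Each jet carries momentum flux mdot*v; because the opposing jets are
# offset by d, their momentum fluxes form a couple of roughly:
torque = mdot * v * d     # N*m
print(f"jet speed ~ {v:.2f} m/s, net torque ~ {torque:.2e} N*m")
```

The point of the estimate is only that a small geometric offset between otherwise balanced jets is enough to produce a steady, if weak, torque, consistent with the slow reverse rotation the team observed.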
    The team sees the breakthrough as potentially beneficial to harnessing climate-friendly energy sources.
    “There are ample and sustainable sources of energy flowing around us — wind in our atmosphere as well as waves and currents in our oceans and rivers,” says Ristroph. “Figuring out how to harvest this energy is a major challenge and will require us to better understand the physics of fluids.”
    The work was supported by a grant from the National Science Foundation (DMS-1646339).

  • Utilizing active microparticles for artificial intelligence

    Artificial intelligence using neural networks performs calculations digitally with the help of microelectronic chips. Physicists at Leipzig University have now created a type of neural network that works not with electricity but with so-called active colloidal particles. In their publication in the journal Nature Communications, the researchers describe how these microparticles can be used as a physical system for artificial intelligence and the prediction of time series.
    “Our neural network belongs to the field of physical reservoir computing, which uses the dynamics of physical processes, such as water surfaces, bacteria or octopus tentacle models, to make calculations,” says Professor Frank Cichos, whose research group developed the network with the support of ScaDS.AI. One of five new AI centres in Germany, the research centre, with sites in Leipzig and Dresden, has been funded since 2019 as part of the German government’s AI Strategy and is supported by the Federal Ministry of Education and Research and the Free State of Saxony.
    “In our realization, we use synthetic self-propelled particles that are only a few micrometres in size,” explains Cichos. “We show that these can be used for calculations and at the same time present a method that suppresses the influence of disruptive effects, such as noise, in the movement of the colloidal particles.” Colloidal particles are particles that are finely dispersed in their dispersion medium (solid, gas or liquid).
    For their experiments, the physicists developed tiny units made of plastic and gold nanoparticles, in which one particle rotates around another, driven by a laser. These units have certain physical properties that make them interesting for reservoir computing. “Each of these units can process information, and many units make up the so-called reservoir. We change the rotational motion of the particles in the reservoir using an input signal. The resulting rotation contains the outcome of a calculation,” explains Dr Xiangzun Wang. “Like many neural networks, the system needs to be trained to perform a particular calculation.”
    The researchers were particularly interested in noise. “Because our system contains extremely small particles in water, the reservoir is subject to strong noise, similar to the noise that all molecules in a brain are subject to,” says Professor Cichos. “This noise, Brownian motion, severely disrupts the functioning of the reservoir computer and usually requires a very large reservoir to remedy. In our work, we have found that using past states of the reservoir can improve computer performance, allowing smaller reservoirs to be used for certain computations under noisy conditions.”
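The finding that "past states of the reservoir can improve computer performance" resembles a standard trick in reservoir computing: augmenting the readout with time-delayed copies of the reservoir state. The sketch below uses a generic noisy echo-state reservoir, not the colloidal system itself; the reservoir size, noise level, and toy prediction task are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(u, n_res=20, noise=0.1):
    """Drive a small random reservoir with input u; the added noise
    mimics the Brownian motion perturbing the colloidal units."""
    w_in = rng.normal(size=n_res)
    w = rng.normal(scale=0.4 / np.sqrt(n_res), size=(n_res, n_res))
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(w @ x + w_in * ut) + rng.normal(scale=noise, size=n_res)
        states.append(x.copy())
    return np.array(states)

def readout_with_delays(states, target, n_delays=3):
    """Ridge-regression readout over the current state plus past
    states -- the 'use past states' idea from the paper."""
    feats = np.hstack([np.roll(states, d, axis=0) for d in range(n_delays)])
    feats[:n_delays] = 0.0  # drop rows contaminated by wrap-around
    reg = 1e-3 * np.eye(feats.shape[1])
    w_out = np.linalg.solve(feats.T @ feats + reg, feats.T @ target)
    return feats @ w_out

# Toy task: predict a sine one step ahead from its noisy reservoir trace.
t = np.linspace(0, 8 * np.pi, 400)
u, target = np.sin(t[:-1]), np.sin(t[1:])
states = run_reservoir(u)
pred = readout_with_delays(states, target)
err = np.mean((pred[50:] - target[50:]) ** 2)
print(f"one-step prediction MSE: {err:.4f}")
```

Feeding delayed states to the readout effectively averages over the noise, which is why a smaller reservoir can suffice under noisy conditions.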
    Cichos adds that this has not only contributed to the field of information processing with active matter, but has also yielded a method that can optimise reservoir computation by reducing noise.

  • A long-lasting neural probe

    Recording the activity of large populations of single neurons in the brain over long periods of time is crucial to further our understanding of neural circuits, to enable novel medical device-based therapies and, in the future, for brain-computer interfaces requiring high-resolution electrophysiological information.
    But today there is a tradeoff between how much high-resolution information an implanted device can measure and how long it can maintain recording or stimulation performance. Rigid silicon implants with many sensors can collect a lot of information, but can’t stay in the body for very long. Flexible, smaller devices are less intrusive and can last longer in the brain, but only provide a fraction of the available neural information.
    Recently, an interdisciplinary team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with The University of Texas at Austin, MIT and Axoft, Inc., developed a soft implantable device with dozens of sensors that can record single-neuron activity in the brain stably for months.
    The research was published in Nature Nanotechnology.
    “We have developed brain-electronics interfaces with single-cell resolution that are more biologically compliant than traditional materials,” said Paul Le Floch, first author of the paper and former graduate student in the lab of Jia Liu, Assistant Professor of Bioengineering at SEAS. “This work has the potential to revolutionize the design of bioelectronics for neural recording and stimulation, and for brain-computer interfaces.”
    Le Floch is currently the CEO of Axoft, Inc, a company founded in 2021 by Le Floch, Liu and Tianyang Ye, a former graduate student and postdoctoral fellow in the Park Group at Harvard. Harvard’s Office of Technology Development has protected the intellectual property associated with this research and licensed the technology to Axoft for further development.
    To overcome the tradeoff between high-resolution data rate and longevity, the researchers turned to a group of materials known as fluorinated elastomers. Fluorinated materials, like Teflon, are resilient, stable in biofluids, have excellent long-term dielectric performance, and are compatible with standard microfabrication techniques.

    The researchers integrated these fluorinated dielectric elastomers with stacks of soft microelectrodes — 64 sensors in total — to develop a long-lasting probe that is 10,000 times softer than conventional flexible probes made of engineering plastics such as polyimide or parylene C.
    The team demonstrated the device in vivo, recording neural information from the brain and spinal cords of mice over the course of several months.
    “Our research highlights that, by carefully engineering various factors, it is feasible to design novel elastomers for long-term-stable neural interfaces,” said Liu, who is the corresponding author of the paper. “This study could expand the range of design possibilities for neural interfaces.”
    The interdisciplinary research team also included SEAS Professors Katia Bertoldi, Boris Kozinsky and Zhigang Suo.
    “Designing new neural probes and interfaces is a very interdisciplinary problem that requires expertise in biology, electrical engineering, materials science, mechanical and chemical engineering,” said Le Floch.
    The research was co-authored by Siyuan Zhao, Ren Liu, Nicola Molinari, Eder Medina, Hao Shen, Zheliang Wang, Junsoo Kim, Hao Sheng, Sebastian Partarrieu, Wenbo Wang, Chanan Sessler, Guogao Zhang, Hyunsu Park, Xian Gong, Andrew Spencer, Jongha Lee, Tianyang Ye, Xin Tang, Xiao Wang and Nanshu Lu.
    The work was supported by the National Science Foundation through the Harvard University Materials Research Science and Engineering Center Grant No. DMR-2011754.

  • Turning glass into a ‘transparent’ light-energy harvester

    What happens when you expose tellurite glass to femtosecond laser light? That’s the question that Gözden Torun at the Galatea Lab, in a collaboration with Tokyo Tech scientists, aimed to answer in her thesis work when she made a discovery that may one day turn windows into single-material light-harvesting and sensing devices. The results are published in Physical Review Applied.
    Interested in how the atoms in the tellurite glass would reorganize when exposed to fast pulses of high energy femtosecond laser light, the scientists stumbled upon the formation of nanoscale tellurium and tellurium oxide crystals, both semiconducting materials etched into the glass, precisely where the glass had been exposed. That was the eureka moment for the scientists, since a semiconducting material exposed to daylight may lead to the generation of electricity.
    “Tellurium being semiconducting, based on this finding we wondered if it would be possible to write durable patterns on the tellurite glass surface that could reliably induce electricity when exposed to light, and the answer is yes,” explains Yves Bellouard who runs EPFL’s Galatea Laboratory. “An interesting twist to the technique is that no additional materials are needed in the process. All you need is tellurite glass and a femtosecond laser to make an active photoconductive material.”
    Using tellurite glass produced by colleagues at Tokyo Tech, the EPFL team brought their expertise in femtosecond laser technology to modify the glass and analyze the effect of the laser. After writing a simple line pattern on the surface of a tellurite glass sample 1 cm in diameter, Torun found that it could generate a current when exposed to UV and visible light, and did so reliably for months.
    “It’s fantastic, we’re locally turning glass into a semiconductor using light,” says Yves Bellouard. “We’re essentially transforming materials into something else, perhaps approaching the dream of the alchemist!”

  • Scientists design a two-legged robot powered by muscle tissue

    Compared to robots, human bodies are flexible, capable of fine movements, and can convert energy efficiently into movement. Drawing inspiration from human gait, researchers from Japan crafted a two-legged biohybrid robot by combining muscle tissues and artificial materials. As reported on January 26 in the journal Matter, this approach allows the robot to walk and pivot.
    “Research on biohybrid robots, which are a fusion of biology and mechanics, has recently been attracting attention as a new field of robotics featuring biological function,” says corresponding author Shoji Takeuchi of the University of Tokyo, Japan. “Using muscle as actuators allows us to build a compact robot and achieve efficient, silent movements with a soft touch.”
    The research team’s two-legged robot, an innovative bipedal design, builds on the legacy of biohybrid robots that take advantage of muscles. Muscle tissues have driven biohybrid robots to crawl and swim straight forward and make turns — but not sharp ones. Yet, being able to pivot and make sharp turns is an essential feature for robots to avoid obstacles.
    To build a nimbler robot with fine and delicate movements, the researchers designed a biohybrid robot that mimics human gait and operates in water. The robot has a foam buoy top and weighted legs to help it stand straight underwater. The skeleton of the robot is mainly made from silicone rubber that can bend and flex to conform to muscle movements. The researchers then attached strips of lab-grown skeletal muscle tissues to the silicone rubber and each leg.
    When the researchers zapped the muscle tissue with electricity, the muscle contracted, lifting the leg up. The heel of the leg then landed forward when the electricity dissipated. By alternating the electric stimulation between the left and right leg every 5 seconds, the biohybrid robot successfully “walked” at the speed of 5.4 mm/min (0.002 mph). To turn, researchers repeatedly zapped the right leg every 5 seconds while the left leg served as an anchor. The robot made a 90-degree left turn in 62 seconds. The findings showed that the muscle-driven bipedal robot can walk, stop, and make fine-tuned turning motions.
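The stimulation schedule and the reported figures (5-second alternation, 5.4 mm/min, a 90-degree turn in 62 seconds) can be captured in a small timing sketch. The function names are hypothetical; the constants are taken from the text.

```python
# Sketch of the alternating stimulation schedule described above,
# with the performance figures reported in the article.
WALK_SPEED_MM_PER_MIN = 5.4
TURN_ANGLE_DEG, TURN_TIME_S = 90.0, 62.0
PERIOD_S = 5.0  # stimulation switches legs every 5 seconds while walking

def stimulated_leg(t_s: float) -> str:
    """During walking, stimulation alternates between legs every PERIOD_S."""
    return "left" if int(t_s // PERIOD_S) % 2 == 0 else "right"

def distance_walked_mm(minutes: float) -> float:
    """Distance covered at the reported walking speed."""
    return WALK_SPEED_MM_PER_MIN * minutes

turn_rate = TURN_ANGLE_DEG / TURN_TIME_S  # roughly 1.45 deg/s while pivoting
print(stimulated_leg(3.0), stimulated_leg(7.0))
print(f"{distance_walked_mm(10):.1f} mm in 10 min, turn rate {turn_rate:.2f} deg/s")
```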
    “Currently, we are manually moving a pair of electrodes to apply an electric field individually to the legs, which takes time,” says Takeuchi. “In the future, by integrating the electrodes into the robot, we expect to increase the speed more efficiently.”
    The team also plans to give joints and thicker muscle tissues to the bipedal robot to enable more sophisticated and powerful movements. But before upgrading the robot with more biological components, Takeuchi says the team will have to integrate a nutrient supply system to sustain the living tissues and device structures that allow the robot to operate in the air.
    “A cheer broke out during our regular lab meeting when we saw the robot successfully walk on the video,” says Takeuchi. “Though they might seem like small steps, they are, in fact, giant leaps forward for the biohybrid robots.”
    This work was supported by JST-Mirai Program, JST Fusion Oriented Research for disruptive Science and Technology, and the Japan Society for the Promotion of Science.

  • Quantum infrared spectroscopy: Lights, detector, action!

    Our understanding of the world relies greatly on our knowledge of its constituent materials and their interactions. Recent advances in materials science technologies have ratcheted up our ability to identify chemical substances and expanded possible applications.
    One such technology is infrared spectroscopy, used for molecular identification in various fields, such as in medicine, environmental monitoring, and industrial production. However, even the best existing tool — the Fourier transform infrared spectrometer or FTIR — utilizes a heating element as its light source. Resulting detector noise in the infrared region limits the devices’ sensitivity, while physical properties hinder miniaturization.
    Now, a research team led by Kyoto University has addressed this problem by incorporating a quantum light source. Their innovative ultra-broadband, quantum-entangled source generates a far wider range of infrared photons, with wavelengths between 2 μm and 5 μm.
    “This achievement sets the stage for dramatically downsizing the system and upgrading infrared spectrometer sensitivity,” says Shigeki Takeuchi of the Department of Electronic Science and Engineering.
    Another elephant in the room with FTIRs is the burden of transporting mammoth-sized, power-hungry equipment to various locations for testing materials on-site. Takeuchi eyes a future where his team’s compact, high-performance, battery-operated scanners will lead to easy-to-use applications in various fields such as environmental monitoring, medicine, and security.
    “We can obtain spectra for various target samples, including hard solids, plastics, and organic solutions. Shimadzu Corporation — our partner that developed the quantum light device — has concurred that the broadband measurement spectra were very convincing for distinguishing substances for a wide range of samples,” adds Takeuchi.
    Although quantum entangled light is not new, bandwidth has thus far been limited to a narrow range of 1 μm or less in the infrared region. This new technique, meanwhile, uses the unique properties of quantum mechanics — such as superposition and entanglement — to overcome the limitations of conventional techniques.
    The team’s independently developed chirped quasi-phase-matching device generates quantum-entangled light by harnessing chirping — gradually changing an element’s polarization reversal period — to generate quantum photon pairs over a wide bandwidth.
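Chirped quasi-phase-matching can be written in its textbook form (the symbols below are generic, not the team's device parameters). For photon-pair generation in a poled crystal, the local quasi-phase-matching condition at position $z$ is

```latex
\Delta k(z) \;=\; k_p - k_s - k_i - \frac{2\pi}{\Lambda(z)} \;=\; 0,
\qquad \Lambda(z) \;=\; \Lambda_0 + \alpha z,
```

where $k_p$, $k_s$, $k_i$ are the pump, signal, and idler wavenumbers and $\Lambda(z)$ is the poling period, chirped from $\Lambda_0$ at rate $\alpha$. Because $\Lambda$ varies along the crystal, different signal-idler wavelength pairs are phase-matched at different positions $z$, so the emitted photon pairs collectively span a broad band rather than the narrow range of a fixed-period device.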
    “Improving the sensitivity of quantum infrared spectroscopy and developing quantum imaging in the infrared region are part of our quest to develop real-world quantum technologies,” remarks Takeuchi.

  • Chats with AI shift attitudes on climate change, Black Lives Matter

    People who were more skeptical of human-caused climate change or the Black Lives Matter movement and who took part in a conversation with a popular AI chatbot were disappointed with the experience, but they left the conversation more supportive of the scientific consensus on climate change or of BLM. This is according to researchers studying how these chatbots handle interactions from people with different cultural backgrounds.
    Savvy humans can adjust to their conversation partners’ political leanings and cultural expectations to make sure they’re understood, but more and more often, humans find themselves in conversation with computer programs, called large language models, meant to mimic the way people communicate.
    Researchers at the University of Wisconsin-Madison studying AI wanted to understand how one complex large language model, GPT-3, would perform across a culturally diverse group of users in complex discussions. The model is a precursor to one that powers the high-profile ChatGPT. The researchers recruited more than 3,000 people in late 2021 and early 2022 to have real-time conversations with GPT-3 about climate change and BLM.
    “The fundamental goal of an interaction like this between two people (or agents) is to increase understanding of each other’s perspective,” says Kaiping Chen, a professor of life sciences communication who studies how people discuss science and deliberate on related political issues — often through digital technology. “A good large language model would probably make users feel the same kind of understanding.”
    Chen and Yixuan “Sharon” Li, a UW-Madison professor of computer science who studies the safety and reliability of AI systems, along with their students Anqi Shao and Jirayu Burapacheep (now a graduate student at Stanford University), published their results this month in the journal Scientific Reports.
    Study participants were instructed to strike up a conversation with GPT-3 through a chat setup Burapacheep designed. The participants were told to chat with GPT-3 about climate change or BLM, but were otherwise left to approach the experience as they wished. The average conversation went back and forth about eight turns.
    Most of the participants came away from their chat with similar levels of user satisfaction.

    “We asked them a bunch of questions — Do you like it? Would you recommend it? — about the user experience,” Chen says. “Across gender, race, ethnicity, there’s not much difference in their evaluations. Where we saw big differences was across opinions on contentious issues and different levels of education.”
    The roughly 25% of participants who reported the lowest levels of agreement with scientific consensus on climate change or least agreement with BLM were, compared to the other 75% of chatters, far more dissatisfied with their GPT-3 interactions. They gave the bot scores half a point or more lower on a 5-point scale.
    Despite the lower scores, the chat shifted their thinking on the hot topics. The hundreds of people who were least supportive of the facts of climate change and its human-driven causes moved a combined 6% closer to the supportive end of the scale.
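One plausible way a "combined 6% shift toward the supportive end" could be computed is as the change in mean pre- versus post-chat response, expressed as a share of the full attitude scale. The sketch below uses made-up Likert-style data, not the study's responses, and the computation is an assumption about the metric, not the authors' stated method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre/post attitude scores on a 1-5 scale for a
# "least supportive" subgroup (invented data for illustration).
pre = rng.integers(1, 3, size=300).astype(float)           # skeptical start
post = np.clip(pre + rng.choice([0, 0, 0.5, 1.0], size=300), 1, 5)

# Express the mean movement as a share of the scale width (5 - 1 = 4):
shift_pct = 100 * (post.mean() - pre.mean()) / 4.0
print(f"mean shift toward the supportive end: {shift_pct:.1f}% of the scale")
```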
    “They showed in their post-chat surveys that they have larger positive attitude changes after their conversation with GPT-3,” says Chen. “I won’t say they began to entirely acknowledge human-caused climate change or suddenly they support Black Lives Matter, but when we repeated our survey questions about those topics after their very short conversations, there was a significant change: more positive attitudes toward the majority opinions on climate change or BLM.”
    GPT-3 offered different response styles between the two topics, including more justification for human-caused climate change.
    “That was interesting. People who expressed some disagreement with climate change, GPT-3 was likely to tell them they were wrong and offer evidence to support that,” Chen says. “GPT-3’s response to people who said they didn’t quite support BLM was more like, ‘I do not think it would be a good idea to talk about this. As much as I do like to help you, this is a matter we truly disagree on.'”
    That’s not a bad thing, Chen says. Equity and understanding come in different shapes to bridge different gaps. Ultimately, that’s her hope for the chatbot research. Next steps include explorations of finer-grained differences between chatbot users, but high-functioning dialogue between divided people is Chen’s goal.
    “We don’t always want to make the users happy. We wanted them to learn something, even though it might not change their attitudes,” Chen says. “What we can learn from a chatbot interaction about the importance of understanding perspectives, values, cultures, this is important to understanding how we can open dialogue between people — the kind of dialogues that are important to society.”