More stories

  •

    Insect-sized robot navigates mazes with the agility of a cheetah

    Many insects and spiders get their uncanny ability to scurry up walls and walk upside down on ceilings with the help of specialized sticky footpads that allow them to adhere to surfaces in places where no human would dare to go.
    Engineers at the University of California, Berkeley, have used the principle behind some of these footpads, called electrostatic adhesion, to create an insect-scale robot that can swerve and pivot with the agility of a cheetah, giving it the ability to traverse complex terrain and quickly avoid unexpected obstacles.
    The robot is constructed from a thin, layered material that bends and contracts when an electric voltage is applied. In a 2019 paper, the research team demonstrated that this simple design can be used to create a cockroach-sized robot that can scurry across a flat surface at a rate of 20 body lengths per second, or about 1.5 miles per hour — nearly the speed of living cockroaches themselves, and the fastest relative speed of any insect-sized robot.
    In a new study, the research team added two electrostatic footpads to the robot. Applying a voltage to either of the footpads increases the electrostatic force between the footpad and a surface, making that footpad stick more firmly to the surface and forcing the rest of the robot to rotate around the foot.
    The two footpads give operators full control over the trajectory of the robot, and allow the robot to make turns with a centripetal acceleration that exceeds that of most insects.
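    The steering principle described above lends itself to a simple kinematic sketch. The model below is a deliberately simplified illustration, not the Berkeley team's controller: energizing a footpad is treated as a fixed heading change per time step, and all parameter values are invented.

```python
# Minimal kinematic sketch of differential footpad steering.
# Illustrative model only; speed and turn-rate values are hypothetical.
import math

def step(x, y, heading, left_pad_on, right_pad_on,
         speed=0.3, turn_rate=math.pi / 8):
    """Advance the robot one time step.

    Energizing a footpad increases its electrostatic adhesion, so the
    robot pivots around that foot: left pad on -> turn left, right pad
    on -> turn right. Both pads off -> straight-line running.
    """
    if left_pad_on and not right_pad_on:
        heading += turn_rate
    elif right_pad_on and not left_pad_on:
        heading -= turn_rate
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)
    return x, y, heading

# Run straight for five steps, then command a left turn.
x = y = heading = 0.0
for _ in range(5):
    x, y, heading = step(x, y, heading, False, False)
for _ in range(4):
    x, y, heading = step(x, y, heading, True, False)  # left pad energized
```

    In this toy model, four consecutive left-pad activations rotate the heading by a quarter turn, which is the essence of how asymmetric adhesion produces fast pivots.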
    “Our original robot could move very, very fast, but we could not really control whether the robot went left or right, and a lot of the time it would move randomly, because if there was a slight difference in the manufacturing process — if the robot was not symmetrical — it would veer to one side,” said Liwei Lin, a professor of mechanical engineering at UC Berkeley. “In this work, the major innovation was adding these footpads that allow it to make very, very fast turns.”
    To demonstrate the robot’s agility, the research team filmed the robot navigating Lego mazes while carrying a small gas sensor and swerving to avoid falling debris. Because of its simple design, the robot can also survive being stepped on by a 120-pound human.
    Small, robust robots like these could be ideal for conducting search and rescue operations or investigating other hazardous situations, such as scoping out potential gas leaks, Lin said. While the team demonstrated most of the robot’s skills while it was “tethered,” or powered and controlled through a small electrical wire, they also created an “untethered” version that can operate on battery power for up to 19 minutes and 31 meters while carrying a gas sensor.
    “One of the biggest challenges today is making smaller-scale robots that maintain the power and control of bigger robots,” Lin said. “With larger-scale robots, you can include a big battery and a control system, no problem. But when you try to shrink everything down to a smaller and smaller scale, the weight of those elements becomes difficult for the robot to carry and the robot generally moves very slowly. Our robot is very fast, quite strong, and requires very little power, allowing it to carry sensors and electronics while also carrying a battery.”
    Video: https://www.youtube.com/watch?v=TmRol48_DKs
    Story Source:
    Materials provided by University of California – Berkeley. Original written by Kara Manke. Note: Content may be edited for style and length.

  •

    Researchers explore how children learn language

    Small children learn language at a pace far faster than teenagers or adults. One explanation for this learning advantage comes not from differences between children and adults, but from differences in the way people talk to children versus adults.
    For the first time, a team of researchers developed a method to experimentally evaluate how parents use what they know about their children’s language when they talk to them. They found that parents have extremely precise models of their children’s language knowledge, and use these models to tune the language they use when speaking to them. The results are available as an advance online publication of the journal Psychological Science.
    “We have known for years that parents talk to children differently than to other adults in a lot of ways, for example simplifying their speech, reduplicating words and stretching out vowel sounds,” said Daniel Yurovsky, assistant professor in psychology at Carnegie Mellon University. “This stuff helps young kids get a toehold into language, but we didn’t know whether parents change the way they talk as children are acquiring language, giving children language input that is ‘just right’ for learning the next thing.”
    Adults tend to speak to children more slowly and at a higher pitch. They also use more exaggerated enunciation, repetition and simplified language structure. Adults also pepper their communication with questions to gauge the child’s comprehension. As the child’s language fluency increases, the sentence structure and complexity used by adults increases.
    Yurovsky likens this to the progression a student follows when learning math in school.
    “When you go to school, you start with algebra and then take plane geometry before moving on to calculus,” said Yurovsky. “People talk to kids using the same kind of structure without thinking about it. They are tracking how much their child knows about language and modifying how they speak so that their children understand them.”
    Yurovsky and his team sought to understand exactly how caregivers tune their interactions to match their child’s speech development. The team developed a game where parents helped their children to pick a specific animal from a set of three, a game that toddlers (aged 15 to 23 months) and their parents play routinely in their daily lives. Half of the animals in the matching game were animals that children typically learn before age 2 (e.g. cat, cow), and the other half were animals that are typically learned later (e.g. peacock, leopard).
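    A design like the one described can be sketched in a few lines. The word lists and sampling scheme below are illustrative assumptions, not the study's actual stimuli or trial structure.

```python
# Sketch of assembling matching-game trials: each trial shows three
# animals, one of which is the named target. Lists are illustrative.
import random

early_words = ["cat", "cow", "dog", "duck"]             # typically known by age 2
late_words = ["peacock", "leopard", "tiger", "zebra"]   # typically learned later

def make_trial(rng, target_pool, distractor_pool):
    """Pick a target word and two distinct distractors for one trial."""
    target = rng.choice(target_pool)
    distractors = rng.sample([w for w in distractor_pool if w != target], 2)
    return {"target": target, "options": [target] + distractors}

rng = random.Random(42)  # fixed seed for reproducible trial lists
trials = [make_trial(rng, early_words, early_words + late_words)
          for _ in range(4)]
```
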

  •

    Skin in the game: Transformative approach uses the human body to recharge smartwatches

    As smartwatches become increasingly able to monitor the vital signs of health, including what’s going on when we sleep, a problem has emerged: these wearable, wireless devices are often disconnected from our bodies overnight, charging at the bedside.
    “Quality of sleep and its patterns contain a lot of important information about patients’ health conditions,” says Sunghoon Ivan Lee, assistant professor in the University of Massachusetts Amherst College of Information and Computer Sciences and director of the Advanced Human Health Analytics Laboratory.
    But that information can’t be tracked on smartwatches if the wearable devices are being charged as users sleep, which prior research has shown is frequently the case. Lee adds, “The main reason users discontinue the long-term use of wearable devices is because they have to frequently charge the on-device battery.”
    Pondering this problem, Lee brainstormed with UMass Amherst wearable computing engineer Jeremy Gummeson to find a solution to continuously recharge these devices on the body so they can monitor the user’s health 24/7.
    The scientists’ aha moment came when they realized “human skin is a conductive material,” Lee recalls. “Why can’t we instrument daily objects, such as the office desk, chair and car steering wheel, so they can seamlessly transfer power through human skin to charge up a watch or any wearable sensor while the users interact with them? Like, using human skin as a wire.
    “Then we can motivate people to do things like sleep tracking because they never have to take their watch off to charge it,” he adds.

  •

    Understanding potential topological quantum bits

    Quantum computers promise great advances in many fields — from cryptography to the simulation of protein folding. Yet which physical system works best for building the underlying quantum bits is still an open question. Unlike regular bits in your computer, these so-called qubits can take not only the values 0 and 1, but also mixtures of the two. While this potentially makes them very useful, it also makes them very unstable.
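    The "mixture" idea can be made concrete with a few lines of arithmetic. This is a generic textbook illustration of a qubit state, not anything specific to the study: a qubit's state is a weighted superposition of 0 and 1, and measurement yields each value with probability given by the squared amplitude.

```python
# A single-qubit state as amplitudes over the outcomes 0 and 1.
# Equal amplitudes give a 50/50 measurement outcome.
import math

alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)  # equal mixture of 0 and 1
assert abs(alpha**2 + beta**2 - 1) < 1e-12        # valid states are normalized

p_zero = abs(alpha) ** 2  # probability of measuring 0
p_one = abs(beta) ** 2    # probability of measuring 1
```
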
    One approach to solving this problem bets on topological qubits, which encode the information in their spatial arrangement. That could provide a more stable and error-resistant basis for computation than other setups. The problem is that no one has ever definitively found a topological qubit.
    An international team of researchers from Austria, Copenhagen, and Madrid, led by Marco Valentini from the Nanoelectronics group at IST Austria, has now examined a setup that was predicted to produce so-called Majorana zero modes — the core ingredient of a topological qubit. They found that a seemingly valid signal for such modes can in fact be a false flag.
    Half of an Electron
    The experimental setup is built around a tiny wire just a few hundred nanometers — a few ten-thousandths of a millimeter — long, grown by Peter Krogstrup from Microsoft Quantum and the University of Copenhagen. These aptly named nanowires form a free-floating connection between two metal conductors on a chip. They are coated with a superconducting material that loses all electrical resistance at very low temperatures. The coating covers the entire wire except for a tiny section left bare at one end, which forms a crucial part of the setup: the junction. The whole contraption is then exposed to a magnetic field.
    The scientists’ theories predicted that Majorana zero modes — the basis for the topological qubit they were looking for — should appear in the nanowire. Majorana zero modes are a strange phenomenon: they started out as a mathematical trick for describing one electron in the wire as composed of two halves. Usually, physicists do not think of electrons as something that can be split, but with this nanowire setup it should have been possible to separate these “half-electrons” and use them as qubits.
    “We were excited to work on this very promising material platform,” explains Marco Valentini, who joined IST Austria as an intern before becoming a PhD student in the Nanoelectronics group. “What we expected to see was the signal of Majorana zero modes in the nanowire, but we found nothing. First, we were confused, then frustrated. Eventually, and in close collaboration with our colleagues from the Theory of Quantum Materials and Solid State Quantum Technologies group in Madrid, we examined the setup, and found out what was wrong with it.”
    A False Flag
    After attempting to find the signatures of the Majorana zero modes, the researchers began to vary the nanowire setup to check whether any effects from its architecture were disturbing their experiment. “We did several experiments on different setups to find out what was going wrong,” Valentini explains. “It took us a while, but when we doubled the length of the uncoated junction from a hundred nanometers to two hundred, we found our culprit.”
    When the junction was large enough, the exposed inner nanowire formed a so-called quantum dot — a tiny speck of matter that shows special quantum mechanical properties due to its confined geometry. The electrons in this quantum dot could then interact with those in the adjacent superconducting coating and thereby mimic the signal of the “half-electrons” — the Majorana zero modes — that the scientists were looking for.
    “This unexpected conclusion came after we established the theoretical model of how the quantum dot interacts with the superconductor in a magnetic field and compared the experimental data with detailed simulations performed by Fernando Peñaranda, a PhD student in the Madrid team,” says Valentini.
    “Mistaking this mimicking signal for a Majorana zero mode shows us how careful we have to be in our experiments and in our conclusions,” Valentini cautions. “While this may seem like a step back in the search for Majorana zero modes, it actually is a crucial step forward in understanding nanowires and their experimental signals. This finding shows that the cycle of discovery and critical examination among international peers is central to the advancement of scientific knowledge.”

  •

    Using AI to predict 3D printing processes

    Additive manufacturing has the potential to let users create parts or products on demand in manufacturing, automotive engineering, and even outer space. However, it’s a challenge to know in advance how a 3D-printed object will perform, now and in the future.
    Physical experiments — especially for metal additive manufacturing (AM) — are slow and costly. Even modeling these systems computationally is expensive and time-consuming.
    “The problem is multi-phase and involves gas, liquids, solids, and phase transitions between them,” said University of Illinois Ph.D. student Qiming Zhu. “Additive manufacturing also has a wide range of spatial and temporal scales. This has led to large gaps between the physics that happens on the small scale and the real product.”
    Zhu, Zeliang Liu (a software engineer at Apple), and Jinhui Yan (professor of Civil and Environmental Engineering at the University of Illinois) are trying to address these challenges using machine learning. They are using deep learning and neural networks to predict the outcomes of complex processes involved in additive manufacturing.
    “We want to establish the relationship between processing, structure, properties, and performance,” Zhu said.
    Current neural network models need large amounts of data for training. But in the additive manufacturing field, obtaining high-fidelity data is difficult, according to Zhu. To reduce the need for data, Zhu and Yan are pursuing ‘physics-informed neural networks,’ or PINNs.
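    The core idea behind a PINN is that the training loss penalizes not only mismatch with measured data, but also violation of the governing physical equations, so far less data is needed. The toy below is a deliberately tiny stand-in, not the team's model: a single trainable parameter replaces the neural network, and the decay equation and all values are invented for illustration.

```python
# Toy illustration of the physics-informed loss behind PINNs.
# The "network" is one parameter k in a candidate solution u(t) = exp(-k t),
# fit to the known physics du/dt = -K u with almost no data.
import math

K = 2.0                                      # known physics coefficient
data = [(0.0, 1.0)]                          # a single sparse measurement
collocation = [0.1 * i for i in range(11)]   # points where physics is enforced

def u(t, k):
    """Candidate solution with one trainable parameter k."""
    return math.exp(-k * t)

def du_dt(t, k, h=1e-5):
    """Central finite difference stands in for automatic differentiation."""
    return (u(t + h, k) - u(t - h, k)) / (2 * h)

def loss(k):
    # Data term: mismatch with the few measurements we have.
    data_term = sum((u(t, k) - y) ** 2 for t, y in data)
    # Physics term: residual of du/dt + K*u = 0 at the collocation points.
    phys_term = sum((du_dt(t, k) + K * u(t, k)) ** 2 for t in collocation)
    return data_term + phys_term

# Crude grid search stands in for gradient-based training.
k_best = min((0.01 * i for i in range(1, 500)), key=loss)
```

    Even with a single data point, the physics term pins the parameter to the value consistent with the governing equation, which is exactly the data-efficiency argument for PINNs.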

  •

    Novel microscopy method provides look into future of cell biology

    What if a microscope allowed us to explore the 3D microcosm of blood vessels, nerves, and cancer cells instantaneously in virtual reality? What if it could provide views from multiple directions in real time without physically moving the specimen and worked up to 100 times faster than current technology?
    UT Southwestern scientists collaborated with colleagues in England and Australia to build and test a novel optical device that converts commonly used microscopes into multiangle projection imaging systems. The invention, described in an article in today’s Nature Methods, could open new avenues in advanced microscopy, the researchers say.
    “It is a completely new technology, although the theoretical foundations for it can be found in old computer science literature,” says corresponding author Reto Fiolka, Ph.D. Both he and co-author Kevin Dean, Ph.D., are assistant professors of cell biology and in the Lyda Hill Department of Bioinformatics at UT Southwestern.
    “It is as if you are holding the biological specimen with your hand, rotating it, and inspecting it, which is an incredibly intuitive way to interact with a sample. By rapidly imaging the sample from two different perspectives, we can interactively visualize the sample in virtual reality on the fly,” says Dean, director of the UTSW Microscopy Innovation Laboratory, which collaborates with researchers across campus to develop custom instruments that leverage advances in light microscopy.
    Currently, acquiring 3D-image information from a microscope requires a data-intensive process, in which hundreds of 2D images of the specimen are assembled into a so-called image stack. To visualize the data, the image stack is then loaded into a graphics software program that performs computations to form two-dimensional projections from different viewing perspectives on a computer screen, the researchers explain.
    “Those two steps require a lot of time and may need a very powerful and expensive computer to interact with the data,” Fiolka says.
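    The conventional software route described above can be sketched with NumPy on synthetic data. The UTSW device forms these projections optically, before any stack is ever stored; the snippet shows only the standard computational approach it replaces, with an invented "specimen" volume.

```python
# Forming 2D projections of a 3D image stack (synthetic data).
import numpy as np

stack = np.zeros((64, 64, 64))        # a z-stack of 64 two-dimensional slices
stack[20:30, 10:50, 30:40] = 1.0      # a bright box standing in for a specimen

# Maximum-intensity projections along three orthogonal viewing directions.
top_view = stack.max(axis=0)          # project along z
front_view = stack.max(axis=1)        # project along y
side_view = stack.max(axis=2)         # project along x

# Intermediate viewing angles would require resampling the volume first,
# e.g. with scipy.ndimage.rotate, before projecting again.
```
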

  •

    New data science platform speeds up Python queries

    Researchers from Brown University and MIT have developed a new data science framework that allows users to process data with the programming language Python — without paying the “performance tax” normally associated with a user-friendly language.
    The new framework, called Tuplex, is able to process data queries written in Python up to 90 times faster than industry-standard data systems like Apache Spark or Dask. The research team unveiled the system in research presented at SIGMOD 2021, a premier data processing conference, and have made the software freely available to all.
    “Python is the primary programming language used by people doing data science,” said Malte Schwarzkopf, an assistant professor of computer science at Brown and one of the developers of Tuplex. “That makes a lot of sense. Python is widely taught in universities, and it’s an easy language to get started with. But when it comes to data science, there’s a huge performance tax associated with Python because platforms can’t process Python efficiently on the back end.”
    Platforms like Spark perform data analytics by distributing tasks across multiple processor cores or machines in a data center. That parallel processing lets users handle giant data sets that would overwhelm a single computer. Users interact with these platforms by writing queries that contain custom logic expressed as “user-defined functions,” or UDFs: for example, extracting the number of bedrooms from the text of a real estate listing in a query that searches all of the real estate listings in the U.S. and selects the ones with three bedrooms.
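    A UDF of the kind described might look like the sketch below. The listing strings and the regular expression are invented for illustration; on a real platform (Spark, Dask, or Tuplex) the same function would be applied across a distributed dataset rather than a local list.

```python
# Sketch of a bedroom-extraction UDF applied as a filter.
import re

def bedrooms(listing: str) -> int:
    """UDF: pull a bedroom count out of a listing's free text."""
    match = re.search(r"(\d+)\s*(?:bed(?:room)?s?|br)\b", listing, re.I)
    return int(match.group(1)) if match else 0

listings = [
    "Sunny 3 bedroom craftsman near the park",
    "Downtown loft, 1 br, great views",
    "Spacious 3BR ranch on quiet street",
]

# The query: keep only the three-bedroom listings.
three_bed = [l for l in listings if bedrooms(l) == 3]
```
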
    Because of its simplicity, Python is the language of choice for creating UDFs in the data science community. In fact, the Tuplex team cites a recent poll showing that 66% of data platform users utilize Python as their primary language. The problem is that analytics platforms have trouble dealing with those bits of Python code efficiently.
    Data platforms are written in high-level languages that are compiled before running. Compilers are programs that translate source code into machine code — sets of instructions that a computer processor can execute quickly. Python, however, is not compiled beforehand. Instead, computers interpret Python code line by line while the program runs, which can mean far slower performance.

  •

    How children integrate information

    Children learn a huge number of words in the early preschool years. A two-year-old might be able to say just a handful of words, while a five-year-old is quite likely to know many thousands. How do children achieve this marvelous feat? The question has occupied psychologists for over a century: in countless carefully designed experiments, researchers have titrated the information children use to learn new words. But how children integrate different types of information has remained unclear.
    “We know that children use a lot of different information sources in their social environment, including their own knowledge, to learn new words. But the picture that emerges from the existing research is that children have a bag of tricks that they can use,” says Manuel Bohn, a researcher at the Max Planck Institute for Evolutionary Anthropology.
    For example, if you show a child an object they already know — say a cup — as well as an object they have never seen before, the child will usually think that a word they never heard before belongs with the new object. Why? Children use information in the form of their existing knowledge of words (the thing you drink out of is called a “cup”) to infer that the object that doesn’t have a name goes with the name that doesn’t have an object. Other information comes from the social context: children remember past interactions with a speaker to find out what they are likely to talk about next.
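    The inference in the cup example has a simple algorithmic shape, often called mutual exclusivity. The sketch below is a toy rendering of that logic with an invented vocabulary, not a model from the study: a novel word is mapped onto whichever visible object has no known name.

```python
# Toy mutual-exclusivity inference: a new word goes with the unnamed object.
known_words = {"cup": "cup_object", "dog": "dog_object"}

def guess_referent(novel_word, visible_objects):
    """Map an unheard word onto the object with no known name."""
    named = set(known_words.values())
    unnamed = [obj for obj in visible_objects if obj not in named]
    # A child hearing "dax" among a cup and a strange gadget picks the gadget.
    return unnamed[0] if unnamed else None

referent = guess_referent("dax", ["cup_object", "gadget_object"])
```
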
    “But in the real world, children learn words in complex social settings in which more than just one type of information is available. They have to use their knowledge of words while interacting with a speaker. Word learning always requires integrating multiple, different information sources,” Bohn continues. An open question is how children combine different, sometimes even conflicting, sources of information.
    Predictions by a computer program
    In a new study, a team of researchers from the Max Planck Institute for Evolutionary Anthropology, MIT, and Stanford University takes on this issue. In a first step, they conducted a series of experiments to measure children’s sensitivity to different information sources. Next, they formulated a computational cognitive model which details the way that this information is integrated.
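    The integration step in such a model can be illustrated with a minimal probabilistic sketch. The numbers and candidate objects below are invented purely to show the combination mechanics; the study's actual model is more detailed.

```python
# Toy integration of two information sources: multiply per-candidate
# scores, then normalize into a belief over possible referents.
def combine(source_a, source_b):
    """Combine two scored hypotheses about what a novel word refers to."""
    combined = {obj: source_a[obj] * source_b[obj] for obj in source_a}
    total = sum(combined.values())
    return {obj: p / total for obj, p in combined.items()}

# Source 1: the speaker's past interests weakly favor the familiar toy.
social_cue = {"familiar_toy": 0.6, "novel_toy": 0.4}
# Source 2: mutual exclusivity strongly favors the unnamed novel toy.
word_knowledge = {"familiar_toy": 0.1, "novel_toy": 0.9}

belief = combine(social_cue, word_knowledge)
```

    Even when the two sources conflict, as here, multiplying and renormalizing yields a graded belief that weighs the stronger cue more heavily, which is the basic shape of information integration the study investigates.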