More stories

  • Quantum liquid becomes solid when heated

    Supersolids are a relatively new and exciting area of research. They exhibit both solid and superfluid properties simultaneously. In 2019, three research groups were able to demonstrate this state for the first time beyond doubt in ultracold quantum gases, among them the research group led by Francesca Ferlaino from the Department of Experimental Physics at the University of Innsbruck and the ÖAW Institute for Quantum Optics and Quantum Information (IQOQI) in Innsbruck.
    In 2021, Francesca Ferlaino’s team studied in detail the life cycle of supersolid states in a dipolar gas of dysprosium atoms. They observed something unexpected: “Our data suggested that an increase in temperature promotes the formation of supersolid structures,” recounts Claudia Politi of Francesca Ferlaino’s team. “This surprising behaviour was an important boost to theory, which had previously paid little attention to thermal fluctuations in this context.”
    The Innsbruck scientists joined forces with the Danish theoretical group led by Thomas Pohl to explore the effect of thermal fluctuations. Together they developed a theoretical model, published in Nature Communications, that explains the experimental results and supports the thesis that heating the quantum liquid can lead to the formation of a quantum crystal: as the temperature rises, these structures can form more easily.
    “With the new model, we now have a phase diagram for the first time that shows the formation of a supersolid state as a function of temperature,” Francesca Ferlaino is delighted to say. “The surprising behavior, which contradicts our everyday observation, arises from the anisotropic nature of the dipole-dipole interaction of the strongly magnetic atoms of dysprosium.”
    The research is an important step towards a better understanding of supersolid states of matter and was funded by the Austrian Science Fund FWF, the European Research Council ERC and the European Union, among others. More

  • Physicists discover transformable nano-scale electronic devices

    The nano-scale electronic parts in devices like smartphones are solid, static objects that, once designed and built, cannot transform into anything else. But University of California, Irvine physicists have reported the discovery of nano-scale devices that can transform into many different shapes and sizes even though they exist in solid states.
    It’s a finding that could fundamentally change the nature of electronic devices, as well as the way scientists research atomic-scale quantum materials. The study was recently published in Science Advances.
    “What we discovered is that for a particular set of materials, you can make nano-scale electronic devices that aren’t stuck together,” said Javier Sanchez-Yamagishi, an assistant professor of physics & astronomy whose lab performed the new research. “The parts can move, and so that allows us to modify the size and shape of a device after it’s been made.”
    The electronic devices are modifiable much like refrigerator door magnets — stuck on but can be reconfigured into any pattern you like.
    “The significance of this research is that it demonstrates a new property that can be utilized in these materials, one that allows fundamentally different types of device architectures to be realized, including mechanically reconfigurable parts of a circuit,” said Ian Sequeira, a Ph.D. student in Sanchez-Yamagishi’s lab.
    If it sounds like science fiction, said Sanchez-Yamagishi, that’s because until now scientists did not think such a thing was possible.

    Indeed, Sanchez-Yamagishi and his team, which also includes UCI Ph.D. student Andrew Barabas, weren’t even looking for what they ultimately discovered.
    “It was definitely not what we were initially setting out to do,” said Sanchez-Yamagishi. “We expected everything to be static, but what happened was we were in the middle of trying to measure it, and we accidentally bumped into the device, and we saw that it moved.”
    What they saw specifically was that tiny nano-scale gold wires could slide with very low friction on top of special crystals called “van der Waals materials.”
    Taking advantage of these slippery interfaces, they made electronic devices made of single-atom thick sheets of a substance called graphene attached to gold wires that can be transformed into a variety of different configurations on the fly.
    Because it conducts electricity so well, gold is a common part of electronic components. But exactly how the discovery could impact industries that use such devices is unclear.

    “The initial story is more about the basic science of it, although it is an idea which could one day have an effect on industry,” said Sanchez-Yamagishi. “This germinates the idea of it.”
    Meanwhile, the team expects their work could usher in a new era of quantum science research.
    “It could fundamentally change how people do research in this field,” Sanchez-Yamagishi said.
    “Researchers dream of having flexibility and control in their experiments, but there are a lot of restrictions when dealing with nanoscale materials,” he added. “Our results show that what was once thought to be fixed and static can be made flexible and dynamic.”
    Other UCI co-authors include Yuhui Yang, a senior undergraduate at UCI, and postdoctoral scholar Aaron Barajas-Aguilar. More

  • Team designs four-legged robotic system that can walk a balance beam

    Researchers in Carnegie Mellon University’s Robotics Institute (RI) have designed a system that makes an off-the-shelf quadruped robot nimble enough to walk a narrow balance beam — a feat that is likely the first of its kind.
    “This experiment was huge,” said Zachary Manchester, an assistant professor in the RI and head of the Robotic Exploration Lab. “I don’t think anyone has ever successfully done balance beam walking with a robot before.”
    By leveraging hardware often used to control satellites in space, Manchester and his team offset existing constraints in the quadruped’s design to improve its balancing capabilities.
    The standard elements of most modern quadruped robots include a torso and four legs that each end in a rounded foot, allowing the robot to traverse basic, flat surfaces and even climb stairs. Their design resembles a four-legged animal, but unlike cheetahs who can use their tails to control sharp turns or falling cats that adjust their orientation in mid-air with the help of their flexible spines, quadruped robots do not have such instinctive agility. As long as three of the robot’s feet remain in contact with the ground, it can avoid tipping over. But if only one or two feet are on the ground, the robot can’t easily correct for disturbances and has a much higher risk of falling. This lack of balance makes walking over rough terrain particularly difficult.
    “With current control methods, a quadruped robot’s body and legs are decoupled and don’t speak to one another to coordinate their movements,” Manchester said. “So how can we improve their balance?”
    The team’s solution employs a reaction wheel actuator (RWA) system that mounts to the back of a quadruped robot. With the help of a novel control technique, the RWA allows the robot to balance independent of the positions of its feet.

    RWAs are widely used in the aerospace industry to perform attitude control on satellites by manipulating the angular momentum of the spacecraft.
    “You basically have a big flywheel with a motor attached,” said Manchester, who worked on the project with RI graduate student Chi-Yen Lee and mechanical engineering graduate students Shuo Yang and Benjamin Boksor. “If you spin the heavy flywheel one way, it makes the satellite spin the other way. Now take that and put it on the body of a quadruped robot.”
    The team prototyped their approach by mounting two RWAs on a commercial Unitree A1 robot — one on the pitch axis and one on the roll axis — to provide control over the robot’s angular momentum. With the RWA, it doesn’t matter if the robot’s legs are in contact with the ground or not because the RWAs provide independent control of the body’s orientation.
    Manchester said it was easy to modify an existing control framework to account for the RWAs because the hardware doesn’t change the robot’s mass distribution, nor does it have the joint limitations of a tail or spine. Without needing to account for such constraints, the hardware can be modeled like a gyrostat (an idealized model of a spacecraft) and integrated into a standard model-predictive control algorithm.
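    The principle can be illustrated with a minimal one-axis sketch: the motor torque acts between the flywheel and the body, so the total angular momentum is unchanged and spinning the wheel one way rotates the body the other way. The inertias, gains and simple PD control law below are invented for illustration; they are not the team's actual controller or the Unitree A1's real parameters.

```python
# One-axis sketch of reaction-wheel attitude control: the motor torque acts
# between the wheel and the body, so total angular momentum is conserved while
# the body is steered back upright. All values are illustrative only.
I_BODY, I_WHEEL = 0.5, 0.01          # body and wheel inertias, kg*m^2 (assumed)
KP, KD = 8.0, 1.5                    # PD gains on body attitude (assumed)
DT, STEPS = 0.001, 2000              # 2 seconds of simulation

theta, omega_body, omega_wheel = 0.3, 0.0, 0.0   # start tilted by 0.3 rad

for _ in range(STEPS):
    tau_body = -(KP * theta + KD * omega_body)   # restoring torque on the body
    omega_body += (tau_body / I_BODY) * DT
    omega_wheel += (-tau_body / I_WHEEL) * DT    # wheel absorbs opposite momentum
    theta += omega_body * DT

momentum = I_BODY * omega_body + I_WHEEL * omega_wheel
print(f"final tilt: {theta:+.4f} rad, total angular momentum: {momentum:+.5f}")
```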
    The team tested their system with a series of successful experiments that demonstrated the robot’s enhanced ability to recover from sudden impacts. In simulation, they mimicked the classic falling-cat problem by dropping the robot upside down from nearly half a meter, with the RWAs enabling the robot to reorient itself mid-air and land on its feet. On hardware, they showed the robot’s ability to recover from disturbances — as well as the system’s balancing capability — with an experiment where the robot walked along a 6-centimeter-wide balance beam.
    Manchester predicts that quadruped robots will soon transition from being primarily research platforms in labs to widely available commercial products, much as drones did about 10 years ago. And with continued work to give quadruped robots stabilizing capabilities closer to those of the instinctively agile four-legged animals that inspired their design, they could one day be used in high-stakes scenarios like search and rescue.
    “Quadrupeds are the next big thing in robots,” Manchester said. “I think you’re going to see a lot more of them in the wild in the next few years.”
    Video: https://youtu.be/tH3oP2s3NOQ More

  • A neuromorphic visual sensor can recognize moving objects and predict their path

    A new bio-inspired sensor can recognise moving objects in a single frame from a video and successfully predict where they will move to. This smart sensor, described in a Nature Communications paper, will be a valuable tool in a range of fields, including dynamic vision sensing, automatic inspection, industrial process control, robotic guidance, and autonomous driving technology.
    Current motion detection systems need many components and complex algorithms doing frame-by-frame analyses, which makes them inefficient and energy-intensive. Inspired by the human visual system, researchers at Aalto University have developed a new neuromorphic vision technology that integrates sensing, memory, and processing in a single device that can detect motion and predict trajectories.
    At the core of their technology is an array of photomemristors, electrical devices that produce electric current in response to light. The current doesn’t immediately stop when the light is switched off. Instead, it decays gradually, which means that photomemristors can effectively ‘remember’ whether they’ve been exposed to light recently. As a result, a sensor made from an array of photomemristors doesn’t just record instantaneous information about a scene, like a camera does, but also includes a dynamic memory of the preceding instants.
    ‘The unique property of our technology is its ability to integrate a series of optical images in one frame,’ explains Hongwei Tan, the research fellow who led the study. ‘The information of each image is embedded in the following images as hidden information. In other words, the final frame in a video also has information about all the previous frames. That lets us detect motion earlier in the video by analysing only the final frame with a simple artificial neural network. The result is a compact and efficient sensing unit.’
    To demonstrate the technology, the researchers used videos showing the letters of a word one at a time. Because all the words ended with the letter ‘E’, the final frame of all the videos looked similar. Conventional vision sensors couldn’t tell whether the ‘E’ on the screen had appeared after the other letters in ‘APPLE’ or ‘GRAPE’. But the photomemristor array could use hidden information in the final frame to infer which letters had preceded it and predict what the word was with nearly 100% accuracy.
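    A minimal numerical sketch of this ‘fading memory’ idea, assuming a simple exponential decay of the photocurrent, shows why the final readout alone can distinguish two sequences that end identically. The decay constant and the toy 4x4 frames below are invented and are not the device parameters reported in the paper.

```python
# Sketch of the fading memory of a photomemristor array: each pixel's response
# decays slowly, so the state after the final frame still carries a trace of
# earlier frames. Decay constant and frames are invented for illustration.
import numpy as np

def integrate_frames(frames, decay=0.6):
    """Return the array state after the last frame, with exponential memory."""
    state = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        state = decay * state + frame    # old light fades, new light adds
    return state

letter_a = np.eye(4)                     # two different first frames...
letter_b = np.fliplr(np.eye(4))
final = np.ones((4, 4))                  # ...followed by the same final frame

state_1 = integrate_frames([letter_a, final])
state_2 = integrate_frames([letter_b, final])

# A conventional sensor reading only the final frame would see no difference;
# the integrated states still differ, so a small classifier can tell the two
# sequences apart from a single readout.
print("states identical?", np.allclose(state_1, state_2))        # False
print("difference energy:", float(np.abs(state_1 - state_2).sum()))
```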
    In another test, the team showed the sensor videos of a simulated person moving at three different speeds. Not only was the system able to recognize motion by analysing a single frame, but it also correctly predicted the next frames.
    Accurately detecting motion and predicting where an object will be are vital for self-driving technology and intelligent transport. Autonomous vehicles need accurate predictions of how cars, bikes, pedestrians, and other objects will move in order to guide their decisions. By adding a machine learning system to the photomemristor array, the researchers showed that their integrated system can predict future motion based on in-sensor processing of an all-informative frame.
    ‘Motion recognition and prediction by our compact in-sensor memory and computing solution provides new opportunities in autonomous robotics and human-machine interactions,’ says Professor Sebastiaan van Dijken. ‘The in-frame information that we attain in our system using photomemristors avoids redundant data flows, enabling energy-efficient decision-making in real time.’ More

  • Novel, highly sensitive biosensor set to transform wearable health monitoring

    Wireless wearable biosensors have been a game changer in personalized health monitoring and healthcare digitization because they can efficiently detect, record, and monitor medically significant biological signals. Chipless resonant antennae are highly promising components of wearable biosensors, as they are affordable and tractable. However, their practical applications are limited by low sensitivity (an inability to detect small biological signals) caused by the low quality (Q) factor of the system.
    To overcome this hurdle, researchers led by Professor Takeo Miyake from Waseda University, Professor Yin Sijie from Beijing Institute of Technology, and Taiki Takamatsu from Japan Aerospace Exploration Agency, have developed a wireless bioresonator using “parity-time (PT) symmetry” that can detect minute biological signals. Their work has been published in Advanced Materials Technologies.
    In this study, the researchers designed a bioresonator consisting of a magnetically coupled reader and sensor with high Q factor, and thus, increased sensitivity to biochemical changes. The reader and sensor both comprise an inductor (L) and capacitor (C) that are parallel-connected to a resistor (R). In the sensor, the resistor is a chemical sensor called a “chemiresistor” that converts biochemical signals into changes in resistance. The chemiresistor contains an enzymatic electrode with an immobilized enzyme. Minute biochemical changes at the enzymatic electrode (in response to changes in the levels of biomolecules such as blood sugar or lactate) are thus converted into electrical signals by the sensor, and then amplified at the reader.
    Explaining the technical concept behind their novel biosensor, Miyake says, “We modeled the characteristics of the PT-symmetric wireless sensing system by using an eigenvalue solution and input impedance, and experimentally demonstrated the sensitivity enhancement at/near the exceptional point by using parallel inductance-capacitance-resistance (LCR) resonators. The developed amplitude modulation-based PT-symmetric bioresonator can detect small biological signals that have been difficult to measure wirelessly until now. Moreover, our PT-symmetric system provides two types of readout modes: threshold-based switching and enhanced linear detection. Different readout modes can be used for different sensing ranges.”
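    The sensitivity enhancement near an exceptional point can be sketched with a generic two-mode coupled-resonator model: with balanced gain and loss, a small detuning of one resonator splits the eigenfrequencies roughly in proportion to the square root of the detuning rather than linearly, so tiny perturbations produce comparatively large, readable shifts. The parameters below are illustrative and are not the values of the device reported in the paper.

```python
# Generic coupled-mode sketch of exceptional-point sensing with balanced gain
# and loss. Parameter values are illustrative only.
import numpy as np

w0 = 1.0        # normalized resonance frequency
gamma = 0.05    # balanced gain (+i*gamma) and loss (-i*gamma)
kappa = gamma   # coupling tuned to the exceptional point

def splitting(eps):
    """Eigenfrequency splitting when the lossy resonator is detuned by eps."""
    H = np.array([[w0 + eps + 1j * gamma, kappa],
                  [kappa,                 w0 - 1j * gamma]])
    ev = np.linalg.eigvals(H)
    return abs(ev[0] - ev[1])

for eps in (1e-6, 1e-4, 1e-2):
    # A single conventional resonator would shift by about eps; near the
    # exceptional point the splitting grows like sqrt(eps), hence the boost.
    print(f"eps={eps:.0e}  splitting={splitting(eps):.4e}  "
          f"enhancement over eps: {splitting(eps) / eps:.1f}x")
```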
    The researchers tested the system (here containing a glucose-specific enzyme) on human tear fluids and found that it could detect glucose concentrations ranging from 0.1 to 0.6 mM. They also tested it with a lactate-specific enzyme and commercially available human skin and found that it could measure lactate levels in the range of 0.0 to 4.0 mM through human skin tissue, without any loss of sensitivity. This result further indicates that the biosensor can be used as an implantable device. Compared to a conventional chipless resonant antenna-based system, the PT-symmetric system achieved a 2000-fold higher sensitivity in linear detection and a 78% relative change in threshold-based detection.
    Sharing his vision for the future, Miyake concludes, “The present telemetry system is robust and tunable. It can enhance the sensitivity of sensors to small biological signals. We envision that this technology can be used for developing smart contact lenses to detect tear glucose and/or implantable medical devices to detect lactate for efficient monitoring of diabetes and blood poisoning.” More

  • Multi-compartment membranes for multicellular robots: Everybody needs some body

    The typical image of a robot is one composed of motors and circuits, encased in metal. Yet the field of molecular robotics, which is being spearheaded in Japan, is beginning to change that.
    Much like how complex living organisms are formed, molecular robots derive form and functionality from assembled molecules. Such robots could have important applications, such as being used to treat and diagnose diseases in vivo.
    The first challenge in building a molecular robot is the same as the most basic need of any organism: the body, which holds everything together. But manufacturing complex structures, especially at the microscopic level, has proven to be an engineering nightmare, and many limitations on what is possible currently exist.
    To address this problem, a research team at Tohoku University has developed a simple method for creating molecular robots from artificial, multicellular-like bodies by using molecules which can organize themselves into the desired shape.
    The team, including Associate Professor Shin-ichiro Nomura and postdoctoral researcher Richard Archer from the Department of Robotics at the Graduate School of Engineering, recently reported their breakthrough in the American Chemical Society’s publication, Langmuir.
    “Our work demonstrated a simple, self-assembly technique which utilizes phospholipids and synthetic surfactants coated onto a hydrophobic silicone sponge,” said Archer.
    When Nomura and his colleagues introduced water into the lipid-coated sponge, the hydrophilic and hydrophobic forces enabled the lipids and surfactants to assemble themselves, thereby allowing water to soak in. The sponge was then placed into oil, spontaneously forming micron-sized, stabilized aqueous droplets as the water was expelled from the solid support. When pipetted onto the surface of water, these droplets quickly assembled into larger planar macroscopic structures, like bricks coming together to form a wall.
    “Our developed technique can easily build centimeter-size structures from the assembly of micron-sized compartments and is capable of being done with more than one droplet type,” adds Archer. “By using different sponges with water containing different solutes, and forming different droplet types, the droplets can combine to form heterogeneous structures. This modular approach to assembly unleashes near endless possibilities.”
    The team could also turn these bodies into controllable devices with induced motion. To do so, they introduced magnetic nanoparticles into the hydrophobic walls of the multi-compartment structure. Archer says this multi-compartment approach to robot design will allow flexible modular designs with multiple functionalities and could redefine what we imagine robots to be. “Future work here will move us closer to a new generation of robots which are assembled by molecules rather than forged in steel and use functional chemicals rather than silicon chips and motors.” More

  • Software to untangle genetic factors linked to shared characteristics among different species

    Aston University has worked with international partners to develop a software package to help scientists answer key questions about genetic factors associated with shared characteristics among different species.
    Called CALANGO (comparative analysis with annotation-based genomic components), it has the potential to help geneticists investigate vital issues such as antibacterial resistance and improvement of agricultural crops.
    This work, “CALANGO: a phylogeny-aware comparative genomics tool for discovering quantitative genotype-phenotype associations across species,” has been published in the journal Patterns. It is the result of a four-year collaboration between Aston University, the Federal University of Minas Gerais in Brazil and other partners in Brazil, Norway and the US.
    Similarities between species may arise either from shared ancestry (homology) or from shared evolutionary pressures (convergent evolution). For example, ravens, pigeons and bats can all fly, but the first two are birds whereas bats are mammals.
    This means that the biology of flight in ravens and pigeons is likely to share genetic aspects due to their common ancestry. Both species are able to fly nowadays because their last common ancestor — an ancestral bird — was also a flying organism.
    In contrast, bats have the ability to fly via potentially different genes than the ones in birds, since the last common ancestor of birds and mammals was not a flying animal.

    Untangling the genetic components shared due to common ancestry from the ones shared due to common evolutionary pressures requires sophisticated statistical models that take common ancestry into account.
    So far, this has been an obstacle for scientists who want to understand the emergence of complex traits across different species, mainly due to the lack of proper frameworks to investigate these associations.
    The new software has been designed to effectively incorporate vast amounts of genomic, evolutionary and functional annotation data to explore the genetic mechanisms that underlie similar characteristics between different species sharing common ancestors.
    Although the statistical models used in the tool are not new, it is the first time they have been combined to extract novel biological insights from genomic data.
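    One classic way to take common ancestry into account is phylogenetic generalized least squares, in which the residual covariance between species is set by their shared branch lengths. The toy tree, annotation counts and trait values below are invented, and the snippet illustrates the general idea only; it is not CALANGO's interface and not necessarily its exact formulation.

```python
# Sketch of a phylogeny-aware genotype-phenotype association: generalized
# least squares with a phylogenetic covariance matrix. Toy data only.
import numpy as np

# Covariance from shared branch lengths of a toy 4-species tree ((A,B),(C,D)):
# close relatives covary strongly, the two clades are independent.
C = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])

x = np.array([2.0, 3.0, 8.0, 9.0])    # e.g. copies of an annotation term per genome
y = np.array([1.1, 1.3, 3.9, 4.2])    # e.g. a quantitative phenotype
X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares ignores ancestry and treats species as independent.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Phylogenetic GLS: beta = (X' C^-1 X)^-1 X' C^-1 y
Ci = np.linalg.inv(C)
beta_pgls = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

print("OLS  slope:", round(float(beta_ols[1]), 3))
print("PGLS slope:", round(float(beta_pgls[1]), 3))
```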
    The technique has the potential to be applied to many different areas of research, allowing scientists to analyse massive amounts of open-source genetic data belonging to thousands of organisms in more depth.
    Dr Felipe Campelo from the Department of Computer Science in the College of Engineering and Physical Sciences at Aston University, said: “There are many exciting examples of how this tool can be applied to solve major problems facing us today. These include exploring the co-evolution of bacteria and bacteriophages and unveiling factors associated with plant size, with direct implications for both agriculture and ecology.”
    “Further potential applications include supporting the investigation of bacterial resistance to antibiotics, and of the yield of plant and animal species of economic importance.”
    The corresponding author of the study, Dr Francisco Pereira Lobo from the Department of Genetics, Ecology and Evolution at the Federal University of Minas Gerais in Brazil, said: “Most genetic and phenotypic variations occur between different species, rather than within them. Our newly developed tool allows the generation of testable hypotheses about genotype-phenotype associations across multiple species that enable the prioritisation of targets for later experimental characterization.”
    Further information: https://labpackages.github.io/CALANGO/ More

  • Scientists develop new way to measure wind

    Wind speed and direction provide clues for forecasting weather patterns. In fact, wind influences cloud formation by bringing water vapor together. Atmospheric scientists have now found a novel way of measuring wind — by developing an algorithm that uses data from water vapor movements. This could help predict extreme events like hurricanes and storms.
    A study published by University of Arizona researchers in the journal Geophysical Research Letters provides, for the first time, data on the vertical distribution of horizontal winds over the tropics and midlatitudes. The researchers got the water vapor movement data by using two operational satellites of the National Oceanic and Atmospheric Administration, or NOAA, the federal agency for weather forecasting.
    Wind brings everything else in the atmosphere together, including clouds, aerosols, water vapor, precipitation and radiation, said Xubin Zeng, co-author of the study and the director of the Climate Dynamics and Hydrometeorology Collaborative at UArizona. But it has remained somewhat elusive.
    “We never knew the wind very well. I mean, that’s the last frontier. That’s why I’m excited,” Zeng said.
    Thanks to more advanced algorithms, Zeng said, the researchers were able to do the estimation of horizontal winds not just at one altitude, but at different altitudes at the same location.
    “This was not possible a decade ago,” Zeng said.

    Wind measurement is typically done in three different ways, Zeng explained. The first is through the use of a radiosonde, an instrument package suspended below a 6-foot-wide balloon. Sensors on the radiosonde measure wind speed and direction, and take measurements of atmospheric pressure, temperature and relative humidity. One downside of radiosonde balloons, Zeng said, is the cost: each launch can run around $400 to $500, and some regions, such as Africa and the Amazon rainforest, have few radiosonde stations. The other limitation is that radiosondes are not available over oceans, Zeng said.
    Another way to measure wind is using cloud top, which is the height at which the upper visible part of the cloud is located, Zeng said. By tracking cloud top movement using geostationary satellite data, weather experts monitor wind speed and direction at one height. But Zeng said cloud tops exist most of the time below 2 miles or above 4 1/2 miles above Earth’s surface, depending on whether the clouds are low or high. This means wind information is usually not available in the middle, between 2 and 4 1/2 miles.
    Lidar, which stands for light detection and ranging, is a method that precisely measures wind movements at different elevations, and it provides very good data, Zeng said. But with lidar, measurements can be acquired only in one vertical “curtain,” with measured wind typically in the east-west direction, he added.
    Nowadays, Zeng said, to study topics like air quality and volcano ash dispersion, which are directly influenced by wind, experts use weather forecasting models to ingest measurements from different sources rather than using direct measurements of wind. But model outputs are not good enough when there is rainfall, Zeng said.
    In their study, Zeng and his team avoided using data from models. They instead used data from the movement of water vapor recorded by the two NOAA satellites. The satellites moved in the same direction separated by a 50-minute interval, and they detected the water vapor movement through infrared radiation.
    While our eyes cannot detect the minute movements of water vapor in the atmosphere, lead study author Amir Ouyed, a member of Zeng’s research group, used machine-learning algorithms with improved image processing to track the water vapor.
    “For decades, people were saying, ‘You have to move the cloud top or water vapors enough so that you can see the difference of the pattern.’ But now, we don’t need to do that,” Zeng said.
    “The resolution of the data is coarse, with a pixel size of 100 kilometers. It’s a demonstration of the feasibility for our future satellite mission we are pursuing where we hope to provide the 10-kilometer resolution,” Zeng said.
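    The underlying displacement-to-wind calculation can be sketched with a toy tracker: cross-correlate two water-vapor images taken a known time apart, read off the pixel shift, and convert it to a speed using the pixel size. The synthetic field and the plain phase-correlation step below are illustrative stand-ins for the machine-learning tracking used in the study; only the 100-kilometer pixel size and the 50-minute separation are taken from the description above.

```python
# Toy wind retrieval: track a water-vapor pattern between two frames taken
# DT_MIN apart and convert the pixel displacement into a speed. The field and
# the displacement are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

PIXEL_KM, DT_MIN = 100.0, 50.0                             # resolution and satellite gap

rng = np.random.default_rng(0)
frame1 = gaussian_filter(rng.random((64, 64)), sigma=3)    # smooth synthetic vapor field
true_shift = (2, -1)                                        # rows, columns (pixels)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))           # same field, advected

# Phase correlation: the peak of the normalized cross-power spectrum sits at
# the displacement between the two frames.
cross = np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy  # wrap to signed shifts
dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx

speed_kmh = np.hypot(dy, dx) * PIXEL_KM / (DT_MIN / 60.0)
print(f"recovered shift: ({dy}, {dx}) pixels -> ~{speed_kmh:.0f} km/h (toy numbers)")
```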
    Zeng and his collaborators at other institutions are planning to pursue a new satellite wind mission in which they envision combining water vapor movement data and measurements from wind lidar to provide better wind measurements overall. More