More stories

  • 50 years ago, researchers discovered a leak in Earth’s oceans

    Oceans may be shrinking — Science News, March 10, 1973

    The oceans of the world may be gradually shrinking, leaking slowly away into the Earth’s mantle…. Although the oceans are constantly being slowly augmented by water carried up from Earth’s interior by volcanic activity … some process such as sea-floor spreading seems to be letting the water seep away more rapidly than it is replaced.

    Update

    Scientists traced the ocean’s leak to subduction zones, areas where tectonic plates collide and the heavier of the two sinks into the mantle. It’s still unclear how much water has cycled between the deep ocean and mantle through the ages. A 2019 analysis suggests that sea levels have dropped by an average of up to 130 meters over the last 230 million years, in part due to Pangea’s breakup creating new subduction zones. Meanwhile, molten rock that bubbles up from the mantle as continents drift apart may “rain” water back into the ocean, scientists reported in 2022. But since Earth’s mantle can hold more water as it cools (SN: 6/13/14), the oceans’ mass might shrink by 20 percent every billion years.
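    The “20 percent every billion years” figure implies a simple compound-loss calculation. A minimal sketch of that arithmetic follows; the present-day ocean mass and the assumption of a constant fractional loss rate are ours, not the article’s:

      # Hypothetical illustration: treat the quoted loss as a constant fraction
      # per billion years, i.e. mass(t) = mass_0 * 0.8 ** (t in Gyr).
      OCEAN_MASS_KG = 1.4e21      # approximate present-day ocean mass (assumed)
      LOSS_PER_GYR = 0.20         # fractional loss per billion years (from the article)

      for t_gyr in (1, 2, 4):
          remaining = OCEAN_MASS_KG * (1 - LOSS_PER_GYR) ** t_gyr
          print(f"after {t_gyr} Gyr: {remaining:.2e} kg "
                f"({remaining / OCEAN_MASS_KG:.0%} of today's oceans)")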

  • Breakthrough in the understanding of quantum turbulence

    Researchers have shown how energy disappears in quantum turbulence, paving the way for a better understanding of turbulence in scales ranging from the microscopic to the planetary.
    Dr Samuli Autti from Lancaster University is one of the authors of a new study of quantum wave turbulence together with researchers at Aalto University.
    The team’s findings, published in Nature Physics, demonstrate a new understanding of how wave-like motion transfers energy from macroscopic to microscopic length scales, and their results confirm a theoretical prediction about how the energy is dissipated at small scales.
    Dr Autti said: “This discovery will become a cornerstone of the physics of large quantum systems.”
    Turbulence at large scales — such as the turbulence around moving aeroplanes or ships — is difficult to simulate. At small scales, quantum turbulence is different from classical turbulence because the turbulent flow of a quantum fluid is confined around line-like flow centres called vortices and can only take certain, quantised values.
    This granularity makes quantum turbulence significantly easier to capture in a theory, and it is generally believed that mastering quantum turbulence will help physicists understand classical turbulence too.

    In the future, an improved understanding of turbulence beginning at the quantum level could allow for improved engineering in domains where the flow and behaviour of fluids and gases such as water and air are key questions.
    Lead author Dr Jere Mäkinen from Aalto University said: “Our research with the basic building blocks of turbulence might help point the way to a better understanding of interactions between different length scales in turbulence.
    “Understanding that in classical fluids will help us do things like improve the aerodynamics of vehicles, predict the weather with better accuracy, or control water flow in pipes. There is a huge number of potential real-world uses for understanding macroscopic turbulence.”
    Dr Autti said quantum turbulence was a challenging problem for scientists.
    “In experiments, the formation of quantum turbulence around a single vortex has remained elusive for decades despite an entire field of physicists working on quantum turbulence trying to find it. This includes people working on superfluids and quantum gases such as atomic Bose-Einstein Condensates (BEC). The theorised mechanism behind this process is known as the Kelvin wave cascade.
    “In the present manuscript we show that this mechanism exists and works as theoretically anticipated. This discovery will become a cornerstone of the physics of large quantum systems.”
    The team of researchers, led by Senior Scientist Vladimir Eltsov, studied turbulence in the Helium-3 isotope in a unique, rotating ultra-low temperature refrigerator in the Low Temperature Laboratory at Aalto. They found that at microscopic scales so-called Kelvin waves act on individual vortices by continually pushing energy to smaller and smaller scales — ultimately leading to the scale at which dissipation of energy takes place.
    Dr Jere Mäkinen from Aalto University said: “The question of how energy disappears from quantized vortices at ultra-low temperatures has been crucial in the study of quantum turbulence. Our experimental set-up is the first time that the theoretical model of Kelvin waves transferring energy to the dissipative length scales has been demonstrated in the real world.”
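    The cascade picture described above, in which energy fed in at large scales is handed down through ever-smaller Kelvin waves and removed only by dissipation at the smallest scales, can be illustrated with a toy shell model. The sketch below is purely illustrative and is not the authors’ model; the shell spacing, transfer rule and dissipation coefficient are assumed values chosen for readability:

      import numpy as np

      # Toy cascade: energy E[i] lives on logarithmically spaced scales k[i].
      # Each step, a fraction of the energy on shell i is handed to shell i+1,
      # and a viscous-like term nu * k**2 drains it, mostly at the smallest scales.
      n_shells = 12
      k = 2.0 ** np.arange(n_shells)          # wavenumber shells k0, 2*k0, 4*k0, ...
      E = np.zeros(n_shells)
      nu = 1e-4                               # hypothetical dissipation coefficient
      inject, transfer, dt = 1.0, 0.5, 0.01   # hypothetical injection/transfer rates

      for _ in range(5000):
          E[0] += inject * dt                 # stir the largest scale
          flux = transfer * E[:-1] * dt       # hand energy down one shell
          E[:-1] -= flux
          E[1:] += flux
          E /= 1.0 + nu * k**2 * dt           # implicit dissipation step (stable)

      print("steady-state shell energies:", np.round(E, 3))

    The point of the toy model is only that energy moves to progressively smaller scales until the k-squared dissipation term finally removes it, which is the qualitative behaviour the experiment confirms for Kelvin waves on quantized vortices.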
    The team’s next challenge is to manipulate a single quantized vortex using nano-scale devices submerged in superfluids.

  • Modelling superfast processes in organic solar cell material

    In organic solar cells, carbon-based polymers convert light into charges that are passed to an acceptor. This type of material has great potential, but to unlock this, a better understanding is needed of the way in which charges are produced and transported along the polymers. Scientists from the University of Groningen have now calculated how this happens by combining molecular dynamics simulations with quantum calculations and have provided theoretical insights to interpret experimental data. The results were published on 15 March in the Journal of Physical Chemistry C.
    Organic solar cells are thinner than classic silicon-based cells and they are flexible and probably easier to manufacture. To improve their efficiency, it is important to understand how charges travel through the polymer film. ‘These films are made up of an electron donor and an electron acceptor,’ explains Elisa Palacino-González, a postdoctoral researcher in the Theory of Condensed Matter group at the Zernike Institute for Advanced Materials, University of Groningen (the Netherlands). ‘The charges are delocalized along the entangled polymer chains and transferred from donor to acceptor on a sub-100 femtosecond timescale. So, we need theoretical studies and simulations to understand this process.’
    Charge transfer
    The system that Palacino-González studied is made up of the plastic semiconductor P3HT as the donor and PCBM, a fullerene derivative with a C60 ‘buckyball’, as the acceptor. ‘We wanted to know how charges are conducted through the material to understand how this material captures and transports energy. If we understand this, it may be possible to control it.’ Experimental studies of the material provide some information, but only on bulk processes. ‘Therefore, we combined molecular dynamics simulations to determine the motion of the molecules in the material with quantum chemistry calculations to atomistically model the donor polymer, using time-dependent density functional theory.’
    These theoretical studies were carried out using a donor polymer that was made up of twelve monomers. ‘We focused mainly on the donor to study how the excitations in the material occur.’ The molecular dynamics simulations show the movement in the ground state due to thermal effects. Palacino-González calculated this for a period of 12.5 picoseconds, which sufficed to study the femtosecond charge transfer.
    Experiments
    ‘And the next step was to superimpose the quantum world onto these molecules,’ continues Palacino-González. To do this, she started with dimers. ‘Two monomers next to each other in the polymer chain will interact, they ‘talk’ to each other. This causes a split in the energy levels of the duo,’ Palacino-González explains. She created a ‘fingerprint’ of the dimer’s energy in the shape of a Hamiltonian, a matrix that contains all the information about a molecular system. ‘When two monomers are aligned in a parallel fashion, the two are coupled and talk to each other. But when they are at 90-degree angles, the interaction is minimal.’
    Such an angle forms a kink in the molecule, which hampers energy transfer along the polymer chain. ‘A statistical analysis of the simulated material, made up of 845 polymers, shows that around half of them are perfectly aligned, while the other half have mostly one or two kinks,’ says Palacino-González. From dimers, she calculated the Hamiltonian of 12-mers (made up of 6 dimers). Her calculations included a varying number of kinks in the 12-mer donor polymers. ‘These studies show the energy distribution along the polymers and provide us with a realistic model to characterize the effect of the environment created by the materials on the spectral signals of the acceptor polymer blends, which is directly comparable with current experiments on these materials.’
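    To make the level-splitting and kink picture concrete, here is a minimal sketch of a 12-site nearest-neighbour Hamiltonian in which a kink suppresses the coupling between two monomers. It is not the paper’s TD-DFT-parameterized model, and the site energy and coupling values are hypothetical:

      import numpy as np

      def polymer_hamiltonian(n_sites=12, e_site=2.0, j_parallel=0.1,
                              j_kink=0.01, kinks=()):
          """Nearest-neighbour exciton-style Hamiltonian (hypothetical values, eV).

          kinks lists the bonds i where monomers i and i+1 sit at ~90 degrees
          and therefore barely couple."""
          H = np.diag(np.full(n_sites, e_site))
          for i in range(n_sites - 1):
              j = j_kink if i in kinks else j_parallel
              H[i, i + 1] = H[i + 1, i] = j
          return H

      for kinks in [(), (5,), (3, 8)]:
          levels = np.linalg.eigvalsh(polymer_hamiltonian(kinks=kinks))
          print(f"kinks at {kinks or 'none'}: lowest levels {np.round(levels[:3], 3)} eV")

    Diagonalizing such a matrix shows the splitting produced by coupled, well-aligned monomers and how a kink partitions the chain into weakly connected segments, which is the qualitative effect described above.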
    Realistic description
    Although the model is limited, since it only allows monomers to interact with their direct neighbour, the results provide important insights into experimental results. ‘Our calculations are from first principles and this is the first time that such an analysis, including the realistic description of the blend environment, was made for this material. This means that we can now help to explain the spectra generated from experimental studies with P3HT/PCBM mixtures. For example, we can show how size distribution changes the spectra that are generated by laser light excitation,’ says Palacino-González. ‘We are now able to look at the ultrafast charge transfer process, from donor to acceptor. This will inspire theoretical studies on organic photovoltaics and help experimentalists to understand their results.’

  • Major advance in super-resolution fluorescence microscopy

    Scientists led by Nobel Laureate Stefan Hell at the Max Planck Institute for Medical Research in Heidelberg have developed a super-resolution microscope with a spatio-temporal precision of one nanometer per millisecond. An improved version of their recently introduced MINFLUX super-resolution microscopy allowed tiny movements of single proteins to be observed at an unprecedented level of detail: the stepping motion of the motor protein kinesin-1 as it walks along microtubules while consuming ATP. The work highlights the power of MINFLUX as a revolutionary new tool for observing nanometer-sized conformational changes in proteins.
    Unraveling the inner workings of a cell requires knowledge of the biochemistry of individual proteins. Measuring tiny changes in their position and shape is the central challenge here. Fluorescence microscopy, in particular super-resolution microscopy (i.e. nanoscopy), has become indispensable in this emerging field. MINFLUX, the recently introduced fluorescence nanoscopy system, has already attained a spatial resolution of one to a few nanometers: the size of small organic molecules. But taking our understanding of molecular cell physiology to the next level requires observations at even higher spatio-temporal resolution.
    When Stefan Hell’s group first presented MINFLUX in 2016, it had been used to track fluorescently labeled proteins in cells. However, these movements were random, and the tracking had precisions of the order of tens of nanometers. Their study is the first to apply the resolving power of MINFLUX to conformational changes of proteins, specifically the motor protein kinesin-1. To do this, the researchers at the Max Planck Institute for Medical Research developed a new MINFLUX version for tracking single fluorescent molecules.
    All established methods for measuring protein dynamics have severe limitations, hampering their ability to address the critically important (sub)nanometer / (sub)millisecond range. Some provide a high spatial resolution, down to a few nanometers, but cannot track changes fast enough. Others have a high temporal resolution but require labeling with beads that are 2 to 3 orders of magnitude larger than the protein being studied. Since the functioning of the protein is likely to be compromised by a bead of this size, studies using beads leave open questions.
    Fluorescence from a single molecule
    MINFLUX, however, requires only a standard 1-nm sized fluorescence molecule as a label attached to the protein, and therefore can provide both the resolution and the minimal invasiveness that are needed in studying native protein dynamics. “One challenge lies in building a MINFLUX microscope that works close to the theoretical limit and is shielded against environmental noise,” says Otto Wolff, PhD student in the group. “Designing probes that do not affect the protein function, but still reveal the biological mechanism, is another,” adds his colleague Lukas Scheiderer.
    The MINFLUX microscope which the researchers now introduce can record protein movements with a spatiotemporal precision of up to 1.7 nanometers per millisecond. It requires the detection of only about 20 photons emitted by the fluorescent molecule. “I think we are opening a new chapter in the study of the dynamics of individual proteins and how they change shape during their functioning,” says Stefan Hell. “The combination of high spatial and temporal resolution provided by MINFLUX will allow researchers to study biomolecules as never before.”
    Resolving the stepping motion of kinesin-1 with ATP under physiological conditions
    Kinesin-1 is a key player in transporting cargo throughout our cells, and mutations of the protein are at the heart of several diseases. Kinesin-1 actually ‘walks’ along filaments (the microtubules) that span our cells like a network of streets. One can imagine the motion as literally ‘stepping’, since the protein has two ‘heads’ that alternately change their location on the microtubule. This movement occurs usually along one of the 13 protofilaments forming the microtubule, and is fueled by splitting of the cell’s principal energy supplier ATP (adenosine triphosphate).
    Using only a single fluorophore for labeling the kinesin-1, the scientists recorded the regular 16 nm steps of individual heads as well as 8 nm substeps, with nanometer/millisecond spatiotemporal resolution. Their results proved that ATP is taken up while a single head is bound to the microtubule, but that ATP hydrolysis occurs when both heads are bound. It also revealed that the stepping involves a rotation of the protein ‘stalk’, the part of the kinesin molecule that holds the cargo. The spatiotemporal resolution of MINFLUX also revealed a rotation of the head in the initial phase of each step. Significantly, these findings were made using physiological concentrations of ATP, which was hitherto not possible with tiny fluorescence labels.
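    To illustrate what such a single-fluorophore trace looks like, the sketch below simulates a one-dimensional position record of one kinesin head taking 16 nm steps with roughly nanometre localization noise and recovers the step size with a simple median-filter analysis. The data are simulated, not the published measurements, and the dwell time, noise level and thresholds are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated head position: long dwells interrupted by forward 16 nm hops.
      n_steps, dwell_ms, dt_ms = 12, 40, 1.0                 # hypothetical dwell time
      true_pos = np.repeat(np.arange(n_steps) * 16.0, int(dwell_ms / dt_ms))
      trace = true_pos + rng.normal(scale=1.7, size=true_pos.size)  # ~1.7 nm noise

      # Crude step detection: median-filter the trace, then look for large jumps.
      window, pad = 9, 4
      padded = np.pad(trace, pad, mode="edge")
      smooth = np.array([np.median(padded[i:i + window]) for i in range(trace.size)])
      step_idx = np.where(np.diff(smooth) > 8.0)[0]          # threshold between 8 and 16 nm

      # Estimate each step size from the plateaus on either side of the jump.
      sizes, last = [], -10
      for i in step_idx:
          if i - last > 5:
              before = np.median(trace[max(0, i - 15):i])
              after = np.median(trace[i + 5:i + 20])
              sizes.append(after - before)
          last = i
      print("estimated step sizes (nm):", np.round(sizes, 1))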
    Future potential in exploring protein dynamics
    “I’m excited to see where MINFLUX will take us. It adds another dimension to the study of how proteins work. This can help us to understand the mechanisms behind many diseases and ultimately contribute to the development of therapies,” adds Jessica Matthias, a postdoctoral scientist formerly in Hell’s group who is now exploring the applications of MINFLUX to a variety of biological questions.

  • Scientists develop energy-saving, tunable meta-devices for high-precision, secure 6G communications

    The future of wireless communications is set to take a giant leap with the advent of sixth-generation (6G) wireless technology. A research team at City University of Hong Kong (CityU) invented a groundbreaking tunable terahertz (THz) meta-device that can control the radiation direction and coverage area of THz beams. By rotating its metasurface, the device can promptly direct the 6G signal only to a designated recipient, minimizing power leakage and enhancing privacy. It is expected to provide a highly adjustable, directional and secure means for future 6G communications systems.
    The potential of THz band technology is unlimited, as it has abundant spectrum resources to support 100 Gbps (gigabit per second)- and even Tbps (terabit per second)-level ultrahigh-speed data rate for wireless communications, which is hundreds to thousands of times faster than the 5G transmission data rate. However, conventional THz systems use bulky, heavy dielectric lenses and reflectors, which can guide waves only to a fixed transmitter or detector, or transmit them to a single receiver located at a fixed position or covering a limited area. This hinders the development of future 6G applications, which require precise positioning and concentrated signal strength.
    Existing bulky systems hinder 6G applications
    With the joint effort of two research teams at CityU, led by Professor Tsai Din-Ping, Chair Professor in the Department of Electrical Engineering, and Professor Chan Chi-hou, Acting Provost and Director of the State Key Laboratory of Terahertz and Millimeter Waves (SKLTMW), a novel tunable meta-device that can fully control the THz beam’s propagation direction and coverage area was recently developed to overcome these challenges.
    “The advent of a tunable THz meta-device presents exciting prospects for 6G communications systems,” said Professor Tsai, who is an expert in the field of metasurfaces and photonics. “Our meta-device allows for signal delivery to specific users or detectors and has the flexibility to adjust the propagating direction, as needed.”
    “Our findings offer a range of benefits for advanced THz communications systems, including security, flexibility, high directivity and signal concentration,” added Professor Chan, who specializes in terahertz technology research.

    Rotary metasurface with thousands of micro-antennas
    The meta-device consists of two or three rotary metasurfaces (artificial, thin-sheet material with sub-wavelength thickness), which work as efficient projectors to steer the focal spot of THz beams on a two-dimensional plane or in a three-dimensional space. With a diameter of 30 mm, each metasurface has about 11,000 micro-antennas, which are just 0.25 mm × 0.25 mm in size and different from each other. “The secret to the success of the meta-device lies in the meticulous calculation and design of each micro-antenna,” said Professor Tsai. By simply rotating the metasurfaces without additional space requirements, the THz beam focus can be adjusted and directed to the specified X, Y and Z coordinates of the destination accordingly.
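    The press release does not give the design equations, but the focusing behaviour of such a metasurface can be sketched with the standard hyperbolic lens phase profile. In the sketch below, the 30 mm aperture and the 0.25 mm antenna pitch come from the article, while the 0.3 THz design frequency, the 50 mm focal length and the paraxial steering term are assumptions:

      import numpy as np

      C = 3e8                          # speed of light, m/s
      freq = 0.3e12                    # assumed design frequency: 0.3 THz
      lam = C / freq                   # wavelength = 1 mm
      focal = 50e-3                    # assumed focal length: 50 mm

      pitch = 0.25e-3                  # micro-antenna pitch from the article
      radius = 15e-3                   # 30 mm diameter aperture
      coords = np.arange(-radius, radius + pitch, pitch)
      x, y = np.meshgrid(coords, coords)

      # Hyperbolic phase profile that focuses a normally incident plane wave
      # to an on-axis spot a distance `focal` behind the metasurface.
      phase_focus = -2 * np.pi / lam * (np.sqrt(x**2 + y**2 + focal**2) - focal)

      # Steering the focus to an off-axis point (x0, y0) adds, to a good
      # paraxial approximation, a linear phase gradient on top of the profile.
      x0, y0 = 5e-3, 0.0               # hypothetical target offset: 5 mm in x
      phase_steer = -2 * np.pi / lam * (x * x0 + y * y0) / focal

      required = np.mod(phase_focus + phase_steer, 2 * np.pi)
      inside = x**2 + y**2 <= radius**2
      print(f"antennas inside the 30 mm aperture: {inside.sum()}")
      print(f"phase to encode per antenna: 0 to {required.max():.2f} rad")

    Evaluating the profile on a 0.25 mm grid inside a 30 mm aperture gives roughly 11,000 antenna positions, consistent with the count quoted above; the rotary doublet and triplet devices described here achieve tunability by mechanically rotating fixed metasurfaces rather than by reprogramming individual antennas.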
    With the highly precise and advanced equipment in the SKLTMW, the research team conducted experiments and verified that the two kinds of varifocal meta-devices they developed — doublet and triplet meta-devices — can project the focusing spot of the THz wave into an arbitrary spot in a 2D plane and a 3D space, respectively, with high precision.
    This innovative design has demonstrated the capability of a meta-device to direct a 6G signal towards a specific location in two- and three-dimensional space.
    Since only the user or detector in a specific spot can receive the signal, and the highly concentrated signal can be flexibly switched to other users or detectors without wasting power on nearby receivers or impairing privacy, the meta-device can increase directivity, security and flexibility in future 6G communications with lower energy consumption.

    Easy to scale up production at low cost
    The metasurfaces are fabricated with high-temperature resin and a 3D printing method developed by the team. They are lightweight and small and can be easily produced in large scale at low cost for practical applications.
    The novel THz tunable meta-device is expected to have great application potential for 6G communications systems, including wireless power transfer, zoom imaging and remote sensing. The research team plans to design further meta-device applications based on THz varifocal imaging.
    The findings were published in the scientific journal Science Advances under the title “A 6G meta-device for 3D varifocal.”
    Professor Tsai and Professor Chan are the co-corresponding authors. The co-first authors are Mr Zhang Jingcheng, PhD student under Professor Tsai’s supervision, Dr Wu Gengbo, postdoctoral research fellow in the SKLTMW, and Dr Chen Mu-Ku, Assistant Professor in the Department of Electrical Engineering at CityU. Miss Liu Xiaoyuan, PhD student in the Department of Electrical Engineering, and Dr Chan Ka-fai, from the SKLTMW, also contributed to the research.
    The research was supported by the University Grants Committee and the Research Grants Council of HKSAR, the Science, Technology and Innovation Commission of Shenzhen Municipality, the Department of Science and Technology of Guangdong Province, and CityU.

  • Could AI-powered object recognition technology help solve wheat disease?

    A new University of Illinois project is using advanced object recognition technology to keep toxin-contaminated wheat kernels out of the food supply and to help researchers make wheat more resistant to fusarium head blight, or scab disease, the crop’s top nemesis.
    “Fusarium head blight causes a lot of economic losses in wheat, and the associated toxin, deoxynivalenol (DON), can cause issues for human and animal health. The disease has been a big deterrent for people growing wheat in the Eastern U.S. because they could grow a perfectly nice crop, and then take it to the elevator only to have it get docked or rejected. That’s been painful for people. So it’s a big priority to try to increase resistance and reduce DON risk as much as possible,” says Jessica Rutkoski, assistant professor in the Department of Crop Sciences, part of the College of Agricultural, Consumer and Environmental Sciences (ACES) at Illinois. Rutkoski is a co-author on the new paper in the Plant Phenome Journal.
    Increasing resistance to any crop disease traditionally means growing a lot of genotypes of the crop, infecting them with the disease, and looking for symptoms. The process, known in plant breeding as phenotyping, is successful when it identifies resistant genotypes that don’t develop symptoms, or less severe symptoms. When that happens, researchers try to identify the genes related to disease resistance and then put those genes in high-performing hybrids of the crop.
    It’s a long, repetitive process, but Rutkoski hoped one step — phenotyping for disease symptoms — could be accelerated. She looked for help from AI experts Junzhe Wu, doctoral student in the Department of Agricultural and Biological Engineering (ABE), and Girish Chowdhary, associate professor in ABE and the Department of Computer Science (CS). ABE is part of ACES and the Grainger College of Engineering, which also houses CS.
    “We wanted to test whether we could quantify kernel damage using simple cell phone images of grains. Normally, we look at a petri dish of kernels and then give it a subjective rating. It’s very mind-numbing work. You have to have people specifically trained and it’s slow, difficult, and subjective. A system that could automatically score kernels for damage seemed doable because the symptoms are pretty clear,” Rutkoski says.
    Wu and Chowdhary agreed it was possible. They started with algorithms similar to those used by tech giants for object detection and classification. But discerning minute differences in diseased and healthy wheat kernels from cell phone images required Wu and Chowdhary to advance the technology further.

    “One of the unique things about this advance is that we trained our network to detect minutely damaged kernels with good enough accuracy using just a few images. We made this possible through meticulous pre-processing of data, transfer learning, and bootstrapping of labeling activities,” Chowdhary says. “This is another nice win for machine learning and AI for agriculture and society.”
    He adds, “This project builds on the AIFARMS National AI Institute and the Center for Digital Agriculture here at Illinois to leverage the strength of AI for agriculture.”
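    The article does not reproduce the team’s pipeline, but the general recipe Chowdhary describes, a pretrained backbone fine-tuned by transfer learning on a small labelled set, looks roughly like the sketch below. The two-class setup, the folder layout and every hyperparameter here are assumptions rather than the authors’ values:

      import torch
      import torch.nn as nn
      from torchvision import datasets, models, transforms

      # Hypothetical dataset layout: kernel_images/<class_name>/*.jpg with two
      # assumed classes, 'healthy' and 'fusarium_damaged'.
      tfm = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
      ])
      data = datasets.ImageFolder("kernel_images", transform=tfm)
      loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

      # Transfer learning: start from an ImageNet-pretrained ResNet-18, freeze
      # the backbone, and train only a new two-class head on the small dataset.
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      for p in model.parameters():
          p.requires_grad = False
      model.fc = nn.Linear(model.fc.in_features, 2)

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      model.train()
      for epoch in range(5):
          for images, labels in loader:
              optimizer.zero_grad()
              loss = loss_fn(model(images), labels)
              loss.backward()
              optimizer.step()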
    Successfully detecting fusarium damage — small, shriveled, gray, or chalky kernels — meant the technology could also foretell the grain’s toxin load; the more external signs of damage, the greater the DON content.
    When the team tested the machine learning technology alone, it was able to predict DON levels better than in-field ratings of disease symptoms, which breeders often rely on instead of kernel phenotyping to save time and resources. But when compared to humans rating disease damage on kernels in the lab, the technology was only 60% as accurate.
    The researchers are still encouraged, though, as their initial tests didn’t use a large number of samples to train the model. They’re currently adding samples and expect to achieve greater accuracy with additional tweaking.
    “While further training is needed to improve the capabilities of our model, initial testing shows promising results and demonstrates the possibility of providing an automated and objective phenotyping method for fusarium damaged kernels that could be widely deployed to support resistance breeding efforts,” Wu says.
    Rutkoski says the ultimate goal is to create an online portal where breeders like her could upload cell phone photos of wheat kernels for automatic scoring of fusarium damage.
    “A tool like this could save weeks of time in a lab, and that time is critical when you’re trying to analyze the data and prepare the next trial. And ultimately, the more efficiency we can bring to the process, the faster we can improve resistance to the point where scab can be eliminated as a problem,” she says.

  • Resilient bug-sized robots keep flying even after wing damage

    Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.
    Aerial robots, on the other hand, are not so resilient. Poke holes in the robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded.
    Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings — but to still fly effectively.
    They optimized these artificial muscles so the robot can better isolate defects and overcome minor damage, like tiny holes in the actuator. In addition, they demonstrated a novel laser repair method that can help the robot recover from severe damage, such as a fire that scorches the device.
    Using their techniques, a damaged robot could maintain flight-level performance after one of its artificial muscles was jabbed by 10 needles, and the actuator was still able to operate after a large hole was burnt into it. Their repair methods enabled a robot to keep flying even after the researchers cut off 20 percent of its wing tip.
    This could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest.

    “We spent a lot of time understanding the dynamics of soft, artificial muscles and, through both a new fabrication method and a new understanding, we can show a level of resilience to damage that is comparable to insects. We’re very excited about this. But the insects are still superior to us, in the sense that they can lose up to 40 percent of their wing and still fly. We still have some catch-up work to do,” says Kevin Chen, the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper on these latest advances.
    Chen wrote the paper with co-lead authors and EECS graduate students Suhan Kim and Yi-Hsuan Hsiao; Younghoon Lee, a postdoc; Weikun “Spencer” Zhu, a graduate student in the Department of Chemical Engineering; Zhijian Ren, an EECS graduate student; and Farnaz Niroui, the EE Landsman Career Development Assistant Professor of EECS at MIT and a member of the RLE. The article will appear in Science Robotics.
    Robot repair techniques
    The tiny, rectangular robots being developed in Chen’s lab are about the same size and shape as a microcassette tape, though one robot weighs barely more than a paper clip. Wings on each corner are powered by dielectric elastomer actuators (DEAs), which are soft artificial muscles that use mechanical forces to rapidly flap the wings. These artificial muscles are made from layers of elastomer that are sandwiched between two razor-thin electrodes and then rolled into a squishy tube. When voltage is applied to the DEA, the electrodes squeeze the elastomer, which flaps the wing.
    But microscopic imperfections can cause sparks that burn the elastomer and cause the device to fail. About 15 years ago, researchers found they could prevent DEA failures from one tiny defect using a physical phenomenon known as self-clearing. In this process, applying high voltage to the DEA disconnects the local electrode around a small defect, isolating that failure from the rest of the electrode so the artificial muscle still works.

    Chen and his collaborators employed this self-clearing process in their robot repair techniques.
    First, they optimized the concentration of carbon nanotubes that comprise the electrodes in the DEA. Carbon nanotubes are super-strong but extremely tiny rolls of carbon. Having fewer carbon nanotubes in the electrode improves self-clearing, since it reaches higher temperatures and burns away more easily. But this also reduces the actuator’s power density.
    “At a certain point, you will not be able to get enough energy out of the system, but we need a lot of energy and power to fly the robot. We had to find the optimal point between these two constraints — optimize the self-clearing property under the constraint that we still want the robot to fly,” Chen says.
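    The article does not give the models behind that optimization, but the shape of the problem can be illustrated as a one-dimensional constrained search. Both curves below, a self-clearing score that falls and a power density that rises with nanotube concentration, are hypothetical stand-ins, not measured data:

      import numpy as np

      # Hypothetical monotone models: more carbon nanotubes means higher power
      # density but worse self-clearing (arbitrary units).
      conc = np.linspace(0.1, 1.0, 91)         # normalized CNT concentration
      power_density = 1.2 * conc
      self_clearing = 1.0 - conc

      FLIGHT_THRESHOLD = 0.6                   # hypothetical minimum power density to fly

      feasible = power_density >= FLIGHT_THRESHOLD
      best = np.argmax(np.where(feasible, self_clearing, -np.inf))
      print(f"chosen concentration: {conc[best]:.2f} "
            f"(power density {power_density[best]:.2f}, "
            f"self-clearing score {self_clearing[best]:.2f})")

    With these stand-in curves, the search simply picks the lowest concentration that still clears the flight-power constraint, which mirrors the trade-off Chen describes.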
    However, even an optimized DEA will fail if it suffers from severe damage, like a large hole that lets too much air into the device.
    Chen and his team used a laser to overcome major defects. They carefully cut along the outer contours of a large defect with a laser, which causes minor damage around the perimeter. Then, they can use self-clearing to burn off the slightly damaged electrode, isolating the larger defect.
    “In a way, we are trying to do surgery on muscles. But if we don’t use enough power, then we can’t do enough damage to isolate the defect. On the other hand, if we use too much power, the laser will cause severe damage to the actuator that won’t be clearable,” Chen says.
    The team soon realized that, when “operating” on such tiny devices, it is very difficult to observe the electrode to see if they had successfully isolated a defect. Drawing on previous work, they incorporated electroluminescent particles into the actuator. Now, if they see light shining, they know that part of the actuator is operational, but dark patches mean they successfully isolated those areas.
    Flight test success
    Once they had perfected their techniques, the researchers conducted tests with damaged actuators — some had been jabbed by many needles while others had holes burned into them. They measured how well the robot performed in flapping-wing, take-off, and hovering experiments.
    Even with damaged DEAs, the repair techniques enabled the robot to maintain its flight performance, with altitude, position, and attitude errors that deviated only very slightly from those of an undamaged robot. With laser surgery, a DEA that would have been broken beyond repair was able to recover 87 percent of its performance.
    “I have to hand it to my two students, who did a lot of hard work when they were flying the robot. Flying the robot by itself is very hard, not to mention now that we are intentionally damaging it,” Chen says.
    These repair techniques make the tiny robots much more robust, so Chen and his team are now working on teaching them new functions, like landing on flowers or flying in a swarm. They are also developing new control algorithms so the robots can fly better, teaching the robots to control their yaw angle so they can keep a constant heading, and enabling each robot to carry a tiny circuit, with the longer-term goal of carrying its own power source.
    This work is funded, in part, by the National Science Foundation (NSF) and a MathWorks Fellowship.

  • Mix-and-match kit could enable astronauts to build a menagerie of lunar exploration bots

    When astronauts begin to build a permanent base on the moon, as NASA plans to do in the coming years, they’ll need help. Robots could potentially do the heavy lifting by laying cables, deploying solar panels, erecting communications towers, and building habitats. But if each robot is designed for a specific action or task, a moon base could become overrun by a zoo of machines, each with its own unique parts and protocols.
    To avoid a bottleneck of bots, a team of MIT engineers is designing a kit of universal robotic parts that an astronaut could easily mix and match to rapidly configure different robot “species” to fit various missions on the moon. Once a mission is completed, a robot can be disassembled and its parts used to configure a new robot to meet a different task.
    The team calls the system WORMS, for the Walking Oligomeric Robotic Mobility System. The system’s parts include worm-inspired robotic limbs that an astronaut can easily snap onto a base, and that work together as a walking robot. Depending on the mission, parts can be configured to build, for instance, large “pack” bots capable of carrying heavy solar panels up a hill. The same parts could be reconfigured into six-legged spider bots that can be lowered into a lava tube to drill for frozen water.
    “You could imagine a shed on the moon with shelves of worms,” says team leader George Lordos, a PhD candidate and graduate instructor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), in reference to the independent, articulated robots that carry their own motors, sensors, computer, and battery. “Astronauts could go into the shed, pick the worms they need, along with the right shoes, body, sensors and tools, and they could snap everything together, then disassemble it to make a new one. The design is flexible, sustainable, and cost-effective.”
    Lordos’ team has built and demonstrated a six-legged WORMS robot. Last week, they presented their results at IEEE’s Aerospace Conference, where they also received the conference’s Best Paper Award.
    MIT team members include Michael J. Brown, Kir Latyshev, Aileen Liao, Sharmi Shah, Cesar Meza, Brooke Bensche, Cynthia Cao, Yang Chen, Alex S. Miller, Aditya Mehrotra, Jacob Rodriguez, Anna Mokkapati, Tomas Cantu, Katherina Sapozhnikov, Jessica Rutledge, David Trumper, Sangbae Kim, Olivier de Weck, Jeffrey Hoffman, along with Aleks Siemenn, Cormac O’Neill, Diego Rivero, Fiona Lin, Hanfei Cui, Isabella Golemme, John Zhang, Jolie Bercow, Prajwal Mahesh, Stephanie Howe, and Zeyad Al Awwad, as well as Chiara Rissola of Carnegie Mellon University and Wendell Chun of the University of Denver.

    Animal instincts
    WORMS was conceived in 2022 as an answer to NASA’s Breakthrough, Innovative and Game-changing (BIG) Idea Challenge — an annual competition for university students to design, develop, and demonstrate a game-changing idea. In 2022, NASA challenged students to develop robotic systems that can move across extreme terrain, without the use of wheels.
    A team from MIT’s Space Resources Workshop took up the challenge, aiming specifically for a lunar robot design that could navigate the extreme terrain of the moon’s South Pole — a landscape that is marked by thick, fluffy dust; steep, rocky slopes; and deep lava tubes. The environment also hosts “permanently shadowed” regions that could contain frozen water, which, if accessible, would be essential for sustaining astronauts.
    As they mulled over ways to navigate the moon’s polar terrain, the students took inspiration from animals. In their initial brainstorming, they noted certain animals could conceptually be suited to certain missions: A spider could drop down and explore a lava tube, a line of elephants could carry heavy equipment while supporting each other down a steep slope, and a goat, tethered to an ox, could help lead the larger animal up the side of a hill as it transports an array of solar panels.
    “As we were thinking of these animal inspirations, we realized that one of the simplest animals, the worm, makes similar movements as an arm, or a leg, or a backbone, or a tail,” says deputy team leader and AeroAstro graduate student Michael Brown. “And then the lightbulb went off: We could build all these animal-inspired robots using worm-like appendages.”
    Snap on, snap off

    Lordos, who is of Greek descent, helped coin WORMS, and chose the letter “O” to stand for “oligomeric,” which in Greek signifies “a few parts.”
    “Our idea was that, with just a few parts, combined in different ways, you could mix and match and get all these different robots,” says AeroAstro undergraduate Brooke Bensche.
    The system’s main parts include the appendage, or worm, which can be attached to a body, or chassis, via a “universal interface block” that snaps the two parts together through a twist-and-lock mechanism. The parts can be disconnected with a small tool that releases the block’s spring-loaded pins.
    Appendages and bodies can also snap into accessories such as a “shoe,” which the team engineered in the shape of a wok, and a LiDAR system that can map the surroundings to help a robot navigate.
    “In future iterations we hope to add more snap-on sensors and tools, such as winches, balance sensors, and drills,” says AeroAstro undergraduate Jacob Rodriguez.
    The team developed software that can be tailored to coordinate multiple appendages. As a proof of concept, the team built a six-legged robot about the size of a go-cart. In the lab, they showed that once assembled, the robot’s independent limbs worked to walk over level ground. The team also showed that they could quickly assemble and disassemble the robot in the field, on a desert site in California.
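    The article does not describe the team’s software, but the mix-and-match idea lends itself to a small configuration sketch. The part names, the interface label and the compatibility check below are illustrative assumptions only:

      from dataclasses import dataclass, field

      # Hypothetical parts catalogue inspired by the WORMS description: every part
      # exposes the same universal interface block, so anything snaps to anything.
      UNIVERSAL_INTERFACE = "twist-and-lock"

      @dataclass
      class Part:
          name: str
          kind: str                       # 'appendage', 'body', or 'accessory'
          interface: str = UNIVERSAL_INTERFACE

      @dataclass
      class RobotConfig:
          body: Part
          appendages: list = field(default_factory=list)
          accessories: list = field(default_factory=list)

          def attach(self, part: Part) -> None:
              if part.interface != self.body.interface:
                  raise ValueError(f"{part.name} does not fit the universal interface")
              target = self.appendages if part.kind == "appendage" else self.accessories
              target.append(part)

      # Assemble a six-legged 'spider' configuration, then strip it back down.
      spider = RobotConfig(body=Part("chassis", "body"))
      for i in range(6):
          spider.attach(Part(f"worm-{i}", "appendage"))
      spider.attach(Part("LiDAR", "accessory"))
      print(f"{spider.body.name}: {len(spider.appendages)} legs, "
            f"{len(spider.accessories)} accessories")

      # Disassembly for a new mission: the same parts go back on the shelf.
      shelf = [spider.body, *spider.appendages, *spider.accessories]
      print("parts returned to the shed:", [p.name for p in shelf])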
    In its first generation, each WORMS appendage measures about 1 meter long and weighs about 20 pounds. In the moon’s gravity, which is about one-sixth that of Earth’s, each limb would weigh about 3 pounds, which an astronaut could easily handle to build or disassemble a robot in the field. The team has planned out the specs for a larger generation with longer and slightly heavier appendages. These bigger parts could be snapped together to build “pack” bots, capable of transporting heavy payloads.
    “There are many buzz words that are used to describe effective systems for future space exploration: modular, reconfigurable, adaptable, flexible, cross-cutting, et cetera,” says Kevin Kempton, an engineer at NASA’s Langley Research Center, who served as a judge for the 2022 BIG Idea Challenge. “The MIT WORMS concept incorporates all these qualities and more.”
    This research was supported, in part, by NASA, MIT, the Massachusetts Space Grant, the National Science Foundation, and the Fannie and John Hertz Foundation.
    Video: https://youtu.be/U72lmSXEVkM