More stories

  • Good vibrations: New tech may lead to smaller, more powerful wireless devices

    What if your earbuds could do everything your smartphone already does, only better? What sounds a bit like science fiction may not be so far off. A new class of synthetic materials could herald the next revolution in wireless technologies, enabling devices to be smaller, require less signal strength and use less power.
    The key to these advances lies in what experts call phononics, which is similar to photonics. Both take advantage of similar physical laws and offer new ways to advance technology. While photonics takes advantage of photons — or light — phononics does the same with phonons, which are the physical particles that transmit mechanical vibrations through a material, akin to sound, but at frequencies much too high to hear.
    In a paper published in Nature Materials, researchers at the University of Arizona Wyant College of Optical Sciences and Sandia National Laboratories report clearing a major milestone toward real-world applications based on phononics. By combining highly specialized semiconductor materials and piezoelectric materials not typically used together, the researchers were able to generate giant nonlinear interactions between phonons. Together with previous innovations demonstrating amplifiers for phonons using the same materials, this opens up the possibility of making wireless devices such as smartphones or other data transmitters smaller, more efficient and more powerful.
    “Most people would probably be surprised to hear that there are something like 30 filters inside their cell phone whose sole job it is to transform radio waves into sound waves and back,” said the study’s senior author, Matt Eichenfield, who holds a joint appointment at the UArizona College of Optical Sciences and Sandia National Laboratories in Albuquerque, New Mexico.
    Part of what are known as front-end processors, these piezoelectric filters, made on special microchips, are necessary to convert between radio waves and sound waves multiple times each time a smartphone receives or sends data, he said. Because these filters can’t be made out of the same materials, such as silicon, as the other critically important chips in the front-end processor, the physical size of your device is much bigger than it needs to be. Along the way, the losses from going back and forth between radio waves and sound waves add up and degrade performance, Eichenfield said.
    “Normally, phonons behave in a completely linear fashion, meaning they don’t interact with each other,” he said. “It’s a bit like shining one laser pointer beam through another; they just go through each other.”
    Nonlinear phononics refers to what happens in special materials when the phonons can and do interact with each other, Eichenfield said. In the paper, the researchers demonstrated what he calls “giant phononic nonlinearities.” The synthetic materials produced by the research team caused the phonons to interact with each other much more strongly than in any conventional material.

    “In the laser pointer analogy, this would be like changing the frequency of the photons in the first laser pointer when you turn on the second,” he said. “As a result, you’d see the beam from the first one changing color.”
    With the new phononics materials, the researchers demonstrated that one beam of phonons can, in fact, change the frequency of another beam. What’s more, they showed that phonons can be manipulated in ways that could only be realized with transistor-based electronics — until now.
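    The principle behind such frequency mixing is easy to demonstrate numerically. The toy sketch below (illustrative only, not the authors’ device model) applies a quadratic nonlinearity to two superposed waves and shows sum- and difference-frequency components appearing in the spectrum:

    ```python
    import numpy as np

    # Toy illustration of nonlinear mixing: a quadratic term acting on two
    # superposed waves generates sum and difference frequencies, the
    # hallmark of a mixer. Frequencies and units here are arbitrary.
    fs = 10_000                      # sample rate
    t = np.arange(0, 1.0, 1 / fs)    # one unit of time
    f1, f2 = 440.0, 560.0            # two input "phonon beam" frequencies

    linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    nonlinear = linear + 0.5 * linear**2   # quadratic term models the mixing

    spectrum = np.abs(np.fft.rfft(nonlinear))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    peaks = freqs[spectrum > 0.05 * spectrum.max()]
    # Expect lines at f1 and f2, plus new ones at f2-f1, 2*f1, f1+f2 and
    # 2*f2 (and a DC offset) that exist only because of the nonlinearity.
    print(np.round(peaks))
    ```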
    The group has been working toward the goal of making all of the components needed for radio frequency signal processors using acoustic wave technologies instead of transistor-based electronics on a single chip, in a way that’s compatible with standard microprocessor manufacturing, and the latest publication proves that it can be done. Previously, the researchers succeeded in making acoustic components such as amplifiers and switches. With the acoustic mixers described in the latest publication, they have added the last piece of the puzzle.
    “Now, you can point to every component in a diagram of a radiofrequency front-end processor and say, ‘Yeah, I can make all of these on one chip with acoustic waves,’” Eichenfield said. “We’re ready to move on to making the whole shebang in the acoustic domain.”
    Having all the components needed to make a radio frequency front end on a single chip could shrink devices such as cell phones and other wireless communication gadgets by as much as a factor of 100, according to Eichenfield.
    The team accomplished its proof of principle by combining highly specialized materials into microelectronics-sized devices through which they sent acoustic waves. Specifically, they took a silicon wafer with a thin layer of lithium niobate — a synthetic material used extensively in piezoelectric devices and cell phones — and added an ultra-thin layer (fewer than 100 atoms thick) of a semiconductor containing indium gallium arsenide.

    “When we combined these materials in just the right way, we were able to experimentally access a new regime of phononic nonlinearity,” said Sandia engineer Lisa Hackett, the lead author on the paper. “This means we have a path forward to inventing high-performance tech for sending and receiving radio waves that’s smaller than has ever been possible.”
    In this setup, acoustic waves behave in nonlinear ways as they travel through the materials. This effect can be used to change frequencies and encode information. A staple of photonics, nonlinear effects have long been used to do things like convert invisible infrared laser light into the visible beam of a green laser pointer, but taking advantage of nonlinear effects in phononics has been hindered by limitations in technology and materials. For example, while lithium niobate is one of the most nonlinear phononic materials known, its usefulness for technical applications is limited by the fact that those nonlinearities are very weak when the material is used on its own.
    By adding the indium gallium arsenide semiconductor, Eichenfield’s group created an environment in which acoustic waves traveling through the material influence the distribution of electrical charges in the semiconductor film. This causes the acoustic waves to mix in specific, controllable ways, opening up the system to various applications.
    “The effective nonlinearity you can generate with these materials is hundreds or even thousands of times larger than was possible before, which is crazy,” Eichenfield said. “If you could do the same for nonlinear optics, you would revolutionize the field.”
    With physical size being one of the fundamental limitations of current, state-of-the-art radiofrequency processing hardware, the new technology could open the door to electronic devices that are even more capable than their current counterparts, according to the authors. Communication devices that take up virtually no space, have better signal coverage and offer longer battery life are on the horizon.

  • New machine learning algorithm promises advances in computing

    Systems controlled by next-generation computing algorithms could give rise to better and more efficient machine learning products, a new study suggests.
    Using machine learning tools to create a digital twin, or virtual copy, of an electronic circuit that exhibits chaotic behavior, researchers found they could predict how it would behave and use those predictions to control it.
    Many everyday devices, like thermostats and cruise control, utilize linear controllers — which use simple rules to direct a system to a desired value. Thermostats, for example, employ such rules to determine how much to heat or cool a space based on the difference between the current and desired temperatures.
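    As a rough sketch, such a linear rule fits in a couple of lines; the gain and temperatures below are invented for illustration:

    ```python
    # Minimal proportional controller of the kind a thermostat uses: the
    # command is just a gain times the error between desired and measured.
    def proportional_control(setpoint: float, measurement: float,
                             gain: float = 0.8) -> float:
        """Return a heating/cooling command proportional to the error."""
        return gain * (setpoint - measurement)

    # Room at 18 C, target 21 C -> modest positive heating command.
    print(proportional_control(21.0, 18.0))  # 2.4
    ```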
    Yet because of how straightforward these algorithms are, they struggle to control systems that display complex behavior, like chaos.
    As a result, advanced devices like self-driving cars and aircraft often rely on machine learning-based controllers, which use intricate networks to learn the optimal control algorithm needed to operate at their best. However, these algorithms come with significant drawbacks, chief among them that they can be extremely challenging and computationally expensive to implement.
    Now, having access to an efficient digital twin is likely to have a sweeping impact on how scientists develop future autonomous technologies, said Robert Kent, lead author of the study and a graduate student in physics at The Ohio State University.
    “The problem with most machine learning-based controllers is that they use a lot of energy or power and they take a long time to evaluate,” said Kent. “Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes.”
    These issues, he said, are critical in situations where milliseconds can make a difference between life and death, such as when self-driving vehicles must decide to brake to prevent an accident.

    The study was published recently in Nature Communications.
    The team’s digital twin is compact enough to fit on an inexpensive computer chip that could balance on your fingertip, and it runs without an internet connection. It was built to optimize a controller’s efficiency and performance, and the researchers found that it reduced power consumption. It achieves this largely because it was trained using a machine learning approach called reservoir computing.
    “The great thing about the machine learning architecture we used is that it’s very good at learning the behavior of systems that evolve in time,” Kent said. “It’s inspired by how connections spark in the human brain.”
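    Reservoir computing is simple enough to sketch in a few lines. The example below is a generic echo state network on a toy prediction task, not the team’s model; it shows the key idea that the recurrent network stays fixed and random while only a linear readout is trained:

    ```python
    import numpy as np

    # Minimal echo-state-network (reservoir computing) sketch; the details
    # here are illustrative, not the paper's architecture.
    rng = np.random.default_rng(0)

    N = 200                                  # reservoir size
    W_in = rng.uniform(-0.5, 0.5, (N, 1))    # fixed random input weights
    W = rng.uniform(-0.5, 0.5, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

    def run_reservoir(inputs):
        """Drive the fixed random network and collect its states."""
        x = np.zeros(N)
        states = []
        for u in inputs:
            x = np.tanh(W @ x + W_in @ np.array([u]))
            states.append(x.copy())
        return np.array(states)

    # Toy task: one-step-ahead prediction of a noisy sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)
    X = run_reservoir(signal[:-1])
    y = signal[1:]

    # Only the linear readout is trained (ridge regression) -- this is why
    # reservoir computing is so cheap compared with a full deep network.
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
    pred = X @ W_out
    print("train MSE:", np.mean((pred - y) ** 2))
    ```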
    Although similarly sized computer chips have been used in devices like smart fridges, according to the study, this novel computing ability makes the new model especially well-equipped to handle dynamic systems such as self-driving vehicles as well as heart monitors, which must be able to quickly adapt to a patient’s heartbeat.
    “Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly,” he said.
    To test this theory, researchers directed their model to complete complex control tasks and compared its results to those from previous control techniques. The study revealed that their approach achieved a higher accuracy at the tasks than its linear counterpart and is significantly less computationally complex than a previous machine learning-based controller.

    “The increase in accuracy was pretty significant in some cases,” said Kent. Although the algorithm requires more energy to operate than a linear controller, the tradeoff pays off: once powered up, the team’s model lasts longer and is considerably more efficient than the machine learning-based controllers currently on the market.
    “People will find good use out of it just based on how efficient it is,” Kent said. “You can implement it on pretty much any platform and it’s very simple to understand.” The algorithm was recently made available to scientists.
    Outside of inspiring potential advances in engineering, there’s also an equally important economic and environmental incentive for creating more power-friendly algorithms, said Kent.
    As society becomes more dependent on computers and AI for nearly all aspects of daily life, demand for data centers is soaring, leading many experts to worry over digital systems’ enormous power appetite and what future industries will need to do to keep up with it.
    And because building these data centers as well as large-scale computing experiments can generate a large carbon footprint, scientists are looking for ways to curb carbon emissions from this technology.
    To build on these results, future work will likely be steered toward training the model for other applications, such as quantum information processing, Kent said. In the meantime, he expects the approach to spread widely through the scientific community.
    “Not enough people know about these types of algorithms in the industry and engineering, and one of the big goals of this project is to get more people to learn about them,” said Kent. “This work is a great first step toward reaching that potential.”
    This study was supported by the U.S. Air Force’s Office of Scientific Research. Other Ohio State co-authors include Wendson A.S. Barbosa and Daniel J. Gauthier.

  • A better way to control shape-shifting soft robots

    Imagine a slime-like robot that can seamlessly change its shape to squeeze through narrow spaces, which could be deployed inside the human body to remove an unwanted item.
    While such a robot does not yet exist outside a laboratory, researchers are working to develop reconfigurable soft robots for applications in health care, wearable devices, and industrial systems.
    But how can one control a squishy robot that doesn’t have joints, limbs, or fingers that can be manipulated, and instead can drastically alter its entire shape at will? MIT researchers are working to answer that question.
    They developed a control algorithm that can autonomously learn how to move, stretch, and shape a reconfigurable robot to complete a specific task, even when that task requires the robot to change its morphology multiple times. The team also built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks.
    Their method completed each of the eight tasks they evaluated while outperforming other algorithms. The technique worked especially well on multifaceted tasks. For instance, in one test, the robot had to reduce its height while growing two tiny legs to squeeze through a narrow pipe, and then un-grow those legs and extend its torso to open the pipe’s lid.
    While reconfigurable soft robots are still in their infancy, such a technique could someday enable general-purpose robots that can adapt their shapes to accomplish diverse tasks.
    “When people think about soft robots, they tend to think about robots that are elastic, but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well because we are dealing with something very new,” says Boyuan Chen, an electrical engineering and computer science (EECS) graduate student and co-author of a paper on this approach.

    Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University in China who completed this work while a visiting student at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory. The research will be presented at the International Conference on Learning Representations.
    Controlling dynamic motion
    Scientists often teach robots to complete tasks using a machine-learning approach known as reinforcement learning, which is a trial-and-error process in which the robot is rewarded for actions that move it closer to a goal.
    This can be effective when the robot’s moving parts are consistent and well-defined, like a gripper with three fingers. With a robotic gripper, a reinforcement learning algorithm might move one finger slightly, learning by trial and error whether that motion earns it a reward. Then it would move on to the next finger, and so on.
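    In its simplest form, that trial-and-error loop looks like the sketch below, a much-simplified stand-in for reinforcement learning with an invented reward function:

    ```python
    import random

    # Much-simplified stand-in for the reward loop described above: nudge
    # one "finger" angle at random and keep the change only if the reward
    # improves. The reward function is invented for illustration.
    def reward(angle: float) -> float:
        return -(angle - 0.6) ** 2      # pretend the best grip is at 0.6

    random.seed(0)
    angle = 0.0
    for _ in range(200):
        trial = angle + random.uniform(-0.1, 0.1)
        if reward(trial) > reward(angle):
            angle = trial               # the "rewarded" action is kept

    print(round(angle, 2))              # converges near 0.6
    ```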
    But shape-shifting robots, which are controlled by magnetic fields, can dynamically squish, bend, or elongate their entire bodies.
    “Such a robot could have thousands of small pieces of muscle to control, so it is very hard to learn in a traditional way,” says Chen.

    To solve this problem, he and his collaborators had to think about it differently. Rather than moving each tiny muscle individually, their reinforcement learning algorithm begins by learning to control groups of adjacent muscles that work together.
    Then, after the algorithm has explored the space of possible actions by focusing on groups of muscles, it drills down into finer detail to optimize the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.
    “Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” Sitzmann says.
    To enable this, the researchers treat a robot’s action space, or how it can move in a certain area, like an image.
    Their machine-learning model uses images of the robot’s environment to generate a 2D action space, which includes the robot and the area around it. They simulate robot motion using what is known as the material point method, where the action space is covered by points, like image pixels, and overlaid with a grid.
    Just as nearby pixels in an image are related (like the pixels that form a tree in a photo), the researchers built their algorithm to understand that nearby action points have stronger correlations. Points around the robot’s “shoulder” will move similarly when it changes shape, while points on the robot’s “leg” will also move similarly, but in a different way than those on the “shoulder.”
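    A rough sketch of this coarse-to-fine idea (with invented details, not the authors’ implementation): actions are first chosen on a small grid, one per muscle group, then upsampled so that neighboring action points receive correlated commands, and a finer stage adds small corrections on top:

    ```python
    import numpy as np

    # Illustrative coarse-to-fine action sketch: a policy first outputs a
    # small grid of actions (one per muscle group), which is upsampled so
    # nearby "muscles" receive correlated commands.
    rng = np.random.default_rng(1)

    coarse = rng.uniform(-1, 1, (4, 4))   # 4x4 grid: one action per group

    def upsample(grid: np.ndarray, factor: int) -> np.ndarray:
        """Bilinear upsampling of a 2D action grid to finer resolution."""
        h, w = grid.shape
        ys = np.linspace(0, h - 1, h * factor)
        xs = np.linspace(0, w - 1, w * factor)
        y0 = np.clip(ys.astype(int), 0, h - 2)
        x0 = np.clip(xs.astype(int), 0, w - 2)
        wy = (ys - y0)[:, None]
        wx = (xs - x0)[None, :]
        g = grid
        return ((1 - wy) * (1 - wx) * g[y0][:, x0]
                + (1 - wy) * wx * g[y0][:, x0 + 1]
                + wy * (1 - wx) * g[y0 + 1][:, x0]
                + wy * wx * g[y0 + 1][:, x0 + 1])

    fine = upsample(coarse, factor=8)      # 32x32: per-"muscle" actions
    # A later, fine-grained stage would add small residual corrections:
    fine += 0.05 * rng.uniform(-1, 1, fine.shape)
    print(coarse.shape, "->", fine.shape)
    ```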
    In addition, the researchers use the same machine-learning model to look at the environment and predict the actions the robot should take, which makes it more efficient.
    Building a simulator
    After developing this approach, the researchers needed a way to test it, so they created a simulation environment called DittoGym.
    DittoGym features eight tasks that evaluate a reconfigurable robot’s ability to dynamically change shape. In one, the robot must elongate and curve its body so it can weave around obstacles to reach a target point. In another, it must change its shape to mimic letters of the alphabet.
    “Our task selection in DittoGym follows both generic reinforcement learning benchmark design principles and the specific needs of reconfigurable robots. Each task is designed to represent certain properties that we deem important, such as the capability to navigate through long-horizon explorations, the ability to analyze the environment, and interact with external objects,” Huang says. “We believe they together can give users a comprehensive understanding of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning scheme.”
    Their algorithm outperformed baseline methods and was the only technique suitable for completing multistage tasks that required several shape changes.
    “We have a stronger correlation between action points that are closer to each other, and I think that is key to making this work so well,” says Chen.
    While it may be many years before shape-shifting robots are deployed in the real world, Chen and his collaborators hope their work inspires other scientists not only to study reconfigurable soft robots but also to think about leveraging 2D action spaces for other complex control problems.

  • 2D all-organic perovskites: potential use in 2D electronics

    Perovskites are among the most researched topics in materials science. Recently, a research team led by Prof. LOH Kian Ping, Chair Professor of Materials Physics and Chemistry and Global STEM Professor of the Department of Applied Physics of The Hong Kong Polytechnic University (PolyU), Dr Kathy LENG, Assistant Professor of the same department, together with Dr Hwa Seob CHOI, Postdoctoral Research Fellow and the first author of the research paper, has solved an age-old challenge to synthesise all-organic two-dimensional perovskites, extending the field into the exciting realm of 2D materials. This breakthrough opens up a new field of 2D all-organic perovskites, which holds promise for both fundamental science and potential applications. This research titled “Molecularly thin, two-dimensional all-organic perovskites” was recently published in the journal Science.
    Perovskites are named after their structural resemblance to the mineral calcium titanate perovskite, and are well known for their fascinating properties that can be applied in wide-ranging fields such as solar cells, lighting and catalysis. With a fundamental chemical formula of ABX3, perovskites possess the ability to be finely tuned by adjusting the A and B cations as well as the X anion, paving the way for the development of high-performance materials.
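    A standard back-of-the-envelope tool for this kind of tuning is the Goldschmidt tolerance factor, which checks whether a given trio of ion sizes can pack into the ABX3 structure. The sketch below uses textbook radii for the original mineral, calcium titanate; it is general crystal chemistry rather than a calculation from the paper:

    ```python
    import math

    # Goldschmidt tolerance factor, t = (r_A + r_X) / (sqrt(2)*(r_B + r_X)).
    # Values of t in roughly the 0.8-1.0 window suggest the ions can pack
    # into the ABX3 perovskite structure. Radii below (angstroms) are the
    # classic values for CaTiO3; organic ions would use effective radii.
    def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
        return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

    r_ca, r_ti, r_o = 1.34, 0.605, 1.40
    print(f"t = {tolerance_factor(r_ca, r_ti, r_o):.2f}")  # ~0.97
    ```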
    While perovskite was first discovered as an inorganic compound, Prof. Loh’s team has focused their attention on the emerging class of all-organic perovskites. In this new family, the A, B, and X constituents are organic molecules rather than individual atoms like metals or oxygen. The design principles for creating three-dimensional (3D) perovskites using organic components have only recently been established. Significantly, all-organic perovskites offer distinct advantages over their all-inorganic counterparts, as they are solution-processible and flexible, enabling cost-effective fabrication. Moreover, by manipulating the chemical composition of the crystal, valuable electromagnetic properties such as dielectric properties, which find applications in electronics and capacitors, can be precisely engineered.
    Traditionally, researchers face challenges in the synthesis of all-organic 3D perovskites due to the restricted selection of organic molecules that can fit with the crystal structure. Recognising this limitation, Prof. Loh and his team proposed an innovative approach: synthesising all-organic perovskites in the form of 2D layers instead of 3D crystals. This strategy aimed to overcome the constraints imposed by bulky molecules and facilitate the incorporation of a broader range of organic ions. The anticipated outcome was the emergence of novel and extraordinary properties in these materials.
    Validating their prediction, the team developed a new general class of layered organic perovskites. Following the convention for naming perovskites, they called it the “Choi-Loh-v phase” (CL-v) after Dr Choi and Prof. Loh. These perovskites comprise molecularly thin layers bound by the same forces that hold graphite layers together, the so-called van der Waals forces — hence the “v” in CL-v. Compared with previously studied hybrid 2D perovskites, the CL-v phase is stabilised by the addition of another B cation into the unit cell and has the general formula A2B2X4.
    Using solution-phase chemistry, the research team prepared a CL-v material known as CMD-N-P2, in which the A, B and X sites are occupied by CMD (a chlorinated cyclic organic molecule), ammonium and PF6− ions, respectively. The expected crystal structure was confirmed by high-resolution electron microscopy carried out at cryogenic temperature. These molecularly thin 2D organic perovskites are fundamentally different from traditional 3D minerals: they are single crystalline in two dimensions and can be exfoliated as hexagonal flakes just a few nanometres thick — 20,000 times thinner than a human hair.
    The solution-processibility of 2D organic perovskites presents exciting opportunities for their application in 2D electronics. The PolyU team conducted measurements on the dielectric constants of the CL-v phase, yielding values ranging from 4.8 to 5.5. These values surpass those of commonly used materials such as silicon dioxide and hexagonal boron nitride. This discovery establishes a promising avenue for incorporating the CL-v phase as a dielectric layer in 2D electronic devices, which often necessitate 2D dielectric layers with high dielectric constants, a property that is typically scarce. Team member Dr Leng successfully addressed the challenge of integrating 2D organic perovskites with 2D electronics. In their approach, the CL-v phase was employed as the top gate dielectric layer, while the channel material consisted of atomically thin molybdenum disulfide. By utilising the CL-v phase, the transistor achieved superior control over the current flow between the source and drain terminals, surpassing the capabilities of conventional silicon oxide dielectric layers.
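    The practical benefit of a higher dielectric constant follows from the standard parallel-plate relation C = ε0εr/d. The comparison below assumes an illustrative 10 nm layer thickness and approximate textbook values for silicon dioxide and hexagonal boron nitride; only the CL-v range comes from the study:

    ```python
    # Gate capacitance per unit area, C = eps0 * eps_r / d, for an assumed
    # 10 nm dielectric layer. SiO2 and hBN values are approximate textbook
    # numbers; the CL-v range (4.8-5.5) is from the study.
    EPS0 = 8.854e-12          # F/m, vacuum permittivity
    d = 10e-9                 # assumed layer thickness (illustrative)

    for name, eps_r in [("SiO2", 3.9), ("hBN", 3.8),
                        ("CL-v (low)", 4.8), ("CL-v (high)", 5.5)]:
        c_per_area = EPS0 * eps_r / d             # F/m^2
        print(f"{name:12s} {c_per_area * 1e2:.2f} uF/cm^2")
    ```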
    Prof. Loh’s research not only establishes an entirely new class of all-organic perovskites but also demonstrates how they can be solution-processed in conjunction with advanced fabrication techniques to enhance the performance of 2D electronic devices. These developments open up new possibilities for the creation of more efficient and versatile electronic systems.

  • AI advancements make the leap into 3D pathology possible

    Human tissue is intricate, complex and, of course, three dimensional. But the thin slices of tissue that pathologists most often use to diagnose disease are two dimensional, offering only a limited glimpse at the tissue’s true complexity. There is a growing push in the field of pathology toward examining tissue in its three-dimensional form. But 3D pathology datasets can contain hundreds of times more data than their 2D counterparts, making manual examination infeasible.
    In a new study, researchers from Mass General Brigham and their collaborators present Tripath: new deep learning models that can use 3D pathology datasets to make clinical outcome predictions. In collaboration with the University of Washington, the research team imaged curated prostate cancer specimens using two high-resolution 3D imaging techniques. The models were then trained to predict prostate cancer recurrence risk from volumetric human tissue biopsies. By comprehensively capturing 3D morphology from the entire tissue volume, Tripath performed better than pathologists and outperformed deep learning models that rely on 2D morphology and thin tissue slices. The results are published in Cell.
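    To see why volumetric models differ from slice-based ones, consider a minimal 3D convolutional classifier. The sketch below is a generic PyTorch toy rather than Tripath itself: its kernels slide through depth as well as height and width, so features aggregate morphology across the whole volume instead of a single slice:

    ```python
    import torch
    import torch.nn as nn

    # Generic toy 3D classifier (not Tripath): Conv3d kernels move through
    # depth as well, so the model sees the entire tissue volume at once.
    class TinyVolumeClassifier(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(16, n_classes)

        def forward(self, volume: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(volume).flatten(1))

    # One fake 64^3 biopsy volume -> two risk-class logits.
    model = TinyVolumeClassifier()
    print(model(torch.randn(1, 1, 64, 64, 64)).shape)  # torch.Size([1, 2])
    ```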
    While the new approach needs to be validated in larger datasets before it can be further developed for clinical use, the researchers are optimistic about its potential to help inform clinical decision making.
    “Our approach underscores the importance of comprehensively analyzing the whole volume of a tissue sample for accurate patient risk prediction, which is the hallmark of the models we developed and only possible with the 3D pathology paradigm,” said lead author Andrew H. Song, PhD, of the Division of Computational Pathology in the Department of Pathology at Mass General Brigham.
    “Using advancements in AI and 3D spatial biology techniques, Tripath provides a framework for clinical decision support and may help reveal novel biomarkers for prognosis and therapeutic response,” said co-corresponding author Faisal Mahmood, PhD, of the Division of Computational Pathology in the Department of Pathology at Mass General Brigham.
    “In our prior work in computational 3D pathology, we looked at specific structures such as the prostate gland network, but Tripath is our first attempt to use deep learning to extract sub-visual 3D features for risk stratification, which shows promising potential for guiding critical treatment decisions,” said co-corresponding author Jonathan Liu, PhD, at the University of Washington.
    Disclosures: Song and Mahmood are inventors on a provisional patent that corresponds to the technical and methodological aspects of this study. Liu is a co-founder and board member of Alpenglow Biosciences, Inc., which has licensed the OTLS microscopy portfolio developed in his lab at the University of Washington.
    Funding: Authors report funding support from the Brigham and Women’s Hospital (BWH) President’s Fund, Mass General Hospital (MGH) Pathology, the National Institute of General Medical Sciences (R35GM138216), Department of Defense (DoD) Prostate Cancer Research Program (W81WH-18-10358 and W81XWH-20-1-0851), the National Cancer Institute (R01CA268207), the National Institute of Biomedical Imaging and Bioengineering (R01EB031002), the Canary Foundation, the NCI Ruth L. Kirschstein National Service Award (T32CA251062), the Leon Troper Professorship in Computational Pathology at Johns Hopkins University, UKRI, mdxhealth, NHSX, and Clarendon Fund.

  • Robotic system feeds people with severe mobility limitations

    Cornell researchers have developed a robotic feeding system that uses computer vision, machine learning and multimodal sensing to safely feed people with severe mobility limitations, including those with spinal cord injuries, cerebral palsy and multiple sclerosis.
    “Feeding individuals with severe mobility limitations with a robot is difficult, as many cannot lean forward and require food to be placed directly inside their mouths,” said Tapomayukh “Tapo” Bhattacharjee, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science and senior developer behind the system. “The challenge intensifies when feeding individuals with additional complex medical conditions.”
    A paper on the system, “Feel the Bite: Robot-Assisted Inside-Mouth Bite Transfer using Robust Mouth Perception and Physical Interaction-Aware Control,” was presented at the Human Robot Interaction conference, held March 11-14, in Boulder, Colorado. It received a Best Paper Honorable Mention recognition, while a demo of the research team’s broader robotic feeding system received a Best Demo Award.
    A leader in assistive robotics, Bhattacharjee and his EmPRISE Lab have spent years teaching machines the complex process by which we humans feed ourselves. Teaching a machine is a complicated challenge — everything from identifying food items on a plate, to picking them up, to transferring them inside the mouth of a care recipient.
    “This last 5 centimeters, from the utensil to inside the mouth, is extremely challenging,” Bhattacharjee said.
    Some care recipients may have very limited mouth openings, measuring less than 2 centimeters, while others experience involuntary muscle spasms that can occur unexpectedly, even when the utensil is inside their mouth, Bhattacharjee said. Further, some can only bite food at specific locations inside their mouth, which they indicate by pushing the utensil using their tongue, he said.
    “Current technology only looks at a person’s face once and assumes they will remain still, which is often not the case and can be very limiting for care recipients,” said Rajat Kumar Jenamani, the paper’s lead author and a doctoral student in the field of computer science.

    To address these challenges, researchers developed and outfitted their robot with two essential features: real-time mouth tracking that adjusts to users’ movements, and a dynamic response mechanism that enables the robot to detect the nature of physical interactions as they occur, and react appropriately. This enables the system to distinguish between sudden spasms, intentional bites and user attempts to manipulate the utensil inside their mouth, researchers said.
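    As a purely hypothetical illustration of that kind of response logic (the thresholds and categories below are invented, not the EmPRISE Lab’s), force readings at the utensil might be categorized like this:

    ```python
    # Hypothetical heuristic for categorizing utensil force readings:
    # spasms are sharp and brief, bites are sustained, tongue guidance is
    # gentle and long. All thresholds are invented for illustration.
    def classify_interaction(peak_force_n: float, duration_s: float) -> str:
        if peak_force_n > 5.0 and duration_s < 0.2:
            return "sudden spasm -> retract and hold still"
        if peak_force_n > 1.0 and duration_s >= 0.2:
            return "intentional bite -> release food"
        if peak_force_n <= 1.0 and duration_s >= 0.5:
            return "tongue guidance -> reposition utensil"
        return "no interaction -> continue approach"

    print(classify_interaction(6.2, 0.1))   # spasm
    print(classify_interaction(2.5, 0.8))   # bite
    ```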
    The robotic system successfully fed 13 individuals with diverse medical conditions in a user study spanning three locations: the EmPRISE Lab on the Cornell Ithaca campus, a medical center in New York City, and a care recipient’s home in Connecticut. Users of the robot found it to be safe and comfortable, researchers said.
    “This is one of the most extensive real-world evaluations of any autonomous robot-assisted feeding system with end-users,” Bhattacharjee said.
    The team’s robot is a multi-jointed arm ending in a custom-built utensil that can sense the forces being applied to it. The mouth tracking method — trained on thousands of images featuring various participants’ head poses and facial expressions — combines data from two cameras positioned above and below the utensil. This allows for precise detection of the mouth and overcomes any visual obstructions caused by the utensil itself, researchers said. The physical interaction-aware response mechanism uses both visual and force sensing to perceive how users are interacting with the robot, Jenamani said.
    “We’re empowering individuals to control a 20-pound robot with just their tongue,” he said.
    He cited the user studies as the most gratifying aspect of the project, noting the significant emotional impact of the robot on the care recipients and their caregivers. During one session, the parents of a daughter with schizencephaly quadriplegia, a rare birth defect, witnessed her successfully feed herself using the system.

    “It was a moment of real emotion; her father raised his cap in celebration, and her mother was almost in tears,” Jenamani said.
    While further work is needed to explore the system’s long-term usability, its promising results highlight the potential to improve care recipients’ level of independence and quality of life, researchers said.
    “It’s amazing,” Bhattacharjee said, “and very, very fulfilling.”
    Paper co-authors are: Daniel Stabile, M.S. ’23; Ziang Liu, a doctoral student in the field of computer science; Abrar Anwar of the University of Southern California; and Katherine Dimitropoulou of Columbia University.
    This research was funded primarily by the National Science Foundation.

  • Generative AI that imitates human motion

    Walking and running are notoriously difficult to recreate in robots. Now, an international group of researchers has overcome some of these challenges with an innovative method that combines central pattern generators (CPGs) — neural circuits located in the spinal cord that generate rhythmic patterns of muscle activity — with deep reinforcement learning (DRL). The method not only imitates walking and running motions but also generates movements for frequencies where motion data is absent, enables smooth transitions from walking to running, and allows the robot to adapt to environments with unstable surfaces.
    Details of their breakthrough were published in the journal IEEE Robotics and Automation Letters on April 15, 2024.
    We might not think about it much, but walking and running involve inherent biological redundancies that enable us to adjust to the environment or alter our speed. Given this intricacy and complexity, reproducing these human-like movements in robots is notoriously challenging.
    Current models often struggle to accommodate unknown or challenging environments, which makes them less efficient and effective. This is because AI is suited for generating one or a small number of correct solutions. With living organisms and their motion, there isn’t just one correct pattern to follow. There’s a whole range of possible movements, and it is not always clear which one is the best or most efficient.
    DRL is one way researchers have sought to overcome this. DRL extends traditional reinforcement learning by leveraging deep neural networks to handle more complex tasks and learn directly from raw sensory inputs, enabling more flexible and powerful learning capabilities. Its disadvantage is the huge computational cost of exploring vast input space, especially when the system has a high degree of freedom.
    Another approach is imitation learning, in which a robot learns by imitating motion measurement data from a human performing the same motion task. Although imitation learning is good at learning on stable environments, it struggles when faced with new situations or environments it hasn’t encountered during training. Its ability to modify and navigate effectively becomes constrained by the narrow scope of its learned behaviors.
    “We overcame many of the limitations of these two approaches by combining them,” explains Mitsuhiro Hayashibe, a professor at Tohoku University’s Graduate School of Engineering. “Imitation learning was used to train a CPG-like controller, and, instead of applying deep learning to the CPGs itself, we applied it to a form of a reflex neural network that supported the CPGs.”
    CPGs are neural circuits located in the spinal cord that, like a biological conductor, generate rhythmic patterns of muscle activity. In animals, a reflex circuit works in tandem with CPGs to provide adequate feedback that allows them to adjust their speed and walking/running movements to suit the terrain.
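    The rhythmic part is easy to sketch. The toy below, which is illustrative and not the AI-CPG controller, couples two phase oscillators so they settle half a cycle apart, like left and right legs, producing alternating rhythmic outputs:

    ```python
    import numpy as np

    # Toy central-pattern-generator sketch: two phase oscillators coupled
    # to lock half a cycle apart produce alternating rhythmic signals.
    dt, steps = 0.01, 1000
    omega = 2 * np.pi * 1.5          # intrinsic frequency: 1.5 strides/s
    coupling = 2.0
    phases = np.array([0.0, 0.3])    # start away from the ideal pattern

    trace = []
    for _ in range(steps):
        # Each oscillator is pulled toward being pi radians from the other;
        # the antiphase state is the stable fixed point of this coupling.
        phases[0] += dt * (omega + coupling * np.sin(phases[1] - phases[0] - np.pi))
        phases[1] += dt * (omega + coupling * np.sin(phases[0] - phases[1] - np.pi))
        trace.append(np.sin(phases))

    trace = np.array(trace)
    # After settling, the outputs alternate: when one peaks, the other dips.
    print(np.round(trace[-1], 2))
    ```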

    By adopting the structure of CPG and its reflexive counterpart, the adaptive imitated CPG (AI-CPG) method achieves remarkable adaptability and stability in motion generation while imitating human motion.
    “This breakthrough sets a new benchmark in generating human-like movement in robotics, with unprecedented environmental adaptation capability,” adds Hayashibe. “Our method represents a significant step forward in the development of generative AI technologies for robot control, with potential applications across various industries.”
    The research group comprised members from Tohoku University’s Graduate School of Engineering and the École Polytechnique Fédérale de Lausanne, or the Swiss Federal Institute of Technology in Lausanne.

  • Discovering optimal conditions for mass production of ultraviolet holograms

    Professor Junsuk Rho of the Department of Mechanical Engineering, the Department of Chemical Engineering, and the Department of Electrical Engineering at Pohang University of Science and Technology (POSTECH), together with PhD candidates Hyunjung Kang and Nara Jeon and PhD student Dongkyo Oh, all of the Department of Mechanical Engineering, conducted a thorough quantitative analysis to determine the ideal printing material for crafting ultraviolet metasurfaces. Their findings were featured in the journal Microsystems & Nanoengineering on April 22.
    Metasurfaces are ultra-thin optical devices with the remarkable ability to control light down to a mere nanometer thickness. They have consistently been the subject of research as a pivotal technology for the advancement of next-generation displays, imaging and biosensing, and their reach extends beyond visible light into the realms of infrared and ultraviolet.
    Nanoimprint lithography is a technology for metasurface production, akin to a stamp generating numerous replicas from a single mold. This innovative technique promises affordable, large-scale manufacturing of metasurfaces, paving the way for their commercial viability. However, the resin utilized as the printing material suffers from a drawback — a low refractive index, hindering efficient light manipulation. To tackle this challenge, researchers are actively exploring nanocomposites, integrating nanoparticles into the resin to boost its refractive index. Yet the efficacy of this approach depends on various factors, such as nanoparticle type and solvent choice, necessitating a systematic analysis for optimal metasurface performance.
    In their research, the team meticulously designed experiments to evaluate the impact of nanoparticle concentration and solvent selection on pattern transfer and UV metaholograms. Specifically, they manipulated the concentration of zirconium dioxide (ZrO2), a nanocomposite renowned for its effectiveness in UV metahologram production, ranging from 20% to 90%. The findings showed that the highest pattern transfer efficiency was attained at an 80% concentration level.
    Moreover, when combining ZrO2 at an 80% concentration with various solvents such as methylisobutyl ketone, methyl ethyl ketone, and acetone for metahologram realization, the conversion efficiency soared in the ultraviolet spectrum (325 nm), reaching impressive levels of 62.3%, 51.4%, and 61.5%, respectively. This research marks a significant milestone by establishing an optimal metric for achieving metaholograms specifically tailored for the ultraviolet domain, as opposed to the visible range, while also pioneering the development of new nanocomposites.
    Professor Junsuk Rho from POSTECH remarked, “The use of titanium dioxide (TiO2) and silicon (Si) nanocomposites instead of ZrO2 expands the applicability to visible and infrared light.” He added, “Our future research endeavors will focus on refining the preparation conditions for optimal nanocomposites, thus propelling the advancement, application and expansion of optical metasurface fabrication technology.”
    The research was conducted with support from the STEAM Research Program, the RLRC Program, and the Nano-materials Source Technology Development Project of the National Research Foundation of Korea and the Ministry of Science and ICT, the Alchemist Project of Ministry of Trade, Industry and Energy and the Korea Planning & Evaluation Institute of Industrial Technology, and the N.EX.T IMPACT of POSCO Holdings.