More stories

    Robotic ‘SuperLimbs’ could help moonwalkers recover from falls

    Need a moment of levity? Try watching videos of astronauts falling on the moon. NASA’s outtakes of Apollo astronauts tripping and stumbling as they bounce in slow motion are delightfully relatable.
    For MIT engineers, the lunar bloopers also highlight an opportunity to innovate.
    “Astronauts are physically very capable, but they can struggle on the moon, where gravity is one-sixth that of Earth’s but their inertia is still the same. Furthermore, wearing a spacesuit is a significant burden and can constrict their movements,” says Harry Asada, professor of mechanical engineering at MIT. “We want to provide a safe way for astronauts to get back on their feet if they fall.”
    Asada and his colleagues are designing a pair of wearable robotic limbs that can physically support an astronaut and lift them back on their feet after a fall. The system, which the researchers have dubbed Supernumerary Robotic Limbs, or “SuperLimbs,” is designed to extend from a backpack, which would also carry the astronaut’s life support system, along with the controller and motors to power the limbs.
    The researchers have built a physical prototype, as well as a control system to direct the limbs, based on feedback from the astronaut using it. The team tested a preliminary version on healthy subjects who also volunteered to wear a constrictive garment similar to an astronaut’s spacesuit. When the volunteers attempted to get up from a sitting or lying position, they did so with less effort when assisted by SuperLimbs, compared to when they had to recover on their own.
    The MIT team envisions that SuperLimbs can physically assist astronauts after a fall and, in the process, help them conserve their energy for other essential tasks. The design could prove especially useful in the coming years, with the launch of NASA’s Artemis mission, which plans to send astronauts back to the moon for the first time in over 50 years. Unlike the largely exploratory mission of Apollo, Artemis astronauts will endeavor to build the first permanent moon base — a physically demanding task that will require multiple extended extravehicular activities (EVAs).
    “During the Apollo era, when astronauts would fall, 80 percent of the time it was when they were doing excavation or some sort of job with a tool,” says team member and MIT doctoral student Erik Ballesteros. “The Artemis missions will really focus on construction and excavation, so the risk of falling is much higher. We think that SuperLimbs can help them recover so they can be more productive, and extend their EVAs.”
    Asada, Ballesteros, and their colleagues will present their design and study this week at the IEEE International Conference on Robotics and Automation (ICRA). Their co-authors include MIT postdoc Sang-Yoep Lee and Kalind Carpenter of the Jet Propulsion Laboratory.

    Taking a stand
    The team’s design is the latest application of SuperLimbs, which Asada first developed about a decade ago and has since adapted for a range of applications, including assisting workers in aircraft manufacturing, construction, and shipbuilding.
    Most recently, Asada and Ballesteros wondered whether SuperLimbs might assist astronauts, particularly as NASA plans to send astronauts back to the surface of the moon.
    “In communications with NASA, we learned that this issue of falling on the moon is a serious risk,” Asada says. “We realized that we could make some modifications to our design to help astronauts recover from falls and carry on with their work.”
    The team first took a step back to study the ways in which humans naturally recover from a fall. In their new study, they asked several healthy volunteers to attempt to stand upright after lying on their side, front, and back.
    The researchers then looked at how the volunteers’ attempts to stand changed when their movements were constricted, similar to the way astronauts’ movements are limited by the bulk of their spacesuits. The team built a suit to mimic the stiffness of traditional spacesuits, and had volunteers don the suit before again attempting to stand up from various fallen positions. The volunteers’ sequence of movements was similar, though it required much more effort compared to their unencumbered attempts.

    The team mapped the movements of each volunteer as they stood up, and found that they each carried out a common sequence of motions, moving from one pose, or “waypoint,” to the next, in a predictable order.
    “Those ergonomic experiments helped us to model, in a straightforward way, how a human stands up,” Ballesteros says. “We could postulate that about 80 percent of humans stand up in a similar way. Then we designed a controller around that trajectory.”
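    The waypoint idea described above lends itself to a simple controller sketch: drive the current pose toward each waypoint in turn, advancing once it gets close. The poses and gain below are illustrative placeholders, not the study's measured trajectory data.

```python
import numpy as np

# Hypothetical "stand-up" waypoints as (x, z, torso_angle) poses.
# Values are invented for illustration, not taken from the study.
WAYPOINTS = np.array([
    [0.0, 0.2, 1.4],   # lying on back
    [0.3, 0.4, 0.9],   # rolled to side, propped on elbow
    [0.5, 0.6, 0.5],   # kneeling
    [0.6, 1.0, 0.0],   # standing upright
])

def waypoint_controller(pose, target, gain=0.5):
    """Proportional step toward the current target waypoint."""
    return pose + gain * (target - pose)

def stand_up(start, waypoints, tol=0.05, max_steps=200):
    """Drive the pose through each waypoint in order, advancing when close."""
    pose = np.asarray(start, dtype=float)
    trajectory = [pose.copy()]
    for target in waypoints:
        for _ in range(max_steps):
            pose = waypoint_controller(pose, target)
            trajectory.append(pose.copy())
            if np.linalg.norm(pose - target) < tol:
                break
    return np.array(trajectory)

traj = stand_up([0.0, 0.2, 1.4], WAYPOINTS)
print(traj[-1])  # ends near the "standing upright" waypoint
```

    A real assistive controller would of course sense the wearer's motion and apply forces rather than teleport poses; this only shows the waypoint-sequencing structure.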
    Helping hand
    The team developed software to generate a trajectory for a robot, following a sequence that would help support a human and lift them back on their feet. They applied the controller to a heavy, fixed robotic arm, which they attached to a large backpack. The researchers then attached the backpack to the bulky suit and helped volunteers back into the suit. They asked the volunteers to again lie on their back, front, or side, and then had them attempt to stand as the robot sensed the person’s movements and adapted to help them to their feet.
    Overall, the volunteers were able to stand stably with much less effort when assisted by the robot, compared to when they tried to stand alone while wearing the bulky suit.
    “It feels kind of like an extra force moving with you,” says Ballesteros, who also tried out the suit and arm assist. “Imagine wearing a backpack and someone grabs the top and sort of pulls you up. Over time, it becomes sort of natural.”
    The experiments confirmed that the control system can successfully direct a robot to help a person stand back up after a fall. The researchers plan to pair the control system with their latest version of SuperLimbs, which comprises two multijointed robotic arms that can extend out from a backpack. The backpack would also contain the robot’s battery and motors, along with an astronaut’s ventilation system.
    “We designed these robotic arms based on an AI search and design optimization, to look for designs of classic robot manipulators with certain engineering constraints,” Ballesteros says. “We filtered through many designs and looked for the design that consumes the least amount of energy to lift a person up. This version of SuperLimbs is the product of that process.”
    Over the summer, Ballesteros will build out the full SuperLimbs system at NASA’s Jet Propulsion Laboratory, where he plans to streamline the design and minimize the weight of its parts and motors using advanced, lightweight materials. Then, he hopes to pair the limbs with astronaut suits, and test them in low-gravity simulators, with the goal of someday assisting astronauts on future missions to the moon and Mars.
    “Wearing a spacesuit can be a physical burden,” Asada notes. “Robotic systems can help ease that burden, and help astronauts be more productive during their missions.”
    This research was supported, in part, by NASA.

    Wavefunction matching for solving quantum many-body problems

    Strongly interacting systems play an important role in quantum physics and quantum chemistry. Stochastic methods such as Monte Carlo simulations are a proven approach for investigating such systems. However, these methods reach their limits when so-called sign oscillations occur. This problem has now been solved by an international team of researchers from Germany, Turkey, the USA, China, South Korea and France using the new method of wavefunction matching. As an example, the masses and radii of all nuclei up to mass number 50 were calculated using this method. The results agree with the measurements, the researchers now report in the journal “Nature.”
    All matter on Earth consists of tiny particles known as atoms. Each atom contains even smaller particles: protons, neutrons and electrons. Each of these particles follows the rules of quantum mechanics. Quantum mechanics forms the basis of quantum many-body theory, which describes systems with many particles, such as atomic nuclei.
    One class of methods used by nuclear physicists to study atomic nuclei is the ab initio approach. It describes complex systems by starting from a description of their elementary components and their interactions. In the case of nuclear physics, the elementary components are protons and neutrons. Some key questions that ab initio calculations can help answer are the binding energies and properties of atomic nuclei and the link between nuclear structure and the underlying interactions between protons and neutrons.
    However, these ab initio methods have difficulties in performing reliable calculations for systems with complex interactions. One of these methods is quantum Monte Carlo simulations. Here, quantities are calculated using random or stochastic processes. Although quantum Monte Carlo simulations can be efficient and powerful, they have a significant weakness: the sign problem. It arises in processes with positive and negative weights, which cancel each other. This cancellation leads to inaccurate final predictions.
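    The cancellation effect can be seen in a few lines of Monte Carlo: estimate the average of ±1 weights with mild versus severe cancellation, and compare the relative statistical error at a fixed sample count. This is a toy illustration of the sign problem, not a physical simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def signed_average(p_plus):
    """Monte Carlo estimate of the average sign <s> when samples carry
    weight +1 with probability p_plus and -1 otherwise."""
    signs = rng.choice([1.0, -1.0], size=n, p=[p_plus, 1 - p_plus])
    return signs.mean(), signs.std() / np.sqrt(n)

# Mild cancellation: the estimate is sharp relative to its size.
mean_mild, err_mild = signed_average(0.99)
# Severe cancellation: positive and negative weights nearly cancel,
# so the same number of samples gives a far noisier answer.
mean_severe, err_severe = signed_average(0.505)

print(err_mild / abs(mean_mild), err_severe / abs(mean_severe))
```

    The relative error explodes as the average sign shrinks toward zero, which is why simulations with strong cancellation become unreliable.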
    A new approach, known as wavefunction matching, is intended to help solve such calculation problems for ab initio methods. “This problem is solved by the new method of wavefunction matching by mapping the complicated problem in a first approximation to a simple model system that does not have such sign oscillations and then treating the differences in perturbation theory,” says Prof. Ulf-G. Meißner from the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn and from the Institute of Nuclear Physics and the Center for Advanced Simulation and Analytics at Forschungszentrum Jülich. “As an example, the masses and radii of all nuclei up to mass number 50 were calculated — and the results agree with the measurements,” reports Meißner, who is also a member of the Transdisciplinary Research Areas “Modeling” and “Matter” at the University of Bonn.
    “In quantum many-body theory, we are often faced with the situation that we can perform calculations using a simple approximate interaction, but realistic high-fidelity interactions cause severe computational problems,” says Dean Lee, Professor of Physics at the Facility for Rare Isotope Beams (FRIB) and Department of Physics and Astronomy at Michigan State University and head of the Department of Theoretical Nuclear Sciences.
    Wavefunction matching solves this problem by removing the short-distance part of the high-fidelity interaction and replacing it with the short-distance part of an easily calculable interaction. This transformation is done in a way that preserves all the important properties of the original realistic interaction. Since the new wavefunctions are similar to those of the easily computable interaction, the researchers can now perform calculations with the easily computable interaction and apply a standard procedure for handling small corrections — called perturbation theory.
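    A minimal toy version of this idea, assuming nothing about the actual nuclear interactions: solve an "easy" Hamiltonian exactly, then treat the small residual difference to the "high-fidelity" problem with first-order perturbation theory and compare against the exact answer.

```python
import numpy as np

# Toy 2-level system (illustrative only): H0 is the easily computable
# interaction, V is a small correction standing in for the residual
# difference left after wavefunction matching.
H0 = np.array([[1.0, 0.2],
               [0.2, 3.0]])
V = np.array([[0.05, 0.01],
              [0.01, -0.03]])
H = H0 + V  # the "high-fidelity" problem we actually care about

# Solve the simple problem exactly.
e0, vecs = np.linalg.eigh(H0)
psi0 = vecs[:, 0]  # ground state of H0

# First-order perturbation theory for the small correction.
e_pert = e0[0] + psi0 @ V @ psi0

# Exact ground-state energy of the full problem, for comparison.
e_exact = np.linalg.eigvalsh(H)[0]
print(e_pert, e_exact)
```

    Perturbation theory works here precisely because V is small once the problematic short-distance part has been swapped out, which is the point of the matching transformation.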
    The research team applied this new method to lattice quantum Monte Carlo simulations for light nuclei, medium-mass nuclei, neutron matter and nuclear matter. Using precise ab initio calculations, the results closely matched real-world data on nuclear properties such as size, structure and binding energy. Calculations that were once impossible due to the sign problem can now be performed with wavefunction matching.
    While the research team focused exclusively on quantum Monte Carlo simulations, wavefunction matching should be useful for many different ab initio approaches. “This method can be used in both classical computing and quantum computing, for example to better predict the properties of so-called topological materials, which are important for quantum computing,” says Meißner.
    The first author is Prof. Dr. Serdar Elhatisari, who worked for two years as a Fellow in Prof. Meißner’s ERC Advanced Grant EXOTIC. According to Meißner, a large part of the work was carried out during this time. Part of the computing time on supercomputers at Forschungszentrum Jülich was provided by the IAS-4 institute, which Meißner heads.

    Animal brain inspired AI game changer for autonomous robots

    A team of researchers at Delft University of Technology has developed a drone that flies autonomously using neuromorphic image processing and control based on the workings of animal brains. Animal brains use less data and energy compared to current deep neural networks running on GPUs (graphics chips). Neuromorphic processors are therefore very suitable for small drones because they don’t need heavy and large hardware and batteries. The results are extraordinary: during flight the drone’s deep neural network processes data up to 64 times faster and consumes three times less energy than when running on a GPU. Further developments of this technology may enable drones to become as small, agile, and smart as flying insects or birds. The findings were recently published in Science Robotics.
    Learning from animal brains: spiking neural networks
    Artificial intelligence holds great potential to provide autonomous robots with the intelligence needed for real-world applications. However, current AI relies on deep neural networks that require substantial computing power. The processors made for running deep neural networks (Graphics Processing Units, GPUs) consume a substantial amount of energy. Especially for small robots like flying drones this is a problem, since they can only carry very limited resources in terms of sensing and computing.
    Animal brains process information in a way that is very different from the neural networks running on GPUs. Biological neurons process information asynchronously, and mostly communicate via electrical pulses called spikes. Since sending such spikes costs energy, the brain minimizes spiking, leading to sparse processing.
    Inspired by these properties of animal brains, scientists and tech companies are developing new, neuromorphic processors. These new processors make it possible to run spiking neural networks and promise to be much faster and more energy efficient.
    “The calculations performed by spiking neural networks are much simpler than those in standard deep neural networks,” says Jesse Hagenaars, PhD candidate and one of the authors of the article. “Whereas digital spiking neurons only need to add integers, standard neurons have to multiply and add floating point numbers. This makes spiking neural networks quicker and more energy efficient. To understand why, think of how humans also find it much easier to calculate 5 + 8 than to calculate 6.25 x 3.45 + 4.05 x 3.45.”
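    The integer-versus-float point can be made concrete in a few lines. This is a schematic comparison of the two kinds of neuron, not Loihi's actual arithmetic:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=8)

# Standard artificial neuron: multiply-and-add over float activations.
activations = rng.uniform(size=8)
standard_out = float(weights @ activations)  # 8 float multiplies + adds

# Digital spiking neuron: inputs are binary spikes, so the weighted sum
# reduces to summing the weights of the inputs that fired -- no multiplies.
spikes = (activations > 0.5).astype(int)
spiking_out = float(weights[spikes == 1].sum())

# The shortcut gives the same result as the dot product with binary inputs.
print(standard_out, spiking_out)
```

    The energy saving comes from the hardware side of this same observation: accumulating integers is far cheaper in silicon than floating-point multiply-accumulate.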
    This energy efficiency is further boosted if neuromorphic processors are used in combination with neuromorphic sensors, like neuromorphic cameras. Such cameras do not make images at a fixed time interval. Instead, each pixel only sends a signal when it becomes brighter or darker. The advantages of such cameras are that they can perceive motion much more quickly, are more energy efficient, and function well both in dark and bright environments. Moreover, the signals from neuromorphic cameras can feed directly into spiking neural networks running on neuromorphic processors. Together, they can form a huge enabler for autonomous robots, especially small, agile robots like flying drones.
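    The change-driven behaviour of such cameras can be sketched as a per-pixel threshold on log-brightness change; the threshold value and frames here are arbitrary stand-ins:

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.1):
    """Emit per-pixel events only where log-brightness changed enough,
    mimicking how a neuromorphic camera reports changes, not frames."""
    delta = np.log1p(curr) - np.log1p(prev)
    events = np.zeros_like(delta, dtype=int)
    events[delta > threshold] = 1    # pixel got brighter: ON event
    events[delta < -threshold] = -1  # pixel got darker: OFF event
    return events

prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[0, 0] = 0.9   # one pixel brightens
curr[3, 3] = 0.1   # one pixel darkens

ev = events_from_frames(prev, curr)
print(ev)  # most pixels stay silent; only the two changed pixels fire
```

    Because static pixels produce no output at all, the downstream spiking network receives sparse input, which is exactly what it processes efficiently.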

    First neuromorphic vision and control of a flying drone
    In an article published in Science Robotics on May 15, 2024, researchers from Delft University of Technology, the Netherlands, demonstrate for the first time a drone that uses neuromorphic vision and control for autonomous flight. Specifically, they developed a spiking neural network that processes the signals from a neuromorphic camera and outputs control commands that determine the drone’s pose and thrust. They deployed this network on a neuromorphic processor, Intel’s Loihi neuromorphic research chip, on board a drone. Thanks to the network, the drone can perceive and control its own motion in all directions.
    “We faced many challenges,” says Federico Paredes-Vallés, one of the researchers that worked on the study, “but the hardest one was to imagine how we could train a spiking neural network so that training would be both sufficiently fast and the trained network would function well on the real robot. In the end, we designed a network consisting of two modules. The first module learns to visually perceive motion from the signals of a moving neuromorphic camera. It does so completely by itself, in a self-supervised way, based only on the data from the camera. This is similar to how also animals learn to perceive the world by themselves. The second module learns to map the estimated motion to control commands, in a simulator. This learning relied on an artificial evolution in simulation, in which networks that were better in controlling the drone had a higher chance of producing offspring. Over the generations of the artificial evolution, the spiking neural networks got increasingly good at control, and were finally able to fly in any direction at different speeds. We trained both modules and developed a way with which we could merge them together. We were happy to see that the merged network immediately worked well on the real robot.”
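    The second module's training can be caricatured as a tiny artificial-evolution loop. The "fitness" below is a made-up stand-in for flying well (it simply rewards controller gains close to a hypothetical target), but the select-mutate-repeat structure is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
TARGET = np.array([0.8, -0.3])  # hypothetical "good" controller gains

def fitness(gains):
    """Stand-in for 'controls the drone well': closeness to TARGET."""
    return -np.linalg.norm(gains - TARGET)

# Artificial evolution: better controllers get more offspring.
population = rng.normal(size=(32, 2))
for generation in range(60):
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-8:]]             # keep the best 8
    children = np.repeat(parents, 4, axis=0)                  # 4 offspring each
    children += rng.normal(scale=0.05, size=children.shape)   # mutate
    population = children

best = population[np.argmax([fitness(g) for g in population])]
print(best)  # converges toward the target gains over the generations
```

    In the actual study the evaluation happened in a flight simulator and the evolved object was a spiking network, but the generational improvement works the same way.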
    With its neuromorphic vision and control, the drone is able to fly at different speeds under varying light conditions, from dark to bright. It can even fly with flickering lights, which make the pixels in the neuromorphic camera send great numbers of signals to the network that are unrelated to motion.
    Improved energy efficiency and speed by neuromorphic AI
    “Importantly, our measurements confirm the potential of neuromorphic AI. The network runs on average between 274 and 1600 times per second. If we run the same network on a small, embedded GPU, it runs on average only 25 times per second, a difference of a factor of ~10-64! Moreover, when running the network, Intel’s Loihi neuromorphic research chip consumes 1.007 watts, of which 1 watt is the idle power that the processor spends just on turning on the chip. Running the network itself only costs 7 milliwatts. In comparison, when running the same network, the embedded GPU consumes 3 watts, of which 1 watt is idle power and 2 watts are spent on running the network. The neuromorphic approach results in AI that runs faster and more efficiently, allowing deployment on much smaller autonomous robots,” says Stein Stroobants, PhD candidate in the field of neuromorphic drones.
    Future applications of neuromorphic AI for tiny robots
    “Neuromorphic AI will enable all autonomous robots to be more intelligent,” says Guido de Croon, Professor of bio-inspired drones, “but it is an absolute enabler for tiny autonomous robots. At Delft University of Technology’s Faculty of Aerospace Engineering, we work on tiny autonomous drones which can be used for applications ranging from monitoring crops in greenhouses to keeping track of stock in warehouses. The advantages of tiny drones are that they are very safe and can navigate in narrow environments like in between ranges of tomato plants. Moreover, they can be very cheap, so that they can be deployed in swarms. This is useful for more quickly covering an area, as we have shown in exploration and gas source localization settings.”
    “The current work is a great step in this direction. However, the realization of these applications will depend on further scaling down the neuromorphic hardware and expanding the capabilities towards more complex tasks such as navigation.”

    Robots’ and prosthetic hands’ sense of touch could be as fast as humans

    Research at Uppsala University and Karolinska Institutet could pave the way for a prosthetic hand and robot to be able to feel touch like a human hand. Their study has been published in the journal Science. The technology could also be used to help restore lost functionality to patients after a stroke.
    “Our system can determine what type of object it encounters as fast as a blindfolded person, just by feeling it and deciding whether it is a tennis ball or an apple, for example,” says Zhibin Zhang, docent at the Department of Electrical Engineering at Uppsala University.
    He and his colleague Libo Chen performed the study in close cooperation with researchers from the Signals and Systems Division at Uppsala University, who provided data processing and machine learning expertise, and a group of researchers from the Department of Neurobiology, Care Sciences and Society, Division of Neurogeriatrics at Karolinska Institutet.
    Drawing inspiration from neuroscience, they have developed an artificial tactile system that imitates the way the human nervous system reacts to touch. The system uses electrical pulses that process dynamic tactile information in the same way as the human nervous system. “With this technology, a prosthetic hand would feel like part of the wearer’s body,” Zhang explains.
    The artificial system has three main components: an electronic skin (e-skin) with sensors that can detect pressure by touch; a set of artificial neurons that convert analogue touch signals into electrical pulses; and a processor that processes the signals and identifies the object. In principle, it can learn to identify an unlimited number of objects, but in their tests the researchers have used 22 different objects for grasping and 16 different surfaces for touching.
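    A rough sketch of the first two stages, with entirely made-up pressure traces: an integrate-and-fire rule turns an analogue pressure signal into spikes, and the spike count alone already separates a firm contact from a gentle one.

```python
import numpy as np

def pressure_to_spikes(signal, threshold=0.05):
    """Integrate-and-fire encoding: accumulate the analogue pressure trace
    and emit a spike (at most one per time step) whenever the accumulator
    crosses the threshold."""
    spikes = np.zeros(len(signal), dtype=int)
    acc = 0.0
    for i, sample in enumerate(signal):
        acc += sample
        if acc >= threshold:
            spikes[i] = 1
            acc -= threshold
    return spikes

# Two hypothetical touch traces (invented example data): a hard object
# gives a sharp, strong pressure profile, a soft one a weak, gentle profile.
t = np.linspace(0, 1, 100)
hard = 0.08 * np.exp(-((t - 0.5) / 0.05) ** 2)
soft = 0.005 * np.exp(-((t - 0.5) / 0.25) ** 2)

hard_rate = pressure_to_spikes(hard).sum()
soft_rate = pressure_to_spikes(soft).sum()
print(hard_rate, soft_rate)  # the firmer contact produces more spikes
```

    The real system's processor would then classify objects from such spike patterns; here the spike count stands in for that downstream step.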
    “We’re also looking into developing the system so it can feel pain and heat as well. It should also be able to feel what material the hand is touching, for example, whether it is wood or metal,” says Assistant Professor Libo Chen, who led the study.
    According to the researchers, interactions between humans and robots or prosthetic hands can be made safer and more natural thanks to tactile feedback. The prostheses can also be given the ability to handle objects with the same dexterity as a human hand.
    “The skin contains millions of receptors. Current e-skin technology cannot deliver enough receptors, but this technology makes it possible, so we would like to produce artificial skin for a whole robot,” says Chen.
    The technology could also be used medically, for example, to monitor movement dysfunctions caused by Parkinson’s disease and Alzheimer’s disease, or to help patients recover lost functionality after a stroke.
    “The technology can be further developed to tell if a patient is about to fall. This information can then be used to either stimulate a muscle externally to prevent the fall or prompt an assistive device to take over and prevent it,” says Zhang.

    Researchers use artificial intelligence to boost image quality of metalens camera

    Researchers have leveraged deep learning techniques to enhance the image quality of a metalens camera. The new approach uses artificial intelligence to turn low-quality images into high-quality ones, which could make these cameras viable for a multitude of imaging tasks including intricate microscopy applications and mobile devices.
    Metalenses are ultrathin optical devices — often just a fraction of a millimeter thick — that use nanostructures to manipulate light. Although their small size could potentially enable extremely compact and lightweight cameras without traditional optical lenses, it has been difficult to achieve the necessary image quality with these optical components.
    “Our technology allows our metalens-based devices to overcome the limitations of image quality,” said research team leader Ji Chen from Southeast University in China. “This advance will play an important role in the future development of highly portable consumer imaging electronics and can also be used in specialized imaging applications such as microscopy.”
    In Optica Publishing Group journal Optics Letters, the researchers describe how they used a type of machine learning known as a multi-scale convolutional neural network to improve resolution, contrast and distortion in images from a small camera — about 3 cm × 3 cm × 0.5 cm — they created by directly integrating a metalens onto a CMOS imaging chip.
    “Metalens-integrated cameras can be directly incorporated into the imaging modules of smartphones, where they could replace the traditional refractive bulk lenses,” said Chen. “They could also be used in devices such as drones, where the small size and lightweight camera would ensure imaging quality without compromising the drone’s mobility.”
    Enhancing image quality
    The camera used in the new work was previously developed by the researchers and uses a metalens with 1000-nm tall cylindrical silicon nitride nano-posts. The metalens focuses light directly onto a CMOS imaging sensor without requiring any other optical elements. Although this design created a very small camera, the compact architecture limited the image quality. Thus, the researchers decided to see if machine learning could be used to improve the images.

    Deep learning is a type of machine learning that uses artificial neural networks with multiple layers to automatically learn features from data and make complex decisions or predictions. The researchers applied this approach by using a convolution imaging model to generate a large number of high- and low-quality image pairs. These image pairs were used to train a multi-scale convolutional neural network so that it could recognize the characteristics of each type of image and use that to turn low-quality images into high-quality images.
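    The pair-generation step can be sketched with a toy convolution imaging model. The Gaussian point-spread function below is an arbitrary stand-in for the metalens's real optical response:

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_psf(size=9, sigma=2.0):
    """Toy point-spread function standing in for the lens response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def degrade(image, psf, noise=0.01):
    """Convolution imaging model: blur the clean image and add sensor
    noise to synthesize the low-quality member of a training pair."""
    pad = psf.shape[0] // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + psf.shape[0], j:j + psf.shape[1]]
            blurred[i, j] = np.sum(window * psf)
    return blurred + rng.normal(scale=noise, size=image.shape)

clean = rng.uniform(size=(32, 32))        # stand-in "high-quality" image
low_quality = degrade(clean, gaussian_psf())
pair = (low_quality, clean)               # one (input, target) training example
print(clean.std(), low_quality.std())     # blurring reduces contrast
```

    Repeating this over many clean images yields the large paired dataset the network needs; the network then learns the inverse mapping from degraded to clean.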
    “A key part of this work was developing a way to generate the large amount of training data needed for the neural network learning process,” said Chen. “Once trained, a low-quality image can be sent from the device into the neural network for processing, and high-quality imaging results are obtained immediately.”
    Applying the neural network
    To validate the new deep learning technique, the researchers used it on 100 test images. They analyzed two commonly used image processing metrics: the peak signal-to-noise ratio and the structural similarity index. They found that the images processed by the neural network exhibited a significant improvement in both metrics. They also showed that the approach could rapidly generate high-quality imaging data that closely resembled what was captured directly through experimentation.
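    PSNR, the first of those metrics, is simple to compute directly from its definition; the images below are synthetic stand-ins for the test set:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference image."""
    mse = np.mean((reference - test) ** 2)
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(4)
reference = rng.uniform(size=(64, 64))
# A heavily corrupted image versus a mildly corrupted "enhanced" one.
noisy = np.clip(reference + rng.normal(scale=0.1, size=reference.shape), 0, 1)
enhanced = np.clip(reference + rng.normal(scale=0.02, size=reference.shape), 0, 1)

print(psnr(reference, noisy), psnr(reference, enhanced))
```

    A successful enhancement network shows up as exactly this kind of gap: a markedly higher PSNR (and SSIM) for its outputs than for the raw camera images.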
    The researchers are now designing metalenses with complex functionalities — such as color or wide-angle imaging — and developing neural network methods for enhancing the imaging quality of these advanced metalenses. To make this technology practical for commercial application would require new assembly techniques for integrating metalenses into smartphone imaging modules and image quality enhancement software designed specifically for mobile phones.
    “Ultra-lightweight and ultra-thin metalenses represent a revolutionary technology for future imaging and detection,” said Chen. “Leveraging deep learning techniques to optimize metalens performance marks a pivotal developmental trajectory. We foresee machine learning as a vital trend in advancing photonics research.”

    A simple quantum internet with significant possibilities

    It’s one thing to dream up a quantum internet that could send hacker-proof information around the world via photons superimposed in different quantum states. It’s quite another to physically show it’s possible.
    That’s exactly what Harvard physicists have done, using existing Boston-area telecommunication fiber, in a demonstration of the world’s longest fiber distance between two quantum memory nodes to date. Think of it as a simple, closed internet between point A and B, carrying a signal encoded not by classical bits like the existing internet, but by perfectly secure, individual particles of light.
    The groundbreaking work, published in Nature, was led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in the Department of Physics, in collaboration with Harvard professors Marko Lončar and Hongkun Park, who are all members of the Harvard Quantum Initiative, alongside researchers at Amazon Web Services.
    The Harvard team established the practical makings of the first quantum internet by entangling two quantum memory nodes separated by an optical fiber link deployed over a roughly 22-mile loop through Cambridge, Somerville, Watertown, and Boston. The two nodes were located a floor apart in Harvard’s Laboratory for Integrated Science and Engineering.
    Quantum memory, analogous to classical computer memory, is an important component of an interconnected quantum computing future because it allows for complex network operations and information storage and retrieval. While other quantum networks have been created in the past, the Harvard team’s is the longest fiber network between devices that can store, process and move information.
    Each node is a very small quantum computer, made out of a sliver of diamond that has a defect in its atomic structure called a silicon-vacancy center. Inside the diamond, carved structures smaller than a hundredth the width of a human hair enhance the interaction between the silicon-vacancy center and light.
    The silicon-vacancy center contains two qubits, or bits of quantum information: one in the form of an electron spin used for communication, and the other in a longer-lived nuclear spin used as a memory qubit to store entanglement (the quantum-mechanical property that allows information to be perfectly correlated across any distance). Both spins are fully controllable with microwave pulses. These diamond devices — just a few millimeters square — are housed inside dilution refrigeration units that reach temperatures of -459 Fahrenheit.

    Using silicon-vacancy centers as quantum memory devices for single photons has been a multi-year research program at Harvard. The technology solves a major problem in the theorized quantum internet: signal loss that can’t be boosted in traditional ways. A quantum network cannot use standard optical-fiber signal repeaters because copying of arbitrary quantum information is impossible — making the information secure, but also very hard to transport over long distances.
    Silicon vacancy center-based network nodes can catch, store and entangle bits of quantum information while correcting for signal loss. After cooling the nodes to close to absolute zero, light is sent through the first node and, by nature of the silicon vacancy center’s atomic structure, becomes entangled with it.
    “Since the light is already entangled with the first node, it can transfer this entanglement to the second node,” explained first author Can Knaut, a Kenneth C. Griffin Graduate School of Arts and Sciences student in Lukin’s lab. “We call this photon-mediated entanglement.”
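    A minimal statevector sketch (textbook linear algebra, nothing like the silicon-vacancy hardware) shows the correlation property that the entangled memory qubits store:

```python
import numpy as np

# Single-qubit basis states.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Bell state |Phi+> = (|00> + |11>) / sqrt(2): one qubit per "node".
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Probabilities of the four joint measurement outcomes |00>, |01>, |10>, |11>.
probs = np.abs(bell) ** 2
print(probs)  # only the perfectly correlated outcomes 00 and 11 occur
```

    Measuring either qubit instantly tells you the other's outcome, which is the "perfectly correlated across any distance" property the article describes; the experiment's achievement is creating and storing this state between nodes 22 miles of fiber apart.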
    Over the last several years, the researchers have leased optical fiber from a company in Boston to run their experiments, fitting their demonstration network on top of the existing fiber to indicate that creating a quantum internet with similar network lines would be possible.
    “Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step towards practical networking between quantum computers,” Lukin said.
    A two-node quantum network is only the beginning. The researchers are working diligently to extend the performance of their network by adding nodes and experimenting with more networking protocols.

    Next-generation sustainable electronics are doped with air

    Semiconductors are the foundation of all modern electronics. Now, researchers at Linköping University, Sweden, have developed a new method where organic semiconductors can become more conductive with the help of air as a dopant. The study, published in the journal Nature, is a significant step towards future cheap and sustainable organic semiconductors.
    “We believe this method could significantly influence the way we dope organic semiconductors. All components are affordable, easily accessible, and potentially environmentally friendly, which is a prerequisite for future sustainable electronics,” says Simone Fabiano, associate professor at Linköping University.
    Semiconductors based on conductive plastics instead of silicon have many potential applications. Among other things, organic semiconductors can be used in digital displays, solar cells, LEDs, sensors, implants, and for energy storage.
    To enhance conductivity and modify semiconductor properties, so-called dopants are typically introduced. These additives facilitate the movement of electrical charges within the semiconductor material and can be tailored to induce positive (p-doping) or negative (n-doping) charges. The most common dopants used today are very reactive (and therefore unstable), expensive, challenging to manufacture, or all three.
    Now, researchers at Linköping University have developed a doping method that can be performed at room temperature, in which a normally inefficient dopant, oxygen, serves as the primary dopant and light activates the doping process.
    “Our approach was inspired by nature, as it shares many analogies with photosynthesis, for example. In our method, light activates a photocatalyst, which then facilitates electron transfer from a typically inefficient dopant to the organic semiconductor material,” says Simone Fabiano.
    The new method involves dipping the conductive plastic into a special salt solution — a photocatalyst — and then illuminating it with light for a short time. The duration of illumination determines the degree to which the material is doped. Afterwards, the solution is recovered for future use, leaving behind a p-doped conductive plastic in which the only consumed substance is oxygen in the air.

    This is possible because the photocatalyst acts as an “electron shuttle,” taking electrons from the material or donating them to it in the presence of sacrificial weak oxidants or reductants. This is common in chemistry but has not previously been used in organic electronics.
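    The shuttle cycle can be sketched as a generic photoredox sequence (an illustrative reconstruction from the description above, not the paper's reported mechanism; PC stands for the photocatalyst and OSC for the organic semiconductor):

    ```latex
    % Illustrative photocatalytic p-doping cycle (PC = photocatalyst, OSC = organic semiconductor)
    \[ \mathrm{PC} \xrightarrow{\;h\nu\;} \mathrm{PC}^{*} \]  % light excites the catalyst
    \[ \mathrm{PC}^{*} + \mathrm{OSC} \longrightarrow \mathrm{PC}^{\bullet -} + \mathrm{OSC}^{\bullet +} \]  % an electron leaves the semiconductor: p-doping
    \[ \mathrm{PC}^{\bullet -} + \mathrm{O_2} \longrightarrow \mathrm{PC} + \mathrm{O_2}^{\bullet -} \]  % ambient oxygen regenerates the catalyst
    ```

    The net effect matches the description above: the catalyst is recovered for reuse, and the only substance consumed is oxygen from the air.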
    “It’s also possible to combine p-doping and n-doping in the same reaction, which is quite unique. This simplifies the production of electronic devices, particularly those where both p-doped and n-doped semiconductors are required, such as thermoelectric generators. All parts can be manufactured at once and doped simultaneously instead of one by one, making the process more scalable,” says Simone Fabiano.
    The doped organic semiconductor has better conductivity than traditional semiconductors, and the process can be scaled up. Simone Fabiano and his research group at the Laboratory of Organic Electronics showed earlier in 2024 how conductive plastics could be processed from environmentally friendly solvents like water; this is their next step.
    “We are at the beginning of trying to fully understand the mechanism behind it and what other potential application areas exist. But it’s a very promising approach showing that photocatalytic doping is a new cornerstone in organic electronics,” says Simone Fabiano, a Wallenberg Academy Fellow.

  • in

    Tech can’t replace human coaches in obesity treatment

    A new Northwestern Medicine study shows that technology alone can’t replace the human touch to produce meaningful weight loss in obesity treatment.
    “Giving people technology alone for the initial phase of obesity treatment produces unacceptably worse weight loss than giving them treatment that combines technology with a human coach,” said corresponding study author Bonnie Spring, director of the Center for Behavior and Health and professor of preventive medicine at Northwestern University Feinberg School of Medicine.
    The need for low cost but effective obesity treatments delivered by technology has become urgent as the ongoing obesity epidemic exacerbates burgeoning health care costs.
    But current technology is not advanced enough to replace human coaches, Spring said.
    In the new SMART study, people who initially received technology alone, without coach support, were less likely to achieve meaningful weight loss, defined as at least 5% of body weight, than those who had a human coach from the start.
    Investigators intensified treatment quickly (by adding resources after just two weeks) if a person showed less than optimal weight loss, but the weight loss disadvantage for those who began their weight loss effort without coach support persisted for six months, the study showed.
    The study will be published May 14 in JAMA.

    Eventually more advanced technology may be able to supplant human coaches, Spring said.
    “At this stage, the average person still needs a human coach to achieve clinically meaningful weight loss goals because the tech isn’t sufficiently developed yet,” Spring said. “We may not be so far away from having an AI chatbot that can sub for a human, but we are not quite there yet. It’s within reach. The tech is developing really fast.”
    Previous research showed that mobile health tools for tracking diet, exercise and weight increase engagement in behavioral obesity treatment. Before this new study, it wasn’t clear whether they produced clinically acceptable weight loss in the absence of support from a human coach.
    Scientists are now trying to parse what human coaches do that makes them successful, and how AI can better imitate a human, not just in terms of content but in emotional tone and context awareness, Spring said.
    Surprising results
    “We predicted that starting treatment with technology alone would save money and reduce burden without undermining clinically beneficial weight loss, because treatment augmentation occurred so quickly once poor weight loss was detected,” Spring said. “That hypothesis was disproven, however.”
    Drug and surgical interventions also are available for obesity but have some drawbacks. “They’re very expensive, convey medical risks and side effects and aren’t equitably accessible,” Spring said. Most people who begin taking a GLP-1 agonist stop taking the drug within a year against medical advice, she noted.

    Many people can achieve clinically meaningful weight loss without antiobesity medications, bariatric surgery or even behavioral treatment, Spring said. In the SMART study, 25% of people who began treatment with technology alone achieved 5% weight loss after six months without any treatment augmentation. (In fact, the team had to take back the study technologies after three months to recycle them to new participants.)
    An unsolved problem in obesity treatment is matching treatment type and intensity to individuals’ needs and preferences. “If we could just tell ahead of time who needs which treatment at what intensity, we might start to manage the obesity epidemic,” Spring said.
    How the study worked
    The SMART Weight Loss Management study was a randomized controlled trial that compared two different stepped care treatment approaches for adult obesity. Stepped care offers a way to spread treatment resources across more of the population in need. The treatment that uses the least resources but that will benefit some people is delivered first; then treatment is intensified only for those who show insufficient response. Half of participants in the SMART study began their weight loss treatment with technology alone. The other half began with gold standard treatment involving both technology and a human coach.
    The technology used in the SMART trial was a Wireless Feedback System (an integrated app, Wi-Fi scale and Fitbit) that participants used to track and receive feedback about their diet, activity and weight.
    Four hundred adults ages 18 to 60 with obesity were randomly assigned to begin three months of stepped care behavioral obesity treatment with either the Wireless Feedback System (WFS) alone or the WFS plus telehealth coaching. Weight loss was measured after two, four and eight weeks of treatment, and treatment was intensified at the first sign of suboptimal weight loss (less than 0.5 pounds per week).
    Treatment for both groups began with the same WFS tracking technology, but standard-of-care treatment also transmitted the participant’s digital data to a coach, who used it to provide behavioral coaching by telehealth. Those showing suboptimal weight loss in either group were re-randomized once to either of two levels of treatment intensification: modest (adding an inexpensive technology component — supportive messaging) or vigorous (adding both messaging plus a more costly traditional weight loss treatment component — coaching for those who hadn’t received it, meal replacement for those who’d already received coaching).
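    The stepped-care decision rule described above can be sketched in Python (the function and variable names are ours, for illustration; the 0.5-pound threshold, the check schedule, and the two intensification levels come from the article):

    ```python
    import random

    SUBOPTIMAL = 0.5  # pounds per week; the trial's threshold for suboptimal loss

    def needs_intensification(weekly_loss_lbs):
        """Weight loss below 0.5 lb/week counts as a suboptimal response."""
        return weekly_loss_lbs < SUBOPTIMAL

    def intensify(current_arm):
        """On the first suboptimal response, re-randomize once to a modest
        or vigorous step-up, as described in the article."""
        if random.random() < 0.5:
            return current_arm + ["supportive messaging"]  # modest
        # vigorous: messaging plus a traditional component the person lacks
        extra = ("coaching" if "coaching" not in current_arm
                 else "meal replacement")
        return current_arm + ["supportive messaging", extra]

    def run_stepped_care(start_with_coach, weekly_losses_at_checks):
        """Checks occur after weeks 2, 4 and 8; intensify at the first
        sign of suboptimal loss, and only once."""
        arm = ["WFS tracking"] + (["coaching"] if start_with_coach else [])
        for loss in weekly_losses_at_checks:  # weeks 2, 4, 8
            if needs_intensification(loss):
                return intensify(arm)
            # adequate response: keep the current treatment
        return arm

    # A participant starting with technology alone who responds well:
    print(run_stepped_care(False, [0.8, 0.6, 0.7]))  # ['WFS tracking']
    ```

    The sketch makes the design choice visible: resources are committed only after an observed suboptimal response, which is what made the persistent disadvantage of the technology-only start surprising.
    
    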