More stories

  • Scientists successfully maneuver robot through living lung tissue

    Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC-Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.
    Their research has reached a new milestone. In a new paper, published in Science Robotics, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD MPH, in the UNC Department of Medicine, have proven that their robot can autonomously go from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.
    “This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters or few millimeters even, which would help immensely with pursuing small targets in the lungs.”
    The development of the autonomous steerable needle robot leveraged UNC’s highly collaborative culture by blending medicine, computer science, and engineering expertise. In addition to Alterovitz and Akulian, the development effort included Yueh Z. Lee, MD, PhD, at the UNC Department of Radiology, as well as Robert J. Webster III at Vanderbilt University and Alan Kuntz at the University of Utah.
    The robot is made of several separate components. A mechanical control provides controlled thrust of the needle to go forward and backward and the needle design allows for steering along curved paths. The needle is made from a nickel-titanium alloy and has been laser etched to increase its flexibility, allowing it to move effortlessly through tissue.
    As it moves forward, the etching on the needle allows it to steer around obstacles with ease. Other attachments, such as catheters, could be used together with the needle to perform procedures such as lung biopsies.
    To drive through tissue, the needle needs to know where it is going. The research team used CT scans of the subject’s thoracic cavity and artificial intelligence to create three-dimensional models of the lung, including the airways, blood vessels, and the chosen target. Once the needle has been positioned for launch, the AI-driven software uses this 3-D model to drive the needle automatically from “Point A” to “Point B” while avoiding important structures.
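    The paper's planner is far more sophisticated than anything that fits here, but the core idea of computing a collision-free route through a 3-D model before the needle moves can be sketched in miniature. The following is an illustrative breadth-first search over a toy voxel grid, not the authors' algorithm; the obstacle set stands in for airways and vessels.

```python
from collections import deque

def plan_path(start, goal, obstacles, size):
    """Breadth-first search on a 3-D voxel grid: find a shortest
    6-connected path from start to goal avoiding obstacle voxels."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if (all(0 <= c < size for c in nxt)
                    and nxt not in obstacles and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None  # no collision-free route exists

# A 5x5x5 toy "lung" with a wall of blocked voxels at x == 2,
# except for one gap the needle must thread through.
blocked = {(2, y, z) for y in range(5) for z in range(5)} - {(2, 2, 2)}
route = plan_path((0, 0, 0), (4, 4, 4), blocked, 5)
print(route[0], route[-1], len(route))
```

    A real planner must additionally respect the needle's curvature limits and model uncertainty in the tissue, which is what makes the autonomous system in the paper a milestone.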

  • ‘Garbatrage’ spins e-waste into prototyping gold

    To Ilan Mandel, a Cornell University robotics researcher and builder, the math didn’t add up. How could a new, off-the-shelf hoverboard cost less than the parts that compose it?
    “This becomes an ambient frustration as a designer — the incredible cheapness of products that exist in the world, and the incredible expenses for prototyping or building anything from scratch,” said Mandel, a doctoral student in the field of information science, based at Cornell Tech.
    While sourcing wheels and motors from old hoverboards to build what would become a fleet of trash robots in New York City, Mandel inadvertently uncovered the subject of his newest research: “Recapturing Product as Material Supply: Hoverboards as Garbatrage,” which received an honorable mention at the Association for Computing Machinery conference on Designing Interactive Systems in July. Wendy Ju, associate professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a member of the Department of Information Science in the Cornell Ann S. Bowers College of Computing and Information Science, co-authored the paper.
    “For the large part, we design and manufacture as if we have an infinite supply of perfectly uniform materials and components,” Ju said. “That’s a terrible assumption.”
    Building on work in human-computer interaction that aims to incorporate sustainability and reuse into the field, the Cornell pair introduces “garbatrage,” a framework for prototype builders centered around repurposing underused devices. Mandel and Ju use their repurposing of hoverboards — the hands-free, motorized scooters that rolled in and out of popularity around 2016 — as a test case to highlight the economic factors that create opportunities for garbatrage. They also encourage designers to prioritize material reuse, create more circular economies and sustainable supply chains, and, in turn, minimize electronic waste, or e-waste.
    The time is ripe for a practice like garbatrage, both for sustainability reasons and considering the global supply shortages and international trade issues of the last few years, the researchers said.
    “I think that there’s a real need to appreciate the heterogeneity of hardware that we are surrounded by all the time and look at it as a resource,” Mandel said. “What is often deemed as garbage can be full of value and can be made useful if you are willing to do some bridge work.”
    From old desktop computers, smartphones and printers to smart speakers, Internet of Things appliances, and e-vaping devices, most of today’s e-waste has workable components that can be repurposed and used in the prototypes that become tomorrow’s innovations, researchers said.
    Instead, these devices — along with their batteries, microcontrollers, accelerometers, motors and LCD displays — become part of the estimated 53 million metric tons of e-waste produced globally each year. Nearly 20% of it is properly recycled, but it is unclear where the other 80% goes, according to the UN’s Global E-waste Monitor 2020 report. Some ends up in developing countries, where people burn electronics in open-air pits to salvage any valuable metals, poisoning land and putting public health at risk.
    “Designers are a kind of node of interaction between massive scales of industrialization and end users,” Mandel said. “I think that designers can take that role seriously and use it to leverage e-waste in a way that promotes sustainability, beyond just asking the consumer to reflect more on their own practices.”

  • Let it flow: Recreating water flow for virtual reality

    The physical laws of everyday water flow were established two centuries ago. However, scientists today still struggle to simulate disrupted water flow virtually, such as when a hand or other object disturbs it.
    Now, a research team from Tohoku University has harnessed the power of deep reinforcement learning to replicate the flow of water when disturbed. Replicating this agitated liquid motion, as it is known, allowed them to recreate water flow in real time based on only a small amount of data from real water. The technology opens up the possibility for virtual reality interactions involving water.
    Details of their findings were published in the journal ACM Transactions on Graphics on September 17, 2023.
    Crucial to the breakthrough was creating both a flow measurement technique and a flow reconstruction method that replicated agitated liquid motion.
    To collect flow data, the group — which comprised researchers from Tohoku University’s Research Institute of Electrical Communication (RIEC) and the Institute of Fluid Science — placed buoys embedded with special magnetic markers on water. The movement of each buoy could then be tracked using a magnetic motion capture system. Yet this was only half of the process. The crucial step involved finding an innovative solution to recovering the detailed water motion from the movement of a few buoys.
    “We overcame this by combining a fluid simulation with deep reinforcement learning to perform the recovery,” says Yoshifumi Kitamura, deputy director of RIEC.
    Reinforcement learning is the trial-and-error process through which learning takes place. A computer performs actions, receives feedback (reward or punishment) from its environment, and then adjusts its future actions to maximize its total rewards over time, much like a dog associates treats with good behavior. Deep reinforcement learning combines reinforcement learning with deep neural networks to solve complex problems.
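    The reward loop described above can be made concrete with a minimal tabular example. This is purely illustrative; the Tohoku system pairs deep neural networks with a fluid simulator rather than a lookup table, and all names below are invented for the sketch.

```python
import random

random.seed(0)

# Minimal tabular Q-learning on a five-state corridor: the agent starts
# in state 0 and is rewarded only for reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)                    # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def choose(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0   # feedback from the environment
        # Nudge the estimate toward reward plus discounted future value.
        target = reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy moves right in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

    The same trial, feedback, and adjustment cycle drives the water-flow recovery, with the "actions" being the forces each buoy applies to the simulated liquid.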
    First, the researchers used a computer to simulate calm liquid. Then, they made each buoy act like a force that pushes the simulated liquid, making it flow like real liquid. The computer then refines the way of pushing via deep reinforcement learning.
    Previous techniques had typically tracked tiny particles suspended inside the liquid with cameras. But it remained difficult to measure 3D flow in real time, especially when the liquid was in an opaque container or was opaque itself. Thanks to the developed magnetic motion capture and flow reconstruction technique, real-time 3D flow measurement is now possible.
    Kitamura stresses that the technology will make VR more immersive and improve online communication. “This technology will enable the creation of VR games where you can control things using water and actually feel the water in the game. We may be able to transmit the movement of water over the internet in real time so that even those far away can experience the same lifelike water motion.”

  • Artificial Intelligence tools shed light on millions of proteins

    A research team at the University of Basel and the SIB Swiss Institute of Bioinformatics uncovered a treasure trove of uncharacterised proteins. Embracing the recent deep learning revolution, they discovered hundreds of new protein families and even a novel predicted protein fold. The study has now been published in Nature.
    In the past years, AlphaFold has revolutionised protein science. This Artificial Intelligence (AI) tool was trained on protein data collected by life scientists for over 50 years, and is able to predict the 3D shape of proteins with high accuracy. Its success prompted the modelling of an astounding 215 million proteins last year, providing insights into the shapes of almost any protein. This is particularly interesting for proteins that have not been studied experimentally, a complex and time-consuming process.
    “There are now many sources of protein information, holding valuable insights into how proteins evolve and work,” says Joana Pereira, the leader of the study. Nevertheless, research has long been faced with a data jungle. The research team led by Professor Torsten Schwede, group leader at the Biozentrum, University of Basel, and the Swiss Institute of Bioinformatics (SIB), has now succeeded in decrypting some of the concealed information.
    A bird’s eye view reveals new protein families and folds
    The researchers constructed an interactive network of 53 million proteins with high-quality AlphaFold structures. “This network serves as a valuable source for theoretically predicting unknown protein families and their functions on a large scale,” underlines Dr. Janani Durairaj, the first author. The team was able to identify 290 new protein families and one new protein fold that resembles the shape of a flower.
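    The Atlas pipeline is far more involved, but the basic move of turning a similarity network into candidate families can be sketched as connected components of a graph. The protein names and similarity edges below are made up for illustration.

```python
# Toy sketch (not the study's pipeline): group proteins into candidate
# families by treating pairwise-similarity hits as edges and taking
# connected components of the resulting network, via union-find.
def find(parent, x):
    """Follow parent pointers to the component root, compressing paths."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def families(proteins, similar_pairs):
    parent = {p: p for p in proteins}
    for a, b in similar_pairs:            # merge the two components
        parent[find(parent, a)] = find(parent, b)
    groups = {}
    for p in proteins:
        groups.setdefault(find(parent, p), set()).add(p)
    # Largest families first, like a browsable atlas view.
    return sorted(groups.values(), key=lambda g: (-len(g), sorted(g)))

proteins = ["P1", "P2", "P3", "P4", "P5"]
edges = [("P1", "P2"), ("P2", "P3"), ("P4", "P5")]   # hypothetical hits
fams = families(proteins, edges)
print(fams)
```

    At the scale of 53 million structures, the real system additionally needs structure-aware similarity measures and deep-learning-based annotation, which is where the novelty detection described below comes in.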
    Building on the expertise of the Schwede group in developing and maintaining the leading software SWISS-MODEL, they made the network available as an interactive web resource, termed the “Protein Universe Atlas.”
    AI as a valuable tool in research
    The team has employed Deep Learning-based tools for finding novelties in this network, paving the way to innovations in life sciences, from basic to applied research. “Understanding the structure and function of proteins is typically one of the first steps to develop a new drug, or modify their functions by protein engineering, for example,” says Pereira. The work was supported by a ‘kickstarter’ grant from SIB to encourage the adoption of AI in life science resources. It underscores the transformative potential of Deep Learning and intelligent algorithms in research.
    With the Protein Universe Atlas, scientists can now learn more about proteins relevant to their research. “We hope this resource will help not only researchers and biocurators but also students and teachers by providing a new platform for learning about protein diversity, from structure, to function, to evolution,” says Janani Durairaj.

  • Cloud services without servers: What’s behind it

    In cloud computing, commercial providers make computing resources available on demand to their customers over the Internet. This service is partly offered “serverless,” that is, without servers. How can that work? Computing resources without a server, isn’t that like a restaurant without a kitchen?
    “The term is misleading,” says computer science Professor Samuel Kounev from Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany. Even serverless cloud services, he notes, do not get by without servers.
    In classical cloud computing, for example, a web shop rents computing resources from a cloud provider in the form of virtual machines (VMs). However, the shop itself remains responsible for the management of “its” servers, that is, the VMs. It has to take care of security aspects as well as the avoidance of overload situations or the recovery from system failures.
    The situation is different with serverless computing. Here, the cloud provider takes over responsibility for the complete server management. The cloud users can no longer even access the server; it remains hidden from them, hence the term “serverless.”
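    In practice, "serverless" usually means function-as-a-service: the user deploys a single handler function, and the provider provisions, scales, and patches the machines that run it. A minimal sketch in Python follows; the event/context signature mirrors common FaaS platforms but is illustrative and not tied to any specific provider's API.

```python
import json

# The user writes and deploys only this handler; server management is
# entirely the provider's job. Event payload and field names are
# hypothetical, chosen for the example.
def handler(event, context=None):
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# Locally we can invoke the handler directly; in production the platform
# calls it in response to, e.g., an incoming HTTP request.
event = {"body": json.dumps({"items": [{"price": 9.5, "qty": 2}]})}
response = handler(event)
print(response["statusCode"], response["body"])
```

    The contrast with the web-shop example above: there are no virtual machines to secure, scale, or restart in this model, because the user's code never sees them.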
    Research article in ACM’s “Communications of the ACM” magazine
    “The basic idea of serverless computing has been around since the beginning of cloud computing. However, it has not become widely accepted,” explains Samuel Kounev, who heads the JMU Chair of Computer Science II (Software Engineering). But a shift can currently be observed in industry and in science: the focus is increasingly moving towards serverless computing.
    A recent article in the Communications of the ACM magazine of the Association for Computing Machinery (ACM) deals with the history, status and potential of serverless computing. Among the authors are Samuel Kounev and Dr. Nikolas Herbst, who heads the JMU research group “Data Analytics Clouds.” ACM has also produced a video with Professor Samuel Kounev to accompany the publication: https://vimeo.com/849237573
    Experts define serverless computing inconsistently

  • Novel organic light-emitting diode with ultralow turn-on voltage for blue emission

    An upconversion organic light-emitting diode (OLED) based on a typical blue-fluorescence emitter achieves emission at an ultralow turn-on voltage of 1.47 V, as demonstrated by researchers from Tokyo Tech. Their technology circumvents the traditional high voltage requirement for blue OLEDs, leading to potential advancements in commercial smartphone and large screen displays.
    Blue light is vital for light-emitting devices, lighting applications, as well as smartphone screens and large screen displays. However, it is challenging to develop efficient blue organic light-emitting diodes (OLEDs) owing to the high applied voltage required for their function. Conventional blue OLEDs typically require around 4 V for a luminance of 100 cd/m2; this is higher than the industrial target of 3.7 V — the voltage of lithium-ion batteries commonly used in smartphones. Therefore, there is an urgent need to develop novel blue OLEDs that can operate at lower voltages.
    In this regard, Associate Professor Seiichiro Izawa from Tokyo Institute of Technology and Osaka University, in collaboration with researchers from the University of Toyama, Shizuoka University, and the Institute for Molecular Science, has recently presented a novel OLED device with a remarkable ultralow turn-on voltage of 1.47 V for blue emission and a peak wavelength at 462 nm (2.68 eV). Their work will be published in Nature Communications.
    The choice of materials used in this OLED significantly influences its turn-on voltage. The device utilizes NDI-HF (2,7-di(9H-fluoren-2-yl)benzo[lmn][3,8]-phenanthroline-1,3,6,8(2H,7H)-tetraone) as the acceptor, 1,2-ADN (9-(naphthalen-1-yl)-10-(naphthalen-2-yl)anthracene) as the donor, and TbPe (2,5,8,11-tetra-tert-butylperylene) as the fluorescent dopant. This OLED operates via a mechanism called upconversion (UC). Herein, holes and electrons are injected into donor (emitter) and acceptor (electron transport) layers, respectively. They recombine at the donor/acceptor (D/A) interface to form a charge transfer (CT) state. Dr. Izawa points out: “The intermolecular interactions at the D/A interface play a significant role in CT state formation, with stronger interactions yielding superior results.”
    Subsequently, the energy of the CT state is selectively transferred to the low-energy first triplet excited states of the emitter, which results in blue light emission through the formation of a high-energy first singlet excited state by triplet-triplet annihilation (TTA). “As the energy of the CT state is much lower than the emitter’s bandgap energy, the UC mechanism with TTA significantly decreases the applied voltage required for exciting the emitter. As a result, this UC-OLED reaches a luminance of 100 cd/m2, equivalent to that of a commercial display, at just 1.97 V,” explains Dr. Izawa.
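    The role of TTA shows up directly in the energy bookkeeping, using only numbers quoted in the text: at turn-on, each injected electron-hole pair gains about 1.47 eV, yet the emitted photon at 462 nm carries 2.68 eV. TTA bridges the deficit by pooling two low-energy excitations into one emissive singlet:

```latex
% Photon energy at the 462 nm emission peak (as quoted in the text):
E_{\mathrm{photon}} = \frac{hc}{\lambda}
  \approx \frac{1240~\mathrm{eV\,nm}}{462~\mathrm{nm}}
  \approx 2.68~\mathrm{eV}
% Triplet-triplet annihilation fuses two triplets into one emissive
% singlet, so two carrier pairs at ~1.47 eV each can fund one photon:
T_1 + T_1 \longrightarrow S_1 + S_0, \qquad
  2 \times 1.47~\mathrm{eV} = 2.94~\mathrm{eV} \geq 2.68~\mathrm{eV}
```

    No single carrier pair ever has to supply the full photon energy, which is why the device reaches display-level luminance at 1.97 V rather than the roughly 2.7 V a one-photon-per-pair mechanism would demand.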
    In effect, this study demonstrates a novel OLED with blue light emission at an ultralow turn-on voltage, using a typical fluorescent emitter widely utilized in commercial displays, thus marking a significant step toward meeting the commercial requirements for blue OLEDs. It emphasizes the importance of optimizing the design of the D/A interface for controlling excitonic processes and holds promise not only for OLEDs but also for organic photovoltaics and other organic electronic devices.

  • Machine learning models can produce reliable results even with limited training data

    Researchers have determined how to build reliable machine learning models that can understand complex equations in real-world situations while using far less training data than is normally expected.
    The researchers, from the University of Cambridge and Cornell University, found that for partial differential equations — a class of physics equations that describe how things in the natural world evolve in space and time — machine learning models can produce reliable results even when they are provided with limited data.
    Their results, reported in the Proceedings of the National Academy of Sciences, could be useful for constructing more time- and cost-efficient machine learning models for applications such as engineering and climate modelling.
    Most machine learning models require large amounts of training data before they can begin returning accurate results. Traditionally, a human will annotate a large volume of data — such as a set of images, for example — to train the model.
    “Using humans to train machine learning models is effective, but it’s also time-consuming and expensive,” said first author Dr Nicolas Boullé, from the Isaac Newton Institute for Mathematical Sciences. “We’re interested to know exactly how little data we actually need to train these models and still get reliable results.”
    Other researchers have been able to train machine learning models with a small amount of data and get excellent results, but how this was achieved has not been well-explained. For their study, Boullé and his co-authors, Diana Halikias and Alex Townsend from Cornell University, focused on partial differential equations (PDEs).
    “PDEs are like the building blocks of physics: they can help explain the physical laws of nature, such as how the steady state is held in a melting block of ice,” said Boullé, who is an INI-Simons Foundation Postdoctoral Fellow. “Since they are relatively simple models, we might be able to use them to make some generalisations about why these AI techniques have been so successful in physics.”
    The researchers found that PDEs that model diffusion have a structure that is useful for designing AI models. “Using a simple model, you might be able to enforce some of the physics that you already know into the training data set to get better accuracy and performance,” said Boullé.
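    Boullé's point about enforcing known physics can be illustrated in miniature (this is not the paper's method, just a toy analogy): diffusion on a periodic grid is linear and translation-invariant, so its solution operator is a circular convolution, and a single impulse-response "training sample" pins down the entire operator.

```python
import math

n, r, steps = 64, 0.2, 20   # grid points, diffusion number nu*dt/dx^2, time steps

def diffuse(u):
    """Reference solver: explicit finite differences, periodic boundary."""
    for _ in range(steps):
        u = [u[i] + r * (u[i - 1] - 2 * u[i] + u[(i + 1) % n]) for i in range(n)]
    return u

# One "training sample": the operator's response to a unit impulse.
# Built-in physics (linearity + translation invariance) means this single
# sample determines the whole solution operator.
impulse = [0.0] * n
impulse[0] = 1.0
kernel = diffuse(impulse)

def learned_operator(u0):
    """Apply the learned operator as a circular convolution with the kernel."""
    return [sum(kernel[(i - j) % n] * u0[j] for j in range(n)) for i in range(n)]

# Evaluate on an unseen initial condition.
u0 = [math.sin(2 * math.pi * i / n) + 0.5 * math.cos(6 * math.pi * i / n)
      for i in range(n)]
error = max(abs(a - b) for a, b in zip(learned_operator(u0), diffuse(u0)))
print(f"max error: {error:.1e}")
```

    One sample instead of thousands: the structural assumptions do the work that brute-force training data would otherwise have to, which is the spirit of the paper's findings for diffusion-type PDEs.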

  • Combustion powers bug-sized robots to leap, lift and race

    Cornell researchers combined soft microactuators with high-energy-density chemical fuel to create an insect-scale quadrupedal robot that is powered by combustion and can outrace, outlift, outflex and outleap its electric-driven competitors.
    The group’s paper, “Powerful, Soft Combustion Actuators for Insect-Scale Robots,” was published Sept. 14 in Science. The lead author is postdoctoral researcher Cameron Aubin, Ph.D. ’23.
    The project was led by Rob Shepherd, associate professor of mechanical and aerospace engineering in Cornell Engineering, whose Organic Robotics Lab has previously used combustion to create a braille display for electronics.
    As anyone who has witnessed an ant carry off food from a picnic knows, insects are far stronger than their puny size suggests. However, robots at that scale have yet to reach their full potential. One of the challenges is “motors and engines and pumps don’t really work when you shrink them down to this size,” Aubin said, so researchers have tried to compensate by creating bespoke mechanisms to perform such functions. So far, the majority of these robots have been tethered to their power sources — which usually means electricity.
    “We thought using a high-energy-density chemical fuel, just like we would put in an automobile, would be one way that we could increase the onboard power and performance of these robots,” he said. “We’re not necessarily advocating for the return of fossil fuels on a large scale, obviously. But in this case, with these tiny, tiny robots, where a milliliter of fuel could lead to an hour of operation, instead of a battery that is too heavy for the robot to even lift, that’s kind of a no brainer.”
    While the team has yet to create a fully untethered model — Aubin says they are halfway there — the current iteration “absolutely throttles the competition, in terms of their force output.”
    The four-legged robot, which is just over an inch long and weighs the equivalent of one and a half paperclips, is 3D-printed with a flame-resistant resin. The body contains a pair of separated combustion chambers that lead to the four actuators, which serve as the feet. Each actuator/foot is a hollow cylinder capped with a piece of silicone rubber, like a drum skin, on the bottom. When offboard electronics are used to create a spark in the combustion chambers, premixed methane and oxygen are ignited, the combustion reaction inflates the drum skin, and the robot pops up into the air.