More stories

  • Self-driving cars lack social intelligence in traffic

    Should I go or give way? It is one of the most basic questions in traffic, whether merging onto a motorway or standing at the doors of the metro. It is a decision humans typically make quickly and intuitively, because it relies on social interactions practiced from the time we begin to walk.
    Self-driving cars, on the other hand, which are already on the road in several parts of the world, still struggle to navigate these social interactions in traffic. This has been demonstrated in new research conducted at the University of Copenhagen’s Department of Computer Science. The researchers analyzed an array of videos uploaded by YouTube users showing self-driving cars in various traffic situations. The results show that self-driving cars have a particularly tough time understanding when to “yield”: when to give way and when to drive on.
    “The ability to navigate in traffic is based on much more than traffic rules. Social interactions, including body language, play a major role when we signal each other in traffic. This is where the programming of self-driving cars still falls short. That is why it is difficult for them to consistently understand when to stop and when someone is stopping for them, which can be both annoying and dangerous,” says Professor Barry Brown, who has studied the evolution of self-driving car road behavior for the past five years.
    Sorry, it’s a self-driving car!
    Companies like Waymo and Cruise have launched taxi services with self-driving cars in parts of the United States. Tesla has rolled out its FSD (Full Self-Driving) software to about 100,000 volunteer drivers in the US and Canada. And the media is brimming with stories about how well self-driving cars perform. But according to Professor Brown and his team, their actual road performance is a well-kept trade secret that very few have insight into. The researchers therefore performed in-depth analyses of 18 hours of YouTube footage filmed by enthusiasts testing the cars from the back seat.
    One of their video examples shows a family of four standing by the curb of a residential street in the United States. There is no pedestrian crossing, but the family would like to cross the road. As the driverless car approaches, it slows, causing the two adults in the family to wave their hands as a sign for the car to drive on. Instead, the car stops right next to them for 11 seconds. Then, as the family begins walking across the road, the car starts moving again, causing them to jump back onto the sidewalk, whereupon the person in the back seat rolls down the window and yells, “Sorry, self-driving car!”
    “The situation is similar to the main problem we found in our analysis and demonstrates the inability of self-driving cars to understand social interactions in traffic. The driverless vehicle stops so as to not hit pedestrians, but ends up driving into them anyway because it doesn’t understand the signals. Besides creating confusion and wasted time in traffic, it can also be downright dangerous,” says Professor Brown.

    A drive in foggy Frisco
    In tech-centric San Francisco, the performance of self-driving cars can be judged up close. Here, driverless cars have been unleashed in several parts of the city as buses and taxis, navigating the hilly streets among people and other natural phenomena. And according to the researcher, this has created plenty of resistance among the city’s residents:
    “Self-driving cars are causing traffic jams and problems in San Francisco because they react inappropriately to other road users. Recently, the city’s media wrote of a chaotic traffic event caused by self-driving cars due to fog. Fog caused the self-driving cars to overreact, stop and block traffic, even though fog is extremely common in the city,” says Professor Brown.
    Robotic cars have been in development for 10 years, and the industry behind them has spent over DKK 40 billion pushing that development. Yet the outcome is cars that still make many mistakes, blocking other drivers and disrupting the smooth flow of traffic.
    Why do you think it’s so difficult to program self-driving cars to understand social interactions in traffic?
    “I think that part of the answer is that we take the social element for granted. We don’t think about it when we get into a car and drive — we just do it automatically. But when it comes to designing systems, you need to describe everything we take for granted and incorporate it into the design. The car industry could learn from having a more sociological approach. Understanding social interactions that are part of traffic should be used to design self-driving cars’ interactions with other road users, similar to how research has helped improve the usability of mobile phones and technology more broadly.”
    About the study: The researchers analyzed 18 hours of video footage of self-driving cars from 70 different YouTube videos. Using a range of video-analysis techniques, they studied the sequences in depth rather than making a broader, superficial analysis. The study, “The Halting Problem: Video analysis of self-driving cars in traffic,” was presented at the 2023 CHI Conference on Human Factors in Computing Systems, where it won the conference’s best paper award. It was conducted by Barry Brown of the University of Copenhagen and Stockholm University, Mathias Broth of Linköping University, and Erik Vinkhuyzen of King’s College London.

  • New tool may help spot ‘invisible’ brain damage in college athletes

    An artificial intelligence computer program that processes magnetic resonance imaging (MRI) can accurately identify changes in brain structure that result from repeated head injury, a new study in student athletes shows. These variations have not been captured by traditional medical imaging such as computerized tomography (CT) scans. The new technology, researchers say, may help design new diagnostic tools to better understand subtle brain injuries that accumulate over time.
    Experts have long known about the potential risks of concussion among young athletes, particularly for those who play high-contact sports such as football, hockey, and soccer. Evidence is now mounting that repeated head impacts, even if they at first appear mild, may add up over many years and lead to cognitive loss. While advanced MRI can identify microscopic changes in brain structure that result from head trauma, researchers say the scans produce vast amounts of data that are difficult to navigate.
    Led by researchers in the Department of Radiology at NYU Grossman School of Medicine, the new study showed for the first time that the new tool, using an AI technique called machine learning, could accurately distinguish the brains of male athletes who played contact sports like football from those of athletes who played noncontact sports like track and field. The results linked repeated head impacts with tiny, structural changes in the brains of contact-sport athletes who had not been diagnosed with a concussion.
    “Our findings uncover meaningful differences between the brains of athletes who play contact sports compared to those who compete in noncontact sports,” said study senior author and neuroradiologist Yvonne Lui, MD. “Since we expect these groups to have similar brain structure, these results suggest that there may be a risk in choosing one sport over another,” adds Lui, a professor and vice chair for research in the Department of Radiology at NYU Langone Health.
    Lui adds that beyond spotting potential damage, the machine-learning technique used in their investigation may also help experts to better understand the underlying mechanisms behind brain injury.
    The new study, which was published online May 22 in The Neuroradiology Journal, involved hundreds of brain images from 36 contact-sport college athletes (mostly football players) and 45 noncontact-sport college athletes (mostly runners and baseball players). The work was meant to clearly link changes detected by the AI tool in the brain scans of football players to head impacts. It builds on a previous study that had identified brain-structure differences in football players, comparing those with and without concussions to athletes who competed in noncontact sports.

    For the investigation, the researchers analyzed MRI scans from 81 male athletes taken between 2016 and 2018, none of whom had a known diagnosis of concussion within that time period. Contact-sport athletes played football, lacrosse, and soccer, while noncontact-sport athletes participated in baseball, basketball, track and field, and cross-country.
    As part of their analysis, the research team designed statistical techniques that gave their computer program the ability to “learn” how to predict exposure to repeated head impacts using mathematical models. The models were built from data examples fed into the program, which got “smarter” as the amount of training data grew.
    The study team trained the program to identify unusual features in brain tissue and distinguish between athletes with and without repeated exposure to head injuries based on these factors. They also ranked how useful each feature was for detecting damage to help uncover which of the many MRI metrics might contribute most to diagnoses.
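    To make that train-and-rank idea concrete, here is a minimal sketch of such a pipeline. It is not the study’s actual code: the data are synthetic stand-ins, and the classifier (a random forest from scikit-learn) is an assumption, since the article does not name the model.

    ```python
    # Hedged sketch of a "train, then rank features" pipeline like the one
    # described above. All data are synthetic placeholders; the study's real
    # features, preprocessing, and model are not specified in this article.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical per-athlete MRI metrics (e.g., regional mean diffusivity
    # and mean kurtosis values); 36 contact vs. 45 noncontact athletes.
    n_contact, n_noncontact, n_features = 36, 45, 10
    X = rng.normal(size=(n_contact + n_noncontact, n_features))
    y = np.array([1] * n_contact + [0] * n_noncontact)  # 1 = contact sport

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    # Rank how useful each MRI metric is for the classification, analogous
    # to the feature-ranking step the article describes.
    clf.fit(X, y)
    for idx in np.argsort(clf.feature_importances_)[::-1][:3]:
        print(f"feature {idx}: importance {clf.feature_importances_[idx]:.3f}")
    ```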
    Two metrics most accurately flagged structural changes that resulted from head injury, say the authors. The first, mean diffusivity, measures how easily water can move through brain tissue and is often used to spot strokes on MRI scans. The second, mean kurtosis, examines the complexity of brain-tissue structure and can indicate changes in the parts of the brain involved in learning, memory, and emotions.
    “Our results highlight the power of artificial intelligence to help us see things that we could not see before, particularly ‘invisible injuries’ that do not show up on conventional MRI scans,” said study lead author Junbo Chen, MS, a doctoral candidate at NYU Tandon School of Engineering. “This method may provide an important diagnostic tool not only for concussion, but also for detecting the damage that stems from subtler and more frequent head impacts.”
    Chen adds that the study team next plans to explore the use of their machine-learning technique for examining head injury in female athletes.
    Funding for the study was provided by National Institutes of Health grants P41EB017183 and C63000NYUPG118117. Further funding was provided by Department of Defense grant W81XWH2010699.
    In addition to Lui and Chen, other NYU researchers involved in the study were Sohae Chung, PhD; Tianhao Li, MS; Els Fieremans, PhD; Dmitry Novikov, PhD; and Yao Wang, PhD.

  • Source-shifting metastructures composed of only one resin for location camouflaging

    The field of transformation optics has flourished over the past decade, allowing scientists to design metamaterial-based structures that shape and guide the flow of light. One of the most dazzling inventions potentially unlocked by transformation optics is the invisibility cloak — a theoretical fabric that bends incoming light away from the wearer, rendering them invisible. Interestingly, such illusions are not restricted to the manipulation of light alone.
    Many of the techniques used in transformation optics have been applied to sound waves, giving rise to the parallel field of transformation acoustics. In fact, researchers have already made substantial progress by developing the “acoustic cloak,” the analog of the invisibility cloak for sound. While research on acoustic illusions has focused on masking the presence of an object, not much progress has been made on the problem of location camouflaging.
    An acoustic source-shifter is a structure that makes the location of a sound source appear different from its actual location. Devices capable of such “acoustic location camouflaging” could find applications in advanced holography and virtual reality. Unfortunately, location camouflaging has scarcely been studied, and developing accessible materials and surfaces that provide decent performance has proven challenging.
    Against this backdrop, Professor Garuda Fujii, affiliated with the Institute of Engineering and Energy Landscape Architectonics Brain Bank (ELab2) at Shinshu University, Japan, has now made progress in developing high-performance source-shifters. In a recent study published in the Journal of Sound and Vibration online on May 5, 2023, Prof. Fujii presented an innovative approach to designing source-shifter structures out of acrylonitrile butadiene styrene (ABS), an elastic polymer commonly used in 3D printing.
    Prof. Fujii’s approach is centered on a core concept: inverse design based on topology optimization. The numerical approach builds on reproducing the pressure fields (sound) emitted by a virtual source, i.e., the source that nearby listeners would mistakenly perceive as real. The pressure fields emitted by the actual source are then manipulated to camouflage its location and make the sound appear to come from a different point in space. This can be achieved with the optimal design of a metastructure that, by virtue of its geometry and elastic properties, minimizes the difference between the pressure fields emitted from the actual and virtual sources.
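    In schematic terms (our notation, not the paper’s), the inverse-design problem can be written as minimizing a normalized mismatch between the field radiated by the actual source through the candidate structure and the field of a bare source at the virtual location:

    ```latex
    % Schematic objective for the source-shifter design (notation ours):
    % \rho is the material layout of the ABS metastructure, p_\rho the
    % pressure field radiated by the actual source through that structure,
    % and p_virt the field of a bare source placed at the virtual location.
    \min_{\rho}\; J(\rho) =
      \frac{\int_{\Omega_{\mathrm{obs}}}
              \left| p_{\rho}(\mathbf{x}) - p_{\mathrm{virt}}(\mathbf{x}) \right|^{2} d\Omega}
           {\int_{\Omega_{\mathrm{obs}}}
              \left| p_{\mathrm{virt}}(\mathbf{x}) \right|^{2} d\Omega}
    ```

    On this reading, the 0.6% figure reported below would correspond to a residual mismatch J of that order, though the paper’s exact error measure may differ.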
    Utilizing this approach, Prof. Fujii implemented an iterative algorithm to numerically determine the optimal design of ABS resin source-shifters according to various design criteria. His models and simulations had to account for the acoustic-elastic interactions between fluids (air) and solid elastic structures, as well as the actual limitations of modern manufacturing technology.
    The simulation results revealed that the optimized structures could reduce the difference between the emitted pressure fields of the masked source and those of a bare source at the virtual location to as low as 0.6%. “The optimal structure configurations obtained via topology optimization exhibited good performances at camouflaging the actual source location despite the simple composition of ABS that did not comprise complex acoustic metamaterials,” remarks Prof. Fujii.
    To shed more light on the underlying camouflaging mechanisms, Prof. Fujii analyzed the importance of the distance between the virtual and actual sources. He found that a greater distance did not necessarily degrade the source-shifter’s performance. He also investigated how changing the frequency of the emitted sound affected performance, since the source-shifters had been optimized for a single target frequency. Finally, he explored whether a source-shifter could be topologically optimized to operate at multiple sound frequencies.
    While his approach requires further fine-tuning, the findings of this study will surely help advance illusion acoustics. He concludes, “The proposed optimization method for designing high-performance source-shifters will help in the development of acoustic location camouflage and the advancement of holography technology.”

  • Robot centipedes go for a walk

    Researchers from the Department of Mechanical Science and Bioengineering at Osaka University have invented a new kind of walking robot that takes advantage of dynamic instability to navigate. By changing the flexibility of the couplings, the robot can be made to turn without the need for complex computational control systems. This work may assist the creation of rescue robots that are able to traverse uneven terrain.
    Most animals on Earth have evolved a robust locomotion system using legs that provides them with a high degree of mobility over a wide range of environments. Somewhat disappointingly, engineers who have attempted to replicate this approach have often found that legged robots are surprisingly fragile. The breakdown of even one leg due to repeated stress can severely limit a robot’s ability to function. In addition, controlling a large number of joints so the robot can traverse complex environments requires a lot of computing power. Improvements in this design would be extremely useful for building autonomous or semi-autonomous robots that could act as exploration or rescue vehicles and enter dangerous areas.
    Now, investigators from Osaka University have developed a biomimetic “myriapod” robot that takes advantage of a natural instability to convert straight walking into curved motion. In a study published recently in Soft Robotics, the researchers describe their robot, which consists of six segments (with two legs connected to each segment) and flexible joints. The flexibility of the couplings can be modified by motors, via an adjustable screw, during the walking motion.
    The researchers showed that increasing the flexibility of the joints leads to a situation called a “pitchfork bifurcation,” in which straight walking becomes unstable. Instead, the robot transitions to walking in a curved pattern, either to the right or to the left. Normally, engineers would try to avoid creating instabilities. However, making controlled use of them can enable efficient maneuverability. “We were inspired by the ability of certain extremely agile insects to control the dynamic instability in their own motion to induce quick movement changes,” says Shinya Aoi, an author of the study. Because this approach does not directly steer the movement of the body axis, but rather controls flexibility, it greatly reduces both the computational complexity and the energy requirements.
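    The pitchfork bifurcation mentioned here can be illustrated with its textbook normal form, dx/dt = mu*x - x**3. This is a generic mathematical sketch, not the robot’s actual equations of motion: the straight-walking state x = 0 is stable while the control parameter mu (playing the role of joint flexibility) is below a threshold, and beyond it the system settles onto one of two symmetric branches, analogous to curving left or right.

    ```python
    # Textbook pitchfork normal form, dx/dt = mu*x - x**3, as a stand-in for
    # the instability described above. For mu < 0 the straight-walking state
    # x = 0 is stable; for mu > 0 it becomes unstable and trajectories settle
    # at x = +sqrt(mu) or x = -sqrt(mu) (analogous to curving left or right).
    # Generic illustration only, not the robot's actual dynamics.
    import numpy as np

    def simulate(mu, x0=1e-3, dt=1e-3, steps=20_000):
        """Integrate dx/dt = mu*x - x**3 with forward Euler; return x(T)."""
        x = x0
        for _ in range(steps):
            x += dt * (mu * x - x**3)
        return x

    for mu in (-0.5, 0.5):
        expected = 0.0 if mu < 0 else np.sqrt(mu)
        print(f"mu = {mu:+.1f}: x -> {simulate(mu):+.4f} "
              f"(predicted fixed point {expected:+.4f})")
    ```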
    The team tested the robot’s ability to reach specific locations and found that it could navigate by taking curved paths toward targets. “We can foresee applications in a wide variety of scenarios, such as search and rescue, working in hazardous environments or exploration on other planets,” says Mau Adachi, another study author. Future versions may include additional segments and control mechanisms.

  • Super low-cost smartphone attachment brings blood pressure monitoring to your fingertips

    Engineers at the University of California San Diego have developed a simple, low-cost clip that uses a smartphone’s camera and flash to monitor blood pressure at the user’s fingertip. The clip works with a custom smartphone app and currently costs about 80 cents to make. The researchers estimate that the cost could be as low as 10 cents apiece when manufactured at scale.
    The technology is described in a paper published May 29 in Scientific Reports.
    Researchers say it could help make regular blood pressure monitoring easy, affordable and accessible to people in resource-poor communities. It could benefit older adults and pregnant women, for example, in managing conditions such as hypertension.
    “We’ve created an inexpensive solution to lower the barrier to blood pressure monitoring,” said study first author Yinan (Tom) Xuan, an electrical and computer engineering Ph.D. student at UC San Diego.
    “Because of their low cost, these clips could be handed out to anyone who needs them but cannot go to a clinic regularly,” said study senior author Edward Wang, a professor of electrical and computer engineering at UC San Diego and director of the Digital Health Lab. “A blood pressure monitoring clip could be given to you at your checkup, much like how you get a pack of floss and toothbrush at your dental visit.”
    Another key advantage of the clip is that it does not need to be calibrated to a cuff.

    “This is what distinguishes our device from other blood pressure monitors,” said Wang. Other cuffless systems being developed for smartwatches and smartphones, he explained, require obtaining a separate set of measurements with a cuff so that their models can be tuned to fit these measurements.
    “Ours is a calibration-free system, meaning you can just use our device without touching another blood pressure monitor to get a trustworthy blood pressure reading.”
    To measure blood pressure, the user simply presses on the clip with a fingertip. A custom smartphone app guides the user on how hard and long to press during the measurement.
    The clip is a 3D-printed plastic attachment that fits over a smartphone’s camera and flash. It features an optical design similar to that of a pinhole camera. When the user presses on the clip, the smartphone’s flash lights up the fingertip. That light is then projected through a pinhole-sized channel to the camera as an image of a red circle. A spring inside the clip allows the user to press with different levels of force. The harder the user presses, the bigger the red circle appears on the camera.
    The smartphone app extracts two main pieces of information from the red circle. By looking at the size of the circle, the app can measure the amount of pressure that the user’s fingertip applies. And by looking at the brightness of the circle, the app can measure the volume of blood going in and out of the fingertip. An algorithm converts this information into systolic and diastolic blood pressure readings.
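    As a rough illustration of that measurement step, the sketch below builds a synthetic camera frame containing a bright disc and extracts the two quantities the article mentions: the disc’s area (standing in for applied fingertip pressure) and its mean brightness (standing in for blood volume). The app’s actual image processing is not described here, so treat this as an assumption-laden toy.

    ```python
    # Toy version of the measurement described above: estimate the red
    # circle's area (proxy for fingertip pressure) and mean brightness
    # (proxy for blood volume) from a camera frame. The frame is synthetic;
    # the app's real image-processing pipeline is not public in this article.
    import numpy as np

    def make_frame(radius, brightness, size=200):
        """Synthetic grayscale frame: a bright disc on a dark background."""
        yy, xx = np.mgrid[:size, :size]
        disc = (xx - size / 2) ** 2 + (yy - size / 2) ** 2 <= radius ** 2
        frame = np.zeros((size, size))
        frame[disc] = brightness
        return frame

    def measure(frame, threshold=0.1):
        """Return (disc area in pixels, mean brightness inside the disc)."""
        mask = frame > threshold
        return int(mask.sum()), float(frame[mask].mean())

    # Pressing harder makes the disc bigger; blood flow modulates brightness.
    for radius, brightness in [(30, 0.8), (50, 0.6)]:
        area, level = measure(make_frame(radius, brightness))
        print(f"area = {area} px (pressure proxy), brightness = {level:.2f}")
    ```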

    The researchers tested the clip on 24 volunteers from the UC San Diego Medical Center. Results were comparable to those taken by a blood pressure cuff.
    “Using a standard blood pressure cuff can be awkward to put on correctly, and this solution has the potential to make it easier for older adults to self-monitor blood pressure,” said study co-author and medical collaborator Alison Moore, chief of the Division of Geriatrics in the Department of Medicine at UC San Diego School of Medicine.
    While the team has only proven the solution on a single smartphone model, the clip’s current design theoretically should work on other phone models, said Xuan.
    Wang and one of his lab members, Colin Barry, a co-author on the paper who is an electrical and computer engineering student at UC San Diego, co-founded a company, Billion Labs Inc., to refine and commercialize the technology.
    Next steps include making the technology more user friendly, especially for older adults; testing its accuracy across different skin tones; and creating a more universal design.
    Paper: “Ultra-low-cost Mechanical Smartphone Attachment for No-Calibration Blood Pressure Measurement.” Co-authors include Jessica De Souza, Jessica Wen and Nick Antipa, all at UC San Diego.
    This work is supported by the National Institute on Aging Massachusetts AI and Technology Center for Connected Care in Aging and Alzheimer’s Disease (MassAITC P30AG073107 Subaward 23-016677 N 00), the Altman Clinical and Translational Research Institute Galvanizing Engineering in Medicine (GEM) Awards, and a Google Research Scholar Award.
    Disclosures: Edward Wang and Colin Barry are co-founders of and have a financial interest in Billion Labs Inc. Wang is also the CEO of Billion Labs Inc. The other authors declare that they have no competing interests. The terms of this arrangement have been reviewed and approved by the University of California San Diego in accordance with its conflict-of-interest policies.

  • Emergence of solvated dielectrons observed for the first time

    Solvated dielectrons are the subject of many hypotheses among scientists but have never been directly observed. They are described as a pair of electrons dissolved in liquids such as water or liquid ammonia. To make space for the electrons, a cavity forms in the liquid, which the two electrons occupy. An international research team led by Dr. Sebastian Hartweg, initially at Synchrotron SOLEIL (France) and now at the Institute of Physics at the University of Freiburg, and Prof. Dr. Ruth Signorell of ETH Zurich, including scientists from Synchrotron SOLEIL and Auburn University (US), has now succeeded in discovering a formation and decay process of the solvated dielectron. In experiments at the DESIRS beamline of Synchrotron SOLEIL, the consortium found direct evidence, supported by quantum chemical calculations, for the formation of these electron pairs upon excitation with ultraviolet light in tiny ammonia droplets containing a single sodium atom. The results were recently published in the scientific journal Science.
    Traces of an unusual process
    When dielectrons are formed by excitation with ultraviolet light in tiny ammonia droplets containing a sodium atom, they leave traces of an unusual process that scientists have now been able to observe for the first time. In this process, one of the two electrons migrates to the neighbouring solvent molecules while, at the same time, the other electron is ejected. “The surprising thing about this is that similar processes have previously been observed mainly at much higher excitation energies,” says Hartweg. The team focused on this second electron because it could have interesting applications. On the one hand, the ejected electron is produced with very low kinetic energy, so it moves very slowly. On the other hand, this energy can be controlled by the UV light that starts the whole process. Solvated dielectrons could thus serve as a good source of low-energy electrons.
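    In schematic terms (our notation; the article gives no formula), the tunability follows from a simple energy balance: whatever photon energy is not consumed in forming and reorganizing the solvated system is carried away by the ejected electron, so raising or lowering the UV photon energy shifts the electron’s kinetic energy accordingly.

    ```latex
    % Schematic energy balance for the ejected electron (notation ours):
    % h\nu is the UV photon energy and E_loss lumps together the energy
    % retained by the droplet (binding, solvent reorganization, etc.).
    E_{\mathrm{kin}}(e^{-}) \approx h\nu - E_{\mathrm{loss}}
    ```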
    Selective generation with variable energy
    Such slow electrons can set a wide variety of chemical processes in motion. For example, they play a role in the cascade of processes that leads to radiation damage in biological tissue. They are also important in synthetic chemistry, where they serve as effective reducing agents. The ability to selectively generate slow electrons of variable energy means the mechanisms of such chemical processes can be studied in more detail in the future. In addition, the energy made available to the electrons in a controlled manner might also be used to increase the effectiveness of reduction reactions. “These are interesting prospects for possible applications in the future,” says Hartweg. “Our work provides the basis for this and helps to understand these exotic and still enigmatic solvated dielectrons a little better.”

  • Nanorobotic system presents new options for targeting fungal infections

    Infections caused by fungi, such as Candida albicans, pose a significant global health risk due to their resistance to existing treatments, so much so that the World Health Organization has highlighted this as a priority issue.
    Although nanomaterials show promise as antifungal agents, current iterations lack the potency and specificity needed for quick and targeted treatment, leading to prolonged treatment times and potential off-target effects and drug resistance.
    Now, in a groundbreaking development with far-reaching implications for global health, a team of researchers jointly led by Hyun (Michel) Koo of the University of Pennsylvania School of Dental Medicine and Edward Steager of Penn’s School of Engineering and Applied Science has created a microrobotic system capable of rapid, targeted elimination of fungal pathogens.
    “Candida forms tenacious biofilm infections that are particularly hard to treat,” Koo says. “Current antifungal therapies lack the potency and specificity required to quickly and effectively eliminate these pathogens, so this collaboration draws on our clinical knowledge and combines it with the robotic expertise of Ed’s team to offer a new approach.”
    The team of researchers is a part of Penn Dental’s Center for Innovation & Precision Dentistry, an initiative that leverages engineering and computational approaches to uncover new knowledge for disease mitigation and advance oral and craniofacial health care innovation.
    For this paper, published in Advanced Materials, the researchers capitalized on recent advancements in catalytic nanoparticles, known as nanozymes, and they built miniature robotic systems that could accurately target and quickly destroy fungal cells. They achieved this by using electromagnetic fields to control the shape and movements of these nanozyme microrobots with great precision.

    “The methods we use to control the nanoparticles in this study are magnetic, which allows us to direct them to the exact infection location,” Steager says. “We use iron oxide nanoparticles, which have another important property, namely that they’re catalytic.”
    Steager’s team directed the motion, velocity, and formations of the nanozymes, which resulted in enhanced catalytic activity, much like that of the enzyme peroxidase, which helps break down hydrogen peroxide into water and oxygen. This directly enables the generation of large amounts of reactive oxygen species (ROS), compounds with proven biofilm-destroying properties, at the site of infection.
    However, the truly pioneering element of these nanozyme assemblies was an unexpected discovery: their strong binding affinity to fungal cells. This feature enables a localized accumulation of nanozymes precisely where the fungi reside and, consequently, targeted ROS generation.
    “Our nanozyme assemblies show an incredible attraction to fungal cells, particularly when compared to human cells,” Steager says. “This specific binding interaction paves the way for a potent and concentrated antifungal effect without affecting other uninfected areas.”
    Coupled with the nanozymes’ inherent maneuverability, this results in a potent antifungal effect: the researchers demonstrated the eradication of fungal cells within an unprecedented 10-minute window.
    Looking forward, the team sees broad potential for this nanozyme-based robotics approach as they incorporate new methods to automate the control and delivery of nanozymes. The promise it holds for antifungal therapy is just the beginning: its precise targeting and rapid action suggest potential for treating other types of stubborn infections.
    “We’ve uncovered a powerful tool in the fight against pathogenic fungal infections,” Koo says. “What we have achieved here is a significant leap forward, but it’s also just the first step. The magnetic and catalytic properties combined with unexpected binding specificity to fungi open exciting opportunities for an automated ‘target-bind-and-kill’ antifungal mechanism. We are eager to delve deeper and unlock its full potential.”
    This robotics approach opens up a new frontier in the fight against fungal infections and marks a pivotal point in antifungal therapy. With a new tool in their arsenal, medical and dental professionals are closer than ever to effectively combating these difficult pathogens.

  • Protein-based nano-‘computer’ evolves in ability to influence cell behavior

    The first protein-based nano-computing agent that functions as a circuit has been created by Penn State researchers. The milestone puts them one step closer to developing next-generation cell-based therapies to treat diseases like diabetes and cancer.
    Traditional synthetic biology approaches for cell-based therapies, such as ones that destroy cancer cells or encourage tissue regeneration after injury, rely on the expression or suppression of proteins that produce a desired action within a cell. This approach can take time (for proteins to be expressed and degrade) and cost cellular energy in the process. A team of Penn State College of Medicine and Huck Institutes of the Life Sciences researchers are taking a different approach.
    “We’re engineering proteins that directly produce a desired action,” said Nikolay Dokholyan, G. Thomas Passananti Professor and vice chair for research in the Department of Pharmacology. “Our protein-based devices or nano-computing agents respond directly to stimuli (inputs) and then produce a desired action (outputs).”
    In a study published in Science Advances today (May 26), Dokholyan and bioinformatics and genomics doctoral student Jiaxing Chen describe their approach to creating the nano-computing agent. They engineered a target protein by integrating two sensor domains, or areas that respond to stimuli. In this case, the target protein responds to light and a drug called rapamycin by adjusting its orientation, or position in space.
    To test their design, the team introduced the engineered protein into live cells in culture. They then exposed the cultured cells to the stimuli and used laboratory equipment to measure the resulting changes in cellular orientation.
    Previously, their nano-computing agent required two inputs to produce one output. Now, Chen says there are two possible outputs and the output depends on which order the inputs are received. If rapamycin is detected first, followed by light, the cell will adopt one angle of cell orientation, but if the stimuli are received in a reverse order, then the cell adopts a different orientation angle. Chen says this experimental proof-of-concept opens the door for the development of more complex nano-computing agents.
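    In computing terms, an output that depends on the order of inputs is sequential logic rather than simple combinational logic: the device carries state. The toy sketch below captures just that behavior; the stimulus names come from the article, but the “orientation” labels are placeholders, not the study’s measured angles.

    ```python
    # Toy sequential-logic model of the behavior described above: the output
    # depends on the *order* in which the two stimuli arrive, not merely on
    # their presence. Orientation labels are placeholders, not measured values.
    def cell_orientation(stimuli):
        """stimuli: list of inputs in arrival order, e.g. ["rapamycin", "light"]."""
        if stimuli == ["rapamycin", "light"]:
            return "orientation A"
        if stimuli == ["light", "rapamycin"]:
            return "orientation B"
        return "no change"  # a single stimulus alone selects no orientation

    print(cell_orientation(["rapamycin", "light"]))   # -> orientation A
    print(cell_orientation(["light", "rapamycin"]))   # -> orientation B
    ```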
    “Theoretically, the more inputs you embed into a nano-computing agent, the more potential outcomes that could result from different combinations,” Chen said. “Potential inputs could include physical or chemical stimuli and outputs could include changes in cellular behaviors, such as cell direction, migration, modifying gene expression and immune cell cytotoxicity against cancer cells.”
    The team plans to further develop their nano-computing agents and experiment with different applications of the technology. Dokholyan, a researcher with Penn State Cancer Institute and Penn State Neuroscience Institute, said their concept could someday form the basis of the next-generation cell-based therapies for various diseases, such as autoimmune diseases, viral infections, diabetes, nerve injury and cancer.
    Yashavantha Vishweshwaraiah, Richard Mailman and Erdem Tabdanov of Penn State College of Medicine also contributed to this research. The authors declare no conflicts of interest.
    This work was funded by the National Institutes of Health (grant 1R35GM134864) and the Passan Foundation.