More stories

  • ‘Doughnut’ beams help physicists see incredibly small objects

    In a new study, researchers at the University of Colorado Boulder have used doughnut-shaped beams of light to take detailed images of objects too tiny to view with traditional microscopes.
    The new technique could help scientists improve the inner workings of a range of “nanoelectronics,” including the miniature semiconductors in computer chips. The discovery was highlighted Dec. 1 in a special issue of “Optics & Photonics News” called “Optics in 2023.”
    The research is the latest advance in the field of ptychography, a difficult-to-pronounce (the “p” is silent) but powerful technique for viewing very small things. Unlike traditional microscopes, ptychography tools don’t directly view small objects. Instead, they shine lasers at a target, then measure how the light scatters away — a bit like the microscopic equivalent of making shadow puppets on a wall.
    So far, the approach has worked remarkably well, with one major exception, said study senior author and Distinguished Professor of physics Margaret Murnane.
    “Until recently, it has completely failed for highly periodic samples, or objects with a regularly repeating pattern,” said Murnane, fellow at JILA, a joint research institute of CU Boulder and the National Institute of Standards and Technology (NIST). “It’s a problem because that includes a lot of nanoelectronics.”
    She noted that many important technologies like some semiconductors are made up of atoms like silicon or carbon joined together in regular patterns like a grid or mesh. To date, those structures have proved tricky for scientists to view up close using ptychography.
    In the new study, however, Murnane and her colleagues came up with a solution. Instead of using traditional lasers in their microscopes, they produced beams of extreme ultraviolet light in the shape of doughnuts.

    The team’s novel approach can collect accurate images of tiny and delicate structures that are roughly 10 to 100 nanometers in size, on the order of a millionth of an inch. In the future, the researchers expect to zoom in to view even smaller structures. The doughnut, or orbital angular momentum, beams also won’t harm tiny electronics in the process — as some existing imaging tools, like electron microscopes, sometimes can.
    “In the future, this method could be used to inspect the polymers used to make and print semiconductors for defects, without damaging those structures in the process,” Murnane said.
    Bin Wang and Nathan Brooks, who earned their doctoral degrees from JILA in 2023, were first authors of the new study.
    Pushing the limits of microscopes
    The research, Murnane said, pushes the fundamental limits of microscopes: Because of the physics of light, imaging tools using lenses can only see the world down to a resolution of about 200 nanometers — which isn’t fine enough to capture many of the viruses, for example, that infect humans. Scientists can freeze and kill viruses to view them with powerful cryo-electron microscopes but can’t yet capture these pathogens in action and in real time.
    Ptychography, which was pioneered in the mid-2000s, could help researchers push past that limit.

    To understand how, go back to those shadow puppets. Imagine that scientists want to collect a ptychographic image of a very small structure, perhaps letters spelling out “CU.” To do that, they first zap a laser beam at the letters, scanning them multiple times. When the light hits the “C” and the “U” (in this case, the puppets), the beam will break apart and scatter, producing a complex pattern (the shadows). Employing sensitive detectors, scientists record those patterns, then analyze them with a series of mathematical equations. With enough time, Murnane explained, they recreate the shape of their puppets entirely from the shadows they cast.
    “Instead of using a lens to retrieve the image, we use algorithms,” Murnane said.
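    To make that idea concrete, below is a heavily simplified sketch of this kind of lens-free reconstruction in Python. It is not the team’s extreme-ultraviolet, vortex-beam method: the object, the probe and the basic PIE-style update rule are all illustrative stand-ins, but it shows how overlapping scans plus recorded scatter amplitudes can recover an image algorithmically.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy ptychography: recover a complex object from the amplitudes of
    # scattered light at overlapping probe positions. Sizes, the probe and
    # the PIE-style update rule are illustrative stand-ins.
    N, P = 64, 24                                         # object, probe size
    obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))  # unknown phase object
    yy, xx = np.mgrid[:P, :P]
    probe = np.exp(-((xx - P / 2) ** 2 + (yy - P / 2) ** 2) / (P / 4) ** 2)

    positions = [(r, c) for r in range(0, N - P, 6) for c in range(0, N - P, 6)]

    # "Measure" only the far-field amplitudes, as a detector would
    # (the shadows in the shadow-puppet analogy).
    measured = [np.abs(np.fft.fft2(probe * obj[r:r + P, c:c + P]))
                for r, c in positions]

    # Reconstruct from a flat guess by repeatedly enforcing the measured
    # amplitudes in the far field: the algorithm plays the role of the lens.
    guess = np.ones((N, N), dtype=complex)
    for _ in range(200):
        for amp, (r, c) in zip(measured, positions):
            psi = probe * guess[r:r + P, c:c + P]
            F = np.fft.fft2(psi)
            psi_new = np.fft.ifft2(amp * np.exp(1j * np.angle(F)))
            guess[r:r + P, c:c + P] += (
                np.conj(probe) * (psi_new - psi) / (np.abs(probe) ** 2).max()
            )

    # Fidelity with the true object (1.0 is perfect, up to a global phase).
    region = (slice(P, N - P), slice(P, N - P))
    overlap = abs(np.vdot(obj[region], guess[region]))
    print(overlap / (np.linalg.norm(obj[region]) * np.linalg.norm(guess[region])))
    ```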
    She and her colleagues have previously used such an approach to view submicroscopic shapes like letters or stars.
    But the approach won’t work with repeating structures like those silicon or carbon grids. If you shine a regular laser beam on a semiconductor with such regularity, for example, it will often produce a scatter pattern that is incredibly uniform — ptychographic algorithms struggle to make sense of patterns that don’t have much variation in them.
    The problem has left physicists scratching their heads for close to a decade.
    Doughnut microscopy
    In the new study, however, Murnane and her colleagues decided to try something different. They didn’t make their shadow puppets using regular lasers. Instead, they generated beams of extreme ultraviolet light, then employed a device called a spiral phase plate to twist those beams into the shape of a corkscrew, or vortex. (When such a vortex of light shines on a flat surface, it makes a shape like a doughnut).
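    The doughnut shape itself falls out of elementary wave optics. The short numpy sketch below (with illustrative parameters) imprints the corkscrew phase that a spiral phase plate adds onto a Gaussian beam and checks that the on-axis intensity vanishes, leaving the doughnut’s dark hole.

    ```python
    import numpy as np

    # Gaussian beam on a grid, with a corkscrew phase exp(i * l * phi)
    # imprinted on it, as a spiral phase plate would do.
    n, w = 512, 0.3                       # grid size, beam waist (arb. units)
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)

    l = 1                                 # orbital angular momentum charge
    field = np.exp(-r**2 / w**2) * np.exp(1j * l * phi)

    # In the far field (a Fourier transform of the beam), the phase winds
    # through all values around the axis, so contributions cancel there.
    intensity = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    on_axis = intensity[n // 2, n // 2]
    print(f"on-axis / peak intensity: {on_axis / intensity.max():.1e}")  # ~0
    ```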
    The doughnut beams didn’t have pink glaze or sprinkles, but they did the trick. The team discovered that when these types of beams bounced off repeating structures, they created much more complex shadow puppets than regular lasers.
    To test out the new approach, the researchers created a mesh of carbon atoms with a tiny snap in one of the links. The group was able to spot that defect with precision not seen in other ptychographic tools.
    “If you tried to image the same thing in a scanning electron microscope, you would damage it even further,” Murnane said.
    Moving forward, her team wants to make their doughnut strategy even more accurate, allowing them to view smaller and even more fragile objects — including, one day, the workings of living, biological cells.
    Other co-authors of the new study include Henry Kapteyn, professor of physics and fellow of JILA, and current and former JILA graduate students Peter Johnsen, Nicholas Jenkins, Yuka Esashi, Iona Binnie and Michael Tanksalvala.

  • Mathematics supporting fresh theoretical approach in oncology

    Mathematics, histopathology and genomics converge to confirm that the most aggressive clear cell renal cell carcinomas display low levels of intratumour heterogeneity, i.e. they contain fewer distinct cell types. The study, conducted by the UPV/EHU Ikerbasque Research Professor Annick Laruelle, supports the hypothesis that it would be advisable to apply therapeutic strategies to maintain high levels of cellular heterogeneity within the tumour in order to slow down the evolution of the cancer and improve survival.
    Mathematical approaches are gaining traction in modern oncology because they provide fresh knowledge about the evolution of cancer and new opportunities for therapeutic improvement. Data obtained from mathematical analyses thus endorse many histological findings and genomic results. Game theory, for example, helps to understand the “social” interactions that occur between cancer cells. This novel perspective allows the scientific and clinical community to understand the hidden events driving the disease. In fact, considering a tumour as a collective of individuals governed by rules previously defined in ecology opens up new therapeutic possibilities for patients.
    Within the framework of game theory, the hawk-dove game is a mathematical tool developed to analyse cooperation and competition in biology. When applied to populations of cancer cells, it explains the possible behaviours of tumour cells when competing for an external resource. “It is a decision theory in which the outcome does not depend on one’s own decision alone, but also on the decision of the other actors,” explained Ikerbasque Research Professor Annick Laruelle, an expert in game theory in the UPV/EHU’s Department of Economic Analysis. “In the game, cells may act aggressively, like a hawk, or passively, like a dove, to acquire a resource.”
    Professor Laruelle has used this game to analyse bilateral cell interactions in highly aggressive clear cell renal cell carcinoma in two different scenarios: one involving low tumour heterogeneity, when only two tumour cell types compete for a resource; and the other, high tumour heterogeneity, when such competition occurs between three tumour cell types. Clear cell renal cell carcinoma is so named because the tumour cells appear clear, like bubbles, under the microscope. This type of carcinoma has been taken as a representative case for the study, as it is a widely studied paradigm of intratumour heterogeneity (which refers to the coexistence of different subpopulations of cells within the same tumour).
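    As a concrete illustration of those mechanics (not taken from the paper; the resource value V and fight cost C below are invented), a few lines of Python show how replicator dynamics drive a hawk-dove population toward its mixed equilibrium, where the hawk fraction settles at V/C:

    ```python
    import numpy as np

    # Hawk-dove payoff matrix (row player's payoff) for resource value V
    # and fight cost C, with C > V. Numbers are illustrative.
    V, C = 2.0, 6.0
    payoff = np.array([
        [(V - C) / 2, V],      # hawk vs hawk, hawk vs dove
        [0.0,         V / 2],  # dove vs hawk, dove vs dove
    ])

    def replicator_step(p, dt=0.01):
        """One Euler step of replicator dynamics for the hawk fraction p."""
        f_hawk, f_dove = payoff @ np.array([p, 1 - p])
        mean_fitness = p * f_hawk + (1 - p) * f_dove
        return p + dt * p * (f_hawk - mean_fitness)

    p = 0.5                    # start from a half-hawk, half-dove population
    for _ in range(20000):
        p = replicator_step(p)

    print(f"equilibrium hawk fraction: {p:.3f} (theory: V/C = {V / C:.3f})")
    ```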
    Fresh theoretical approach for new therapeutic strategies
    Laruelle has thus shown how some of the fundamentals of intratumour heterogeneity, corroborated from the standpoint of histopathology and genomics, are supported by mathematics using the hawk-dove game. The work, carried out in collaboration with researchers from Biocruces, the San Giovanni Bosco Hospital in Turin (Italy) and the Pontificia Universidade Catolica do Rio de Janeiro, has been published in the journal Trends in Cancer.
    The group of researchers believe that “this convergence of findings obtained from very different disciplines reinforces the key role of translational research in modern medicine and gives intratumour heterogeneity a key position in the approach to new therapeutic strategies” and they conjecture that “intratumour heterogeneity behaves by following similar pathways in many other tumours.”
    This may have important practical implications for the clinical management of malignant tumours. The constant arrival of new molecules enriches cancer treatment opportunities in the era of precision oncology. However, the researchers say that “it is one thing to discover a new molecule and quite another to find the best strategy for using it. So far, the proposed approach is based on administering the maximum tolerable dose to the patient. However, this strategy forces the tumour cells to develop resistance as early as possible, thus transforming the original tumour into a neoplasm of low intratumour heterogeneity comprising only resistant cells.” So, a therapy specifically aimed at preserving high intratumour heterogeneity may make sense according to this theoretical approach, as it may slow cancer growth and thus lead to longer survival. This perspective is currently gaining interest in oncology.

  • A color-based sensor to emulate skin’s sensitivity

    Robotics researchers have already made great strides in developing sensors that can perceive changes in position, pressure, and temperature — all of which are important for technologies like wearable devices and human-robot interfaces. But a hallmark of human perception is the ability to sense multiple stimuli at once, and this is something that robotics has struggled to achieve.
    Now, Jamie Paik and colleagues in the Reconfigurable Robotics Lab (RRL) in EPFL’s School of Engineering have developed a sensor that can perceive combinations of bending, stretching, compression, and temperature changes, all using a robust system that boils down to a simple concept: color.
    Dubbed ChromoSense, the RRL’s technology relies on a translucent rubber cylinder containing three sections dyed red, green, and blue. An LED at the top of the device sends light through its core, and changes in the light’s path through the colors as the device is bent or stretched are picked up by a miniaturized spectral meter at the bottom.
    “Imagine you are drinking three different flavors of slushie through three different straws at once: the proportion of each flavor you get changes if you bend or twist the straws. This is the same principle that ChromoSense uses: it perceives changes in light traveling through the colored sections as the geometry of those sections deforms,” says Paik.
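    The decoding step can be pictured as a small inverse problem. In this illustrative sketch (the sensitivity matrix is invented, not ChromoSense’s real calibration), each stimulus attenuates the three colour channels differently, so a least-squares solve recovers the combination of stimuli from a single spectral reading:

    ```python
    import numpy as np

    # Hypothetical calibration: how strongly bending, stretching and
    # compression each affect the red, green and blue readings.
    sensitivity = np.array([
        [0.8, 0.1, 0.2],   # red channel response to (bend, stretch, compress)
        [0.2, 0.9, 0.1],   # green channel
        [0.1, 0.2, 0.7],   # blue channel
    ])

    def estimate_stimuli(rgb_change):
        """Recover the stimulus vector from the observed channel changes."""
        solution, *_ = np.linalg.lstsq(sensitivity, rgb_change, rcond=None)
        return solution

    # Synthesize a reading for a sensor that is mostly bent and slightly
    # compressed, then invert it.
    reading = sensitivity @ np.array([0.5, 0.0, 0.1])
    print(estimate_stimuli(reading))   # ~ [0.5, 0.0, 0.1]
    ```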
    A thermosensitive section of the device also allows it to detect temperature changes, using a special dye — similar to that in color-changing t-shirts or mood rings — that desaturates in color when it is heated. The research has been published in Nature Communications and selected for the Editor’s Highlights page.
    A more streamlined approach to wearables
    Paik explains that while robotic technologies that rely on cameras or multiple sensing elements are effective, they can make wearable devices heavier and more cumbersome, in addition to requiring more data processing.

    “For soft robots to serve us better in our daily lives, they need to be able to sense what we are doing,” she says. “Traditionally, the fastest and most inexpensive way to do this has been through vision-based systems, which capture all of our activities and then extract the necessary data. ChromoSense allows for more targeted, information-dense readings, and the sensor can be easily embedded into different materials for different tasks.”
    Thanks to its simple mechanical structure and use of color over cameras, ChromoSense could potentially lend itself to inexpensive mass production. In addition to assistive technologies, such as mobility-aiding exosuits, Paik sees everyday applications for ChromoSense in athletic gear or clothing, which could be used to give users feedback about their form and movements.
    A strength of ChromoSense — its ability to sense multiple stimuli at once — can also be a weakness, as decoupling simultaneously applied stimuli is still a challenge the researchers are working on. At the moment, Paik says they are focusing on improving the technology to sense locally applied forces, or the exact boundaries of a material when it changes shape.
    “If ChromoSense gains popularity and many people want to use it as a general-purpose robotic sensing solution, then I think further increasing the information density of the sensor could become a really interesting challenge,” she says.
    Looking ahead, Paik also plans to experiment with different formats for ChromoSense, which has been prototyped as a cylindrical shape and as part of a wearable soft exosuit, but could also be imagined in a flat form more suitable for the RRL’s signature origami robots.
    “With our technology, anything can become a sensor as long as light can pass through it,” she summarizes.

  • Researchers have taught an algorithm to ‘taste’

    For non-connoisseurs, picking out a bottle of wine can be challenging when scanning an array of unfamiliar labels on the shop shelf. What does it taste like? What was the last one I bought that tasted so good?
    Here, wine apps like Vivino, Hello Vino, Wine Searcher and a host of others can help. Apps like these let wine buyers scan bottle labels to get information about a particular wine and read other users’ reviews. These apps are built upon artificial intelligence algorithms.
    Now, scientists from the Technical University of Denmark (DTU), the University of Copenhagen and Caltech have shown that you can add a new parameter to the algorithms that makes it easier to find a precise match for your own taste buds: Namely, people’s impressions of flavour.
    “We have demonstrated that, by feeding an algorithm with data consisting of people’s flavour impressions, the algorithm can make more accurate predictions of what kind of wine we individually prefer,” says Thoranna Bender, a graduate student at DTU who conducted the study under the auspices of the Pioneer Centre for AI at the University of Copenhagen.
    More accurate predictions of people’s favourite wines
    The researchers held wine tastings during which 256 participants were asked to arrange shot-sized cups of different wines on a piece of A3 paper based upon which wines they thought tasted most similar. The greater the distance between the cups, the greater the difference in their flavour. The method is widely used in consumer tests. The researchers then digitized the points on the sheets of paper by photographing them.
    The data collected from the wine tastings was then combined with hundreds of thousands of wine labels and user reviews provided to the researchers by Vivino, a global wine app and marketplace. Next, the researchers developed an algorithm based on the enormous data set.
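    A minimal sketch of the core idea, with invented numbers (the actual study combines far richer label and review data in a larger model): multidimensional scaling turns the pairwise tasted distances into coordinates in a “flavour space,” where nearest neighbours are the most similar wines.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Toy stand-in for the tasting data: perceived flavour distances between
    # five wines, as read off the A3 sheets. Values are invented.
    distances = np.array([
        [0.0, 1.0, 4.0, 4.2, 3.8],
        [1.0, 0.0, 3.9, 4.1, 3.7],
        [4.0, 3.9, 0.0, 0.8, 1.2],
        [4.2, 4.1, 0.8, 0.0, 1.1],
        [3.8, 3.7, 1.2, 1.1, 0.0],
    ])

    # Place each wine in a 2D "flavour space" whose distances approximate
    # the tasted ones.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(distances)

    def most_similar(i):
        """Index of the wine closest to wine i in flavour space."""
        d = np.linalg.norm(coords - coords[i], axis=1)
        d[i] = np.inf
        return int(np.argmin(d))

    print(most_similar(0))   # wine 1, the nearest neighbour of wine 0
    ```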

    “The dimension of flavour that we created in the model provides us with information about which wines are similar in taste and which are not. So, for example, I can stand with my favourite bottle of wine and say: I would like to know which wine is most similar to it in taste — or both in taste and price,” says Thoranna Bender.
    Professor and co-author Serge Belongie from the Department of Computer Science, who heads the Pioneer Centre for AI at the University of Copenhagen, adds:
    “We can see that when the algorithm combines the data from wine labels and reviews with the data from the wine tastings, it makes more accurate predictions of people’s wine preferences than when it only uses the traditional types of data in the form of images and text. So, teaching machines to use human sensory experiences results in better algorithms that benefit the user.”
    Can also be used for beer and coffee
    According to Serge Belongie, there is a growing trend in machine learning of using so-called multimodal data, which usually consists of a combination of images, text and sound. Using taste or other sensory inputs as data sources is entirely new. And it has great potential — e.g., in the food sector. Belongie states:
    “Understanding taste is a key aspect of food science and essential for achieving healthy, sustainable food production. But the use of AI in this context remains very much in its infancy. This project shows the power of using human-based inputs in artificial intelligence, and I predict that the results will spur more research at the intersection of food science and AI.”
    Thoranna Bender points out that the researchers’ method can easily be transferred to other types of food and drink as well:

    “We’ve chosen wine as a case, but the same method can just as well be applied to beer and coffee. For example, the approach can be used to recommend products and perhaps even food recipes to people. And if we can better understand the taste similarities in food, we can also use it in the healthcare sector to put together meals that meet the tastes and nutritional needs of patients. It might even be used to develop foods tailored to different taste profiles.”
    The researchers have published their data on an open server, where it can be used free of charge.
    “We hope that someone out there will want to build upon our data. I’ve already fielded requests from people who have additional data that they would like to include in our dataset. I think that’s really cool,” concludes Thoranna Bender.

  • Photonic chip that ‘fits together like Lego’ opens door to semiconductor industry

    Researchers at the University of Sydney Nano Institute have invented a compact silicon semiconductor chip that integrates electronics with photonic, or light, components. The new technology significantly expands radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit.
    Expanded bandwidth means more information can flow through the chip and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device.
    Researchers expect the chip will have applications in advanced radar, satellite systems, wireless networks and the roll-out of 6G and 7G telecommunications, and that it will open the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech value-add factories at places like Western Sydney’s Aerotropolis precinct.
    The chip is built using an emerging technology in silicon photonics that allows integration of diverse systems on semiconductors less than 5 millimetres wide. Pro-Vice-Chancellor (Research) Professor Ben Eggleton, who guides the research team, likened it to fitting together Lego building blocks, where new materials are integrated through advanced packaging of components, using electronic ‘chiplets’.
    The research for this invention has been published in Nature Communications.
    Dr Alvaro Casas Bedoya, Associate Director for Photonic Integration in the School of Physics, who led the chip design, said the unique method of heterogeneous materials integration has been 10 years in the making.
    “The combined use of overseas semiconductor foundries to make the basic chip wafer with local research infrastructure and manufacturing has been vital in developing this photonic integrated circuit,” he said.

    “This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process.”
    Professor Eggleton highlighted the fact that most of the items on the Federal Government’s List of Critical Technologies in the National Interest depend upon semiconductors.
    He said the invention means the work at Sydney Nano fits well with initiatives like the Semiconductor Sector Service Bureau (S3B), sponsored by the NSW Government, which aims to develop the local semiconductor ecosystem.
    Dr Nadia Court, Director of S3B, said, “This work aligns with our mission to drive advancements in semiconductor technology, holding great promise for the future of semiconductor innovation in Australia. The result reinforces local strength in research and design at a pivotal time of increased global focus and investment in the sector.”
    Designed in collaboration with scientists at the Australian National University, the integrated circuit was built at the Core Research Facility cleanroom at the University of Sydney Nanoscience Hub, a purpose-built $150 million building with advanced lithography and deposition facilities.
    The photonic circuit in the chip gives the device an impressive 15 gigahertz bandwidth of tunable frequencies, with spectral resolution down to just 37 megahertz, which is less than a quarter of one percent of the total bandwidth.
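    As a quick check of that resolution figure, using the numbers from the article:

    ```python
    bandwidth_hz = 15e9      # 15 gigahertz of tunable frequencies
    resolution_hz = 37e6     # 37 megahertz spectral resolution
    print(f"{resolution_hz / bandwidth_hz:.2%}")  # 0.25% of the total bandwidth
    ```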

    Professor Eggleton said: “Led by our impressive PhD student Matthew Garrett, this invention is a significant advance for microwave photonics and integrated photonics research.
    “Microwave photonic filters play a crucial role in modern communication and radar applications, offering the flexibility to precisely filter different frequencies, reducing electromagnetic interference and enhancing signal quality.
    “Our innovative approach of integrating advanced functionalities into semiconductor chips, particularly the heterogeneous integration of chalcogenide glass with silicon, holds the potential to reshape the local semiconductor landscape.”
    Co-author and Senior Research Fellow Dr Moritz Merklein said: “This work paves the way for a new generation of compact, high-resolution RF photonic filters with wideband frequency tunability, particularly beneficial in air and spaceborne RF communication payloads, opening possibilities for enhanced communications and sensing capabilities.”

  • To help autonomous vehicles make moral decisions, researchers ditch the ‘trolley problem’

    Researchers have developed a new experiment to better understand what people view as moral and immoral decisions related to driving vehicles, with the goal of collecting data to train autonomous vehicles how to make “good” decisions. The work is designed to capture a more realistic array of moral challenges in traffic than the widely discussed life-and-death scenario inspired by the so-called “trolley problem.”
    “The trolley problem presents a situation in which someone has to decide whether to intentionally kill one person (which violates a moral norm) in order to avoid the death of multiple people,” says Dario Cecchini, first author of a paper on the work and a postdoctoral researcher at North Carolina State University.
    “In recent years, the trolley problem has been utilized as a paradigm for studying moral judgment in traffic,” Cecchini says. “The typical situation comprises a binary choice for a self-driving car between swerving left, hitting a lethal obstacle, or proceeding forward, hitting a pedestrian crossing the street. However, these trolley-like cases are unrealistic. Drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?”
    “Those mundane decisions are important because they can ultimately lead to life-or-death situations,” says Veljko Dubljevic, corresponding author of the paper and an associate professor in the Science, Technology & Society program at NC State.
    “For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.”
    To address that lack of data, the researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations. The researchers created seven different driving scenarios, such as a parent who has to decide whether to violate a traffic signal while trying to get their child to school on time. Each scenario is programmed into a virtual reality environment, so that study participants engaged in the experiment have audiovisual information about what drivers are doing when they make decisions, rather than simply reading about the scenario.
    For this work, the researchers built on something called the Agent Deed Consequence (ADC) model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.

    Researchers created eight different versions of each traffic scenario, varying the combinations of agent, deed and consequence. For example, in one version of the scenario where a parent is trying to get the child to school, the parent is caring, brakes at a yellow light, and gets the child to school on time. In a second version, the parent is abusive, runs a red light, and causes an accident. The other six versions alter the nature of the parent (the agent), their decision at the traffic signal (the deed), and/or the outcome of their decision (the consequence).
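    To see how those eight versions arise, and how a simple additive reading of the ADC model might score them (the weights below are invented for illustration; the study elicits ratings from people rather than assuming a formula), consider this sketch:

    ```python
    from itertools import product

    # Invented weights for agent, deed and consequence, chosen to sum to 10
    # so scores echo the study's one-to-ten rating scale.
    WEIGHTS = {"agent": 3.0, "deed": 3.5, "consequence": 3.5}

    LEVELS = {
        "agent": {"caring parent": 1, "abusive parent": 0},
        "deed": {"brakes at yellow light": 1, "runs red light": 0},
        "consequence": {"child arrives on time": 1, "causes an accident": 0},
    }

    # Enumerate all eight (2 x 2 x 2) versions of the school-run scenario.
    for agent, deed, outcome in product(LEVELS["agent"], LEVELS["deed"],
                                        LEVELS["consequence"]):
        score = (WEIGHTS["agent"] * LEVELS["agent"][agent]
                 + WEIGHTS["deed"] * LEVELS["deed"][deed]
                 + WEIGHTS["consequence"] * LEVELS["consequence"][outcome])
        print(f"{agent} | {deed} | {outcome} -> morality {score:.1f}/10")
    ```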
    “The goal here is to have study participants view one version of each scenario and determine how moral the behavior of the driver was in each scenario, on a scale from one to 10,” Cecchini says. “This will give us robust data on what we consider moral behavior in the context of driving a vehicle, which can then be used to develop AI algorithms for moral decision making in autonomous vehicles.”
    The researchers have done pilot testing to fine-tune the scenarios and ensure that they reflect believable and easily understood situations.
    “The next step is to engage in large-scale data collection, getting thousands of people to participate in the experiments,” says Dubljevic. “We can then use that data to develop more interactive experiments with the goal of further fine-tuning our understanding of moral decision making. All of this can then be used to create algorithms for use in autonomous vehicles. We’ll then need to engage in additional testing to see how those algorithms perform.”

  • Brainstorming with a bot

    A researcher has just finished writing a scientific paper. She knows her work could benefit from another perspective. Did she overlook something? Or perhaps there’s an application of her research she hadn’t thought of. A second set of eyes would be great, but even the friendliest of collaborators might not be able to spare the time to read all the required background publications to catch up.
    Kevin Yager — leader of the electronic nanomaterials group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Brookhaven National Laboratory — has imagined how recent advances in artificial intelligence (AI) and machine learning (ML) could aid scientific brainstorming and ideation. To accomplish this, he has developed a chatbot with knowledge in the kinds of science he’s been engaged in.
    Rapid advances in AI and ML have given rise to programs that can generate creative text and useful software code. These general-purpose chatbots have recently captured the public imagination. Existing chatbots — based on large, diverse language models — lack detailed knowledge of scientific sub-domains. By leveraging a document-retrieval method, Yager’s bot is knowledgeable in areas of nanomaterial science that other bots are not. The details of this project and how other scientists can leverage this AI colleague for their own work have recently been published in Digital Discovery.
    Rise of the Robots
    “CFN has been looking into new ways to leverage AI/ML to accelerate nanomaterial discovery for a long time,” Yager said. “Currently, it’s helping us quickly identify, catalog, and choose samples, automate experiments, control equipment, and discover new materials. Esther Tsai, a scientist in the electronic nanomaterials group at CFN, is developing an AI companion to help speed up materials research experiments at the National Synchrotron Light Source II (NSLS-II).” NSLS-II is another DOE Office of Science User Facility at Brookhaven Lab.
    At CFN, there has been a lot of work on AI/ML that can help drive experiments through the use of automation, controls, robotics, and analysis, but having a program that was adept with scientific text was something that researchers hadn’t explored as deeply. Being able to quickly document, understand, and convey information about an experiment can help in a number of ways — from breaking down language barriers to saving time by summarizing larger pieces of work.
    Watching Your Language
    To build a specialized chatbot, the program required domain-specific text — language taken from areas the bot is intended to focus on. In this case, the text is scientific publications. Domain-specific text helps the AI model understand new terminology and definitions and introduces it to frontier scientific concepts. Most importantly, this curated set of documents enables the AI model to ground its reasoning using trusted facts.

    To emulate natural human language, AI models are trained on existing text, enabling them to learn the structure of language, memorize various facts, and develop a primitive sort of reasoning. Rather than laboriously retrain the AI model on nanoscience text, Yager gave it the ability to look up relevant information in a curated set of publications. Providing it with a library of relevant data was only half of the battle. To use this text accurately and effectively, the bot would need a way to decipher the correct context.
    “A challenge that’s common with language models is that sometimes they ‘hallucinate’ plausible sounding but untrue things,” explained Yager. “This has been a core issue to resolve for a chatbot used in research as opposed to one doing something like writing poetry. We don’t want it to fabricate facts or citations. This needed to be addressed. The solution for this was something we call ’embedding,’ a way of categorizing and linking information quickly behind the scenes.”
    Embedding is a process that transforms words and phrases into numerical values. The resulting “embedding vector” quantifies the meaning of the text. When a user asks the chatbot a question, it’s also sent to the ML embedding model to calculate its vector value. This vector is used to search through a pre-computed database of text chunks from scientific papers that were similarly embedded. The bot then uses text snippets it finds that are semantically related to the question to get a more complete understanding of the context.
    The user’s query and the text snippets are combined into a “prompt” that is sent to a large language model, an expansive program that creates text modeled on natural human language, that generates the final response. The embedding ensures that the text being pulled is relevant in the context of the user’s question. By providing text chunks from the body of trusted documents, the chatbot generates answers that are factual and sourced.
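    In outline, the retrieval step looks like the sketch below. The hash-based bag-of-words “embedding” is a toy stand-in for the real embedding model, and the assembled prompt would then be handed to the large language model:

    ```python
    import numpy as np

    # A few stand-in text chunks from a curated library of publications.
    PAPER_CHUNKS = [
        "Block copolymers self-assemble into periodic nanostructures.",
        "X-ray scattering measures order in thin polymer films.",
        "Annealing temperature controls grain size in self-assembly.",
    ]

    def embed(text, dim=256):
        """Toy embedding: hash words into buckets, then normalize."""
        v = np.zeros(dim)
        for word in text.lower().split():
            v[hash(word) % dim] += 1.0
        return v / (np.linalg.norm(v) or 1.0)

    # Pre-compute an embedding for every chunk, as the article describes.
    chunk_vectors = np.stack([embed(c) for c in PAPER_CHUNKS])

    def retrieve(question, k=2):
        """Return the k chunks most similar to the question's embedding."""
        scores = chunk_vectors @ embed(question)       # cosine similarity
        return [PAPER_CHUNKS[i] for i in np.argsort(scores)[::-1][:k]]

    question = "How does annealing affect self-assembly?"
    prompt = ("Answer using only these sources:\n"
              + "\n".join(retrieve(question))
              + f"\n\nQuestion: {question}")
    print(prompt)   # this prompt would be sent to the language model
    ```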
    “The program needs to be like a reference librarian,” said Yager. “It needs to heavily rely on the documents to provide sourced answers. It needs to be able to accurately interpret what people are asking and be able to effectively piece together the context of those questions to retrieve the most relevant information. While the responses may not be perfect yet, it’s already able to answer challenging questions and trigger some interesting thoughts while planning new projects and research.”
    Bots Empowering Humans
    CFN is developing AI/ML systems as tools that can liberate human researchers to work on more challenging and interesting problems and to get more out of their limited time while computers automate repetitive tasks in the background. There are still many unknowns about this new way of working, but these questions are the start of important discussions scientists are having right now to ensure AI/ML use is safe and ethical.
    “There are a number of tasks that a domain-specific chatbot like this could clear from a scientist’s workload. Classifying and organizing documents, summarizing publications, pointing out relevant info, and getting up to speed in a new topical area are just a few potential applications,” remarked Yager. “I’m excited to see where all of this will go, though. We never could have imagined where we are now three years ago, and I’m looking forward to where we’ll be three years from now.”

  • Scientists build tiny biological robots from human cells

    Researchers at Tufts University and Harvard University’s Wyss Institute have created tiny biological robots, which they call Anthrobots, from human tracheal cells. The bots can move across a surface and have been found to encourage the growth of neurons across a region of damage in a lab dish.
    The multicellular robots, ranging in size from the width of a human hair to the point of a sharpened pencil, were made to self-assemble and shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers’ vision to use patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.
    The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont in which they created multicellular biological robots from frog embryo cells called Xenobots, capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own. At the time, researchers did not know if these capabilities were dependent on their being derived from an amphibian embryo, or if biobots could be constructed from cells of other species.
    In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that bots can in fact be created from adult human cells without any genetic modification, and that they demonstrate some capabilities beyond what was observed with the Xenobots. The discovery starts to answer a broader question that the lab has posed — what are the rules that govern how cells assemble and work together in the body, and can the cells be taken out of their natural context and recombined into different “body plans” to carry out other functions by design?
    In this case, researchers gave human cells, after decades of quiet life in the trachea, a chance to reboot and find ways of creating new structures and tasks. “We wanted to probe what cells can do besides create default features in the body,” said Gumuskaya, who earned a degree in architecture before coming into biology. “By reprogramming interactions between cells, new multicellular structures can be created, analogous to the way stone and brick can be arranged into different structural elements like walls, archways or columns.” The researchers found that not only could the cells create new multicellular shapes, but they could move in different ways over a surface of human neurons grown in a lab dish and encourage new growth to fill in gaps caused by scratching the layer of cells.
    Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a “superbot.”
    “The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body,” said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute. “It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage. We’re now looking at how the healing mechanism works, and asking what else these constructs can do.”
    The advantages of using human cells include the ability to construct bots from a patient’s own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants. They only last a few weeks before breaking down, and so can easily be re-absorbed into the body after their work is done.

    In addition, outside of the body, Anthrobots can only survive in very specific laboratory conditions, and there is no risk of exposure or unintended spread outside the lab. Likewise, they do not reproduce, and they have no genetic edits, additions or deletions, so there is no risk of their evolving beyond existing safeguards.
    How Are Anthrobots Made?
    Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into air passages of the lung. We all experience the work of ciliated cells when we take the final step of expelling the particles and excess fluid by coughing or clearing our throats. Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.
    The researchers developed growth conditions that encouraged the cilia to face outward on organoids. Within a few days they started moving around, driven by the cilia acting like oars. They noted different shapes and types of movement — the first important feature observed of the biorobotics platform. Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, and travel to and perform functions in the body, or help build engineered tissues in the lab.
    The team, with the help of Simon Garnier at the New Jersey Institute of Technology, characterized the different types of Anthrobots that were produced. They observed that bots fell into a few discrete categories of shape and movement, ranging in size from 30 to 500 micrometers (from the thickness of a human hair to the point of a sharpened pencil), filling an important niche between nanotechnology and larger engineered devices.
    Some were spherical and fully covered in cilia, and some were irregular or football shaped with more patchy coverage of cilia, or just covered with cilia on one side. They traveled in straight lines, moved in tight circles, combined those movements, or just sat around and wiggled. The spherical ones fully covered with cilia tended to be wigglers. The Anthrobots with cilia distributed unevenly tended to move forward for longer stretches in straight or curved paths. They usually survived about 45-60 days in laboratory conditions before they naturally biodegraded.

    “Anthrobots self-assemble in the lab dish,” said Gumuskaya, who created the Anthrobots. “Unlike Xenobots, they don’t require tweezers or scalpels to give them shape, and we can use adult cells — even cells from elderly patients — instead of embryonic cells. It’s fully scalable — we can produce swarms of these bots in parallel, which is a good start for developing a therapeutic tool.”
    Little Healers
    Because Levin and Gumuskaya ultimately plan to make Anthrobots with therapeutic applications, they created a lab test to see how the bots might heal wounds. The model involved growing a two-dimensional layer of human neurons; simply by scratching the layer with a thin metal rod, they created an open ‘wound’ devoid of cells.
    To ensure the gap would be exposed to a dense concentration of Anthrobots, they created “superbots,” clusters that naturally form when the Anthrobots are confined to a small space. The superbots were made up primarily of circlers and wigglers, so they would not wander too far away from the open wound.
    Although it might be expected that genetic modifications of Anthrobot cells would be needed to help the bots encourage neural growth, surprisingly the unmodified Anthrobots triggered substantial regrowth, creating a bridge of neurons as thick as the rest of the healthy cells on the plate. Neurons did not grow in the wound where Anthrobots were absent. At least in the simplified 2D world of the lab dish, the Anthrobot assemblies encouraged efficient healing of live neural tissue.
    According to the researchers, further development of the bots could lead to other applications, including clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues. The Anthrobots could in theory assist in healing tissues, while also laying down pro-regenerative drugs.
    Making New Blueprints, Restoring Old Ones
    Gumuskaya explained that cells have the innate ability to self-assemble into larger structures in certain fundamental ways. “The cells can form layers, fold, make spheres, sort and separate themselves by type, fuse together, or even move,” Gumuskaya said. “Two important differences from inanimate bricks are that cells can communicate with each other and create these structures dynamically, and each cell is programmed with many functions, like movement, secretion of molecules, detection of signals and more. We are just figuring out how to combine these elements to create new biological body plans and functions — different than those found in nature.”
    Taking advantage of the inherently flexible rules of cellular assembly helps the scientists construct the bots, but it can also help them understand how natural body plans assemble, how the genome and environment work together to create tissues, organs, and limbs, and how to restore them with regenerative treatments.