More stories

  • A color-based sensor to emulate skin’s sensitivity

    Robotics researchers have already made great strides in developing sensors that can perceive changes in position, pressure, and temperature — all of which are important for technologies like wearable devices and human-robot interfaces. But a hallmark of human perception is the ability to sense multiple stimuli at once, and this is something that robotics has struggled to achieve.
    Now, Jamie Paik and colleagues in the Reconfigurable Robotics Lab (RRL) in EPFL’s School of Engineering have developed a sensor that can perceive combinations of bending, stretching, compression, and temperature changes, all using a robust system that boils down to a simple concept: color.
    Dubbed ChromoSense, the RRL’s technology relies on a translucent rubber cylinder containing three sections dyed red, green, and blue. An LED at the top of the device sends light through its core, and changes in the light’s path through the colors as the device is bent or stretched are picked up by a miniaturized spectral meter at the bottom.
    “Imagine you are drinking three different flavors of slushie through three different straws at once: the proportion of each flavor you get changes if you bend or twist the straws. This is the same principle that ChromoSense uses: it perceives changes in light traveling through the colored sections as the geometry of those sections deforms,” says Paik.
    A thermosensitive section of the device also allows it to detect temperature changes, using a special dye — similar to that in color-changing t-shirts or mood rings — that desaturates in color when it is heated. The research has been published in Nature Communications and selected for the Editor’s Highlights page.
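    To make the sensing principle concrete, here is a minimal, hypothetical sketch (not the RRL’s calibration code): if each stimulus shifts the measured red, green, blue, and thermochromic channel intensities in a characteristic direction, a calibrated linear model can be inverted to recover stimulus estimates from a spectral reading. The channel responses below are made-up placeholders.

    ```python
    import numpy as np

    # Hypothetical calibration: each column is the change in the (red, green,
    # blue, thermochromic) channel intensities produced by one unit of bend,
    # stretch, compression, and temperature rise. Real values would come from
    # calibrating the sensor in the lab.
    CALIBRATION = np.array([
        [-0.8,  0.2,  0.1,  0.0],   # red channel
        [ 0.3, -0.7,  0.2,  0.0],   # green channel
        [ 0.1,  0.3, -0.9,  0.0],   # blue channel
        [ 0.0,  0.0,  0.0, -0.5],   # thermochromic channel (desaturates when heated)
    ])

    def estimate_stimuli(baseline, reading):
        """Least-squares estimate of (bend, stretch, compression, delta_T)
        from the change in the four measured channel intensities."""
        delta = np.asarray(reading, dtype=float) - np.asarray(baseline, dtype=float)
        stimuli, *_ = np.linalg.lstsq(CALIBRATION, delta, rcond=None)
        return stimuli

    # Example reading in which the red channel has dropped sharply.
    print(estimate_stimuli([1.0, 1.0, 1.0, 1.0], [0.6, 1.05, 1.02, 1.0]))
    ```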
    A more streamlined approach to wearables
    Paik explains that while robotic technologies that rely on cameras or multiple sensing elements are effective, they can make wearable devices heavier and more cumbersome, in addition to requiring more data processing.

    “For soft robots to serve us better in our daily lives, they need to be able to sense what we are doing,” she says. “Traditionally, the fastest and most inexpensive way to do this has been through vision-based systems, which capture all of our activities and then extract the necessary data. ChromoSense allows for more targeted, information-dense readings, and the sensor can be easily embedded into different materials for different tasks.”
    Thanks to its simple mechanical structure and use of color over cameras, ChromoSense could potentially lend itself to inexpensive mass production. In addition to assistive technologies, such as mobility-aiding exosuits, Paik sees everyday applications for ChromoSense in athletic gear or clothing, which could be used to give users feedback about their form and movements.
    A strength of ChromoSense — its ability to sense multiple stimuli at once — can also be a weakness, as decoupling simultaneously applied stimuli is still a challenge the researchers are working on. At the moment, Paik says they are focusing on improving the technology to sense locally applied forces, or the exact boundaries of a material when it changes shape.
    “If ChromoSense gains popularity and many people want to use it as a general-purpose robotic sensing solution, then I think further increasing the information density of the sensor could become a really interesting challenge,” she says.
    Looking ahead, Paik also plans to experiment with different formats for ChromoSense, which has been prototyped as a cylindrical shape and as part of a wearable soft exosuit, but could also be imagined in a flat form more suitable for the RRL’s signature origami robots.
    “With our technology, anything can become a sensor as long as light can pass through it,” she summarizes.

  • Researchers have taught an algorithm to ‘taste’

    For non-connoisseurs, picking out a bottle of wine can be challenging when scanning an array of unfamiliar labels on the shop shelf. What does it taste like? What was the last one I bought that tasted so good?
    Here, wine apps like Vivino, Hello Vino, Wine Searcher and a host of others can help. Apps like these let wine buyers scan bottle labels to get information about a particular wine and read other buyers’ reviews. They are built on artificial intelligence algorithms.
    Now, scientists from the Technical University of Denmark (DTU), the University of Copenhagen and Caltech have shown that you can add a new parameter to the algorithms that makes it easier to find a precise match for your own taste buds: Namely, people’s impressions of flavour.
    “We have demonstrated that, by feeding an algorithm with data consisting of people’s flavour impressions, the algorithm can make more accurate predictions of what kind of wine we individually prefer,” says Thoranna Bender, a graduate student at DTU who conducted the study under the auspices of the Pioneer Centre for AI at the University of Copenhagen.
    More accurate predictions of people’s favourite wines
    The researchers held wine tastings during which 256 participants were asked to arrange shot-sized cups of different wines on a piece of A3 paper based upon which wines they thought tasted most similarly. The greater the distance between the cups, the greater the difference in their flavour. The method is widely used in consumer tests. The researchers then digitized the points on the sheets of paper by photographing them.
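    As a rough illustration of that step (assuming the photographed sheets have already been converted to cup coordinates; the positions below are invented), pairwise Euclidean distances between cups give one participant’s flavour-dissimilarity matrix:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    # Hypothetical digitized cup positions (in cm) from one participant's A3 sheet.
    cup_positions = {
        "wine_A": (5.0, 10.0),
        "wine_B": (6.5, 11.0),
        "wine_C": (25.0, 3.0),
    }

    names = list(cup_positions)
    coords = np.array([cup_positions[n] for n in names])

    # Greater distance on the sheet = greater perceived difference in flavour.
    dissimilarity = squareform(pdist(coords, metric="euclidean"))

    for i, a in enumerate(names):
        for j, b in enumerate(names):
            if i < j:
                print(f"{a} vs {b}: {dissimilarity[i, j]:.1f} cm apart")
    ```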
    The data collected from the wine tastings was then combined with hundreds of thousands of wine labels and user reviews provided to the researchers by Vivino, a global wine app and marketplace. Next, the researchers developed an algorithm based on the enormous data set.

    “The dimension of flavour that we created in the model provides us with information about which wines are similar in taste and which are not. So, for example, I can stand with my favourite bottle of wine and say: I would like to know which wine is most similar to it in taste — or both in taste and price,” says Thoranna Bender.
    Professor and co-author Serge Belongie from the Department of Computer Science, who heads the Pioneer Centre for AI at the University of Copenhagen, adds:
    “We can see that when the algorithm combines the data from wine labels and reviews with the data from the wine tastings, it makes more accurate predictions of people’s wine preferences than when it only uses the traditional types of data in the form of images and text. So, teaching machines to use human sensory experiences results in better algorithms that benefit the user.”
    Can also be used for beer and coffee
    According to Serge Belongie, there is a growing trend in machine learning of using so-called multimodal data, which usually consists of a combination of images, text and sound. Using taste or other sensory inputs as data sources is entirely new. And it has great potential — e.g., in the food sector. Belongie states:
    “Understanding taste is a key aspect of food science and essential for achieving healthy, sustainable food production. But the use of AI in this context remains very much in its infancy. This project shows the power of using human-based inputs in artificial intelligence, and I predict that the results will spur more research at the intersection of food science and AI.”
    Thoranna Bender points out that the researchers’ method can easily be transferred to other types of food and drink as well:

    “We’ve chosen wine as a case, but the same method can just as well be applied to beer and coffee. For example, the approach can be used to recommend products and perhaps even food recipes to people. And if we can better understand the taste similarities in food, we can also use it in the healthcare sector to put together meals that meet with the tastes and nutritional needs of patients. It might even be used to develop foods tailored to different taste profiles.”
    The researchers have published their data on an open server, where it can be used free of charge.
    “We hope that someone out there will want to build upon our data. I’ve already fielded requests from people who have additional data that they would like to include in our dataset. I think that’s really cool,” concludes Thoranna Bender.

  • Photonic chip that ‘fits together like Lego’ opens door to semiconductor industry

    Researchers at the University of Sydney Nano Institute have invented a compact silicon semiconductor chip that integrates electronics with photonic, or light, components. The new technology significantly expands radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit.
    Expanded bandwidth means more information can flow through the chip and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device.
    Researchers expect the chip will have application in advanced radar, satellite systems, wireless networks and the roll-out of 6G and 7G telecommunications and also open the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech value-add factories at places like Western Sydney’s Aerotropolis precinct.
    The chip is built using an emerging technology in silicon photonics that allows integration of diverse systems on semiconductors less than 5 millimetres wide. Pro-Vice-Chancellor (Research) Professor Ben Eggleton, who guides the research team, likened it to fitting together Lego building blocks, where new materials are integrated through advanced packaging of components, using electronic ‘chiplets’.
    The research for this invention has been published in Nature Communications.
    Dr Alvaro Casas Bedoya, Associate Director for Photonic Integration in the School of Physics, who led the chip design, said the unique method of heterogeneous materials integration has been 10 years in the making.
    “The combined use of overseas semiconductor foundries to make the basic chip wafer with local research infrastructure and manufacturing has been vital in developing this photonic integrated circuit,” he said.

    “This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process.”
    Professor Eggleton highlighted the fact that most of the items on the Federal Government’s List of Critical Technologies in the National Interest depend upon semiconductors.
    He said the invention means the work at Sydney Nano fits well with initiatives like the Semiconductor Sector Service Bureau (S3B), sponsored by the NSW Government, which aims to develop the local semiconductor ecosystem.
    Dr Nadia Court, Director of S3B, said, “This work aligns with our mission to drive advancements in semiconductor technology, holding great promise for the future of semiconductor innovation in Australia. The result reinforces local strength in research and design at a pivotal time of increased global focus and investment in the sector.”
    Designed in collaboration with scientists at the Australian National University, the integrated circuit was built at the Core Research Facility cleanroom at the University of Sydney Nanoscience Hub, a purpose-built $150 million building with advanced lithography and deposition facilities.
    The photonic circuit in the chip gives the device an impressive 15 gigahertz bandwidth of tunable frequencies, with spectral resolution down to just 37 megahertz, which is less than a quarter of one percent of the total bandwidth.
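    That fraction can be sanity-checked with a one-line calculation:

    ```python
    resolution_hz = 37e6   # 37 megahertz spectral resolution
    bandwidth_hz = 15e9    # 15 gigahertz tunable bandwidth
    print(f"{resolution_hz / bandwidth_hz:.4%}")  # prints 0.2467%, under a quarter of one percent
    ```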

    Professor Eggleton said: “Led by our impressive PhD student Matthew Garrett, this invention is a significant advance for microwave photonics and integrated photonics research.
    “Microwave photonic filters play a crucial role in modern communication and radar applications, offering the flexibility to precisely filter different frequencies, reducing electromagnetic interference and enhancing signal quality.
    “Our innovative approach of integrating advanced functionalities into semiconductor chips, particularly the heterogeneous integration of chalcogenide glass with silicon, holds the potential to reshape the local semiconductor landscape.”
    Co-author and Senior Research Fellow Dr Moritz Merklein said: “This work paves the way for a new generation of compact, high-resolution RF photonic filters with wideband frequency tunability, particularly beneficial in air and spaceborne RF communication payloads, opening possibilities for enhanced communications and sensing capabilities.”

  • To help autonomous vehicles make moral decisions, researchers ditch the ‘trolley problem’

    Researchers have developed a new experiment to better understand what people view as moral and immoral decisions related to driving vehicles, with the goal of collecting data to train autonomous vehicles how to make “good” decisions. The work is designed to capture a more realistic array of moral challenges in traffic than the widely discussed life-and-death scenario inspired by the so-called “trolley problem.”
    “The trolley problem presents a situation in which someone has to decide whether to intentionally kill one person (which violates a moral norm) in order to avoid the death of multiple people,” says Dario Cecchini, first author of a paper on the work and a postdoctoral researcher at North Carolina State University.
    “In recent years, the trolley problem has been utilized as a paradigm for studying moral judgment in traffic,” Cecchini says. “The typical situation comprises a binary choice for a self-driving car between swerving left, hitting a lethal obstacle, or proceeding forward, hitting a pedestrian crossing the street. However, these trolley-like cases are unrealistic. Drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?”
    “Those mundane decisions are important because they can ultimately lead to life-or-death situations,” says Veljko Dubljevic, corresponding author of the paper and an associate professor in the Science, Technology & Society program at NC State.
    “For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.”
    To address that lack of data, the researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations. The researchers created seven different driving scenarios, such as a parent who has to decide whether to violate a traffic signal while trying to get their child to school on time. Each scenario is programmed into a virtual reality environment, so that study participants engaged in the experiment have audiovisual information about what drivers are doing when they make decisions, rather than simply reading about the scenario.
    For this work, the researchers built on something called the Agent Deed Consequence (ADC) model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.

    Researchers created eight different versions of each traffic scenario, varying the combinations of agent, deed and consequence. For example, in one version of the scenario where a parent is trying to get the child to school, the parent is caring, brakes at a yellow light, and gets the child to school on time. In a second version, the parent is abusive, runs a red light, and causes an accident. The other six versions alter the nature of the parent (the agent), their decision at the traffic signal (the deed), and/or the outcome of their decision (the consequence).
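    As a hedged sketch of that factorial design (the labels below are simplified stand-ins for the study’s actual scenario text), crossing two levels each of agent, deed and consequence yields the eight versions:

    ```python
    from itertools import product

    # Simplified stand-ins for the study's agent/deed/consequence manipulations.
    agents = ["caring parent", "abusive parent"]
    deeds = ["brakes at the yellow light", "runs the red light"]
    consequences = ["gets the child to school on time", "causes an accident"]

    versions = list(product(agents, deeds, consequences))
    assert len(versions) == 8  # 2 x 2 x 2 combinations of agent, deed and consequence

    for number, (agent, deed, consequence) in enumerate(versions, start=1):
        print(f"Version {number}: {agent}, {deed}, {consequence}")
    ```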
    “The goal here is to have study participants view one version of each scenario and determine how moral the behavior of the driver was in each scenario, on a scale from one to 10,” Cecchini says. “This will give us robust data on what we consider moral behavior in the context of driving a vehicle, which can then be used to develop AI algorithms for moral decision making in autonomous vehicles.”
    The researchers have done pilot testing to fine-tune the scenarios and ensure that they reflect believable and easily understood situations.
    “The next step is to engage in large-scale data collection, getting thousands of people to participate in the experiments,” says Dubljevic. “We can then use that data to develop more interactive experiments with the goal of further fine-tuning our understanding of moral decision making. All of this can then be used to create algorithms for use in autonomous vehicles. We’ll then need to engage in additional testing to see how those algorithms perform.”

  • Brainstorming with a bot

    A researcher has just finished writing a scientific paper. She knows her work could benefit from another perspective. Did she overlook something? Or perhaps there’s an application of her research she hadn’t thought of. A second set of eyes would be great, but even the friendliest of collaborators might not be able to spare the time to read all the required background publications to catch up.
    Kevin Yager — leader of the electronic nanomaterials group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Brookhaven National Laboratory — has imagined how recent advances in artificial intelligence (AI) and machine learning (ML) could aid scientific brainstorming and ideation. To accomplish this, he has developed a chatbot with knowledge in the kinds of science he’s been engaged in.
    Rapid advances in AI and ML have given way to programs that can generate creative text and useful software code. These general-purpose chatbots have recently captured the public imagination. Existing chatbots — based on large, diverse language models — lack detailed knowledge of scientific sub-domains. By leveraging a document-retrieval method, Yager’s bot is knowledgeable in areas of nanomaterial science that other bots are not. The details of this project and how other scientists can leverage this AI colleague for their own work have recently been published in Digital Discovery.
    Rise of the Robots
    “CFN has been looking into new ways to leverage AI/ML to accelerate nanomaterial discovery for a long time. Currently, it’s helping us quickly identify, catalog, and choose samples, automate experiments, control equipment, and discover new materials. Esther Tsai, a scientist in the electronic nanomaterials group at CFN, is developing an AI companion to help speed up materials research experiments at the National Synchrotron Light Source II (NSLS-II).” NSLS-II is another DOE Office of Science User Facility at Brookhaven Lab.
    At CFN, there has been a lot of work on AI/ML that can help drive experiments through the use of automation, controls, robotics, and analysis, but having a program that was adept with scientific text was something that researchers hadn’t explored as deeply. Being able to quickly document, understand, and convey information about an experiment can help in a number of ways — from breaking down language barriers to saving time by summarizing larger pieces of work.
    Watching Your Language
    To build a specialized chatbot, the program required domain-specific text — language taken from areas the bot is intended to focus on. In this case, the text is scientific publications. Domain-specific text helps the AI model understand new terminology and definitions and introduces it to frontier scientific concepts. Most importantly, this curated set of documents enables the AI model to ground its reasoning using trusted facts.

    To emulate natural human language, AI models are trained on existing text, enabling them to learn the structure of language, memorize various facts, and develop a primitive sort of reasoning. Rather than laboriously retrain the AI model on nanoscience text, Yager gave it the ability to look up relevant information in a curated set of publications. Providing it with a library of relevant data was only half of the battle. To use this text accurately and effectively, the bot would need a way to decipher the correct context.
    “A challenge that’s common with language models is that sometimes they ‘hallucinate’ plausible sounding but untrue things,” explained Yager. “This has been a core issue to resolve for a chatbot used in research as opposed to one doing something like writing poetry. We don’t want it to fabricate facts or citations. This needed to be addressed. The solution for this was something we call ’embedding,’ a way of categorizing and linking information quickly behind the scenes.”
    Embedding is a process that transforms words and phrases into numerical values. The resulting “embedding vector” quantifies the meaning of the text. When a user asks the chatbot a question, it’s also sent to the ML embedding model to calculate its vector value. This vector is used to search through a pre-computed database of text chunks from scientific papers that were similarly embedded. The bot then uses text snippets it finds that are semantically related to the question to get a more complete understanding of the context.
    The user’s query and the text snippets are combined into a “prompt” that is sent to a large language model, an expansive program that creates text modeled on natural human language, that generates the final response. The embedding ensures that the text being pulled is relevant in the context of the user’s question. By providing text chunks from the body of trusted documents, the chatbot generates answers that are factual and sourced.
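    A minimal sketch of that retrieve-then-prompt loop, assuming a generic embedding function and language model rather than the specific tools behind this bot:

    ```python
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def answer(question, chunks, chunk_vectors, embed, generate, top_k=3):
        """Retrieval-augmented answering: embed the question, find the most
        semantically similar text chunks, and hand both to the language model.

        `embed` and `generate` are stand-ins for whatever embedding model and
        large language model are available; they are not the specific models
        behind Yager's chatbot."""
        question_vector = embed(question)
        scores = [cosine_similarity(question_vector, v) for v in chunk_vectors]
        best = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:top_k]

        context = "\n\n".join(chunks[i] for i in best)
        prompt = (
            "Answer using only the excerpts below, citing them where possible.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}"
        )
        return generate(prompt)
    ```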
    “The program needs to be like a reference librarian,” said Yager. “It needs to heavily rely on the documents to provide sourced answers. It needs to be able to accurately interpret what people are asking and be able to effectively piece together the context of those questions to retrieve the most relevant information. While the responses may not be perfect yet, it’s already able to answer challenging questions and trigger some interesting thoughts while planning new projects and research.”
    Bots Empowering Humans
    CFN is developing AI/ML systems as tools that can liberate human researchers to work on more challenging and interesting problems and to get more out of their limited time while computers automate repetitive tasks in the background. There are still many unknowns about this new way of working, but these questions are the start of important discussions scientists are having right now to ensure AI/ML use is safe and ethical.
    “There are a number of tasks that a domain-specific chatbot like this could clear from a scientist’s workload. Classifying and organizing documents, summarizing publications, pointing out relevant info, and getting up to speed in a new topical area are just a few potential applications,” remarked Yager. “I’m excited to see where all of this will go, though. We never could have imagined where we are now three years ago, and I’m looking forward to where we’ll be three years from now.”

  • A new UN report lays out an ethical framework for climate engineering

    The world is in a climate crisis — and in the waning days of what’s likely to be the world’s hottest year on record, a new United Nations report is weighing the ethics of using technological interventions to try to rein in rising global temperatures.

    “The current speed at which the effects of global warming are increasingly being manifested is giving new life to the discussion on the kinds of climate action best suited to tackle the catastrophic consequences of environmental changes,” the report states.

    A broad variety of climate engineering interventions are already in development, from strategies that could directly remove carbon dioxide from the atmosphere to efforts to modify incoming radiation from the sun (SN: 10/6/19; SN: 7/9/21; SN: 8/8/18).

    But “we don’t know the unintended consequences” of many of these technologies, said UNESCO Assistant Director-General Gabriela Ramos at a news conference on November 20 ahead of the report’s release. “There are several areas of great concern. These are very interesting and promising technological developments, but we need an ethical framework to decide how and when to use them.”

    Such a framework should be globally agreed upon, Ramos said — and that’s why UNESCO decided to step in. The new report proposes ethical frameworks for both the study and the later deployment of climate engineering strategies.

    In addition to explicitly addressing concerns over how tinkering with the climate might affect global food security and the environment, ethical considerations must also include accounting for conflicting interests between regions and countries, the report states. Furthermore, it must include assessing at what point the risks of taking action are or are not morally defensible.   

    “It’s not [for] a single country to decide,” Ramos said. “Even those countries that have nothing to do with those technological developments need to be at the table … to agree on a path going forward. Climate is global and needs to be a global conversation.”

    The ethics-focused report was prepared by a UNESCO advisory body known as the World Commission on the Ethics of Scientific Knowledge and Technology. Its release coincided with the start of the U.N.’s international climate action summit, the 28th Conference of the Parties, or COP, in Dubai. COP28 runs from November 30 through December 12.

    To delve more into the goals of the study and what climate engineering strategies the report considers, Science News talked with report coauthor Inés Camilloni, a climate scientist at the University of Buenos Aires and a resident in the solar geoengineering research program at Harvard University. The conversation has been edited for length and clarity.

    SN: There have been a lot of reports recently about climate engineering. What makes this one important?

    Camilloni: One thing is that this report includes the views from the Global South as well as the Global North. This is something really important; there are not many reports with the voices of scientists from the Global South. The U.N. Environment Programme’s report this year [on solar radiation modification] was another one. [This new report] has a bigger picture, because it also includes carbon dioxide removal.

    I’m a climate scientist; ethics is something new to me. I got involved because I was a lead author of a chapter in the [Intergovernmental Panel on Climate Change] 1.5-degrees-Celsius special report in 2018, and there was a box discussion about climate engineering (SN: 10/7/18). I realized I was not an expert on that. The discussion was among scientists in the Global North, who had a clear position in some ways about the idea, but not Global South scientists. We were just witnessing this discussion.

    SN: The report raises a concern about the “moral hazard” of relying too much on climate engineering, which might give countries or companies an excuse to slow carbon emission reductions. Should we even be considering climate engineering in that context?

    Camilloni: What we are saying in the report is that the priority must be the mitigation of greenhouse gas emissions. But the discussion on climate engineering is growing because we are not on track to keep temperatures [below] 1.5 degrees C. We are not [at] the right level of ambition really needed to keep temperatures below that target. There are so many uncertainties that it’s relevant to consider the ethical dimensions in these conversations, to make a decision of potential deployment. And in most IPCC scenarios that can limit warming to below 1.5 degrees, carbon dioxide removal is already there.

    SN: What are some of the carbon dioxide removal strategies under consideration?

    Camilloni: Carbon dioxide removal covers two different kinds of methods: restoring natural carbon sinks, like forests and soils, and investing in technologies that are maybe not yet proven to work at the scale that’s needed. That includes direct air capture [of carbon dioxide] and storage; bioenergy with carbon capture and storage; increasing uptake by the oceans of carbon dioxide, for example by iron fertilization; and enhancing natural weathering processes that remove carbon dioxide from the atmosphere.

    But there are potential consequences that need to be considered. Those include negative impacts on terrestrial biodiversity, and effects on marine biodiversity from ocean fertilization. As for sequestering carbon dioxide — how do you store it for hundreds of years or longer, and what are the consequences of rapid release from underground reservoirs? Also there’s potential competition for land [between bioenergy crops or planting trees] and food production, especially in the Global South.

    SN: Solar radiation modification is considered even more controversial, but some scientists are saying it should now be on the table (SN: 5/21/10). What type of solar radiation modification is the most viable, technologically?

    Camilloni: That’s an umbrella term for a variety of approaches that reduce the amount of incoming sunlight the planet absorbs, mostly by reflecting more of it back to space.

    There’s increasing surface reflectivity, for example with reflective paints on structures, or planting more reflective crops (SN: 9/28/18). That reflects more solar radiation into space. It’s already being used in some cities, but it has a very local effect. Similarly, increasing the reflectivity of marine clouds — there were some experiments in Australia to try to protect the Great Barrier Reef, but there, too, it seems the effect is not global in scale.

    Another proposed strategy is to thin infrared-absorbing cirrus clouds — I don’t really know much about that or if it’s really possible. And there’s placing reflectors or shields in space to deflect incoming solar radiation; I also don’t really know if it’s possible to do that.

    Injecting aerosols into the stratosphere, to mimic the cooling effect of a volcanic eruption, is the most promising for a global impact. It’s not so challenging in terms of the technology. It’s the only way that we have identified that can cool the planet in a few years.

    SN: How soon could aerosol injection be used?

    Camilloni: We need at least 10 to 20 years before we can think of deployment. The limitation is that we need the aircraft that can fly at around 20 kilometers altitude. Those are already being designed, but we need about 10 years for those designs, and another 10 to build a fleet of them.

    SN: What are some of the ethical concerns around aerosol injection or other solar radiation modification technologies?

    Camilloni: These new technologies may be risky in the potential for exacerbating climate problems or introducing new challenges. There are potential risks to changing precipitation patterns, even overcooling in some regions. A key consideration in deciding whether to pursue them is the need for a full characterization of the positive and negative effects of the different technologies around the globe, and a comparison against the risk of not intervening.

    SN: In 2021, a research group at Harvard was barred from launching a balloon into the stratosphere to test equipment for possible future aerosol release. How might this report address similar studies?

    Camilloni: In our report, we want to make a distinction among the different types of research. You can have indoor research — simulations, social analysis — and this is not so controversial. When you consider outdoor research — releasing particles into the atmosphere — that is more controversial. We are calling for more indoor research. We need to understand the potential impacts.

    [For example,] I studied the impact of solar radiation modification on the hydrology of the La Plata Basin [which includes parts of southeastern Brazil, Bolivia, Paraguay, Uruguay and northeastern Argentina]. It’s the most populated region on the continent, and very relevant for hydropower production. And it’s already a very impacted region by climate change.

    However, that research was based on just one climate model. We need more — more resources, more capacity building in the Global South. My research group was the first to explore those impacts in Latin and South America. There are others doing research on this over the next few months, but I can count those groups on one hand.

    We need more resources to be part of any discussion. Those resources include the Loss and Damage Fund to provide support to nations most vulnerable to the climate crisis [agreed to at the end of COP27 in 2022]. But nobody really knows now how that will be implemented.

    SN: The report’s release was timed to the start of COP28. What are you hoping that policymakers will take away from it over the next two weeks?

    Camilloni: These recommendations are really important to have in mind, of course. We need more research to make a decision about whether this is a good idea or a bad idea. And maybe people will cut emissions faster if they’re afraid of climate engineering.

  • Scientists build tiny biological robots from human cells

    Researchers at Tufts University and Harvard University’s Wyss Institute have created tiny biological robots, which they call Anthrobots, from human tracheal cells. The bots can move across a surface and have been found to encourage the growth of neurons across a region of damage in a lab dish.
    The multicellular robots, ranging in size from the width of a human hair to the point of a sharpened pencil, were made to self-assemble and shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers’ vision to use patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.
    The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont in which they created multicellular biological robots from frog embryo cells called Xenobots, capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own. At the time, researchers did not know if these capabilities were dependent on their being derived from an amphibian embryo, or if biobots could be constructed from cells of other species.
    In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that bots can in fact be created from adult human cells without any genetic modification, and that they demonstrate some capabilities beyond what was observed with the Xenobots. The discovery starts to answer a broader question that the lab has posed — what are the rules that govern how cells assemble and work together in the body, and can the cells be taken out of their natural context and recombined into different “body plans” to carry out other functions by design?
    In this case, researchers gave human cells, after decades of quiet life in the trachea, a chance to reboot and find ways of creating new structures and tasks. “We wanted to probe what cells can do besides create default features in the body,” said Gumuskaya, who earned a degree in architecture before coming into biology. “By reprogramming interactions between cells, new multicellular structures can be created, analogous to the way stone and brick can be arranged into different structural elements like walls, archways or columns.” The researchers found that not only could the cells create new multicellular shapes, but they could move in different ways over a surface of human neurons grown in a lab dish and encourage new growth to fill in gaps caused by scratching the layer of cells.
    Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a “superbot.”
    “The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body,” said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute. “It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage,” said Levin. “We’re now looking at how the healing mechanism works, and asking what else these constructs can do.”
    The advantages of using human cells include the ability to construct bots from a patient’s own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants. They only last a few weeks before breaking down, and so can easily be re-absorbed into the body after their work is done.

    In addition, outside of the body, Anthrobots can only survive in very specific laboratory conditions, and there is no risk of exposure or unintended spread outside the lab. Likewise, they do not reproduce, and they have no genetic edits, additions or deletions, so there is no risk of their evolving beyond existing safeguards.
    How Are Anthrobots Made?
    Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into air passages of the lung. We all experience the work of ciliated cells when we take the final step of expelling the particles and excess fluid by coughing or clearing our throats. Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.
    The researchers developed growth conditions that encouraged the cilia to face outward on organoids. Within a few days they started moving around, driven by the cilia acting like oars. They noted different shapes and types of movement — the first important feature observed of the biorobotics platform. Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, and travel to and perform functions in the body, or help build engineered tissues in the lab.
    The team, with the help of Simon Garnier at the New Jersey Institute of Technology, characterized the different types of Anthrobots that were produced. They observed that bots fell into a few discrete categories of shape and movement, ranging in size from 30 to 500 micrometers (from the thickness of a human hair to the point of a sharpened pencil), filling an important niche between nanotechnology and larger engineered devices.
    Some were spherical and fully covered in cilia, and some were irregular or football shaped with more patchy coverage of cilia, or just covered with cilia on one side. They traveled in straight lines, moved in tight circles, combined those movements, or just sat around and wiggled. The spherical ones fully covered with cilia tended to be wigglers. The Anthrobots with cilia distributed unevenly tended to move forward for longer stretches in straight or curved paths. They usually survived about 45-60 days in laboratory conditions before they naturally biodegraded.

    “Anthrobots self-assemble in the lab dish,” said Gumuskaya, who created the Anthrobots. “Unlike Xenobots, they don’t require tweezers or scalpels to give them shape, and we can use adult cells — even cells from elderly patients — instead of embryonic cells. It’s fully scalable — we can produce swarms of these bots in parallel, which is a good start for developing a therapeutic tool.”
    Little Healers
    Because Levin and Gumuskaya ultimately plan to make Anthrobots with therapeutic applications, they created a lab test to see how the bots might heal wounds. The model involved growing a two-dimensional layer of human neurons, and simply by scratching the layer with a thin metal rod, they created an open ‘wound’ devoid of cells.
    To ensure the gap would be exposed to a dense concentration of Anthrobots, they created “superbots,” a cluster that naturally forms when the Anthrobots are confined to a small space. The superbots were made up primarily of circlers and wigglers, so they would not wander too far away from the open wound.
    Although it might be expected that genetic modification of the Anthrobot cells would be needed for them to encourage neural growth, the unmodified Anthrobots surprisingly triggered substantial regrowth, creating a bridge of neurons as thick as the rest of the healthy cells on the plate. Neurons did not grow in the wound where Anthrobots were absent. At least in the simplified 2D world of the lab dish, the Anthrobot assemblies encouraged efficient healing of live neural tissue.
    According to the researchers, further development of the bots could lead to other applications, including clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues. The Anthrobots could in theory assist in healing tissues, while also laying down pro-regenerative drugs.
    Making New Blueprints, Restoring Old Ones
    Gumuskaya explained that cells have the innate ability to self-assemble into larger structures in certain fundamental ways. “The cells can form layers, fold, make spheres, sort and separate themselves by type, fuse together, or even move,” Gumuskaya said. “Two important differences from inanimate bricks are that cells can communicate with each other and create these structures dynamically, and each cell is programmed with many functions, like movement, secretion of molecules, detection of signals and more. We are just figuring out how to combine these elements to create new biological body plans and functions — different than those found in nature.”
    Taking advantage of the inherently flexible rules of cellular assembly helps the scientists construct the bots, but it can also help them understand how natural body plans assemble, how the genome and environment work together to create tissues, organs, and limbs, and how to restore them with regenerative treatments.

    Scientists use A.I.-generated images to map visual functions in the brain

    Researchers at Weill Cornell Medicine, Cornell Tech and Cornell’s Ithaca campus have demonstrated the use of AI-selected natural images and AI-generated synthetic images as neuroscientific tools for probing the visual processing areas of the brain. The goal is to apply a data-driven approach to understand how vision is organized while potentially removing biases that may arise when looking at responses to a more limited set of researcher-selected images.
    In the study, published Oct. 23 in Communications Biology, the researchers had volunteers look at images that had been selected or generated based on an AI model of the human visual system. The images were predicted to maximally activate several visual processing areas. Using functional magnetic resonance imaging (fMRI) to record the brain activity of the volunteers, the researchers found that the images did activate the target areas significantly better than control images.
    The researchers also showed that they could use this image-response data to tune their vision model for individual volunteers, so that images generated to be maximally activating for a particular individual worked better than images generated based on a general model.
    “We think this is a promising new approach to study the neuroscience of vision,” said study senior author Dr. Amy Kuceyeski, a professor of mathematics in radiology and of mathematics in neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.
    The study was a collaboration with the laboratory of Dr. Mert Sabuncu, a professor of electrical and computer engineering at Cornell Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine. The study’s first author was Dr. Zijin Gu, who was a doctoral student co-mentored by Dr. Sabuncu and Dr. Kuceyeski at the time of the study.
    Making an accurate model of the human visual system, in part by mapping brain responses to specific images, is one of the more ambitious goals of modern neuroscience. Researchers have found, for example, that one visual processing region may activate strongly in response to an image of a face whereas another may respond to a landscape. Scientists must rely mainly on non-invasive methods in pursuit of this goal, given the risk and difficulty of recording brain activity directly with implanted electrodes. The preferred non-invasive method is fMRI, which essentially records changes in blood flow in small vessels of the brain — an indirect measure of brain activity — as subjects are exposed to sensory stimuli or otherwise perform cognitive or physical tasks. An fMRI machine can read out these tiny changes in three dimensions across the brain, at a resolution on the order of cubic millimeters.
    For their own studies, Dr. Kuceyeski and Dr. Sabuncu and their teams used an existing dataset comprising tens of thousands of natural images, with corresponding fMRI responses from human subjects, to train an AI-type system called an artificial neural network (ANN) to model the human brain’s visual processing system. They then used this model to predict which images, across the dataset, should maximally activate several targeted vision areas of the brain. They also coupled the model with an AI-based image generator to generate synthetic images to accomplish the same task.
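    As an illustration of the encoding-model idea, the sketch below shows one plausible way to predict region-of-interest responses from image features and rank candidate images by their predicted activation. The class name, feature dimensions, and linear readout are assumptions for illustration, not the authors' actual architecture or pipeline.

```python
# Minimal sketch, assuming precomputed image embeddings from a frozen vision
# backbone. VoxelEncoder, feature_dim, and n_voxels are illustrative choices.
import torch
import torch.nn as nn

class VoxelEncoder(nn.Module):
    """Maps image features to predicted fMRI responses for one brain region."""
    def __init__(self, feature_dim=512, n_voxels=1000):
        super().__init__()
        self.readout = nn.Linear(feature_dim, n_voxels)

    def forward(self, features):              # features: (n_images, feature_dim)
        return self.readout(features)         # predicted responses: (n_images, n_voxels)

def top_predicted_activators(encoder, features, k=10):
    """Rank candidate images by their predicted mean response in the region."""
    with torch.no_grad():
        predicted = encoder(features)          # (n_images, n_voxels)
        roi_score = predicted.mean(dim=1)      # one score per image
    return torch.topk(roi_score, k).indices    # indices of the k strongest images

# The readout would first be fit by regressing measured fMRI responses on the
# image features; that training step is omitted here for brevity.
```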

    “Our general idea here has been to map and model the visual system in a systematic, unbiased way, in principle even using images that a person normally wouldn’t encounter,” Dr. Kuceyeski said.
    The researchers enrolled six volunteers and recorded their fMRI responses to these images, focusing on the responses in several visual processing areas. The results showed that, for both the natural images and the synthetic images, the predicted maximal activator images, on average across the subjects, did activate the targeted brain regions significantly more than a set of images that were selected or generated to be only average activators. This supports the general validity of the team’s ANN-based model and suggests that even synthetic images may be useful as probes for testing and improving such models.
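    In spirit, the comparison described here is a paired test across subjects of region activation under the two image conditions. A minimal sketch of that kind of check, with placeholder inputs rather than the study's data or its exact statistical procedure:

```python
# Illustrative paired comparison: did predicted-maximal images evoke stronger
# ROI responses than average-activator controls? Inputs are placeholders.
from scipy import stats

def compare_conditions(max_roi_means, control_roi_means):
    """Each list holds one mean ROI response per subject, in the same order."""
    return stats.ttest_rel(max_roi_means, control_roi_means)

# Usage with made-up numbers, purely for illustration:
# compare_conditions([1.8, 2.1, 1.6, 2.4, 1.9, 2.2],
#                    [1.1, 1.3, 1.0, 1.5, 1.2, 1.4])
```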
    In a follow-on experiment, the team used the image and fMRI-response data from the first session to create separate ANN-based visual system models for each of the six subjects. They then used these individualized models to select or generate predicted maximal-activator images for each subject. The fMRI responses to these images showed that, at least for the synthetic images, there was greater activation of the targeted visual region, a face-processing region called FFA1, compared to the responses to images based on the group model. This result suggests that AI and fMRI can be useful for individualized visual-system modeling, for example to study differences in visual system organization across populations.
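    A minimal sketch of the individualization step, assuming the VoxelEncoder from the earlier sketch: the group-level model is simply trained further on one subject's image/response pairs. The optimizer, loss, and epoch count are illustrative assumptions, not the study's settings.

```python
# Fine-tune the group encoder on one subject's session data (illustrative).
import torch

def finetune_for_subject(encoder, subject_features, subject_responses,
                         epochs=50, lr=1e-3):
    """subject_features: (n, feature_dim); subject_responses: (n, n_voxels) measured fMRI."""
    optimizer = torch.optim.Adam(encoder.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(encoder(subject_features), subject_responses)
        loss.backward()
        optimizer.step()
    return encoder
```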
    The researchers are now running similar experiments using a more advanced version of the image generator, called Stable Diffusion.
    The same general approach could be useful in studying other senses such as hearing, they noted.
    Dr. Kuceyeski also hopes ultimately to study the therapeutic potential of this approach.
    “In principle, we could alter the connectivity between two parts of the brain using specifically designed stimuli, for example to weaken a connection that causes excess anxiety,” she said.