More stories

  • Brainstorming with a bot

    A researcher has just finished writing a scientific paper. She knows her work could benefit from another perspective. Did she overlook something? Or perhaps there’s an application of her research she hadn’t thought of. A second set of eyes would be great, but even the friendliest of collaborators might not be able to spare the time to read all the required background publications to catch up.
    Kevin Yager — leader of the electronic nanomaterials group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Brookhaven National Laboratory — has imagined how recent advances in artificial intelligence (AI) and machine learning (ML) could aid scientific brainstorming and ideation. To accomplish this, he has developed a chatbot with knowledge in the kinds of science he’s been engaged in.
    Rapid advances in AI and ML have given rise to programs that can generate creative text and useful software code. These general-purpose chatbots have recently captured the public imagination. Existing chatbots — based on large, diverse language models — lack detailed knowledge of scientific sub-domains. By leveraging a document-retrieval method, Yager’s bot is knowledgeable in areas of nanomaterial science that other bots are not. The details of this project and how other scientists can leverage this AI colleague for their own work have recently been published in Digital Discovery.
    Rise of the Robots
    “CFN has been looking into new ways to leverage AI/ML to accelerate nanomaterial discovery for a long time. Currently, it’s helping us quickly identify, catalog, and choose samples, automate experiments, control equipment, and discover new materials. Esther Tsai, a scientist in the electronic nanomaterials group at CFN, is developing an AI companion to help speed up materials research experiments at the National Synchrotron Light Source II (NSLS-II),” said Yager. NSLS-II is another DOE Office of Science User Facility at Brookhaven Lab.
    At CFN, there has been a lot of work on AI/ML that can help drive experiments through the use of automation, controls, robotics, and analysis, but having a program that was adept with scientific text was something that researchers hadn’t explored as deeply. Being able to quickly document, understand, and convey information about an experiment can help in a number of ways — from breaking down language barriers to saving time by summarizing larger pieces of work.
    Watching Your Language
    To build a specialized chatbot, Yager needed domain-specific text — language taken from the areas the bot is intended to focus on. In this case, the text is scientific publications. Domain-specific text helps the AI model understand new terminology and definitions and introduces it to frontier scientific concepts. Most importantly, this curated set of documents enables the AI model to ground its reasoning in trusted facts.

    To emulate natural human language, AI models are trained on existing text, enabling them to learn the structure of language, memorize various facts, and develop a primitive sort of reasoning. Rather than laboriously retrain the AI model on nanoscience text, Yager gave it the ability to look up relevant information in a curated set of publications. Providing it with a library of relevant data was only half of the battle. To use this text accurately and effectively, the bot would need a way to decipher the correct context.
    “A challenge that’s common with language models is that sometimes they ‘hallucinate’ plausible sounding but untrue things,” explained Yager. “This has been a core issue to resolve for a chatbot used in research, as opposed to one doing something like writing poetry. We don’t want it to fabricate facts or citations. This needed to be addressed. The solution for this was something we call ‘embedding,’ a way of categorizing and linking information quickly behind the scenes.”
    Embedding is a process that transforms words and phrases into numerical values. The resulting “embedding vector” quantifies the meaning of the text. When a user asks the chatbot a question, it’s also sent to the ML embedding model to calculate its vector value. This vector is used to search through a pre-computed database of text chunks from scientific papers that were similarly embedded. The bot then uses text snippets it finds that are semantically related to the question to get a more complete understanding of the context.
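    The retrieval step described above can be sketched in a few lines of Python. This is a toy illustration, not the code from the Digital Discovery paper: a simple bag-of-words vector stands in for the real ML embedding model, and cosine similarity ranks a pre-computed database of chunk vectors against the user's question.

```python
import numpy as np

def tokenize(text):
    """Lowercase the words and strip basic punctuation."""
    return [w.strip(".,?").lower() for w in text.split()]

# Toy stand-ins for text chunks from scientific papers; a real system
# would embed them with a trained ML model, not bag-of-words counts.
chunks = [
    "Block copolymers self-assemble into nanoscale morphologies",
    "The synchrotron beamline measures X-ray scattering from thin films",
    "Grant deadlines are posted on the internal calendar",
]

vocab = sorted({w for c in chunks for w in tokenize(c)})

def embed(text):
    """Toy 'embedding': one dimension per vocabulary word, L2-normalized."""
    vec = np.array([tokenize(text).count(w) for w in vocab], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

chunk_vecs = [embed(c) for c in chunks]  # the pre-computed database

def retrieve(question, k=2):
    """Rank chunks by cosine similarity to the question; return the top k."""
    q = embed(question)
    order = sorted(range(len(chunks)), key=lambda i: -float(q @ chunk_vecs[i]))
    return [chunks[i] for i in order[:k]]

print(retrieve("What scattering does the beamline measure?")[0])
# → The synchrotron beamline measures X-ray scattering from thin films
```

    Swapping the toy `embed` for a real embedding model gives the scheme Yager describes: the chunk vectors are computed once ahead of time, and only the question is embedded at query time.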
    The user’s query and the text snippets are combined into a “prompt” that is sent to a large language model — an expansive program that generates text modeled on natural human language — which produces the final response. The embedding ensures that the retrieved text is relevant in the context of the user’s question. By drawing on text chunks from the body of trusted documents, the chatbot generates answers that are factual and sourced.
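    Assembling that prompt can be sketched as follows. The instruction wording and numbered-citation scheme here are illustrative assumptions, not details from the paper; the point is that the model is told to answer only from the supplied, trusted snippets.

```python
def build_prompt(question, snippets):
    """Combine retrieved snippets and the user's question into one LLM prompt.

    The instruction text and the [n] citation scheme are illustrative
    assumptions, not taken from the published system.
    """
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the numbered sources below, citing them by number.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt(
    "How do block copolymers assemble?",
    ["Block copolymers self-assemble into nanoscale morphologies."],
))
```

    Because the snippets come from the curated document set, any numbered citation in the model's answer can be traced back to a real source.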
    “The program needs to be like a reference librarian,” said Yager. “It needs to heavily rely on the documents to provide sourced answers. It needs to be able to accurately interpret what people are asking and be able to effectively piece together the context of those questions to retrieve the most relevant information. While the responses may not be perfect yet, it’s already able to answer challenging questions and trigger some interesting thoughts while planning new projects and research.”
    Bots Empowering Humans
    CFN is developing AI/ML systems as tools that can liberate human researchers to work on more challenging and interesting problems and to get more out of their limited time while computers automate repetitive tasks in the background. There are still many unknowns about this new way of working, but these questions are the start of important discussions scientists are having right now to ensure AI/ML use is safe and ethical.
    “There are a number of tasks that a domain-specific chatbot like this could clear from a scientist’s workload. Classifying and organizing documents, summarizing publications, pointing out relevant info, and getting up to speed in a new topical area are just a few potential applications,” remarked Yager. “I’m excited to see where all of this will go, though. We never could have imagined where we are now three years ago, and I’m looking forward to where we’ll be three years from now.”

  • A new UN report lays out an ethical framework for climate engineering

    The world is in a climate crisis — and in the waning days of what’s likely to be the world’s hottest year on record, a new United Nations report is weighing the ethics of using technological interventions to try to rein in rising global temperatures.

    “The current speed at which the effects of global warming are increasingly being manifested is giving new life to the discussion on the kinds of climate action best suited to tackle the catastrophic consequences of environmental changes,” the report states.


    A broad variety of climate engineering interventions are already in development, from strategies that could directly remove carbon dioxide from the atmosphere to efforts to modify incoming radiation from the sun (SN: 10/6/19; SN: 7/9/21; SN: 8/8/18).

    But “we don’t know the unintended consequences” of many of these technologies, said UNESCO Assistant Director-General Gabriela Ramos at a news conference on November 20 ahead of the report’s release. “There are several areas of great concern. These are very interesting and promising technological developments, but we need an ethical framework to decide how and when to use them.”

    Such a framework should be globally agreed upon, Ramos said — and that’s why UNESCO decided to step in. The new report proposes ethical frameworks for both the study and the later deployment of climate engineering strategies.

    In addition to explicitly addressing concerns over how tinkering with the climate might affect global food security and the environment, an ethical framework must also account for conflicting interests between regions and countries, the report states, and assess at what point the risks of taking action are or are not morally defensible.

    “It’s not [for] a single country to decide,” Ramos said. “Even those countries that have nothing to do with those technological developments need to be at the table … to agree on a path going forward. Climate is global and needs to be a global conversation.”

    The ethics-focused report was prepared by a UNESCO advisory body known as the World Commission on the Ethics of Scientific Knowledge and Technology. Its release coincided with the start of the U.N.’s international climate action summit, the 28th Conference of the Parties, or COP28, in Dubai, which runs from November 30 through December 12.

    To delve more into the goals of the study and what climate engineering strategies the report considers, Science News talked with report coauthor Inés Camilloni, a climate scientist at the University of Buenos Aires and a resident in the solar geoengineering research program at Harvard University. The conversation has been edited for length and clarity.

    SN: There have been a lot of reports recently about climate engineering. What makes this one important?

    Camilloni: One thing is that this report includes the views from the Global South as well as the Global North. This is something really important; there are not many reports with the voices of scientists from the Global South. The U.N. Environment Programme’s report this year [on solar radiation modification] was another one. [This new report] has a bigger picture, because it also includes carbon dioxide removal.

    I’m a climate scientist; ethics is something new to me. I got involved because I was a lead author of a chapter in the [Intergovernmental Panel on Climate Change] 1.5-degrees-Celsius special report in 2018, and there was a box discussion about climate engineering (SN: 10/7/18). I realized I was not an expert on that. The discussion was among scientists in the Global North, who had a clear position in some ways about the idea, but not Global South scientists. We were just witnessing this discussion.

    SN: The report raises a concern about the “moral hazard” of relying too much on climate engineering, which might give countries or companies an excuse to slow carbon emission reductions. Should we even be considering climate engineering in that context?

    Camilloni: What we are saying in the report is that the priority must be the mitigation of greenhouse gas emissions. But the discussion on climate engineering is growing because we are not on track to keep temperatures [below] 1.5 degrees C. We are not [at] the right level of ambition really needed to keep temperatures below that target. There are so many uncertainties that it’s relevant to consider the ethical dimensions in these conversations, to make a decision about potential deployment. And in most IPCC scenarios that can limit warming to below 1.5 degrees, carbon dioxide removal is already there.

    SN: What are some of the carbon dioxide removal strategies under consideration?

    Camilloni: Carbon dioxide removal encompasses two different approaches: restoring natural carbon sinks, like forests and soils, and investing in technologies that are maybe not yet proven to work at the scale that’s needed. That includes direct air capture [of carbon dioxide] and storage; bioenergy with carbon capture and storage; increasing uptake by the oceans of carbon dioxide, for example by iron fertilization; and enhancing natural weathering processes that remove carbon dioxide from the atmosphere.

    But there are potential consequences that need to be considered. Those include negative impacts on terrestrial biodiversity, and effects on marine biodiversity from ocean fertilization. As for sequestering carbon dioxide — how do you store it for hundreds of years or longer, and what are the consequences of rapid release from underground reservoirs? Also there’s potential competition for land [between bioenergy crops or planting trees] and food production, especially in the Global South.

    SN: Solar radiation modification is considered even more controversial, but some scientists are saying it should now be on the table (SN: 5/21/10). What type of solar radiation modification is the most viable, technologically?

    Camilloni: That’s an umbrella term for a variety of approaches that reduce the amount of incoming sunlight by reflecting it back to space.

    There’s increasing surface reflectivity, for example with reflective paints on structures, or planting more reflective crops (SN: 9/28/18). That reflects more solar radiation into space. It’s already being used in some cities, but it has a very local effect. Similarly, increasing the reflectivity of marine clouds — there were some experiments in Australia to try to protect the Great Barrier Reef, but it seems that the scale there is not global either.

    Another proposed strategy is to thin infrared-absorbing cirrus clouds — I don’t really know much about that or if it’s really possible. And there’s placing reflectors or shields in space to deflect incoming solar radiation; I also don’t really know if it’s possible to do that.

    Injecting aerosols into the stratosphere, to mimic the cooling effect of a volcanic eruption, is the most promising for a global impact. It’s not so challenging in terms of the technology. It’s the only way that we have identified that can cool the planet in a few years.

    SN: How soon could aerosol injection be used?

    Camilloni: We need at least 10 to 20 years before we can think of deployment. The limitation is that we need the aircraft that can fly at around 20 kilometers altitude. Those are already being designed, but we need about 10 years for those designs, and another 10 to build a fleet of them.

    SN: What are some of the ethical concerns around aerosol injection or other solar radiation modification technologies?

    Camilloni: These new technologies may be risky, with the potential to exacerbate climate problems or introduce new ones. There are potential risks of changing precipitation patterns, even overcooling in some regions. A key consideration in deciding whether to pursue them is the need for a full characterization of the positive and negative effects of the different technologies around the globe, and a comparison against the risk of not intervening.

    SN: In 2021, a research group at Harvard was barred from launching a balloon into the stratosphere to test equipment for possible future aerosol release. How might this report address similar studies?

    Camilloni: In our report, we want to make a distinction among the different types of research. You can have indoor research — simulations, social analysis — and this is not so controversial. When you consider outdoor research — releasing particles into the atmosphere — that is more controversial. We are calling for more indoor research. We need to understand the potential impacts.

    [For example,] I studied the impact of solar radiation modification on the hydrology of the La Plata Basin [which includes parts of southeastern Brazil, Bolivia, Paraguay, Uruguay and northeastern Argentina]. It’s the most populated region on the continent, and very relevant for hydropower production. And it’s already a region heavily impacted by climate change.

    However, that research was based on just one climate model. We need more — more resources, more capacity building in the Global South. My research group was the first to explore those impacts in Latin and South America. There are others who will be doing research on this over the next few months, but I can count those groups on one hand.

    We need more resources to be part of any discussion. Those resources include the Loss and Damage Fund to provide support to nations most vulnerable to the climate crisis [agreed to at the end of COP27 in 2022]. But nobody really knows now how that will be implemented.

    SN: The report’s release was timed to the start of COP28. What are you hoping that policymakers will take away from it over the next two weeks?

    Camilloni: These recommendations are really important to have in mind, of course. We need more research to make a decision about whether this is a good idea or a bad idea. And maybe people will cut emissions faster if they’re afraid of climate engineering.

  • in

    Scientists build tiny biological robots from human cells

    Researchers at Tufts University and Harvard University’s Wyss Institute have created tiny biological robots, which they call Anthrobots, from human tracheal cells. The bots can move across a surface and have been found to encourage the growth of neurons across a region of damage in a lab dish.
    The multicellular robots, ranging in size from the width of a human hair to the point of a sharpened pencil, were made to self-assemble and shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers’ vision to use patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.
    The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont in which they created multicellular biological robots from frog embryo cells called Xenobots, capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own. At the time, researchers did not know if these capabilities were dependent on their being derived from an amphibian embryo, or if biobots could be constructed from cells of other species.
    In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that bots can in fact be created from adult human cells without any genetic modification, and that they demonstrate some capabilities beyond what was observed with the Xenobots. The discovery starts to answer a broader question the lab has posed — what are the rules that govern how cells assemble and work together in the body, and can the cells be taken out of their natural context and recombined into different “body plans” to carry out other functions by design?
    In this case, researchers gave human cells, after decades of quiet life in the trachea, a chance to reboot and find ways of creating new structures and tasks. “We wanted to probe what cells can do besides create default features in the body,” said Gumuskaya, who earned a degree in architecture before coming into biology. “By reprogramming interactions between cells, new multicellular structures can be created, analogous to the way stone and brick can be arranged into different structural elements like walls, archways or columns.” The researchers found that not only could the cells create new multicellular shapes, but they could move in different ways over a surface of human neurons grown in a lab dish and encourage new growth to fill in gaps caused by scratching the layer of cells.
    Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a “superbot.”
    “The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body,” said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute. “It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage,” said Levin. “We’re now looking at how the healing mechanism works, and asking what else these constructs can do.”
    The advantages of using human cells include the ability to construct bots from a patient’s own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants. The Anthrobots last only a few weeks before breaking down, and so can easily be re-absorbed into the body after their work is done.

    In addition, outside of the body, Anthrobots can only survive in very specific laboratory conditions, and there is no risk of exposure or unintended spread outside the lab. Likewise, they do not reproduce, and they have no genetic edits, additions or deletions, so there is no risk of their evolving beyond existing safeguards.
    How Are Anthrobots Made?
    Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into air passages of the lung. We all experience the work of ciliated cells when we take the final step of expelling the particles and excess fluid by coughing or clearing our throats. Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.
    The researchers developed growth conditions that encouraged the cilia to face outward on the organoids. Within a few days, the organoids started moving around, driven by the cilia acting like oars. The researchers noted different shapes and types of movement — the first important feature observed of the biorobotics platform. Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, and travel to and perform functions in the body, or help build engineered tissues in the lab.
    The team, with the help of Simon Garnier at the New Jersey Institute of Technology, characterized the different types of Anthrobots that were produced. They observed that bots fell into a few discrete categories of shape and movement, ranging in size from 30 to 500 micrometers (from the thickness of a human hair to the point of a sharpened pencil), filling an important niche between nanotechnology and larger engineered devices.
    Some were spherical and fully covered in cilia, and some were irregular or football shaped with more patchy coverage of cilia, or just covered with cilia on one side. They traveled in straight lines, moved in tight circles, combined those movements, or just sat around and wiggled. The spherical ones fully covered with cilia tended to be wigglers. The Anthrobots with cilia distributed unevenly tended to move forward for longer stretches in straight or curved paths. They usually survived about 45-60 days in laboratory conditions before they naturally biodegraded.

    “Anthrobots self-assemble in the lab dish,” said Gumuskaya, who created the Anthrobots. “Unlike Xenobots, they don’t require tweezers or scalpels to give them shape, and we can use adult cells — even cells from elderly patients — instead of embryonic cells. It’s fully scalable — we can produce swarms of these bots in parallel, which is a good start for developing a therapeutic tool.”
    Little Healers
    Because Levin and Gumuskaya ultimately plan to make Anthrobots with therapeutic applications, they created a lab test to see how the bots might heal wounds. The model involved growing a two-dimensional layer of human neurons, and simply by scratching the layer with a thin metal rod, they created an open ‘wound’ devoid of cells.
    To ensure the gap would be exposed to a dense concentration of Anthrobots, the researchers created “superbots,” clusters that naturally form when the Anthrobots are confined to a small space. The superbots were made up primarily of circlers and wigglers, so they would not wander too far from the open wound.
    Although it might be expected that genetic modifications of Anthrobot cells would be needed to help the bots encourage neural growth, surprisingly the unmodified Anthrobots triggered substantial regrowth, creating a bridge of neurons as thick as the rest of the healthy cells on the plate. Neurons did not grow in the wound where Anthrobots were absent. At least in the simplified 2D world of the lab dish, the Anthrobot assemblies encouraged efficient healing of live neural tissue.
    According to the researchers, further development of the bots could lead to other applications, including clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues. The Anthrobots could in theory assist in healing tissues, while also laying down pro-regenerative drugs.
    Making New Blueprints, Restoring Old Ones
    Gumuskaya explained that cells have the innate ability to self-assemble into larger structures in certain fundamental ways. “The cells can form layers, fold, make spheres, sort and separate themselves by type, fuse together, or even move,” Gumuskaya said. “Two important differences from inanimate bricks are that cells can communicate with each other and create these structures dynamically, and each cell is programmed with many functions, like movement, secretion of molecules, detection of signals and more. We are just figuring out how to combine these elements to create new biological body plans and functions — different than those found in nature.”
    Taking advantage of the inherently flexible rules of cellular assembly helps the scientists construct the bots, but it can also help them understand how natural body plans assemble, how the genome and environment work together to create tissues, organs, and limbs, and how to restore them with regenerative treatments.

  • in

    Scientists use A.I.-generated images to map visual functions in the brain

    Researchers at Weill Cornell Medicine, Cornell Tech and Cornell’s Ithaca campus have demonstrated the use of AI-selected natural images and AI-generated synthetic images as neuroscientific tools for probing the visual processing areas of the brain. The goal is to apply a data-driven approach to understand how vision is organized while potentially removing biases that may arise when looking at responses to a more limited set of researcher-selected images.
    In the study, published Oct. 23 in Communications Biology, the researchers had volunteers look at images that had been selected or generated based on an AI model of the human visual system. The images were predicted to maximally activate several visual processing areas. Using functional magnetic resonance imaging (fMRI) to record the brain activity of the volunteers, the researchers found that the images did activate the target areas significantly better than control images.
    The researchers also showed that they could use this image-response data to tune their vision model for individual volunteers, so that images generated to be maximally activating for a particular individual worked better than images generated based on a general model.
    “We think this is a promising new approach to study the neuroscience of vision,” said study senior author Dr. Amy Kuceyeski, a professor of mathematics in radiology and of mathematics in neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.
    The study was a collaboration with the laboratory of Dr. Mert Sabuncu, a professor of electrical and computer engineering at Cornell Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine. The study’s first author was Dr. Zijin Gu, who was a doctoral student co-mentored by Dr. Sabuncu and Dr. Kuceyeski at the time of the study.
    Making an accurate model of the human visual system, in part by mapping brain responses to specific images, is one of the more ambitious goals of modern neuroscience. Researchers have found, for example, that one visual processing region may activate strongly in response to an image of a face whereas another may respond to a landscape. Scientists must rely mainly on non-invasive methods in pursuit of this goal, given the risk and difficulty of recording brain activity directly with implanted electrodes. The preferred non-invasive method is fMRI, which essentially records changes in blood flow in small vessels of the brain — an indirect measure of brain activity — as subjects are exposed to sensory stimuli or otherwise perform cognitive or physical tasks. An fMRI machine can read out these tiny changes in three dimensions across the brain, at a resolution on the order of cubic millimeters.
    For their own studies, Dr. Kuceyeski and Dr. Sabuncu and their teams used an existing dataset comprising tens of thousands of natural images, with corresponding fMRI responses from human subjects, to train an AI-type system called an artificial neural network (ANN) to model the human brain’s visual processing system. They then used this model to predict which images, across the dataset, should maximally activate several targeted vision areas of the brain. They also coupled the model with an AI-based image generator to generate synthetic images to accomplish the same task.
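The selection step described above can be sketched with a toy stand-in for the encoding model. The study’s actual model is a deep ANN trained on tens of thousands of natural images; the random linear readout below, with illustrative names and sizes, merely shows how a “predicted maximal activator” is picked from a fixed dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained encoding model: the predicted fMRI
# response of one target region is a linear readout of image features.
# (Illustrative only -- the study used a deep ANN, not this model.)
n_images, n_features = 1000, 64
features = rng.normal(size=(n_images, n_features))  # one row per image
readout = rng.normal(size=n_features)               # region weights

predicted = features @ readout    # predicted regional response per image
best = int(np.argmax(predicted))  # index of the "maximal activator"
print(best, float(predicted[best]))
```

An analogous search, run with the trained ANN over the real image set, yields the candidate images; coupling the model to an image generator replaces this argmax over a fixed set with optimization over synthetic images.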

    “Our general idea here has been to map and model the visual system in a systematic, unbiased way, in principle even using images that a person normally wouldn’t encounter,” Dr. Kuceyeski said.
    The researchers enrolled six volunteers and recorded their fMRI responses to these images, focusing on the responses in several visual processing areas. The results showed that, for both the natural images and the synthetic images, the predicted maximal activator images, on average across the subjects, did activate the targeted brain regions significantly more than a set of images that were selected or generated to be only average activators. This supports the general validity of the team’s ANN-based model and suggests that even synthetic images may be useful as probes for testing and improving such models.
    In a follow-on experiment, the team used the image and fMRI-response data from the first session to create separate ANN-based visual system models for each of the six subjects. They then used these individualized models to select or generate predicted maximal-activator images for each subject. The fMRI responses to these images showed that, at least for the synthetic images, there was greater activation of the targeted visual region, a face-processing region called FFA1, compared to the responses to images based on the group model. This result suggests that AI and fMRI can be useful for individualized visual-system modeling, for example to study differences in visual system organization across populations.
    The researchers are now running similar experiments using a more advanced version of the image generator, called Stable Diffusion.
    The same general approach could be useful in studying other senses such as hearing, they noted.
    Dr. Kuceyeski also hopes ultimately to study the therapeutic potential of this approach.
    “In principle, we could alter the connectivity between two parts of the brain using specifically designed stimuli, for example to weaken a connection that causes excess anxiety,” she said.

  • in

    2D material reshapes 3D electronics for AI hardware

    Multifunctional computer chips have evolved to do more with integrated sensors, processors, memory and other specialized components. However, as chips have expanded, the time required to move information between functional components has also grown.
    “Think of it like building a house,” said Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science at the McKelvey School of Engineering at Washington University in St. Louis. “You build out laterally and up vertically to get more function, more room to do more specialized activities, but then you have to spend more time moving or communicating between rooms.”
    To address this challenge, Bae and a team of international collaborators, including researchers from the Massachusetts Institute of Technology, Yonsei University, Inha University, Georgia Institute of Technology and the University of Notre Dame, demonstrated monolithic 3D integration of layered 2D material into novel processing hardware for artificial intelligence (AI) computing. They envision that their new approach will not only provide a material-level solution for fully integrating many functions into a single, small electronic chip, but also pave the way for advanced AI computing. Their work was published Nov. 27 in Nature Materials, where it was selected as a front cover article.
    The team’s monolithic 3D-integrated chip offers advantages over existing laterally integrated computer chips. The device contains six atomically thin 2D layers, each with its own function, and achieves significantly reduced processing time, power consumption, latency and footprint. This is accomplished through tightly packing the processing layers to ensure dense interlayer connectivity. As a result, the hardware offers unprecedented efficiency and performance in AI computing tasks.
    This discovery offers a novel solution to integrate electronics and also opens the door to a new era of multifunctional computing hardware. With ultimate parallelism at its core, this technology could dramatically expand the capabilities of AI systems, enabling them to handle complex tasks with lightning speed and exceptional accuracy, Bae said.
    “Monolithic 3D integration has the potential to reshape the entire electronics and computing industry by enabling the development of more compact, powerful and energy-efficient devices,” Bae said. “Atomically thin 2D materials are ideal for this, and my collaborators and I will continue improving this material until we can ultimately integrate all functional layers on a single chip.”
    Bae said these devices also are more flexible and functional, making them suitable for more applications.
    “From autonomous vehicles to medical diagnostics and data centers, the applications of this monolithic 3D integration technology are potentially boundless,” he said. “For example, in-sensor computing combines sensor and computer functions in one device, instead of a sensor obtaining information then transferring the data to a computer. That lets us obtain a signal and directly compute data, resulting in faster processing, less energy consumption and enhanced security because data isn’t being transferred.”

  • in

    Straining memory leads to new computing possibilities

    By strategically straining materials that are as thin as a single layer of atoms, University of Rochester scientists have developed a new form of computing memory that is at once fast, dense, and low-power. The researchers outline their new hybrid resistive switches in a study published in Nature Electronics.
    Developed in the lab of Stephen M. Wu, an assistant professor of electrical and computer engineering and of physics, the approach marries the best qualities of two existing forms of resistive switches used for memory: memristors and phase-change materials. Both forms have been explored for their advantages over today’s most prevalent forms of memory, including dynamic random access memory (DRAM) and flash memory, but have their drawbacks.
    Wu says that memristors, which operate by applying voltage to a thin filament between two electrodes, tend to suffer from a relative lack of reliability compared to other forms of memory. Meanwhile, phase-change materials, which involve selectively melting a material into either an amorphous state or a crystalline state, require too much power.
    “We’ve combined the idea of a memristor and a phase-change device in a way that can go beyond the limitations of either device,” says Wu. “We’re making a two-terminal memristor device, which drives one type of crystal to another type of crystal phase. Those two crystal phases have different resistances that you can then store as memory.”
    The key is leveraging 2D materials that can be strained to the point where they lie precariously between two different crystal phases and can be nudged in either direction with relatively little power.
    “We engineered it by essentially just stretching the material in one direction and compressing it in another,” says Wu. “By doing that, you enhance the performance by orders of magnitude. I see a path where this could end up in home computers as a form of memory that’s ultra-fast and ultra-efficient. That could have big implications for computing in general.”
    Wu and his team of graduate students conducted the experimental work and partnered with researchers from Rochester’s Department of Mechanical Engineering, including assistant professors Hesam Askari and Sobhit Singh, to identify where and how to strain the material. According to Wu, the biggest hurdle remaining to making the phase-change memristors is continuing to improve their overall reliability — but he is nonetheless encouraged by the team’s progress to date.

  • in

    Researchers show an old law still holds for quirky quantum materials

    Long before researchers discovered the electron and its role in generating electrical current, they knew about electricity and were exploring its potential. One thing they learned early on was that metals were great conductors of both electricity and heat.
    And in 1853, two scientists showed that those two admirable properties of metals were somehow related: At any given temperature, the ratio of electronic conductivity to thermal conductivity was roughly the same in any metal they tested. This so-called Wiedemann-Franz law has held ever since — except in quantum materials, where electrons stop behaving as individual particles and glom together into a sort of electron soup. Experimental measurements have indicated that the 170-year-old law breaks down in these quantum materials, and by quite a bit.
    Now, a theoretical argument put forth by physicists at the Department of Energy’s SLAC National Accelerator Laboratory, Stanford University and the University of Illinois suggests that the law should, in fact, approximately hold for one type of quantum material — the copper oxide superconductors, or cuprates, which conduct electricity with no loss at relatively high temperatures.
    In a paper published in Science today, they propose that the Wiedemann-Franz law should still roughly hold if one considers only the electrons in cuprates. They suggest that other factors, such as vibrations in the material’s atomic latticework, must account for experimental results that make it look like the law does not apply.
    This surprising result is important to understanding unconventional superconductors and other quantum materials, said Wen Wang, lead author of the paper and a PhD student with the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC.
    “The original law was developed for materials where electrons interact with each other weakly and behave like little balls that bounce off defects in the material’s lattice,” Wang said. “We wanted to test the law theoretically in systems where neither of these things was true.”
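The ratio the law fixes can be stated concretely: for weakly interacting electrons, the Wiedemann-Franz law says the ratio of thermal to electrical conductivity, κ/(σT), equals a universal constant, the Lorenz number L0 = (π²/3)(k_B/e)². A minimal numerical check from the SI-defined constants (nothing here uses the paper’s simulations):

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact SI value)

def lorenz_number():
    """Ideal Wiedemann-Franz ratio L0 = kappa / (sigma * T)."""
    return (math.pi ** 2 / 3) * (K_B / E_CHARGE) ** 2

print(f"L0 = {lorenz_number():.4e} W*Ohm/K^2")  # L0 = 2.4430e-08 W*Ohm/K^2
```

Experiments on cuprates report large deviations from this value; the SIMES result says the electron-only contribution in the Hubbard model still approaches L0, pointing to phonons as the source of the discrepancy.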
    Peeling a quantum onion
    Superconducting materials, which carry electric current without resistance, were discovered in 1911. But they operated at such extremely low temperatures that their usefulness was quite limited.

    That changed in 1986, when the first family of so-called high-temperature or unconventional superconductors — the cuprates — was discovered. Although cuprates still require extremely cold conditions to work their magic, their discovery raised hopes that superconductors could someday work at much closer to room temperature — making revolutionary technologies like no-loss power lines possible.
    After nearly four decades of research, that goal is still elusive, although a lot of progress has been made in understanding the conditions in which superconducting states flip in and out of existence.
    Theoretical studies, performed with the help of powerful supercomputers, have been essential for interpreting the results of experiments on these materials and for understanding and predicting phenomena that are out of experimental reach.
    For this study, the SIMES team ran simulations based on what’s known as the Hubbard model, which has become an essential tool for simulating and describing systems where electrons stop acting independently and join forces to produce unexpected phenomena.
    The results show that when you only take electron transport into account, the ratio of electronic conductivity to thermal conductivity approaches what the Wiedemann-Franz law predicts, Wang said. “So, the discrepancies that have been seen in experiments should be coming from other things like phonons, or lattice vibrations, that are not in the Hubbard model,” she said.
    SIMES staff scientist and paper co-author Brian Moritz said that although the study did not investigate how vibrations cause the discrepancies, “somehow the system still knows that there is this correspondence between charge and heat transport amongst the electrons. That was the most surprising result.”
    From here, he added, “maybe we can peel the onion to understand a little bit more.”
    Major funding for this study came from the DOE Office of Science. Computational work was carried out at Stanford University and on resources of the National Energy Research Scientific Computing Center, which is a DOE Office of Science user facility.

  • in

    Researchers develop novel deep learning-based detection system for autonomous vehicles

    Autonomous vehicles hold the promise of tackling traffic congestion, enhancing traffic flow through vehicle-to-vehicle communication, and revolutionizing the travel experience by offering comfortable and safe journeys. Additionally, integrating autonomous driving technology into electric vehicles could contribute to more eco-friendly transportation solutions.
    A critical requirement for the success of autonomous vehicles is their ability to detect and navigate around obstacles, pedestrians, and other vehicles across diverse environments. Current autonomous vehicles employ smart sensors such as LiDAR (Light Detection and Ranging) for a 3D view of the surroundings and depth information, RADAR (Radio Detection and Ranging) for detecting objects at night and in cloudy weather, and a set of cameras for providing RGB images and a 360-degree view, which together produce a comprehensive dataset known as a point cloud. However, these sensors often face challenges like reduced detection capabilities in adverse weather, on unstructured roads, or due to occlusion.
    To overcome these shortcomings, an international team of researchers led by Professor Gwanggil Jeon from the Department of Embedded Systems Engineering at Incheon National University (INU), Korea, has recently developed a groundbreaking Internet-of-Things-enabled deep learning-based end-to-end 3D object detection system. “Our proposed system operates in real time, enhancing the object detection capabilities of autonomous vehicles, making navigation through traffic smoother and safer,” explains Prof. Jeon. Their paper was made available online on October 17, 2022, and published in Volume 24, Issue 11 of the journal IEEE Transactions on Intelligent Transportation Systems in November 2023.
    The proposed system is built on YOLOv3 (You Only Look Once, version 3), a state-of-the-art deep learning technique for 2D visual object detection. The researchers first used the model for 2D object detection and then modified it to detect 3D objects. Using both point cloud data and RGB images as input, the system generates bounding boxes with confidence scores and labels for visible obstacles as output.
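The paper’s pipeline itself is not reproduced here, but the bounding boxes it outputs are typically scored against ground truth with intersection over union (IoU), the standard overlap metric for YOLO-style detectors. A minimal sketch of that computation (the corner-coordinate box format and the example boxes are illustrative assumptions, not data from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned 2D boxes.

    Boxes are (x_min, y_min, x_max, y_max) -- an illustrative
    convention, not necessarily the format used in the paper.
    """
    # Corners of the intersection rectangle
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct when IoU exceeds a chosen threshold.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50 / union 150 = 0.333...
```

Accuracy figures like those reported below are conventionally computed by counting detections whose IoU with a ground-truth box clears such a threshold.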
    To assess the system’s performance, the team conducted experiments using the Lyft dataset, which consisted of road information captured by 20 autonomous vehicles traveling a predetermined route in Palo Alto, California, over a four-month period. The results demonstrated that YOLOv3 exhibits high accuracy, surpassing other state-of-the-art architectures. Notably, the overall accuracies for 2D and 3D object detection were an impressive 96% and 97%, respectively.
    Prof. Jeon emphasizes the potential impact of this enhanced detection capability: “By improving detection capabilities, this system could propel autonomous vehicles into the mainstream. The introduction of autonomous vehicles has the potential to transform the transportation and logistics industry, offering economic benefits through reduced dependence on human drivers and the introduction of more efficient transportation methods.”
    Furthermore, the present work is expected to drive research and development in various technological fields such as sensors, robotics, and artificial intelligence. Going ahead, the team aims to explore additional deep learning algorithms for 3D object detection, recognizing the current focus on 2D image development.
    In summary, this groundbreaking study could pave the way for widespread adoption of autonomous vehicles and, in turn, a more environment-friendly and comfortable mode of transport.