More stories

  • Realistic simulated driving environment based on ‘crash-prone’ Michigan intersection

    The first statistically realistic roadway simulation has been developed by researchers at the University of Michigan. While it currently represents a particularly perilous roundabout, future work will expand it to include other driving situations for testing autonomous vehicle software.
    The simulation is a machine-learning model that trained on data collected at a roundabout on the south side of Ann Arbor, recognized as one of the most crash-prone intersections in the state of Michigan and conveniently just a few miles from the offices of the research team.
    Known as the Neural Naturalistic Driving Environment or NeuralNDE, it turned that data into a simulation of what drivers experience every day. Virtual roadways like this are needed to ensure the safety of autonomous vehicle software before other cars, cyclists and pedestrians ever cross its path.
    “The NeuralNDE reproduces the driving environment and, more importantly, realistically simulates these safety-critical situations so we can evaluate the safety performance of autonomous vehicles,” said Henry Liu, U-M professor of civil engineering and director of Mcity, a U-M-led public-private mobility research partnership.
    Liu is also director of the Center for Connected and Automated Transportation (CCAT) and corresponding author of the study in Nature Communications.
    Safety-critical events, which require a driver to make split-second decisions and take action, don’t happen that often. Drivers can go many hours between events that force them to slam on the brakes or swerve to avoid a collision, and each event has its own unique circumstances.

    Together, these represent two bottlenecks in the effort to simulate our roadways, known as the “curse of rarity” and the “curse of dimensionality” respectively. The curse of dimensionality is caused by the complexity of the driving environment, which includes factors like pavement quality, the current weather conditions, and the different types of road users including pedestrians and bicyclists.
    To model it all, the team tried to see it all. They installed sensor systems on light poles that continuously collect data at the State Street/Ellsworth Road roundabout.
    “The reason that we chose that location is that roundabouts are a very challenging, urban driving scenario for autonomous vehicles. In a roundabout, drivers are required to spontaneously negotiate and cooperate with other drivers moving through the intersection. In addition, this particular roundabout experiences high traffic volume and is two lanes, which adds to its complexity,” said Xintao Yan, a Ph.D. student in civil and environmental engineering and first author of the study, who is advised by Liu.
    The NeuralNDE serves as a key component of the CCAT Safe AI Framework for Trustworthy Edge Scenario Tests, or SAFE TEST, a system developed by Liu’s team that uses artificial intelligence to reduce the testing miles required to ensure the safety of autonomous vehicles by 99.99%. It essentially breaks the “curse of rarity,” introducing safety-critical incidents a thousand times more frequently than they occur in real driving. The NeuralNDE is also critical to a project designed to enable the Mcity Test Facility to be used for remote testing of AV software.
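    The article says SAFE TEST introduces safety-critical incidents roughly a thousand times more often than they occur in real driving. One generic way to oversample rare events without biasing the resulting crash-rate estimate is importance sampling, sketched below in Python. This is an illustration of the statistical idea only, not the SAFE TEST implementation; the scenario names, frequencies and crash probabilities are invented.

    ```python
    import random

    # Hypothetical scenario catalogue: (probability in natural driving, crash probability given the scenario).
    # These numbers are made up for illustration.
    SCENARIOS = {
        "routine_merge": (0.9990, 1e-7),
        "late_yield":    (0.0009, 1e-3),
        "sudden_cut_in": (0.0001, 1e-2),
    }

    def natural_estimate(n):
        """Monte Carlo crash-rate estimate, sampling scenarios at their natural frequency."""
        crashes = 0
        for _ in range(n):
            name = random.choices(list(SCENARIOS), weights=[p for p, _ in SCENARIOS.values()])[0]
            crashes += random.random() < SCENARIOS[name][1]
        return crashes / n

    def importance_estimate(n, boost=1000.0):
        """Oversample rare scenarios, then reweight each outcome by its likelihood ratio
        so the crash-rate estimate stays unbiased while needing far fewer simulated miles."""
        natural = {k: p for k, (p, _) in SCENARIOS.items()}
        biased = {k: (p * boost if p < 0.01 else p) for k, p in natural.items()}
        total_weight = sum(biased.values())
        biased = {k: v / total_weight for k, v in biased.items()}

        total = 0.0
        for _ in range(n):
            name = random.choices(list(biased), weights=list(biased.values()))[0]
            crash = random.random() < SCENARIOS[name][1]
            total += crash * (natural[name] / biased[name])   # likelihood-ratio weight
        return total / n

    print(natural_estimate(100_000), importance_estimate(100_000))
    ```

    With the boosted sampler, the rare scenarios dominate the test budget, which is the sense in which far fewer total miles are needed to observe enough safety-critical behavior.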
    But unlike a fully virtual environment, these tests take place in mixed reality on closed test tracks such as the Mcity Test Facility and the American Center for Mobility in Ypsilanti, Michigan. In addition to the real conditions of the track, the autonomous vehicles also experience virtual drivers, cyclists and pedestrians behaving in both safe and dangerous ways. By testing these scenarios in a controlled environment, AV developers can fine-tune their systems to better handle all driving situations.
    The NeuralNDE is not only beneficial for AV developers but also for researchers studying human driver behavior. The simulation provides insight into how drivers respond to different scenarios, which can help develop more functional road infrastructure.
    In 2021, the U-M Transportation Research Institute was awarded $9.95 million in funding by the U.S. Department of Transportation to expand the number of intersections equipped with these sensors to 21. This implementation will expand the capabilities of the NeuralNDE and provide real-time alerts to drivers with connected vehicles.
    The research was funded by Mcity, CCAT and the U-M Transportation Research Institute. Founded in 1965, UMTRI is a global leader in multidisciplinary research and a partner of choice for industry leaders, foundations and government agencies to advance safe, equitable and efficient transportation and mobility. CCAT is a regional university transportation research center that was recently awarded a $15 million, five-year renewal by the USDOT.

  • Brain activity decoder can reveal stories in people’s minds

    A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
    The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.
    Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.
    “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
    The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
    For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!'” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.'”
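    Decoders of this kind typically do not read out words directly; they generate candidate word sequences and keep the candidates whose predicted brain responses best match the recorded fMRI signal. The sketch below illustrates that candidate-scoring idea in a highly simplified form. The `propose_continuations` and `encoding_model` arguments are hypothetical placeholders (a generative language model and a per-subject model predicting voxel activity from text); this is not the authors' actual pipeline.

    ```python
    import numpy as np

    def score(candidate_text, measured_response, encoding_model):
        """Higher when the predicted brain response for this text matches the scan."""
        predicted = encoding_model(candidate_text)             # predicted voxel activity (array)
        return -np.linalg.norm(predicted - measured_response)  # negative distance = similarity

    def decode_step(beam, measured_response, propose_continuations, encoding_model, width=5):
        """One step of beam search: extend each candidate transcript by a few words and keep
        the `width` extensions whose predicted responses best match the fMRI data."""
        candidates = []
        for text in beam:
            for continuation in propose_continuations(text):
                extended = text + " " + continuation
                candidates.append((score(extended, measured_response, encoding_model), extended))
        candidates.sort(reverse=True)
        return [text for _, text in candidates[:width]]
    ```

    Repeating `decode_step` over the scan produces a continuous, gist-level transcript rather than a word-for-word one, consistent with the behavior described above.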
    Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.

    “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
    In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.
    The system currently is not practical for use outside of the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
    “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth said. “So, our exact kind of approach should translate to fNIRS,” although, he noted, the resolution with fNIRS would be lower.
    This work was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.
    The study’s other co-authors are Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin.
    Alexander Huth and Jerry Tang have filed a PCT patent application related to this work.

  • Researchers explore why some people get motion sick playing VR games while others don’t

    The way our senses adjust during high-intensity virtual reality games plays a critical role in explaining why some people experience severe cybersickness and others don’t.
    Cybersickness is a form of motion sickness that occurs from exposure to immersive VR and augmented reality applications.
    A new study, led by researchers at the University of Waterloo, found that the subjective visual vertical — a measure of how individuals perceive the orientation of vertical lines — shifted considerably after participants played a high-intensity VR game.
    “Our findings suggest that the severity of a person’s cybersickness is affected by how our senses adjust to the conflict between reality and virtual reality,” said Michael Barnett-Cowan, a professor in the Department of Kinesiology and Health Sciences. “This knowledge could be invaluable for developers and designers of VR experiences, enabling them to create more comfortable and enjoyable environments for users.”
    The researchers collected data from 31 participants. They assessed their perceptions of the vertical before and after playing two VR games, one high-intensity and one low-intensity.
    Those who experienced less sickness were more likely to have the largest change in the subjective visual vertical following exposure to VR, particularly at a high intensity. Conversely, those who had the highest levels of cybersickness were less likely to have changed how they perceived vertical lines. There were no significant differences between males and females, nor between participants with low and high gaming experience.
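    The core analysis is a before/after comparison: measure each participant's subjective visual vertical (SVV) before and after VR exposure, then relate the size of the shift to their cybersickness severity. The sketch below shows that kind of analysis in Python with fabricated placeholder numbers; it is not the Waterloo study's data or code.

    ```python
    import numpy as np

    # Fabricated example measurements for five participants (the study had 31).
    svv_before = np.array([1.2, -0.5, 0.8, 2.0, -1.1])    # degrees from true vertical, pre-VR
    svv_after  = np.array([4.0,  0.2, 3.1, 5.5, -0.9])    # degrees, after high-intensity VR
    sickness   = np.array([10.0, 45.0, 15.0, 5.0, 60.0])  # e.g. simulator sickness questionnaire score

    svv_shift = np.abs(svv_after - svv_before)             # magnitude of sensory recalibration
    r = np.corrcoef(svv_shift, sickness)[0, 1]
    print(f"Pearson r between SVV shift and sickness severity: {r:.2f}")
    # A negative r would match the reported pattern: larger SVV shifts, milder symptoms.
    ```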
    “While the subjective visual vertical task significantly predicted the severity of cybersickness symptoms, there is still much to be explained,” said co-author William Chung, a former Waterloo doctoral student who is now a postdoctoral fellow at the Toronto Rehabilitation Institute.
    “By understanding the relationship between sensory reweighting and cybersickness susceptibility, we can potentially develop personalized cybersickness mitigation strategies and VR experiences that take into account individual differences in sensory processing and hopefully lower the occurrence of cybersickness.”
    As VR continues to revolutionize gaming, education and social interaction, addressing the pervasive issue of cybersickness — marked by symptoms such as nausea, disorientation, eye strain and fatigue — is critical for ensuring a positive user experience.

  • Satellite data reveal nearly 20,000 previously unknown deep-sea mountains

    The number of known mountains in Earth’s oceans has roughly doubled. Global satellite observations have revealed nearly 20,000 previously unknown seamounts, researchers report in the April Earth and Space Science.

    Just as mountains tower over Earth’s surface, seamounts also rise above the ocean floor. The tallest mountain on Earth, as measured from base to peak, is Mauna Kea, which is part of the Hawaiian-Emperor Seamount Chain.

    These underwater edifices are often hot spots of marine biodiversity (SN: 10/7/16). That’s in part because their craggy walls — formed from volcanic activity — provide a plethora of habitats. Seamounts also promote upwelling of nutrient-rich water, which distributes beneficial compounds like nitrates and phosphates throughout the water column. They’re like “stirring rods in the ocean,” says David Sandwell, a geophysicist at the Scripps Institution of Oceanography at the University of California, San Diego.

    More than 24,600 seamounts have been previously mapped. One common way of finding these hidden mountains is to ping the seafloor with sonar (SN: 4/16/21). But that’s an expensive, time-intensive process that requires a ship. Only about 20 percent of the ocean has been mapped that way, says Scripps earth scientist Julie Gevorgian. “There are a lot of gaps.”

    So Gevorgian, Sandwell and their colleagues turned to satellite observations, which provide global coverage of the world’s oceans, to take a census of seamounts.

    The team pored over satellite measurements of the height of the sea surface. The researchers looked for centimeter-scale bumps caused by the gravitational influence of a seamount. Because rock is denser than water, the presence of a seamount slightly changes the Earth’s gravitational field at that spot. “There’s an extra gravitational attraction,” Sandwell says, that causes water to pile up above the seamount.
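    A toy version of this detection idea: treat the gridded sea-surface height as a field, remove the smooth regional background, and flag short-wavelength bumps of a centimeter or two. The sketch below uses a synthetic grid and arbitrary thresholds; it only illustrates the principle, not the study's processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ssh = rng.normal(0.0, 0.005, size=(200, 200))      # sea-surface height anomaly, metres
    ssh[120:125, 80:85] += 0.03                        # plant a ~3 cm bump over a fake "seamount"

    def local_mean(field, k=15):
        """Boxcar-smoothed background, used to isolate short-wavelength bumps."""
        pad = k // 2
        padded = np.pad(field, pad, mode="edge")
        out = np.zeros_like(field)
        for i in range(field.shape[0]):
            for j in range(field.shape[1]):
                out[i, j] = padded[i:i + k, j:j + k].mean()
        return out

    bumps = ssh - local_mean(ssh)                      # high-pass: remove the regional trend
    candidates = np.argwhere(bumps > 0.02)             # flag bumps taller than ~2 cm
    print(f"{len(candidates)} grid cells flagged as possible seamount signals")
    ```

    In practice the real analysis works from global altimetry-derived gravity grids and validates candidates against sonar maps, as described below.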

    Using that technique, the team spotted 19,325 previously unknown seamounts. The researchers compared some of their observations with sonar maps of the seafloor to confirm that the newly discovered seamounts were likely real. Most of the newly discovered underwater mountains are on the small side — between roughly 700 and 2,500 meters tall, the researchers estimate.

    However, it’s possible that some could pose a risk to mariners. “There’s a point when they’re shallow enough that they’re within the depth range of submarines,” says David Clague, a marine geologist at the Monterey Bay Aquarium Research Institute in Moss Landing, Calif., who was not involved in the research. In 2021, the USS Connecticut, a nuclear submarine, ran into an uncharted seamount in the South China Sea. The vessel is still undergoing repairs at a shipyard in Washington state.

  • Structured exploration allows biological brains to learn faster than AI

    Neuroscientists have uncovered how exploratory actions enable animals to learn their spatial environment more efficiently. Their findings could help build better AI agents that can learn faster and require less experience.
    Researchers at the Sainsbury Wellcome Centre and Gatsby Computational Neuroscience Unit at UCL found the instinctual exploratory runs that animals carry out are not random. These purposeful actions allow mice to learn a map of the world efficiently. The study, published today in Neuron, describes how neuroscientists tested their hypothesis that the specific exploratory actions that animals undertake, such as darting quickly towards objects, are important in helping them learn how to navigate their environment.
    “There are a lot of theories in psychology about how performing certain actions facilitates learning. In this study, we tested whether simply observing obstacles in an environment was enough to learn about them, or if purposeful, sensory-guided actions help animals build a cognitive map of the world,” said Professor Tiago Branco, Group Leader at the Sainsbury Wellcome Centre and corresponding author on the paper.
    In previous work, scientists at SWC observed a correlation between how well animals learn to go around an obstacle and the number of times they had run to the object. In this study, Philip Shamash, SWC PhD student and first author of the paper, carried out experiments to test the impact of preventing animals from performing exploratory runs. By expressing a light-activated protein called channelrhodopsin in one part of the motor cortex, Philip was able to use optogenetic tools to prevent animals from initiating exploratory runs towards obstacles.
    The team found that even though mice had spent a lot of time observing and sniffing obstacles, if they were prevented from running towards them, they did not learn. This shows that the instinctive exploratory actions themselves help the animals learn a map of their environment.
    To explore the algorithms that the brain might be using to learn, the team worked with Sebastian Lee, a PhD student in Andrew Saxe’s lab at SWC, to run different models of reinforcement learning that people have developed for artificial agents, and observe which one most closely reproduces the mouse behaviour.
    There are two main classes of reinforcement learning models: model-free and model-based. The team found that under some conditions mice act in a model-free way, but under other conditions they seem to have a model of the world. And so the researchers implemented an agent that can arbitrate between model-free and model-based strategies. This is not necessarily how the mouse brain works, but it helped them to understand what is required in a learning algorithm to explain the behaviour.
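    A minimal sketch of such an arbitrating agent is shown below: a tabular Q-learner (model-free) alongside a learned one-step transition model it can plan with (model-based), plus a switch deciding which to trust. This is a generic illustration of the idea, not the implementation used in the study.

    ```python
    import random

    class HybridAgent:
        def __init__(self, actions, alpha=0.1, gamma=0.9):
            self.actions = actions
            self.q = {}                 # model-free cache: state -> {action: value}
            self.model = {}             # model-based knowledge: (state, action) -> next state
            self.alpha, self.gamma = alpha, gamma

        def act(self, state, use_model):
            """Arbitrate: plan one step ahead with the learned model when it is trusted,
            otherwise fall back on cached (model-free) action values."""
            values = self.q.get(state, {})
            if use_model:
                best, best_v = None, float("-inf")
                for a in self.actions:
                    nxt = self.model.get((state, a))
                    v = max(self.q.get(nxt, {}).values(), default=0.0) if nxt else 0.0
                    if v > best_v:
                        best, best_v = a, v
                return best
            return max(values, key=values.get) if values else random.choice(self.actions)

        def update(self, state, action, reward, next_state):
            """Standard Q-learning update plus a one-step transition model."""
            self.model[(state, action)] = next_state
            q_sa = self.q.setdefault(state, {}).setdefault(action, 0.0)
            target = reward + self.gamma * max(self.q.get(next_state, {}).values(), default=0.0)
            self.q[state][action] = q_sa + self.alpha * (target - q_sa)

    # Tiny usage example with hypothetical states and actions.
    agent = HybridAgent(actions=["left", "right", "forward"])
    a = agent.act("start", use_model=False)
    agent.update("start", a, reward=0.0, next_state="corridor")
    ```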
    “One of the problems with artificial intelligence is that agents need a lot of experience in order to learn something. They have to explore the environment thousands of times, whereas a real animal can learn an environment in less than ten minutes. We think this is in part because, unlike artificial agents, animals’ exploration is not random and instead focuses on salient objects. This kind of directed exploration makes the learning more efficient and so they need less experience to learn,” explained Professor Branco.
    The next steps for the researchers are to explore the link between the execution of exploratory actions and the representation of subgoals. The team are now carrying out recordings in the brain to discover which areas are involved in representing subgoals and how the exploratory actions lead to the formation of the representations.
    This research was funded by a Wellcome Senior Research Fellowship (214352/Z/18/Z) and by the Sainsbury Wellcome Centre Core Grant from the Gatsby Charitable Foundation and Wellcome (090843/F/09/Z), the Sainsbury Wellcome Centre PhD Programme and a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z/19/Z).

  • Engineers ‘grow’ atomically thin transistors on top of computer chips

    Emerging AI applications, like chatbots that generate natural human language, demand denser, more powerful computer chips. But semiconductor chips are traditionally made with bulk materials, which are boxy 3D structures, so stacking multiple layers of transistors to create denser integrations is very difficult.
    However, semiconductor transistors made from ultrathin 2D materials, each only about three atoms in thickness, could be stacked up to create more powerful chips. To this end, MIT researchers have now demonstrated a novel technology that can effectively and efficiently “grow” layers of 2D transition metal dichalcogenide (TMD) materials directly on top of a fully fabricated silicon chip to enable denser integrations.
    Growing 2D materials directly onto a silicon CMOS wafer has posed a major challenge because the process usually requires temperatures of about 600 degrees Celsius, while silicon transistors and circuits could break down when heated above 400 degrees. Now, the interdisciplinary team of MIT researchers has developed a low-temperature growth process that does not damage the chip. The technology allows 2D semiconductor transistors to be directly integrated on top of standard silicon circuits.
    In the past, researchers have grown 2D materials elsewhere and then transferred them onto a chip or a wafer. This often causes imperfections that hamper the performance of the final devices and circuits. Also, transferring the material smoothly becomes extremely difficult at wafer-scale. By contrast, this new process grows a smooth, highly uniform layer across an entire 8-inch wafer.
    The new technology is also able to significantly reduce the time it takes to grow these materials. While previous approaches required more than a day to grow a single layer of 2D materials, the new approach can grow a uniform layer of TMD material in less than an hour over entire 8-inch wafers.
    Due to its rapid speed and high uniformity, the new technology enabled the researchers to successfully integrate a 2D material layer onto much larger surfaces than has been previously demonstrated. This makes their method better-suited for use in commercial applications, where wafers that are 8 inches or larger are key.

    “Using 2D materials is a powerful way to increase the density of an integrated circuit. What we are doing is like constructing a multistory building. If you have only one floor, which is the conventional case, it won’t hold many people. But with more floors, the building will hold more people that can enable amazing new things. Thanks to the heterogenous integration we are working on, we have silicon as the first floor and then we can have many floors of 2D materials directly integrated on top,” says Jiadi Zhu, an electrical engineering and computer science graduate student and co-lead author of a paper on this new technique.
    Zhu wrote the paper with co-lead author Ji-Hoon Park, an MIT postdoc; corresponding authors Jing Kong, professor of electrical engineering and computer science (EECS) and a member of the Research Laboratory of Electronics; and Tomás Palacios, professor of EECS and director of the Microsystems Technology Laboratories (MTL); as well as others at MIT, MIT Lincoln Laboratory, Oak Ridge National Laboratory, and Ericsson Research. The paper appears today in Nature Nanotechnology.
    Slim materials with vast potential
    The 2D material the researchers focused on, molybdenum disulfide, is flexible, transparent, and exhibits powerful electronic and photonic properties that make it ideal for a semiconductor transistor. It is composed of a one-atom-thick layer of molybdenum sandwiched between two layers of sulfur atoms.
    Growing thin films of molybdenum disulfide on a surface with good uniformity is often accomplished through a process known as metal-organic chemical vapor deposition (MOCVD). Molybdenum hexacarbonyl and diethyl sulfide, two organic chemical compounds that contain molybdenum and sulfur atoms, vaporize and are heated inside the reaction chamber, where they “decompose” into smaller molecules. Then they link up through chemical reactions to form chains of molybdenum disulfide on a surface.

    But decomposing these molybdenum and sulfur compounds, which are known as precursors, requires temperatures above 550 degrees Celsius, while silicon circuits start to degrade when temperatures surpass 400 degrees.
    So, the researchers started by thinking outside the box — they designed and built an entirely new furnace for the metal-organic chemical vapor deposition process.
    The oven consists of two chambers: a low-temperature region in the front, where the silicon wafer is placed, and a high-temperature region in the back. Vaporized molybdenum and sulfur precursors are pumped into the furnace. The molybdenum precursor stays in the low-temperature region, where the temperature is kept below 400 degrees Celsius — hot enough to decompose the molybdenum precursor but not so hot that it damages the silicon chip.
    The sulfur precursor flows through into the high-temperature region, where it decomposes. Then it flows back into the low-temperature region, where the chemical reaction to grow molybdenum disulfide on the surface of the wafer occurs.
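    The thermal budget is the crux of the design: the zone holding the wafer must stay below roughly 400 degrees Celsius, while the sulfur precursor needs a much hotter zone to decompose. The snippet below simply encodes that constraint check; the specific setpoints are illustrative placeholders, not the team's actual recipe.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ZoneRecipe:
        name: str
        temperature_c: float
        holds_wafer: bool

    SILICON_DAMAGE_LIMIT_C = 400      # CMOS circuits degrade above this (per the article)
    SULFUR_DECOMPOSITION_MIN_C = 550  # precursor decomposition needs temperatures above this

    # Hypothetical setpoints for the two-zone furnace described above.
    recipe = [
        ZoneRecipe("front: low-temperature zone (wafer + Mo precursor)", 380, holds_wafer=True),
        ZoneRecipe("back: high-temperature zone (sulfur precursor)", 600, holds_wafer=False),
    ]

    for zone in recipe:
        if zone.holds_wafer:
            assert zone.temperature_c < SILICON_DAMAGE_LIMIT_C, f"{zone.name} would damage the chip"
        else:
            assert zone.temperature_c > SULFUR_DECOMPOSITION_MIN_C, f"{zone.name} too cool to crack the precursor"
    print("Recipe satisfies both thermal constraints")
    ```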
    “You can think about decomposition like making black pepper — you have a whole peppercorn and you grind it into a powder form. So, we smash and grind the pepper in the high-temperature region, then the powder flows back into the low-temperature region,” Zhu explains.
    Faster growth and better uniformity
    One problem with this process is that silicon circuits typically have aluminum or copper as a top layer so the chip can be connected to a package or carrier before it is mounted onto a printed circuit board. But sulfur causes these metals to sulfurize, the same way some metals rust when exposed to oxygen, which destroys their conductivity. The researchers prevented sulfurization by first depositing a very thin layer of passivation material on top of the chip. Then later they could open the passivation layer to make connections.
    They also placed the silicon wafer into the low-temperature region of the furnace vertically, rather than horizontally. By placing it vertically, neither end is too close to the high-temperature region, so no part of the wafer is damaged by the heat. Plus, the molybdenum and sulfur gas molecules swirl around as they bump into the vertical chip, rather than flowing over a horizontal surface. This circulation effect improves the growth of molybdenum disulfide and leads to better material uniformity.
    In addition to yielding a more uniform layer, their method was also much faster than other MOCVD processes. They could grow a layer in less than an hour, while typically the MOCVD growth process takes at least an entire day.
    Using the state-of-the-art MIT.Nano facilities, they were able to demonstrate high material uniformity and quality across an 8-inch silicon wafer, which is especially important for industrial applications where bigger wafers are needed.
    “By shortening the growth time, the process is much more efficient and could be more easily integrated into industrial fabrications. Plus, this is a silicon-compatible low-temperature process, which can be useful to push 2D materials further into the semiconductor industry,” Zhu says.
    In the future, the researchers want to fine-tune their technique and use it to grow many stacked layers of 2D transistors. In addition, they want to explore the use of the low-temperature growth process for flexible surfaces, like polymers, textiles, or even papers. This could enable the integration of semiconductors onto everyday objects like clothing or notebooks.
    This work is partially funded by the MIT Institute for Soldier Nanotechnologies, the National Science Foundation Center for Integrated Quantum Materials, Ericsson, MITRE, the U.S. Army Research Office, and the U.S. Department of Energy. The project also benefitted from the support of TSMC University Shuttle.

  • Study finds ChatGPT outperforms physicians in providing high-quality, empathetic advice to patient questions

    There has been widespread speculation about how advances in artificial intelligence (AI) assistants like ChatGPT could be used in medicine.
    A new study published in JAMA Internal Medicine led by Dr. John W. Ayers from the Qualcomm Institute within the University of California San Diego provides an early glimpse into the role that AI assistants could play in medicine. The study compared written responses from physicians and those from ChatGPT to real-world health questions. A panel of licensed healthcare professionals preferred ChatGPT’s responses 79% of the time and rated ChatGPT’s responses as higher quality and more empathetic.
    “The opportunities for improving healthcare with AI are massive,” said Ayers, who is also vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Disease and Global Public Health. “AI-augmented care is the future of medicine.”
    Is ChatGPT Ready for Healthcare?
    In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors? If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients and ease the ever-increasing burden on physicians.
    “ChatGPT might be able to pass a medical licensing exam,” said study co-author Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute and professor at the UC San Diego School of Medicine, “but directly answering patient questions accurately and empathetically is a different ballgame.”
    “The COVID-19 pandemic accelerated virtual healthcare adoption,” added study co-author Dr. Eric Leas, a Qualcomm Institute affiliate and assistant professor in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science. “While this made accessing care easier for patients, physicians are burdened by a barrage of electronic patient messages seeking medical advice that have contributed to record-breaking levels of physician burnout.”

    Designing a Study to Test ChatGPT in a Healthcare Setting
    To obtain a large and diverse sample of healthcare questions and physician answers that did not contain identifiable personal information, the team turned to social media where millions of patients publicly post medical questions to which doctors respond: Reddit’s AskDocs.
    r/AskDocs is a subreddit with approximately 452,000 members, where users post medical questions and verified healthcare professionals submit answers. While anyone can respond to a question, moderators verify healthcare professionals’ credentials, and responses display the respondent’s level of credentials. The result is a large and diverse set of patient medical questions and accompanying answers from licensed medical professionals.
    While some may wonder if question-answer exchanges on social media are a fair test, team members noted that the exchanges were reflective of their clinical experience.
    The team randomly sampled 195 exchanges from AskDocs where a verified physician responded to a public question. The team provided the original question to ChatGPT and asked it to author a response. A panel of three licensed healthcare professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a physician or ChatGPT. They compared responses based on information quality and empathy, noting which one they preferred.

    The panel of healthcare professional evaluators preferred ChatGPT responses to physician responses 79% of the time.
    “ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, a nurse practitioner with San Diego firm Human Longevity and study co-author.
    Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: the proportion of responses rated good or very good quality was 3.6 times higher for ChatGPT than for physicians (78.5% versus 22.1%). ChatGPT responses were also more empathetic: the proportion rated empathetic or very empathetic was 9.8 times higher for ChatGPT than for physicians (45.1% versus 4.6%).
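    Those multipliers follow directly from the quoted percentages, as this quick check shows.

    ```python
    # Reproduce the reported ratios from the percentages quoted above.
    physician_good, chatgpt_good = 22.1, 78.5              # % rated good or very good quality
    physician_empathetic, chatgpt_empathetic = 4.6, 45.1   # % rated empathetic or very empathetic

    print(f"quality ratio: {chatgpt_good / physician_good:.1f}x")                 # ~3.6x
    print(f"empathy ratio: {chatgpt_empathetic / physician_empathetic:.1f}x")     # ~9.8x
    ```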
    “I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study coauthor, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”
    Harnessing AI Assistants for Patient Messages
    “While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether,” said Dr. Adam Poliak, an assistant professor of Computer Science at Bryn Mawr College and study co-author. “Instead, a physician harnessing ChatGPT is the answer for better and empathetic care.”
    “Our study is among the first to show how AI assistants can potentially solve real world healthcare delivery problems,” said Dr. Christopher Longhurst, Chief Medical Officer and Chief Digital Officer at UC San Diego Health. “These results suggest that tools like ChatGPT can efficiently draft high quality, personalized medical advice for review by clinicians, and we are beginning that process at UCSD Health.”
    Dr. Mike Hogarth, a physician-bioinformatician, co-director of the Altman Clinical and Translational Research Institute at UC San Diego, professor in the UC San Diego School of Medicine and study co-author, added, “It is important that integrating AI assistants into healthcare messaging be done in the context of a randomized controlled trial to judge how the use of AI assistants impact outcomes for both physicians and patients.”
    In addition to improving workflow, investments into AI assistant messaging could impact patient health and physician performance.
    Dr. Mark Dredze, the John C. Malone Associate Professor of Computer Science at Johns Hopkins and study co-author, noted: “We could use these technologies to train doctors in patient-centered communication, eliminate health disparities suffered by minority populations who often seek healthcare via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care.”

  • Highly dexterous robot hand can operate in the dark — just like us

    Think about what you do with your hands when you’re home at night pushing buttons on your TV’s remote control, or at a restaurant using all kinds of cutlery and glassware. These skills are all based on touch, while you’re watching a TV program or choosing something from the menu. Our hands and fingers are incredibly skilled mechanisms, and highly sensitive to boot.
    Robotics researchers have long been trying to create “true” dexterity in robot hands, but the goal has been frustratingly elusive. Robot grippers and suction cups can pick and place items, but more dexterous tasks such as assembly, insertion, reorientation, packaging, etc. have remained in the realm of human manipulation. However, spurred by advances in both sensing technology and machine-learning techniques to process the sensed data, the field of robotic manipulation is changing very rapidly.
    Highly dexterous robot hand even works in the dark
    Researchers at Columbia Engineering have demonstrated a highly dexterous robot hand, one that combines an advanced sense of touch with motor learning algorithms in order to achieve a high level of dexterity.
    As a demonstration of skill, the team chose a difficult manipulation task: executing an arbitrarily large rotation of an unevenly shaped grasped object in hand while always maintaining the object in a stable, secure hold. This is a very difficult task because it requires constant repositioning of a subset of fingers, while the other fingers have to keep the object stable. Not only was the hand able to perform this task, but it also did it without any visual feedback whatsoever, based solely on touch sensing.
    In addition to the new levels of dexterity, the hand worked without any external cameras, so it’s immune to lighting, occlusion, or similar issues. And the fact that the hand does not rely on vision to manipulate objects means that it can do so in very difficult lighting conditions that would confuse vision-based algorithms — it can even operate in the dark.

    “While our demonstration was on a proof-of-concept task, meant to illustrate the capabilities of the hand, we believe that this level of dexterity will open up entirely new applications for robotic manipulation in the real world,” said Matei Ciocarlie, associate professor in the Departments of Mechanical Engineering and Computer Science. “Some of the more immediate uses might be in logistics and material handling, helping ease up supply chain problems like the ones that have plagued our economy in recent years, and in advanced manufacturing and assembly in factories.”
    Leveraging optics-based tactile fingers
    In earlier work, Ciocarlie’s group collaborated with Ioannis Kymissis, professor of electrical engineering, to develop a new generation of optics-based tactile robot fingers. These were the first robot fingers to achieve contact localization with sub-millimeter precision while providing complete coverage of a complex multi-curved surface. In addition, the compact packaging and low wire count of the fingers allowed for easy integration into complete robot hands.
    Teaching the hand to perform complex tasks
    For this new work, led by Ciocarlie’s doctoral researcher, Gagan Khandate, the researchers designed and built a robot hand with five fingers and 15 independently actuated joints — each finger was equipped with the team’s touch-sensing technology. The next step was to test the ability of the tactile hand to perform complex manipulation tasks. To do this, they used new methods for motor learning, or the ability of a robot to learn new physical tasks via practice. In particular, they used a method called deep reinforcement learning, augmented with new algorithms that they developed for effective exploration of possible motor strategies.

    Robot completed approximately one year of practice in only hours of real-time
    The input to the motor learning algorithms consisted exclusively of the team’s tactile and proprioceptive data, without any vision. Using simulation as a training ground, the robot completed approximately one year of practice in only hours of real-time, thanks to modern physics simulators and highly parallel processors. The researchers then transferred this manipulation skill trained in simulation to the real robot hand, which was able to achieve the level of dexterity the team was hoping for. Ciocarlie noted that “the directional goal for the field remains assistive robotics in the home, the ultimate proving ground for real dexterity. In this study, we’ve shown that robot hands can also be highly dexterous based on touch sensing alone. Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand.”
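    Conceptually, the training loop looks like standard deep reinforcement learning with the camera removed: the policy observes only joint states and touch signals, rollouts are collected in a physics simulator, and the policy is updated from those rollouts. The sketch below is a schematic of that setup with placeholder dynamics, a placeholder reward, and an elided update step; the environment and names are generic stand-ins, not the team's code.

    ```python
    import numpy as np

    class TactileHandEnv:
        """Stand-in for a simulated 15-joint hand with touch sensors on the fingers."""
        def __init__(self, n_joints=15, n_taxels=5):
            self.n_obs = n_joints * 2 + n_taxels      # joint angles + velocities + contact signals
            self.n_act = n_joints

        def reset(self):
            return np.zeros(self.n_obs)

        def step(self, action):
            obs = np.random.randn(self.n_obs)         # placeholder dynamics
            reward = -np.abs(action).sum() * 1e-3     # placeholder reward (e.g. rotation progress)
            return obs, reward, False

    def train(env, policy, episodes=10, horizon=200):
        """Skeleton of the learning loop: collect touch-only rollouts, then update the policy
        (the update itself, e.g. a deep RL gradient step, is elided)."""
        for _ in range(episodes):
            obs, trajectory = env.reset(), []
            for _ in range(horizon):
                action = policy(obs)                  # note: no camera input anywhere in the loop
                obs, reward, done = env.step(action)
                trajectory.append((obs, action, reward))
                if done:
                    break
            # policy.update(trajectory)  # gradient step would go here

    env = TactileHandEnv()
    train(env, policy=lambda obs: np.tanh(np.random.randn(env.n_act)))
    ```

    Because the simulator runs much faster than real time and many copies run in parallel, a year of simulated practice can be compressed into hours, which is the speed-up described above.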
    Ultimate goal: joining abstract intelligence with embodied intelligence
    Ultimately, Ciocarlie observed, a physical robot being useful in the real world needs both abstract, semantic intelligence (to understand conceptually how the world works) and embodied intelligence (the skill to physically interact with the world). Large language models such as OpenAI’s GPT-4 or Google’s PaLM aim to provide the former, while dexterity in manipulation as achieved in this study represents complementary advances in the latter.
    For instance, when asked how to make a sandwich, ChatGPT will type out a step-by-step plan in response, but it takes a dexterous robot to take that plan and actually make the sandwich. In the same way, researchers hope that physically skilled robots will be able to take semantic intelligence out of the purely virtual world of the Internet, and put it to good use on real-world physical tasks, perhaps even in our homes.
    The paper has been accepted for publication at the upcoming Robotics: Science and Systems Conference (Daegu, Korea, July 10-14, 2023), and is currently available as a preprint.
    VIDEO: https://youtu.be/mYlc_OWgkyI