More stories

  • Quantum entanglement of photons doubles microscope resolution

    Using a “spooky” phenomenon of quantum physics, Caltech researchers have discovered a way to double the resolution of light microscopes.
    In a paper appearing in the journal Nature Communications, a team led by Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering, demonstrates a leap forward in microscopy through what is known as quantum entanglement. Quantum entanglement is a phenomenon in which two particles are linked such that the state of one particle is tied to the state of the other, no matter how far apart they are. Albert Einstein famously referred to quantum entanglement as “spooky action at a distance” because it could not be explained by his theory of relativity.
    According to quantum theory, any type of particle can be entangled. In the case of Wang’s new microscopy technique, dubbed quantum microscopy by coincidence (QMC), the entangled particles are photons. Collectively, two entangled photons are known as a biphoton, and, importantly for Wang’s microscopy, they behave in some ways as a single particle that has double the momentum of a single photon.
    Since quantum mechanics says that all particles are also waves, and that the wavelength of a wave is inversely related to the momentum of the particle, particles with larger momenta have smaller wavelengths. So, because a biphoton has double the momentum of a photon, its wavelength is half that of the individual photons.
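    In textbook terms (the standard de Broglie relation, which the article paraphrases rather than states), a particle's wavelength is Planck's constant divided by its momentum, so doubling the momentum halves the wavelength:

```latex
\lambda = \frac{h}{p}, \qquad
\lambda_{\mathrm{biphoton}} = \frac{h}{2p} = \frac{\lambda_{\mathrm{photon}}}{2}
```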
    This is key to how QMC works. A microscope can only image the features of an object whose minimum size is half the wavelength of light used by the microscope. Reducing the wavelength of that light means the microscope can see even smaller things, which results in increased resolution.
    Quantum entanglement is not the only way to reduce the wavelength of light being used in a microscope. Green light has a shorter wavelength than red light, for example, and purple light has a shorter wavelength than green light. But due to another quirk of quantum physics, light with shorter wavelengths carries more energy. So, once you get down to light with a wavelength small enough to image tiny things, the light carries so much energy that it will damage the items being imaged, especially living things such as cells. This is why ultraviolet (UV) light, which has a very short wavelength, gives you a sunburn.

    QMC gets around this limit by using biphotons that carry the lower energy of longer-wavelength photons while having the shorter wavelength of higher-energy photons.
    “Cells don’t like UV light,” Wang says. “But if we can use 400-nanometer light to image the cell and achieve the effect of 200-nm light, which is UV, the cells will be happy, and we’re getting the resolution of UV.”
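    As a rough illustration of the numbers Wang quotes, here is a minimal sketch; the halving of the wavelength and the half-wavelength resolution limit come from the article, and the rest is simple arithmetic:

```python
# Illustrative arithmetic only, using the figures quoted in the article:
# a biphoton built from 400 nm photons behaves like 200 nm light, and a
# microscope resolves features down to roughly half its working wavelength.
pump_wavelength_nm = 400.0                        # each individual photon
biphoton_wavelength_nm = pump_wavelength_nm / 2   # effective wavelength of the entangled pair
classical_resolution_nm = pump_wavelength_nm / 2  # ~smallest feature with ordinary 400 nm light
qmc_resolution_nm = biphoton_wavelength_nm / 2    # ~smallest feature with QMC

print(f"Classical limit at 400 nm: ~{classical_resolution_nm:.0f} nm")
print(f"QMC limit with biphotons:  ~{qmc_resolution_nm:.0f} nm")
```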
    To achieve that, Wang’s team built an optical apparatus that shines laser light into a special kind of crystal that converts some of the photons passing through it into biphotons. Even using this special crystal, the conversion is very rare and occurs in about one in a million photons. Using a series of mirrors, lenses, and prisms, each biphoton — which actually consists of two discrete photons — is split up and shuttled along two paths, so that one of the paired photons passes through the object being imaged and the other does not. The photon passing through the object is called the signal photon, and the one that does not is called the idler photon. These photons then continue along through more optics until they reach a detector connected to a computer that builds an image of the cell based on the information carried by the signal photon. Amazingly, the paired photons remain entangled as a biphoton behaving at half the wavelength despite the presence of the object and their separate pathways.
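    To get a feel for how rare the conversion is, here is a toy rate estimate; only the one-in-a-million conversion figure comes from the article, while the pump flux and detection efficiency are made-up placeholders:

```python
# Toy estimate, NOT from the paper: only the ~1-in-a-million conversion
# probability is taken from the article. Pump flux and detection efficiency
# are hypothetical placeholder values.
pump_photons_per_second = 1e16      # hypothetical photon flux from the laser
conversion_probability = 1e-6       # "about one in a million photons" (article)
detection_efficiency = 0.1          # hypothetical combined optics/detector efficiency

biphotons_per_second = pump_photons_per_second * conversion_probability
coincidences_per_second = biphotons_per_second * detection_efficiency**2  # both photons must be detected

print(f"Biphotons generated per second: {biphotons_per_second:.2e}")
print(f"Detected coincidences per second: {coincidences_per_second:.2e}")
```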
    Wang’s lab was not the first to work on this kind of biphoton imaging, but it was the first to create a viable system using the concept. “We developed what we believe is a rigorous theory as well as a faster and more accurate entanglement-measurement method. We reached microscopic resolution and imaged cells.”
    While there is no theoretical limit to the number of photons that can be entangled with each other, each additional photon would further increase the momentum of the resulting multiphoton while further decreasing its wavelength.
    Wang says future research could enable entanglement of even more photons, although he notes that each extra photon further reduces the probability of a successful entanglement, which, as mentioned above, is already as low as a one-in-a-million chance.
    The paper describing the work, “Quantum Microscopy of Cells at the Heisenberg Limit,” appears in the April 28 issue of Nature Communications. Co-authors are Zhe He and Yide Zhang, both postdoctoral scholar research associates in medical engineering; medical engineering graduate student Xin Tong (MS ’21); and Lei Li (PhD ’19), formerly a medical engineering postdoctoral scholar and now an assistant professor of electrical and computer engineering at Rice University.
    Funding for the research was provided by the Chan Zuckerberg Initiative and the National Institutes of Health.

  • Could wearables capture well-being?

    Applying machine learning models, a type of artificial intelligence (AI), to data collected passively from wearable devices can identify a patient’s degree of resilience and well-being, according to investigators at the Icahn School of Medicine at Mount Sinai in New York.
    The findings, reported in the May 2nd issue of JAMIA Open, support wearable devices, such as the Apple Watch®, as a way to monitor and assess psychological states remotely without requiring the completion of mental health questionnaires.
    The paper points out that resilience, or an individual’s ability to overcome difficulty, is an important stress mitigator, reduces morbidity, and improves chronic disease management.
    “Wearables provide a means to continually collect information about an individual’s physical state. Our results provide insight into the feasibility of assessing psychological characteristics from this passively collected data,” said first author Robert P. Hirten, MD, Clinical Director, Hasso Plattner Institute for Digital Health at Mount Sinai. “To our knowledge, this is the first study to evaluate whether resilience, a key mental health feature, can be evaluated from devices such as the Apple Watch.”
    Mental health disorders are common, accounting for 13 percent of the global burden of disease, with a quarter of the population experiencing psychological illness at some point. Yet we have limited resources for their evaluation, say the researchers.
    “There are wide disparities in access across geography and socioeconomic status, and the need for in-person assessment or the completion of validated mental health surveys is further limiting,” said senior author Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute at Icahn Mount Sinai. “A better understanding of who is at psychological risk and an improved means of tracking the impact of psychological interventions is needed. The growth of digital technology presents an opportunity to improve access to mental health services for all people.”
    To determine if machine learning models could be trained to distinguish an individual’s degree of resilience and psychological well-being using the data from wearable devices, the Icahn Mount Sinai researchers analyzed data from the Warrior Watch Study. The data set, leveraged for the current digital observational study, covered 329 health care workers enrolled at seven hospitals in New York City.

    Subjects wore an Apple Watch® Series 4 or 5 for the duration of their participation, which measured heart rate variability and resting heart rate throughout the follow-up period. Surveys measuring resilience, optimism, and emotional support were collected at baseline. The metrics collected were found to be predictive in identifying resilience or well-being states. Although the Warrior Watch Study was not designed to evaluate this endpoint, the findings support the further assessment of psychological characteristics from passively collected wearable data.
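    A minimal sketch of the kind of pipeline described here, predicting a resilience label from passively collected wearable summaries; the feature choices, the model, and the synthetic data are illustrative assumptions, not the authors' actual method or results:

```python
# Minimal sketch: predict a binary "high resilience" label from wearable metrics.
# Feature names, model choice, and the synthetic data are illustrative
# assumptions, not the study's actual pipeline or findings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 329  # cohort size reported in the article

# Hypothetical per-subject summaries of heart rate variability (ms) and resting heart rate (bpm)
hrv_mean = rng.normal(45, 12, n_subjects)
resting_hr = rng.normal(65, 8, n_subjects)
X = np.column_stack([hrv_mean, resting_hr])

# Hypothetical resilience label loosely tied to the features, for demonstration only
y = (hrv_mean - 0.5 * resting_hr + rng.normal(0, 10, n_subjects) > 10).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC on synthetic data: {scores.mean():.2f}")
```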
    “We hope that this approach will enable us to bring psychological assessment and care to a larger population, who may not have access at this time,” said Micol Zweig, MPH, co-author of the paper and Associate Director of Clinical Research, Hasso Plattner Institute for Digital Health at Mount Sinai. “We also intend to evaluate this technique in other patient populations to further refine the algorithm and improve its applicability.”
    To that end, the research team plans to continue using wearable data to observe a range of physical and psychological disorders and diseases. The simultaneous development of sophisticated analytical tools, including artificial intelligence, say the investigators, can facilitate the analysis of data collected from these devices and apps to identify patterns associated with a given mental or physical disease condition.
    The paper is titled “A machine learning approach to determine resilience utilizing wearable device data: analysis of an observational cohort.”
    Additional co-authors are Matteo Danieletto, PhD, Maria Suprun, PhD, Eddye Golden, MPH, Sparshdeep Kaur, BBA, Drew Helmus, MPH, Anthony Biello, BA, Dennis Charney, MD, Laurie Keefer, PhD, Mayte Suarez-Farinas, PhD, and Girish N. Nadkarni, MD, all from the Icahn School of Medicine at Mount Sinai.
    Support for this study was provided by the Ehrenkranz Lab for Human Resilience, the BioMedical Engineering and Imaging Institute, the Hasso Plattner Institute for Digital Health, the Mount Sinai Clinical Intelligence Center, and the Dr. Henry D. Janowitz Division of Gastroenterology, all at Icahn Mount Sinai, and from the National Institutes of Health, grant number K23DK129835.

  • Sensor enables high-fidelity input from everyday objects, human body

    Couches, tables, sleeves and more can become high-fidelity input devices for computers using a new sensing system developed at the University of Michigan.
    The system repurposes technology from new bone-conduction microphones, known as Voice Pickup Units (VPUs), which detect only those acoustic waves that travel along the surface of objects. It works in noisy environments, along odd geometries such as toys and arms, and on soft fabrics such as clothing and furniture.
    Called SAWSense, for the surface acoustic waves it relies on, the system recognizes different inputs, such as taps, scratches and swipes, with 97% accuracy. In one demonstration, the team used a normal table to replace a laptop’s trackpad.
    “This technology will enable you to treat, for example, the whole surface of your body like an interactive surface,” said Yasha Iravantchi, U-M doctoral candidate in computer science and engineering. “If you put the device on your wrist, you can do gestures on your own skin. We have preliminary findings that demonstrate this is entirely feasible.”
    Taps, swipes and other gestures send acoustic waves along the surfaces of materials. The system then classifies these waves with machine learning to turn all touch into a robust set of inputs. The system was presented last week at the 2023 Conference on Human Factors in Computing Systems, where it received a best paper award.
    As more objects continue to incorporate smart or connected technology, designers are faced with a number of challenges when trying to give them intuitive input mechanisms. This results in a lot of clunky incorporation of input methods such as touch screens, as well as mechanical and capacitive buttons, Iravantchi says. Touch screens may be too costly to enable gesture inputs across large surfaces like counters and refrigerators, while buttons only allow one kind of input at predefined locations.

    Past approaches to overcome these limitations have included the use of microphones and cameras for audio- and gesture-based inputs, but the authors say techniques like these have limited practicality in the real world.
    “When there’s a lot of background noise, or something comes between the user and the camera, audio and visual gesture inputs don’t work well,” Iravantchi said.
    To overcome these limitations, the sensors powering SAWSense are housed in a hermetically sealed chamber that completely blocks even very loud ambient noise. The only entryway is through a mass-spring system that conducts the surface-acoustic waves inside the housing without ever coming in contact with sounds in the surrounding environment. When combined with the team’s signal processing software, which generates features from the data before feeding it into the machine learning model, the system can record and classify the events along an object’s surface.
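    A minimal sketch of the general recipe described above (spectral features from the surface acoustic wave signal, then a classifier); the sampling rate, feature choice, classifier, and synthetic signals are all assumptions for illustration, not the team's actual software:

```python
# Sketch of the recipe the article describes: turn the surface acoustic wave
# signal into spectral features, then classify the touch event. Sampling rate,
# window size, and classifier are assumptions; synthetic signals stand in for
# real VPU recordings.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
fs = 16_000  # hypothetical sampling rate of the VPU, in Hz
classes = ["tap", "swipe", "scratch"]

def synthetic_event(label: str) -> np.ndarray:
    """Generate a crude stand-in waveform for each gesture class."""
    t = np.arange(0, 0.25, 1 / fs)
    base = {"tap": 200, "swipe": 800, "scratch": 2000}[label]
    return np.sin(2 * np.pi * base * t) * np.exp(-t * 20) + 0.05 * rng.normal(size=t.size)

def features(signal: np.ndarray) -> np.ndarray:
    """Log-spectrogram averaged over time, as a simple fixed-length feature vector."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256)
    return np.log(sxx + 1e-12).mean(axis=1)

X = np.array([features(synthetic_event(c)) for c in classes for _ in range(50)])
y = np.array([c for c in classes for _ in range(50)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Training accuracy on synthetic events:", clf.score(X, y))
```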
    “There are other ways you could detect vibrations or surface-acoustic waves, like piezo-electric sensors or accelerometers,” said Alanson Sample, U-M associate professor of electrical engineering and computer science, “but they can’t capture the broad range of frequencies that we need to tell the difference between a swipe and a scratch, for instance.”
    The high fidelity of the VPUs allows SAWSense to identify a wide range of activities on a surface beyond user touch events. For instance, a VPU on a kitchen countertop can detect chopping, stirring, blending or whisking, as well as identify electronic devices in use, such as a blender or microwave.

    “VPUs do a good job of sensing activities and events happening in a well-defined area,” Iravantchi said. “This allows the functionality that comes with a smart object without the privacy concerns of a standard microphone that senses the whole room, for example.”
    When multiple VPUs are used in combination, SAWSense could enable more specific and sensitive inputs, especially those that require a sense of space and distance like the keys on a keyboard or buttons on a remote.
    In addition, the researchers are exploring the use of VPUs for medical sensing, including picking up delicate noises such as the sounds of joints and connective tissues as they move. The high-fidelity audio data VPUs provide could enable real-time analytics about a person’s health, Sample says.
    The research is partially funded by Meta Platforms Inc.
    The team has applied for patent protection with the assistance of U-M Innovation Partnerships and is seeking partners to bring the technology to market.

  • Lithography-free photonic chip offers speed and accuracy for artificial intelligence

    Photonic chips have revolutionized data-heavy technologies. On their own or in concert with traditional electronic circuits, these laser-powered devices send and process information at the speed of light, making them a promising solution for artificial intelligence’s data-hungry applications.
    In addition to their incomparable speed, photonic circuits use significantly less energy than electronic ones. Electrons move relatively slowly through hardware, colliding with other particles and generating heat, while photons flow without losing energy, generating no heat at all. Unburdened by the energy loss inherent in electronics, integrated photonics are poised to play a leading role in sustainable computing.
    Photonics and electronics draw on separate areas of science and use distinct architectural structures. Both, however, rely on lithography to define their circuit elements and connect them sequentially. While photonic chips don’t make use of the transistors that populate electronic chips’ ever-shrinking and increasingly layered grooves, their complex lithographic patterning guides laser beams through a coherent circuit to form a photonic network that can perform computational algorithms.
    But now, for the first time, researchers at the University of Pennsylvania School of Engineering and Applied Science have created a photonic device that provides programmable on-chip information processing without lithography, offering the speed of photonics augmented by superior accuracy and flexibility for AI applications.
    Achieving unparalleled control of light, this device consists of spatially distributed optical gain and loss. Lasers cast light directly on a semiconductor wafer, without the need for defined lithographic pathways.
    Liang Feng, Professor in the Departments of Materials Science and Engineering (MSE) and Electrical Systems and Engineering (ESE), along with Ph.D. student Tianwei Wu (MSE) and postdoctoral fellows Zihe Gao and Marco Menarini (ESE), introduced the microchip in a recent study published in Nature Photonics.

    Silicon-based electronic systems have transformed the computational landscape. But they have clear limitations: they are slow at processing signals, they work through data serially rather than in parallel, and they can only be miniaturized to a certain extent. Photonics is one of the most promising alternatives because it can overcome all of these shortcomings.
    “But photonic chips intended for machine learning applications face the obstacles of an intricate fabrication process where lithographic patterning is fixed, limited in reprogrammability, subject to error or damage and expensive,” says Feng. “By removing the need for lithography, we are creating a new paradigm. Our chip overcomes those obstacles and offers improved accuracy and ultimate reconfigurability given the elimination of all kinds of constraints from predefined features.”
    Without lithography, these chips become adaptable data-processing powerhouses. Because patterns are not pre-defined and etched in, the device is intrinsically free of defects. Perhaps more impressively, the lack of lithography renders the microchip impressively reprogrammable, able to tailor its laser-cast patterns for optimal performance, be the task simple (few inputs, small datasets) or complex (many inputs, large datasets).
    In other words, the intricacy or minimalism of the device is a sort of living thing, adaptable in ways no etched microchip could be.
    “What we have here is something incredibly simple,” says Wu. “We can build and use it very quickly. We can integrate it easily with classical electronics. And we can reprogram it, changing the laser patterns on the fly to achieve real-time reconfigurable computing for on-chip training of an AI network.”
    An unassuming slab of semiconductor, the device couldn’t be simpler. It’s the manipulation of this slab’s material properties that is the key to the research team’s breakthrough in projecting lasers into dynamically programmable patterns to reconfigure the computing functions of the photonic information processor.

    This ultimate reconfigurability is critical for real-time machine learning and AI.
    “The interesting part,” says Menarini, “is how we are controlling the light. Conventional photonic chips are technologies based on passive material, meaning its material scatters light, bouncing it back and forth. Our material is active. The beam of pumping light modifies the material such that when the signal beam arrives, it can release energy and increase the amplitude of signals.”
    “This active nature is the key to this science, and the solution required to achieve our lithography-free technology,” adds Gao. “We can use it to reroute optical signals and program optical information processing on-chip.”
    Feng compares the technology to an artistic tool, a pen for drawing pictures on a blank page.
    “What we have achieved is exactly the same: pumping light is our pen to draw the photonic computational network (the picture) on a piece of unpatterned semiconductor wafer (the blank page).”
    But unlike indelible lines of ink, these beams of light can be drawn and redrawn, their patterns tracing innumerable paths to the future.

  • Realistic simulated driving environment based on ‘crash-prone’ Michigan intersection

    The first statistically realistic roadway simulation has been developed by researchers at the University of Michigan. While it currently represents a particularly perilous roundabout, future work will expand it to include other driving situations for testing autonomous vehicle software.
    The simulation is a machine-learning model that trained on data collected at a roundabout on the south side of Ann Arbor, recognized as one of the most crash-prone intersections in the state of Michigan and conveniently just a few miles from the offices of the research team.
    Known as the Neural Naturalistic Driving Environment, or NeuralNDE, it turned that data into a simulation of what drivers experience every day. Virtual roadways like this are needed to ensure the safety of autonomous vehicle software before other cars, cyclists and pedestrians ever cross its path.
    “The NeuralNDE reproduces the driving environment and, more importantly, realistically simulates these safety-critical situations so we can evaluate the safety performance of autonomous vehicles,” said Henry Liu, U-M professor of civil engineering and director of Mcity, a U-M-led public-private mobility research partnership.
    Liu is also director of the Center for Connected and Automated Transportation and corresponding author of the study in Nature Communications.
    Safety-critical events, which require a driver to make split-second decisions and take action, don’t happen often. Drivers can go many hours between events that force them to slam on the brakes or swerve to avoid a collision, and each event has its own unique circumstances.

    Together, these represent two bottlenecks in the effort to simulate our roadways, known as the “curse of rarity” and the “curse of dimensionality” respectively. The curse of dimensionality is caused by the complexity of the driving environment, which includes factors like pavement quality, the current weather conditions, and the different types of road users including pedestrians and bicyclists.
    To model it all, the team tried to see it all. They installed sensor systems on light poles which continuously collect data at the State Street/Ellsworth Road roundabout.
    “The reason that we chose that location is that roundabouts are a very challenging, urban driving scenario for autonomous vehicles. In a roundabout, drivers are required to spontaneously negotiate and cooperate with other drivers moving through the intersection. In addition, this particular roundabout experiences high traffic volume and is two lanes, which adds to its complexity,” said Xintao Yan, a Ph.D. student in civil and environmental engineering and first author of the study, who is advised by Liu.
    The NeuralNDE serves as a key component of the CCAT Safe AI Framework for Trustworthy Edge Scenario Tests, or SAFE TEST, a system developed by Liu’s team that uses artificial intelligence to reduce the testing miles required to ensure the safety of autonomous vehicles by 99.99%. It essentially breaks the “curse of rarity,” introducing safety-critical incidents a thousand times more frequently than they occur in real driving. The NeuralNDE is also critical to a project designed to enable the Mcity Test Facility to be used for remote testing of AV software.
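    A back-of-the-envelope illustration of the “curse of rarity” and the quoted 99.99% reduction; the baseline event rate and the number of events needed are made-up assumptions, and this sketch is not the SAFE TEST algorithm itself:

```python
# Back-of-the-envelope illustration of the "curse of rarity," using the 99.99%
# reduction in test miles quoted in the article. The baseline event rate and
# the number of events needed for a safety claim are made-up assumptions.
miles_per_critical_event = 1_000_000   # hypothetical: one safety-critical event per million miles
events_needed = 1_000                  # hypothetical: events required for a statistical safety claim

naive_miles = miles_per_critical_event * events_needed
accelerated_miles = naive_miles * (1 - 0.9999)   # the 99.99% reduction quoted for SAFE TEST

print(f"Naive on-road testing:    {naive_miles:,.0f} miles")
print(f"After a 99.99% reduction: {accelerated_miles:,.0f} miles")
```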
    But unlike a fully virtual environment, these tests take place in mixed reality on closed test tracks such as the Mcity Test Facility and the American Center for Mobility in Ypsilanti, Michigan. In addition to the real conditions of the track, the autonomous vehicles also experience virtual drivers, cyclists and pedestrians behaving in both safe and dangerous ways. By testing these scenarios in a controlled environment, AV developers can fine-tune their systems to better handle all driving situations.
    The NeuralNDE is not only beneficial for AV developers but also for researchers studying human driver behavior. The simulation can interpret data on how drivers respond to different scenarios, which can help develop more functional road infrastructure.
    In 2021, the U-M Transportation Research Institute was awarded $9.95 million in funding by the U.S. Department of Transportation to expand the number of intersections equipped with these sensors to 21. This implementation will expand the capabilities of the NeuralNDE and provide real-time alerts to drivers with connected vehicles.
    The research was funded by Mcity, CCAT and the U-M Transportation Research Institute. Founded in 1965, UMTRI is a global leader in multidisciplinary research and a partner of choice for industry leaders, foundations and government agencies to advance safe, equitable and efficient transportation and mobility. CCAT is a regional university transportation research center that was recently awarded a $15 million, five-year renewal by the USDOT.

  • Brain activity decoder can reveal stories in people’s minds

    A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
    The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.
    Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided the participant is willing to have their thoughts decoded, listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.
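    For intuition, here is a heavily simplified sketch of one way brain-to-text decoding can be framed: learn a mapping from fMRI responses to sentence embeddings during training, then pick the candidate sentence whose embedding best matches a new prediction. This is an illustrative stand-in with synthetic data, not the model described in the paper:

```python
# Highly simplified sketch, not the actual Nature Neuroscience model:
# learn a linear map from fMRI responses to sentence embeddings, then decode
# by choosing the candidate sentence whose embedding best matches the
# prediction. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_voxels, embed_dim = 500, 2000, 64   # hypothetical sizes

# Synthetic training data: fMRI response per sentence, plus that sentence's embedding
sentence_embeddings = rng.normal(size=(n_train, embed_dim))
true_map = rng.normal(size=(embed_dim, n_voxels)) / np.sqrt(embed_dim)
fmri_responses = sentence_embeddings @ true_map + 0.5 * rng.normal(size=(n_train, n_voxels))

# Fit a regularized linear decoder from voxels to embedding space
decoder = Ridge(alpha=10.0).fit(fmri_responses, sentence_embeddings)

# "Decode" a new response by choosing the closest candidate sentence embedding
candidates = rng.normal(size=(10, embed_dim))
new_response = candidates[3] @ true_map + 0.5 * rng.normal(size=n_voxels)
predicted = decoder.predict(new_response[None, :])[0]
scores = candidates @ predicted / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(predicted))
print("Chosen candidate index:", int(np.argmax(scores)))  # ideally 3 on this synthetic example
```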
    “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
    The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
    For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!'” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.'”
    Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.

    “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
    In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.
    The system currently is not practical for use outside of the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
    “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth said. “So, our exact kind of approach should translate to fNIRS,” although, he noted, the resolution with fNIRS would be lower.
    This work was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.
    The study’s other co-authors are Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin.
    Alexander Huth and Jerry Tang have filed a PCT patent application related to this work.

  • Researchers explore why some people get motion sick playing VR games while others don’t

    The way our senses adjust while playing high-intensity virtual reality games plays a critical role in understanding why some people experience severe cybersickness and others don’t.
    Cybersickness is a form of motion sickness that occurs from exposure to immersive VR and augmented reality applications.
    A new study, led by researchers at the University of Waterloo, found that the subjective visual vertical — a measure of how individuals perceive the orientation of vertical lines — shifted considerably after participants played a high-intensity VR game.
    “Our findings suggest that the severity of a person’s cybersickness is affected by how our senses adjust to the conflict between reality and virtual reality,” said Michael Barnett-Cowan, a professor in the Department of Kinesiology and Health Sciences. “This knowledge could be invaluable for developers and designers of VR experiences, enabling them to create more comfortable and enjoyable environments for users.”
    The researchers collected data from 31 participants. They assessed their perceptions of the vertical before and after playing two VR games, one high-intensity and one low-intensity.
    Those who experienced less sickness were more likely to have the largest change in the subjective visual vertical following exposure to VR, particularly at a high intensity. Conversely, those who had the highest levels of cybersickness were less likely to have changed how they perceived vertical lines. There were no significant differences between males and females, nor between participants with low and high gaming experience.
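    A small sketch of the kind of relationship reported (larger post-VR shifts in the subjective visual vertical going along with milder cybersickness); the data and the choice of a Spearman correlation are illustrative assumptions, not the study's analysis or numbers:

```python
# Sketch of the reported pattern: bigger shifts in the subjective visual
# vertical (SVV) after high-intensity VR accompanying milder cybersickness.
# Data and the Spearman correlation are illustrative assumptions only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 31  # participant count reported in the article

svv_shift_deg = np.abs(rng.normal(0, 2.0, n))                  # hypothetical post-minus-pre SVV change
sickness_score = 40 - 6 * svv_shift_deg + rng.normal(0, 8, n)  # hypothetical severity, higher = worse

rho, p = spearmanr(svv_shift_deg, sickness_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}  (negative: bigger SVV shift, less sickness)")
```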
    “While the subjective visual vertical task significantly predicted the severity of cybersickness symptoms, there is still much to be explained,” said co-author William Chung, a former Waterloo doctoral student who is now a postdoctoral fellow at the Toronto Rehabilitation Institute.
    “By understanding the relationship between sensory reweighting and cybersickness susceptibility, we can potentially develop personalized cybersickness mitigation strategies and VR experiences that take into account individual differences in sensory processing and hopefully lower the occurrence of cybersickness.”
    As VR continues to revolutionize gaming, education and social interaction, addressing the pervasive issue of cybersickness — marked by symptoms such as nausea, disorientation, eye strain and fatigue — is critical for ensuring a positive user experience.

  • Satellite data reveal nearly 20,000 previously unknown deep-sea mountains

    The number of known mountains in Earth’s oceans has roughly doubled. Global satellite observations have revealed nearly 20,000 previously unknown seamounts, researchers report in the April Earth and Space Science.

    Just as mountains tower over Earth’s surface, seamounts also rise above the ocean floor. The tallest mountain on Earth, as measured from base to peak, is Mauna Kea, which is part of the Hawaiian-Emperor Seamount Chain.

    These underwater edifices are often hot spots of marine biodiversity (SN: 10/7/16). That’s in part because their craggy walls — formed from volcanic activity — provide a plethora of habitats. Seamounts also promote upwelling of nutrient-rich water, which distributes beneficial compounds like nitrates and phosphates throughout the water column. They’re like “stirring rods in the ocean,” says David Sandwell, a geophysicist at the Scripps Institution of Oceanography at the University of California, San Diego.

    More than 24,600 seamounts have been previously mapped. One common way of finding these hidden mountains is to ping the seafloor with sonar (SN: 4/16/21). But that’s an expensive, time-intensive process that requires a ship. Only about 20 percent of the ocean has been mapped that way, says Scripps earth scientist Julie Gevorgian. “There are a lot of gaps.”

    So Gevorgian, Sandwell and their colleagues turned to satellite observations, which provide global coverage of the world’s oceans, to take a census of seamounts.

    The team pored over satellite measurements of the height of the sea surface. The researchers looked for centimeter-scale bumps caused by the gravitational influence of a seamount. Because rock is denser than water, the presence of a seamount slightly changes the Earth’s gravitational field at that spot. “There’s an extra gravitational attraction,” Sandwell says, that causes water to pile up above the seamount.
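    A toy illustration of the detection idea: subtract a smoothed background from a sea-surface-height grid and flag centimeter-scale local bumps. The grid, smoothing scale, and threshold below are assumptions; the team's actual processing is far more involved:

```python
# Toy illustration only: seamounts show up as centimeter-scale bumps in the
# sea-surface height, so subtract a smoothed background and flag local peaks
# above a threshold. Grid size, smoothing scale, and threshold are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(4)
ssh = rng.normal(0, 0.005, (200, 200))            # synthetic sea-surface height (meters) with noise

# Plant a few fake "seamount" bumps a few centimeters high
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx in [(50, 60), (120, 150), (170, 30)]:
    ssh += 0.03 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0**2))

anomaly = ssh - gaussian_filter(ssh, sigma=20)     # remove the large-scale background
peaks = (anomaly == maximum_filter(anomaly, size=15)) & (anomaly > 0.02)
print("Candidate seamount bumps found:", int(peaks.sum()))
```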

    Using that technique, the team spotted 19,325 previously unknown seamounts. The researchers compared some of their observations with sonar maps of the seafloor to confirm that the newly discovered seamounts were likely real. Most of the newly discovered underwater mountains are on the small side — between roughly 700 and 2,500 meters tall, the researchers estimate.

    However, it’s possible that some could pose a risk to mariners. “There’s a point when they’re shallow enough that they’re within the depth range of submarines,” says David Clague, a marine geologist at the Monterey Bay Aquarium Research Institute in Moss Landing, Calif., who was not involved in the research. In 2021, the USS Connecticut, a nuclear submarine, ran into an uncharted seamount in the South China Sea. The vessel is still undergoing repairs at a shipyard in Washington state.