More stories

  • ‘The Deepest Map’ explores the thrills — and dangers — of charting the ocean

    The Deepest Map. Laura Trethewey. Harper Wave, $32.

    In 2019, the multimillionaire and explorer Victor Vescovo made headlines when he became the first person to visit the deepest parts of all five of Earth’s oceans. But arguably the real star of the expedition was marine geologist Cassie Bongiovanni, the lead ocean mapper who ensured Vescovo piloted his submersible to the actual deepest depths.

    Today, only 25 percent of the seafloor is well mapped. When Vescovo set out to score his record, the exact deepest location in each ocean was unknown. Bongiovanni, Vescovo and their crew had to chart these regions in detail before each dive.

    “Traditionally, captains never cared about the seafloor as long as it stayed far enough away from the hulls of their ships,” journalist Laura Trethewey writes in The Deepest Map. The book explores humankind’s quest to map the seafloor, framed around Bongiovanni’s adventures.

    Seafloor topography has been a big concern for militaries patrolling Neptunian frontiers with nuclear submarines and companies facilitating intercontinental communication via subsea cables (SN: 4/10/21, p. 28). In recent decades, seafloor data have become crucial to the deep-sea mining industries searching for metals needed to produce green technology.

    Satellites have revealed many of the knobs and crevices visible in the deep blue of Google Maps. But with that relatively coarse information, entire mountains can be missed. To see the seafloor in high resolution requires a sophisticated sonar system aboard a big ship that sends sound signals from the sea surface into the abyss.

    Mappers like Bongiovanni calculate depth from the time it takes for the signal to travel down and bounce back to the surface. These state-of-the-art sonar systems transform “the satellite-predicted blur into a sharp three-dimensional terrain of ripples, cracks and tears in the seafloor,” Trethewey writes. “The seafloor is ‘heard,’ rather than seen.”
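    To give a sense of the arithmetic behind that “hearing,” depth is roughly half the echo’s round-trip time multiplied by the speed of sound in seawater, about 1,500 meters per second. The short Python sketch below only illustrates that relationship; real multibeam systems also correct for how sound speed varies with temperature, salinity and pressure, and for the angle of each beam.

    # Illustrative only: nominal sound speed, no sound-speed profile or beam-angle corrections.
    SOUND_SPEED_SEAWATER = 1500.0  # meters per second, nominal value

    def depth_from_echo(two_way_travel_time_s: float) -> float:
        """Depth in meters estimated from a sonar ping's round-trip time."""
        return SOUND_SPEED_SEAWATER * two_way_travel_time_s / 2.0

    # A ping that returns after about 14.6 seconds implies a depth near 10,950 m,
    # on the order of the roughly 10,900 m reported for the Challenger Deep.
    print(depth_from_echo(14.6))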

    Throughout her tale, Trethewey twines in stories of tagging along with scientists and ocean mappers. That includes her inaugural adventure at sea, which a crew member noted was “pretty rough for a first-timer,” as he and Trethewey clung to a doorframe in near gale-force winds. On that cruise aboard the research vessel E/V Nautilus, which was surveying a poorly mapped stretch of California’s coast, Trethewey (and readers) are introduced to the art and science of seafloor mapping. She also learned that day that mapping is especially difficult — and sometimes impossible — when the ocean is angry.

    Trethewey’s insightful writing helps readers understand just why mapping the ocean — even in shallow coastal waters — is crucial to so many endeavors. She visits a remote Inuit village on the western bank of Canada’s Hudson Bay, where she joins hunters who map ever-changing coastlines for their own safety. Later, she scuba dives with archaeologists in Florida who use underwater maps to explore remnants of early human history that have been submerged for thousands of years.

    A distant, possibly unreachable goal is a complete map of the entire seafloor by the end of this decade, an effort known as Seabed 2030. Because the oceans are vast and replete with remote and dangerous places where people simply can’t or shouldn’t go, this effort will almost certainly require autonomous surface vehicles armed with sonar. Such devices are already probing the depths and sending back data.

    Sipping coffee and staring at computer screens in a sun-filled conference room, Trethewey watches as a drone outfitted with cameras, environmental sensors and a sonar system maps a bit of seafloor off California. “The future of ocean mapping weirdly felt a lot like checking social media or doing anything else on your phone these days,” she wryly observes.

    Trethewey’s book is about more than just mapping the oceans. It’s also about what can go wrong when explorers explore. It’s hard to read The Deepest Map without being reminded of the recent implosion of the Titan submersible in the North Atlantic that killed everyone on board in June. Indeed, Trethewey describes how, during Vescovo’s first solo dive, his colleagues endured 25 minutes of apprehension-turned-alarm when they didn’t hear from him.

    She also reminds us how easily exploration can turn into exploitation. In the not-so-distant past, Europeans “discovered” the so-called New World and mapped it, Trethewey writes. Exploitation followed. Scientists and environmentalists alike are now concerned that a full, detailed map of the ocean floor might lead to the destruction of delicate, mostly unknown habitats if deep-sea miners are allowed to extract metals.

    Trethewey envisions a different outcome. Seabed 2030’s mapping effort may help people see that “the weird, wonderful deep-sea world is not a blank space, another frontier to use up and throw away,” and should be safeguarded for scientists “to uncover our past and protect our future.”

    Buy The Deepest Map from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

  • Are US teenagers more likely than others to exaggerate their math abilities?

    A major new study has revealed that American teenagers are more likely than their peers in other English-speaking countries to brag about their math ability.
    Research using data from 40,000 15-year-olds across nine English-speaking countries found that those in North America were the most likely to exaggerate their mathematical knowledge, while those in Ireland and Scotland were the least likely to do so.
    The study, published in the peer-reviewed journal Assessment in Education: Principles, Policy & Practice, used responses from the OECD Programme for International Student Assessment (PISA), in which participants took a two-hour maths test alongside a 30-minute background questionnaire.
    They were asked how familiar they were with each of 16 mathematical terms — but three of the terms were fake.
    Further questions revealed those who claimed familiarity with non-existent mathematical concepts were also more likely to display overconfidence in their academic prowess, problem-solving skills and perseverance.
    For instance, they claimed higher levels of competence in calculating a discount on a television and in finding their way to a destination. Two thirds of those most likely to overestimate their mathematical ability were confident they could work out the petrol consumption of a car, compared to just 40 per cent of those least likely to do so.
    Those likely to overclaim were also more likely to say that if their mobile phone stopped sending texts they would consult a manual (41 per cent versus 30 per cent), while those least likely to overclaim more often said they would react by pressing all the buttons (56 per cent versus 49 per cent).

  • AI-driven tool makes it easy to personalize 3D-printable models

    As 3D printers have become cheaper and more widely accessible, a rapidly growing community of novice makers are fabricating their own objects. To do this, many of these amateur artisans access free, open-source repositories of user-generated 3D models that they download and fabricate on their 3D printer.
    But adding custom design elements to these models poses a steep challenge for many makers, since it requires the use of complex and expensive computer-aided design (CAD) software, and is especially difficult if the original representation of the model is not available online. Plus, even if a user is able to add personalized elements to an object, ensuring those customizations don’t hurt the object’s functionality requires an additional level of domain expertise that many novice makers lack.
    To help makers overcome these challenges, MIT researchers developed a generative-AI-driven tool that enables the user to add custom design elements to 3D models without compromising the functionality of the fabricated objects. A designer could utilize this tool, called Style2Fab, to personalize 3D models of objects using only natural language prompts to describe their desired design. The user could then fabricate the objects with a 3D printer.
    “For someone with less experience, the essential problem they faced has been: Now that they have downloaded a model, as soon as they want to make any changes to it, they are at a loss and don’t know what to do. Style2Fab would make it very easy to stylize and print a 3D model, but also experiment and learn while doing it,” says Faraz Faruqi, a computer science graduate student and lead author of a paper introducing Style2Fab.
    Style2Fab is driven by deep-learning algorithms that automatically partition the model into aesthetic and functional segments, streamlining the design process.
    In addition to empowering novice designers and making 3D printing more accessible, Style2Fab could also be utilized in the emerging area of medical making. Research has shown that considering both the aesthetic and functional features of an assistive device increases the likelihood a patient will use it, but clinicians and patients may not have the expertise to personalize 3D-printable models.
    With Style2Fab, a user could customize the appearance of a thumb splint so it blends in with her clothing without altering the functionality of the medical device, for instance. Providing a user-friendly tool for the growing area of DIY assistive technology was a major motivation for this work, adds Faruqi.

  • Verbal nonsense reveals limitations of AI chatbots

    The era of artificial-intelligence chatbots that seem to understand and use language the way we humans do has begun. Under the hood, these chatbots use large language models, a particular kind of neural network. But a new study shows that large language models remain vulnerable to mistaking nonsense for natural language. To a team of researchers at Columbia University, it’s a flaw that might point toward ways to improve chatbot performance and help reveal how humans process language.
    In a paper published online today in Nature Machine Intelligence, the scientists describe how they challenged nine different language models with hundreds of pairs of sentences. For each pair, people who participated in the study picked which of the two sentences they thought was more natural, meaning that it was more likely to be read or heard in everyday life. The researchers then tested the models to see if they would rate each sentence pair the same way the humans had.
    In head-to-head tests, more sophisticated AIs based on what researchers refer to as transformer neural networks tended to perform better than simpler recurrent neural network models and statistical models that just tally the frequency of word pairs found on the internet or in online databases. But all the models made mistakes, sometimes choosing sentences that sound like nonsense to a human ear.
    “That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing,” said Nikolaus Kriegeskorte, a principal investigator at Columbia’s Zuckerman Institute and a coauthor of the paper. “That even the best models we studied still can be fooled by nonsense sentences shows that their computations are missing something about the way humans process language.”
    Consider the following sentence pair that both the human participants and the AIs assessed in the study:
    That is the narrative we have been sold.
    This is the week you have been dying.
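    One common way to get such ratings from a language model is to compare the total log-probability it assigns to each sentence and treat the higher-scoring sentence as the more natural one. The sketch below illustrates that general idea with an off-the-shelf GPT-2 model from the Hugging Face transformers library; it is not the paper’s exact procedure, and the nine models the Columbia team tested differ from this one.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sentence_log_prob(sentence: str) -> float:
        """Total log-probability the model assigns to the sentence's tokens."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=input_ids the model returns the mean cross-entropy loss
            # over the predicted tokens; negate and rescale for a total log-prob.
            loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.shape[1] - 1)

    pair = ["That is the narrative we have been sold.",
            "This is the week you have been dying."]
    scores = {s: sentence_log_prob(s) for s in pair}
    print(max(scores, key=scores.get))  # the sentence this particular model rates as more natural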

  • New camera offers ultrafast imaging at a fraction of the normal cost

    Capturing blur-free images of fast movements like falling water droplets or molecular interactions requires expensive ultrafast cameras that acquire millions of images per second. In a new paper, researchers report a camera that could offer a much less expensive way to achieve ultrafast imaging for a wide range of applications such as real-time monitoring of drug delivery or high-speed lidar systems for autonomous driving.
    “Our camera uses a completely new method to achieve high-speed imaging,” said Jinyang Liang from the Institut national de la recherche scientifique (INRS) in Canada. “It has an imaging speed and spatial resolution similar to commercial high-speed cameras but uses off-the-shelf components that would likely cost less than a tenth of today’s ultrafast cameras, which can start at close to $100,000.”
    In Optica, Optica Publishing Group’s journal for high-impact research, Liang and collaborators from Concordia University in Canada and Meta Platforms Inc. show that their new diffraction-gated real-time ultrahigh-speed mapping (DRUM) camera can capture a dynamic event in a single exposure at 4.8 million frames per second. They demonstrate this capability by imaging the fast dynamics of femtosecond laser pulses interacting with liquid and laser ablation in biological samples.
    “In the long term, I believe that DRUM photography will contribute to advances in biomedicine and automation-enabling technologies such as lidar, where faster imaging would allow more accurate sensing of hazards,” said Liang. “However, the paradigm of DRUM photography is quite generic. In theory, it can be used with any CCD and CMOS cameras without degrading their other advantages such as high sensitivity.”
    Creating a better ultrafast camera
    Despite a great deal of progress in ultrafast imaging, today’s methods are still expensive and complex to implement. Their performance is also limited by trade-offs between the number of frames captured in each movie and light throughput or temporal resolution. To overcome these issues, the researchers developed a new time-gating method known as time-varying optical diffraction.
    Cameras use gates to control when light hits the sensor. For example, the shutter in a traditional camera is a type of gate that opens and closes once. In time-gating, the gate is opened and closed in quick succession a certain number of times before the sensor reads out the image. This captures a short high-speed movie of a scene.
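    The toy simulation below is one way to picture that idea in code: a made-up scene changes quickly during a single exposure, each brief gate opening integrates one slice of it, and the slices are packed side by side into one readout that can be replayed as a short movie. It is a loose numerical cartoon, not a model of the DRUM camera’s diffraction optics, and every number in it is invented.

    import numpy as np

    n_gates = 8        # gate openings per exposure (invented)
    slice_len = 50     # time samples integrated per opening (invented)
    height, width = 64, 64

    # Fast-changing synthetic scene: a bright spot sweeping across the frame.
    t_total = n_gates * slice_len
    scene = np.zeros((t_total, height, width))
    for t in range(t_total):
        scene[t, height // 2, int(width * t / t_total)] = 1.0

    # One sensor readout: each gate opening integrates its slice of time,
    # stored in its own column block of the same captured frame.
    readout = np.zeros((height, width * n_gates))
    for g in range(n_gates):
        sub_frame = scene[g * slice_len:(g + 1) * slice_len].sum(axis=0)
        readout[:, g * width:(g + 1) * width] = sub_frame

    # "Playing back" the readout recovers a short movie from a single exposure.
    movie = [readout[:, g * width:(g + 1) * width] for g in range(n_gates)]
    print(len(movie), "frames recovered from one exposure")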

  • Evolution wired human brains to act like supercomputers

    Scientists have confirmed that human brains are naturally wired to perform advanced calculations, much like a high-powered computer, to make sense of the world through a process known as Bayesian inference.
    In a study published in the journal Nature Communications, researchers from the University of Sydney, University of Queensland and University of Cambridge developed a specific mathematical model that closely matches how human brains work when interpreting what they see. The model contained everything needed to carry out Bayesian inference.
    Bayesian inference is a statistical method that combines prior knowledge with new evidence to make intelligent guesses. For example, if you know what a dog looks like and you see a furry animal with four legs, you might use your prior knowledge to guess it’s a dog.
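    The dog example can be written out numerically with Bayes’ rule, which multiplies each prior belief by how well it explains the new evidence and then renormalizes. The numbers below are purely illustrative and are not taken from the study.

    # Prior beliefs about which animal you might encounter (illustrative numbers).
    prior = {"dog": 0.5, "cat": 0.3, "bird": 0.2}

    # Likelihood of the evidence "furry with four legs" for each animal.
    likelihood = {"dog": 0.95, "cat": 0.90, "bird": 0.01}

    # Bayes' rule: the posterior is proportional to prior times likelihood.
    unnormalized = {animal: prior[animal] * likelihood[animal] for animal in prior}
    total = sum(unnormalized.values())
    posterior = {animal: value / total for animal, value in unnormalized.items()}

    # Belief shifts toward "dog" (~0.64) and all but rules out "bird" (~0.003).
    print(posterior)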
    This inherent capability enables people to interpret the environment with extraordinary precision and speed, unlike machines that can be bested by simple CAPTCHA security measures when prompted to identify fire hydrants in a panel of images.
    The study’s senior investigator Dr Reuben Rideaux, from the University of Sydney’s School of Psychology, said: “Despite the conceptual appeal and explanatory power of the Bayesian approach, how the brain calculates probabilities is largely mysterious.”
    “Our new study sheds light on this mystery. We discovered that the basic structure and connections within our brain’s visual system are set up in a way that allows it to perform Bayesian inference on the sensory data it receives.
    “What makes this finding significant is the confirmation that our brains have an inherent design that allows this advanced form of processing, enabling us to interpret our surroundings more effectively.”
    The study’s findings not only confirm existing theories about the brain’s use of Bayesian-like inference but open doors to new research and innovation, where the brain’s natural ability for Bayesian inference can be harnessed for practical applications that benefit society.

  • Take the money now or later? Financial scarcity doesn’t lead to poor decision making

    When people feel that their resources are scarce — that they don’t have enough money or time to meet their needs — they often make decisions that favor short-term gains over long-term benefits. Because of that, researchers have argued that scarcity pushes people to make myopic, impulsive decisions. But a study published by the American Psychological Association provides support for a different, less widely held view: People experiencing scarcity make reasonable decisions based on their circumstances, and only prioritize short-term benefits over long-term gains when scarcity threatens their more immediate needs.
    “This research challenges the predominant view that when people feel poor or live in poverty, they become impatient and shortsighted and can’t or don’t think about the future,” said study co-author Eesha Sharma, Ph.D., of San Diego State University. “It provides a framework, instead, for understanding that when people are experiencing financial scarcity, they’re trying to make the best decision they can, given the circumstances they’re in.”
    The research was published in the Journal of Personality and Social Psychology.
    Sharma and co-authors Stephanie Tully, Ph.D., of the University of Southern California, and Xiang Wang, Ph.D., of Lingnan University in Hong Kong, wanted to distinguish between two competing ideas: that people’s preference for shorter-term gains reflects impatience and impulsivity, or that it reflects more intentional, deliberate decision-making. To do so, they examined how people’s decisions change depending on the timeline of the needs that they feel they don’t have enough money for.
    “Needs exist across a broad time horizon,” said Tully. “We often think about immediate needs like food or shelter, but people can experience scarcity related to future needs, too, such as replacing a run-down car before it dies, buying a house or paying for college. Yet research on scarcity has focused almost exclusively on immediate needs.”
    In the current study, the researchers conducted five experiments in which they measured or induced a sense of scarcity in participants, and examined how the choices people made changed depending on whether that scarcity was related to a shorter- or longer-term need.
    Overall, they found that when people feel that they don’t have enough resources to meet an immediate need, such as food or shelter, they are more likely to make decisions that offer an immediate payout, even if it comes at the expense of receiving a larger payout later. But when scarcity threatens a longer-term need, such as replacing a run-down car, people experiencing scarcity are no less willing to wait for larger, later rewards — and in some cases are more willing to wait — compared with people not experiencing scarcity.

  • Images of simulated cities help artificial intelligence to understand real streetscapes

    Recent advances in artificial intelligence and deep learning have revolutionized many industries, and might soon help recreate your neighborhood as well. Deep-learning models that analyze images of a landscape can help urban landscapers visualize plans for redevelopment, thereby improving scenery or preventing costly mistakes.
    To accomplish this, however, models must be able to correctly identify and categorize each element in a given image. This step, called instance segmentation, remains challenging for machines owing to a lack of suitable training data. Although it is relatively easy to collect images of a city, generating the ‘ground truth’, that is, the labels that tell the model if its segmentation is correct, involves painstakingly segmenting each image, often by hand.
    Now, to address this problem, researchers at Osaka University have developed a way to train these data-hungry models using computer simulation. First, a realistic 3D city model is used to generate the segmentation ground truth. Then, an image-to-image model generates photorealistic images from the ground truth images. The result is a dataset of realistic images similar to those of an actual city, complete with precisely generated ground-truth labels that do not require manual segmentation.
    “Synthetic data have been used in deep learning before,” says lead author Takuya Kikuchi. “But most landscape systems rely on 3D models of existing cities, which remain hard to build. We also simulate the city structure, but we do it in a way that still generates effective training data for models in the real world.”
    After the 3D model of a realistic city is generated procedurally, segmentation images of the city are created with a game engine. Finally, a generative adversarial network, which is a neural network that uses game theory to learn how to generate realistic-looking images, is trained to convert images of shapes into images with realistic city textures. This image-to-image model creates the corresponding street-view images.
    “This removes the need for datasets of real buildings, which are not publicly available. Moreover, several individual objects can be separated, even if they overlap in the image,” explains corresponding author Tomohiro Fukuda. “But most importantly, this approach saves human effort, and the costs associated with that, while still generating good training data.”
    To prove this, a segmentation model called a ‘mask region-based convolutional neural network’ was trained on the simulated data and another was trained on real data. The models performed similarly on instances of large, distinct buildings, even though the time to produce the dataset was reduced by 98%.
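    As a rough sketch of what such a training setup can look like in practice, the code below fine-tunes torchvision’s off-the-shelf Mask R-CNN on synthetic image-and-mask pairs. The two-class label set and the `synthetic_loader` data loader are hypothetical placeholders standing in for the Osaka team’s actual dataset and pipeline, which are not spelled out here.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    num_classes = 2  # background + "building" (assumed label set for illustration)

    # Start from a pretrained Mask R-CNN and swap in heads for our class set.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    mask_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_channels, 256, num_classes)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    def train_one_epoch(model, synthetic_loader):
        # synthetic_loader is a placeholder DataLoader yielding (images, targets),
        # where each target holds "boxes", "labels" and "masks" generated from the
        # simulated city rather than hand-labeled photographs.
        model.train()
        for images, targets in synthetic_loader:
            loss_dict = model(images, targets)  # dict of detection and mask losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()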
    The researchers plan to see if improvements to the image-to-image model increase performance under more conditions. For now, this approach generates large amounts of data with an impressively low amount of effort. The researchers’ achievement will address current and upcoming shortages of training data, reduce costs associated with dataset preparation and help to usher in a new era of deep learning-assisted urban landscaping.