More stories

  • Breakthrough in tackling increasing demand on mobile networks from the ‘internet of things’

    University of Leicester computer scientists have developed a novel technology that uses terahertz frequencies to manage demand on mobile networks from multiple users.
    As an explosion of devices joins the ‘internet of things’, this solution could not only improve speed and power consumption for users of mobile devices, but could also help deliver the benefits of the next generation of mobile technology, 6G.
    The researchers detail the technology in a new study in IEEE Transactions on Communications.
    Demands on the UK’s mobile telecommunications network are growing, with Mobile UK estimating that twenty-five million devices are connected to mobile networks, a number expected to rise to thirty billion by 2030. As the ‘internet of things’ grows, more and more technology will be competing for access to those networks.
    State-of-the-art telecommunication technologies are well established for current 5G applications, but as the number of users and devices grows, these systems suffer slower connections and costly energy consumption. They also suffer from self-interference, which severely degrades communication quality and efficiency. To address these challenges, a technique known as multicarrier-division duplex (MDD) has recently been proposed and studied; it allows a receiver in the network to be nearly free of self-interference in the digital domain by relying only on fast Fourier transform (FFT) processing.
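    The release does not include the underlying implementation, but the core idea of MDD (uplink and downlink occupying disjoint subcarrier sets, so that a single FFT separates them at the receiver) can be sketched in a few lines of Python. The subcarrier count, QPSK mapping, and interference level below are illustrative assumptions, not the study's parameters.

    ```python
    # Minimal sketch of the MDD idea: uplink and downlink use disjoint
    # subcarrier sets, so one FFT lets the receiver read its desired bins
    # while self-interference stays in the others. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64                                   # subcarriers per OFDM symbol (assumed)
    dl_bins = np.arange(0, N, 2)             # downlink bins (self-interference)
    ul_bins = np.arange(1, N, 2)             # uplink bins (desired signal)

    def qpsk(n):
        return (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

    dl = np.zeros(N, complex); dl[dl_bins] = qpsk(dl_bins.size)
    ul = np.zeros(N, complex); ul[ul_bins] = qpsk(ul_bins.size)

    # Received time-domain mixture: strong self-interference + uplink + noise.
    rx = (10 * np.fft.ifft(dl)               # own transmission, 20 dB stronger
          + np.fft.ifft(ul)
          + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

    rx_f = np.fft.fft(rx)                    # one FFT separates the two sets
    recovered = rx_f[ul_bins]                # nearly interference-free uplink
    print(np.abs(recovered - ul[ul_bins]).max())   # small residual (noise only)
    ```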
    This project proposed a novel technology to optimise the assignment of subcarrier sets and the number of access-point clusters, improving communication quality across different networks. The team tested the technology in a simulation based on a real-world industrial setting and found that it outperformed existing approaches, achieving roughly a 10% reduction in power consumption compared with other state-of-the-art technologies.
    Lead Principal Investigator Professor Huiyu Zhou from the University of Leicester School of Computing and Mathematical Sciences said: “With our proposed technology, 5G/6G systems require less energy consumption, have faster device selection and less resource allocation. Users may feel their mobile communication is quicker, wider and with reduced power demands.
    “The University of Leicester is leading the development of AI solutions for device selection and access point clustering. AI technologies, reinforcement learning in particular, help us to search for the best parameters used in the proposed wireless communication systems quickly and effectively. This helps to save power, resources and human labour. Without using AI technologies, we will spend much more time on rendering the best parameters for system set-up and device selection in the network.”
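    As a loose illustration of that kind of search loop (not the team's actual method), a simple epsilon-greedy bandit can pick the access-point cluster count with the lowest estimated power cost; the candidate values and the power model below are made-up placeholders.

    ```python
    # A loose stand-in for a reinforcement-learning parameter search: an
    # epsilon-greedy bandit picks the access-point cluster count with the
    # lowest estimated power cost. Candidates and the power model are
    # made-up placeholders, not the study's system.
    import numpy as np

    rng = np.random.default_rng(1)
    candidates = [2, 4, 6, 8, 12]            # hypothetical cluster counts

    def measured_power(k):                   # placeholder for a network simulation
        return (k - 6) ** 2 + rng.normal(0, 1.0)

    q = np.zeros(len(candidates))            # running power estimate per option
    n = np.zeros(len(candidates))
    for _ in range(500):
        if rng.random() < 0.1:               # explore occasionally
            a = int(rng.integers(len(candidates)))
        else:                                # otherwise exploit the best so far
            a = int(np.argmin(q))
        n[a] += 1
        q[a] += (measured_power(candidates[a]) - q[a]) / n[a]   # incremental mean

    print("best cluster count found:", candidates[int(np.argmin(q))])
    ```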
    The team is now continuing to optimise the proposed technologies and to reduce the computational complexity of the technique. The source code of the proposed method has been published openly to promote further research.
    The study forms part of the EU-funded 6G BRAINS project, which will develop an AI-driven self-learning platform to intelligently and dynamically allocate resources, enhancing capacity and reliability, and improving positioning accuracy while decreasing latency of response for future industrial applications of massive scale and varying demands. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101017226.

  • Crabs left the sea not once, but several times, in their evolution

    Most terrestrial plants and animals left the ocean a single time in their evolutionary history to live ashore. But crabs have seemingly scuttled out of the sea more than a dozen times, with at least two groups later reverting to a marine lifestyle, a study finds.

    The research, published November 6 in Systematic Biology, sheds new light on the evolutionary history of the group Brachyura, which comprises roughly 7,600 species of “true crabs,” and presents the most comprehensive evolutionary tree yet created for the group. The study also offers clues about how other early invertebrates may have evolved a terrestrial lifestyle, researchers say.

    Unlike for well-studied animals such as birds and mammals, a unified crab tree of life has been lacking, says Kristin Hultgren, an invertebrate zoologist at Seattle University. “While the authors have developed a useful framework for understanding the complexity of transitioning to terrestrial life, one of the most important contributions is the extensive, well-dated evolutionary tree.”

    Crabs are an extremely diverse group and have colonized nearly every type of habitat on Earth. It’s been a challenge to study when crabs first shifted from one habitat to another during evolution because, like some other invertebrates, crabs don’t have the extensive fossil trail that early vertebrates do, says Joanna Wolfe, an evolutionary biologist at Harvard University.

    Past research has also often treated marine, freshwater and land crabs as discrete subgroups when they’re more like a continuum, Wolfe says. “They’re not distinct and actually have a lot in common, and looking at them together helps trace their evolution.”

    Wolfe and her colleagues collected genetic data from 333 species of crabs in the group Brachyura. These crustaceans are evolutionarily distinct from, though closely related to, another group of crustaceans that independently evolved crablike bodies and are often erroneously called crabs, including hermit crabs and king crabs.

    The team then combined that genetic data with dozens of fossils to generate a crab evolutionary tree, layering on details about each species’ life history and adaptations for living on land to reconstruct a possible timeline of when crabs colonized drier ground.

    True crabs diverged from other crustacean lineages roughly 230 million years ago during the Triassic Period, the researchers found, refining previous estimates. Over the next hundred or so million years, brachyurans diversified widely during a period previously dubbed the “Cretaceous crab revolution.”

    The study also showed that during their evolution, crabs appear to have adapted to a more terrestrial lifestyle as many as 17 times, either by shifting from the ocean to the intertidal zone or to similarly salty habitats like mangroves, or by colonizing freshwater estuaries and rivers en route to land. In at least two cases, crabs reverted to a marine lifestyle long after they’d left.

    The number of times that crabs independently left the ocean is “astonishing,” says Katie Davis, an evolutionary paleobiologist at the University of York in England who was not involved in the research. “And it’s really fantastic that molecular biology, fossils and modern numerical techniques can be combined to provide insight into previously unanswerable questions.”

    The study also hints at what other early arthropods that ventured onto the land may have been like, Wolfe says. Past studies have shown that crabs and insects share a common, if unknown, aquatic ancestor. By looking at the types of crabs that successfully left the ocean, it’s possible to guess at what adaptations early insects might have needed to do the same. Modern crabs living out of the water today, for example, excel at keeping themselves from drying out and have limited their dependence on water for reproduction.

    “If you’re going to be the first proto-insect to come out of the ocean … you’re probably going to need those kinds of adaptations,” Wolfe says.

  • Shedding light on unique conduction mechanisms in a new type of perovskite oxide

    The remarkable proton and oxide-ion (dual-ion) conductivities of hexagonal perovskite-related oxide Ba7Nb3.8Mo1.2O20.1 are promising for next-generation electrochemical devices, as reported by scientists at Tokyo Tech. The unique ion-transport mechanisms they unveiled will hopefully pave the way for better dual-ion conductors, which could play an essential role in tomorrow’s clean energy technologies.
    Clean energy technologies are the cornerstone of sustainable societies, and solid-oxide fuel cells (SOFCs) and proton ceramic fuel cells (PCFCs) are among the most promising types of electrochemical devices for green power generation. These devices, however, still face challenges that hinder their development and adoption.
    Ideally, SOFCs should be operated at low temperatures to prevent unwanted chemical reactions from degrading their constituent materials. Unfortunately, most known oxide-ion conductors, a key component of SOFCs, only exhibit decent ionic conductivity at elevated temperatures. As for PCFCs, not only are they chemically unstable under carbon dioxide atmospheres, but they also require energy-intensive, high-temperature processing steps during manufacture.
    Fortunately, there is a type of material that can solve these problems by combining the benefits of both SOFCs and PCFCs: dual-ion conductors. By supporting the diffusion of both protons and oxide ions, dual-ion conductors can realize high total conductivity at lower temperatures and improve the performance of electrochemical devices. Although some perovskite-related dual-ion conducting materials such as Ba7Nb4MoO20 have been reported, their conductivities are not high enough for practical applications, and their underlying conducting mechanisms are not well understood.
    Against this backdrop, a research team led by Professor Masatomo Yashima from Tokyo Institute of Technology, Japan, decided to investigate the conductivity of materials similar to Ba7Nb4MoO20 but with a higher Mo fraction (that is, Ba7Nb4-xMo1+xO20+x/2). Their latest study, which was conducted in collaboration with the Australian Nuclear Science and Technology Organisation (ANSTO), the High Energy Accelerator Research Organization (KEK), and Tohoku University, was published in Chemistry of Materials.
    After screening various Ba7Nb4-xMo1+xO20+x/2 compositions, the team found that Ba7Nb3.8Mo1.2O20.1 had remarkable proton and oxide-ion conductivities. “Ba7Nb3.8Mo1.2O20.1 exhibited bulk conductivities of 11 mS/cm at 537 ℃ under wet air and 10 mS/cm at 593 ℃ under dry air. Total direct current conductivity at 400 ℃ in wet air of Ba7Nb3.8Mo1.2O20.1 was 13 times higher than that of Ba7Nb4MoO20, and the bulk conductivity in dry air at 306 ℃ is 175 times higher than that of the conventional yttria-stabilized zirconia (YSZ),” highlights Prof. Yashima.
    Next, the researchers sought to shed light on the underlying mechanisms behind these high conductivity values. To this end, they conducted ab initio molecular dynamics (AIMD) simulations, neutron diffraction experiments, and neutron scattering length density analyses. These techniques enabled them to study the structure of Ba7Nb3.8Mo1.2O20.1 in greater detail and determine what makes it special as a dual-ion conductor.
    Interestingly, the team found that the high oxide-ion conductivity of Ba7Nb3.8Mo1.2O20.1 originates from a unique phenomenon. It turns out that adjacent MO5 monomers in Ba7Nb3.8Mo1.2O20.1 can form M2O9 dimers by sharing an oxygen atom on one of their corners (M = Nb or Mo cation). The breaking and reforming of these dimers gives rise to ultrafast oxide-ion movement in a manner analogous to a long line of people relaying buckets of water (oxide ions) from one person to the next. Furthermore, the AIMD simulations revealed that the observed high proton conduction was due to efficient proton migration in the hexagonal close-packed BaO3 layers in the material.
    Taken together, the results of this study highlight the potential of perovskite-related dual-ion conductors and could serve as guidelines for the rational design of these materials. “The present findings of high conductivities and unique ion migration mechanisms in Ba7Nb3.8Mo1.2O20.1 will help the development of science and engineering of oxide-ion, proton, and dual-ion conductors,” concludes a hopeful Prof. Yashima.

  • Future of brain-inspired AI as Python code library passes major milestone

    Four years ago, UC Santa Cruz’s Jason Eshraghian developed a Python library that combines neuroscience with artificial intelligence to create spiking neural networks, a machine learning method that takes inspiration from the brain’s ability to efficiently process data. Now, his open source code library, called “snnTorch,” has surpassed 100,000 downloads and is used in a wide variety of projects, from NASA satellite tracking efforts to semiconductor companies optimizing chips for AI.
    A new paper published in the journal Proceedings of the IEEE not only documents the coding library but is also intended to be a candid educational resource for students and any other programmers interested in learning about brain-inspired AI.
    “It’s exciting because it shows people are interested in the brain, and that people have identified that neural networks are really inefficient compared to the brain,” said Eshraghian, an assistant professor of electrical and computer engineering. “People are concerned about the environmental impact [of the costly power demands] of neural networks and large language models, and so this is a very plausible direction forward.”
    Building snnTorch
    Spiking neural networks emulate the brain and biological systems to process information more efficiently. The brain’s neurons are at rest until there is a piece of information for them to process, which causes their activity to spike. Similarly, a spiking neural network only begins processing data when there is an input into the system, rather than constantly processing data like traditional neural networks.
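    snnTorch's documented building blocks make this event-driven behaviour concrete. The minimal sketch below wires two leaky integrate-and-fire (LIF) layers into a network whose neurons emit spikes only when their membrane potential crosses a threshold; the layer sizes and number of time steps are arbitrary choices for illustration.

    ```python
    # A minimal spiking network using snnTorch's leaky integrate-and-fire
    # neurons, following the library's documented usage pattern.
    import torch
    import torch.nn as nn
    import snntorch as snn

    class SpikingNet(nn.Module):
        def __init__(self, beta=0.9):
            super().__init__()
            self.fc1 = nn.Linear(784, 100)
            self.lif1 = snn.Leaky(beta=beta)   # membrane decay rate beta
            self.fc2 = nn.Linear(100, 10)
            self.lif2 = snn.Leaky(beta=beta)

        def forward(self, x, num_steps=25):
            mem1 = self.lif1.init_leaky()      # reset membrane potentials
            mem2 = self.lif2.init_leaky()
            spikes = []
            for _ in range(num_steps):         # neurons fire only when their
                spk1, mem1 = self.lif1(self.fc1(x), mem1)     # membrane
                spk2, mem2 = self.lif2(self.fc2(spk1), mem2)  # crosses threshold
                spikes.append(spk2)
            return torch.stack(spikes)         # [num_steps, batch, 10] spike trains

    out = SpikingNet()(torch.rand(1, 784))
    print(out.sum())                           # total output spikes over time
    ```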
    “We want to take all the benefits of the brain and its power efficiency and smush them into the functionality of artificial intelligence — so taking the best of both worlds,” Eshraghian said.
    Eshraghian began building the code for a spiking neural network as a passion project during the pandemic, partly as a way to teach himself Python. A chip designer by training, he became interested in learning to code after realizing that computing chips could be optimized for power efficiency by co-designing the software and the hardware to ensure they best complement each other.

    Now, snnTorch is being used by thousands of programmers around the world on a variety of projects, supporting everything from NASA’s satellite tracking projects to major chip designers such as Graphcore.
    While building the Python library, Eshraghian created code documentation and educational materials, which came naturally to him as he taught himself the language. The documents, tutorials, and interactive coding notebooks he made exploded in popularity within the community and became the first point of entry for many people learning about neuromorphic engineering and spiking neural networks, which he sees as one of the major reasons his library became so popular.
    An honest resource
    Knowing that these educational materials could be very valuable to the growing community of computer scientists and beyond who were interested in the field, Eshraghian began compiling his extensive documentation into a paper, which has now been published in the Proceedings of the IEEE, a leading computing journal.
    The paper acts as a companion to the snnTorch code library and is structured like a tutorial, and an opinionated one at that, discussing uncertainty among brain-inspired deep learning researchers and offering a perspective on the future of the field. Eshraghian said the paper is intentionally upfront with its readers that the field of neuromorphic computing is evolving and unsettled, in an effort to save students the frustration of hunting for a theoretical basis for code decisions that the research community itself does not yet understand.
    “This paper is painfully honest, because students deserve that,” Eshraghian said. “There’s a lot of things that we do in deep learning, and we just don’t know why they work. A lot of times we want to claim that we did something intentionally, and we published because we went through a series of rigorous experiments, but here we say just: this is what works best and we have no idea why.”
    The paper contains blocks of code, a format unusual for research papers. The code blocks are sometimes accompanied by explanations acknowledging that certain areas remain largely unsettled, while offering insight into why researchers think certain approaches may succeed. Eshraghian said he has seen a positive reception to this honest approach in the community, and has even been told that the paper is being used in onboarding materials at neuromorphic hardware startups.

    “I don’t want my research to put people through the same pain I went through,” he said.
    Learning from and about the brain
    The paper offers a perspective on how researchers in the field might navigate some of the limitations of brain-inspired deep learning that stem from the fact that overall, our understanding of how the brain functions and processes information is quite limited.
    For AI researchers to move toward more brain-like learning mechanisms for their deep learning models, they need to identify the correlations and discrepancies between deep learning and biology, Eshraghian said. One of these key differences is that brains can’t survey all of the data they’ve ever inputted in the way that AI models can, and instead focus on the real-time data that comes their way, which could offer opportunities for enhanced energy efficiency.
    “Brains aren’t time machines, they can’t go back — all your memories are pushed forward as you experience the world, so training and processing are coupled together,” Eshraghian said. “One of the things that I make a big deal of in the paper is how we can apply learning in real time.”
    Another area of exploration in the paper is a fundamental concept in neuroscience which holds that neurons that fire together wire together: when two neurons are triggered to send out a signal at the same time, the pathway between them is strengthened. However, the ways in which the brain learns on an organ-wide scale remain mysterious.
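    The rule itself is simple to state in code. The toy sketch below applies a plain Hebbian update, strengthening a weight in proportion to how often its two neurons are active together; the spike data and learning rate are made up for illustration.

    ```python
    # Toy Hebbian update ("fire together, wire together"): a weight grows in
    # proportion to how often its pre- and postsynaptic neurons are co-active.
    import numpy as np

    rng = np.random.default_rng(2)
    pre = (rng.random((1000, 4)) > 0.5).astype(float)    # presynaptic spikes
    post = (rng.random((1000, 3)) > 0.5).astype(float)   # postsynaptic spikes

    eta = 0.01                                           # learning rate
    w = np.zeros((4, 3))
    for x, y in zip(pre, post):
        w += eta * np.outer(x, y)     # strengthen only co-active pairs
    print(w.round(2))                 # weights track co-activation frequency
    ```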
    The “fire together, wire together” concept has traditionally been seen as at odds with deep learning’s model training method known as backpropagation, but Eshraghian suggests that the two processes may be complementary, opening up new areas of exploration for the field.
    Eshraghian is also excited about working with cerebral organoids, which are models of brain tissue grown from stem cells, to learn more about how the brain processes information. He’s currently collaborating with biomolecular engineering researchers in the UCSC Genomics Institute’s Braingeneers group to explore these questions with organoid models. This is a unique opportunity for UC Santa Cruz engineers to incorporate “wetware” — a term referring to biological models for computing research — into the software/hardware co-design paradigm that is prevalent in the field. The snnTorch code could even provide a platform for simulating organoids, which can be difficult to maintain in the lab.
    “[The Braingeneers] are building the biological instruments and tools that we can use to get a better feel for how learning can happen, and how that might translate in order to make deep learning more efficient,” Eshraghian said.
    Brain-inspired learning at UCSC and beyond
    Eshraghian is now using the concepts developed in his library and the recent paper in his class on neuromorphic computing at UC Santa Cruz, called “Brain-Inspired Deep Learning.” Undergraduate and graduate students across a range of academic disciplines take the class to learn the basics of deep learning and complete a project in which they write their own tutorial for, and potentially contribute to, snnTorch.
    “It’s not just kind of coming out of the class with an exam or getting an A plus, it’s now making a contribution to something, and being able to say that you’ve done something tangible,” Eshraghian said.
    Meanwhile, the preprint version of the recent IEEE paper continues to receive contributions from researchers around the world, a reflection of the dynamic, open-source nature of the field. A new NSF grant he is a co-principal investigator on will support students’ ability to attend the month-long Telluride Neuromorphic & Cognition Engineering workshop.
    Eshraghian is collaborating with people to push the field in a number of ways, from making biological discoveries about the brain, to pushing the limits of neuromorphic chips to handle low-power AI workloads, to facilitating collaboration to bring the spiking neural network-style of computing to other domains such as natural physics.
    Discord and Slack channels dedicated to discussing the spiking neural network code support a thriving environment of collaboration across industry and academia. Eshraghian even recently came across a job posting that listed proficiency in snnTorch as a desired quality.

  • Dams now run smarter with AI

    In August 2020, following a period of prolonged drought and intense rainfall, a dam situated near the Seomjin River in Korea experienced overflow during a water release, resulting in damages exceeding 100 billion won (USD 76 million). The flooding was attributed to maintaining the dam’s water level 6 meters higher than the norm. Could this incident have been averted through predictive dam management?
    A research team led by Professor Jonghun Kam and Eunmi Lee, a PhD candidate, from the Division of Environmental Science & Engineering at Pohang University of Science and Technology (POSTECH), recently employed deep learning techniques to scrutinize dam operation patterns and assess their effectiveness. Their findings were published in the Journal of Hydrology.
    Korea faces a precipitation peak during the summer, relying on dams and associated infrastructure for water management. However, the escalating global climate crisis has led to the emergence of unforeseen typhoons and droughts, complicating dam operations. In response, a new study has emerged, aiming to surpass conventional physical models by harnessing the potential of an artificial intelligence (AI) model trained on extensive big data.
    The team focused on crafting an AI model that could not only predict the operational patterns of dams within the Seomjin River basin (specifically the Seomjin River Dam, Juam Dam, and Juam Control Dam) but also reveal the decision-making processes of the trained models. Their objective was to formulate a scenario outlining a methodology for forecasting dam water levels. The team employed a gated recurrent unit (GRU) model, a deep learning algorithm, training it on data spanning 2002 to 2021 from dams along the Seomjin River. Precipitation, inflow, and outflow data served as inputs, while hourly dam water levels served as outputs. The analysis demonstrated remarkable accuracy, with an efficiency index exceeding 0.9.
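    The paper's exact architecture is not reproduced here, but the setup described above maps naturally onto a small PyTorch model: a GRU reads hourly (precipitation, inflow, outflow) sequences and predicts water levels. The hidden size, sequence length, and random stand-in data below are assumptions.

    ```python
    # Hedged PyTorch sketch of the setup described above: a GRU maps hourly
    # (precipitation, inflow, outflow) sequences to dam water levels.
    import torch
    import torch.nn as nn

    class DamLevelGRU(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.gru = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)   # water level at each hour

        def forward(self, x):                  # x: [batch, hours, 3]
            out, _ = self.gru(x)
            return self.head(out).squeeze(-1)  # [batch, hours]

    model = DamLevelGRU()
    x = torch.rand(8, 72, 3)                   # 8 samples, 72 hours of inputs
    y = torch.rand(8, 72)                      # observed levels (stand-in data)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):                       # minimal training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    ```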
    Subsequently, the team devised explainable scenarios, perturbing each input variable by −40%, −20%, +20%, and +40% to examine how the trained GRU model responded. While changes in precipitation had a negligible impact on dam water levels, variations in inflow significantly influenced the dam’s water level. Notably, an identical change in outflow yielded different water levels at different dams, confirming that the GRU model had effectively learned the unique operational nuances of each dam.
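    Continuing the sketch above (it reuses the `model` and `x` defined there), the scenario analysis can be mimicked by scaling one input channel at a time and comparing predictions with the baseline; the channel ordering is the assumption made earlier.

    ```python
    # Perturb each input channel by -40%, -20%, +20%, +40% and measure the
    # shift in predicted water levels (continues the sketch above).
    import torch

    with torch.no_grad():
        base = model(x)
        for name, ch in [("precipitation", 0), ("inflow", 1), ("outflow", 2)]:
            for scale in (0.6, 0.8, 1.2, 1.4):
                xp = x.clone()
                xp[:, :, ch] *= scale
                shift = (model(xp) - base).abs().mean().item()
                print(f"{name} x{scale}: mean water-level shift {shift:.4f}")
    ```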
    Professor Jonghun Kam remarked, “Our examination delved beyond predicting the patterns of dam operations to scrutinize their effectiveness using AI models. We introduced a methodology aimed at indirectly understanding the decision-making process of an AI-based black-box model determining dam water levels.” He further stated, “Our aspiration is that this insight will contribute to a deeper understanding of dam operations and enhance their efficiency in the future.”
    The research was sponsored by the Mid-career Researcher Program of the National Research Foundation of Korea.

  • The mind’s eye of a neural network system

    In the background of image recognition software that can ID our friends on social media and wildflowers in our yard are neural networks, a type of artificial intelligence inspired by how our own brains process data. While neural networks sprint through data, their architecture makes it difficult to trace the origin of errors that are obvious to humans, like confusing a Converse high-top with an ankle boot, limiting their use in more vital work like health care image analysis or research. A new tool developed at Purdue University makes finding those errors as simple as spotting mountaintops from an airplane.
    “In a sense, if a neural network were able to speak, we’re showing you what it would be trying to say,” said David Gleich, a Purdue professor of computer science in the College of Science who developed the tool, which is featured in a paper published in Nature Machine Intelligence. “The tool we’ve developed helps you find places where the network is saying, ‘Hey, I need more information to do what you’ve asked.’ I would advise people to use this tool on any high-stakes neural network decision scenarios or image prediction task.”
    Code for the tool is available on GitHub, as are use case demonstrations. Gleich collaborated on the research with Tamal K. Dey, also a Purdue professor of computer science, and Meng Liu, a former Purdue graduate student who earned a doctorate in computer science.
    In testing their approach, Gleich’s team caught neural networks mistaking the identity of images in databases of everything from chest X-rays and gene sequences to apparel. In one example, a neural network repeatedly mislabeled images of cars from the Imagenette database as cassette players. The reason? The pictures were drawn from online sales listings and included tags for the cars’ stereo equipment.
    Neural network image recognition systems are essentially algorithms that process data in a way that mimics the weighted firing pattern of neurons as an image is analyzed and identified. A system is trained to its task — such as identifying an animal, a garment or a tumor — with a “training set” of images that includes data on each pixel, tagging and other information, and the identity of the image as classified within a particular category. Using the training set, the network learns, or “extracts,” the information it needs in order to match the input values with the category. This information, a string of numbers called an embedded vector, is used to calculate the probability that the image belongs to each of the possible categories. Generally speaking, the correct identity of the image is within the category with the highest probability.
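    A hedged sketch of that pipeline, with illustrative sizes rather than the paper's specific networks: a backbone reduces the input to a 128-number embedded vector, and a linear layer plus softmax converts it into per-category probabilities.

    ```python
    # Sketch of the pipeline described above: input data -> 128-number
    # embedded vector -> per-category probabilities. Sizes are illustrative.
    import torch
    import torch.nn as nn

    backbone = nn.Sequential(nn.Flatten(), nn.Linear(100_000, 128))  # embedding
    classifier = nn.Linear(128, 10)                                  # 10 categories

    x = torch.rand(1, 100_000)                  # stand-in for pixel/tag data
    embedding = backbone(x)                     # 128 numbers, no physical meaning
    probs = torch.softmax(classifier(embedding), dim=-1)
    print(probs.argmax(dim=-1))                 # highest-probability category
    ```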
    But the embedded vectors and probabilities don’t correlate to a decision-making process that humans would recognize. Feed in 100,000 numbers representing the known data, and the network produces an embedded vector of 128 numbers that don’t correspond to physical features, although they do make it possible for the network to classify the image. In other words, you can’t open the hood on the algorithms of a trained system and follow along. Between the input values and the predicted identity of the image is a proverbial “black box” of unrecognizable numbers across multiple layers.
    “The problem with neural networks is that we can’t see inside the machine to understand how it’s making decisions, so how can we know if a neural network is making a characteristic mistake?” Gleich said.

    Rather than trying to trace the decision-making path of any single image through the network, Gleich’s approach makes it possible to visualize the relationship that the computer sees among all the images in an entire database. Think of it like a bird’s-eye view of all the images as the neural network has organized them.
    The relationship among the images (such as the network’s predicted classification for each image in the database) is based on the embedded vectors and probabilities the network generates. To boost the resolution of the view and find places where the network can’t distinguish between two different classifications, Gleich’s team first developed a method of splitting and overlapping the classifications to identify where images have a high probability of belonging to more than one classification.
    The team then maps the relationships onto a Reeb graph, a tool taken from the field of topological data analysis. On the graph, each group of images the network thinks are related is represented by a single dot. Dots are color coded by classification. The closer the dots, the more similar the network considers groups to be, and most areas of the graph show clusters of dots in a single color. But groups of images with a high probability of belonging to more than one classification will be represented by two differently colored overlapping dots. With a single glance, areas where the network cannot distinguish between two classifications appear as a cluster of dots in one color, accompanied by a smattering of overlapping dots in a second color. Zooming in on the overlapping dots will show an area of confusion, like the picture of the car that’s been labeled both car and cassette player.
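    The Reeb-graph construction itself is involved, but its starting point (flagging images with a high probability of belonging to more than one classification) can be approximated in a few lines; the 0.3 threshold and the random stand-in predictions below are assumptions.

    ```python
    # Flag images whose predicted probabilities are high for more than one
    # class; the full method builds a Reeb graph from such overlaps.
    import numpy as np

    rng = np.random.default_rng(3)
    logits = rng.normal(size=(1000, 10))                           # stand-in predictions
    probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # softmax

    top2 = np.sort(probs, axis=1)[:, -2:]            # two largest probabilities
    ambiguous = np.where(top2.min(axis=1) > 0.3)[0]  # both classes plausible
    print(f"{ambiguous.size} images sit between two classifications")
    ```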
    “What we’re doing is taking these complicated sets of information coming out of the network and giving people an ‘in’ into how the network sees the data at a macroscopic level,” Gleich said. “The Reeb map represents the important things, the big groups and how they relate to each other, and that makes it possible to see the errors.”
    “Topological Structure of Complex Predictions” was produced with the support of the National Science Foundation and the U.S. Department of Energy.

  • Wearables capture body sounds to continuously monitor health

    During even the most routine visits, physicians listen to sounds inside their patients’ bodies: air moving in and out of the lungs, heartbeats and even digested food progressing through the long gastrointestinal tract. These sounds provide valuable information about a person’s health. And when these sounds subtly change or downright stop, it can signal a serious problem that warrants time-sensitive intervention.
    Now, Northwestern University researchers are introducing new soft, miniaturized wearable devices that go well beyond episodic measurements obtained during occasional doctor exams. Softly adhered to the skin, the devices continuously track these subtle sounds simultaneously and wirelessly at multiple locations across nearly any region of the body.
    The new study will be published on Thursday (Nov. 16) in the journal Nature Medicine.
    In pilot studies, researchers tested the devices on 15 premature babies with respiratory and intestinal motility disorders and 55 adults, including 20 with chronic lung diseases. Not only did the devices perform with clinical-grade accuracy, they also offered new functionalities that had not previously been developed or introduced into research or clinical care.
    “Currently, there are no existing methods for continuously monitoring and spatially mapping body sounds at home or in hospital settings,” said Northwestern’s John A. Rogers, a bioelectronics pioneer who led the device development. “Physicians have to put a conventional, or a digital, stethoscope on different parts of the chest and back to listen to the lungs in a point-by-point fashion. In close collaborations with our clinical teams, we set out to develop a new strategy for monitoring patients in real-time on a continuous basis and without encumbrances associated with rigid, wired, bulky technology.”
    “The idea behind these devices is to provide highly accurate, continuous evaluation of patient health and then make clinical decisions in the clinics or when patients are admitted to the hospital or attached to ventilators,” said Dr. Ankit Bharat, a thoracic surgeon at Northwestern Medicine, who led the clinical research in the adult subjects. “A key advantage of this device is to be able to simultaneously listen and compare different regions of the lungs. Simply put, it’s like up to 13 highly trained doctors listening to different regions of the lungs simultaneously with their stethoscopes, and their minds are synced to create a continuous and a dynamic assessment of the lung health that is translated into a movie on a real-life computer screen.”
    Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern’s McCormick School of Engineering and Northwestern University Feinberg School of Medicine. He also directs the Querrey Simpson Institute for Bioelectronics. Bharat is the chief of thoracic surgery and the Harold L. and Margaret N. Method Professor of Surgery at Feinberg. As the director of the Northwestern Medicine Canning Thoracic Institute, Bharat performed the first double-lung transplants on COVID-19 patients in the U.S. and started a first-of-its-kind lung transplant program for certain patients with stage 4 lung cancers.

    Comprehensive, non-invasive sensing network
    Containing pairs of high-performance, digital microphones and accelerometers, the small, lightweight devices gently adhere to the skin to create a comprehensive non-invasive sensing network. By simultaneously capturing sounds and correlating those sounds to body processes, the devices spatially map how air flows into, through and out of the lungs as well as how cardiac rhythm changes in varied resting and active states, and how food, gas and fluids move through the intestines.
    Encapsulated in soft silicone, each device measures 40 millimeters long, 20 millimeters wide and 8 millimeters thick. Within that small footprint, the device contains a flash memory drive, tiny battery, electronic components, Bluetooth capabilities and two tiny microphones — one facing inward toward the body and another facing outward toward the exterior. By capturing sounds in both directions, an algorithm can separate external (ambient or neighboring organ) sounds and internal body sounds.
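    The on-device separation algorithm is not spelled out in the article, but a standard way to use such a microphone pair is an adaptive filter that predicts the ambient leakage in the body-facing microphone from the outward-facing one and subtracts it. The sketch below uses a basic LMS filter; the filter length, step size, and synthetic signals are assumptions.

    ```python
    # Hedged sketch of two-microphone ambient suppression: an LMS adaptive
    # filter predicts the ambient component picked up by the body-facing mic
    # from the outward-facing mic and subtracts it.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 20_000
    ambient = rng.standard_normal(n)            # what the outward mic hears
    body = 0.3 * np.sin(np.arange(n) * 0.05)    # slow "body sound" of interest
    inward = body + 0.8 * np.roll(ambient, 5)   # body mic = body + leaked ambient

    taps, mu = 16, 0.005                        # filter length, step size (assumed)
    w = np.zeros(taps)
    clean = np.zeros(n)
    for i in range(taps, n):
        xs = ambient[i - taps:i][::-1]          # recent outward-mic samples
        e = inward[i] - w @ xs                  # subtract predicted leakage
        w += mu * e * xs                        # LMS weight update
        clean[i] = e

    print(np.corrcoef(clean[taps:], body[taps:])[0, 1])  # should approach 1
    ```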
    “Lungs don’t produce enough sound for a normal person to hear,” Bharat said. “They just aren’t loud enough, and hospitals can be noisy places. When there are people talking nearby or machines beeping, it can be incredibly difficult. An important aspect of our technology is that it can correct for those ambient sounds.”
    Not only does capturing ambient noise enable noise canceling, it also provides contextual information about the patients’ surrounding environments, which is particularly important when treating premature babies.
    “Irrespective of device location, the continuous recording of the sound environment provides objective data on the noise levels to which babies are exposed,” said Dr. Wissam Shalish, a neonatologist at the Montreal Children’s Hospital and co-first author of the paper. “It also offers immediate opportunities to address any sources of stressful or potentially compromising auditory stimuli.”
    Non-obtrusively monitoring babies’ breathing

    When developing the new devices, the researchers had two vulnerable communities in mind: premature babies in the neonatal intensive care unit (NICU) and post-surgery adults. In the third trimester of pregnancy, babies’ respiratory systems mature so that babies can breathe outside the womb. Babies born either before or in the earliest stages of the third trimester, therefore, are more likely to develop lung issues and disordered breathing complications.
    Particularly common in premature babies, apneas are a leading cause of prolonged hospitalization and potentially death. When apneas occur, infants either do not take a breath (due to immature breathing centers in the brain) or have an obstruction in their airway that restricts airflow. Some babies might even have a combination of the two. Yet, there are no current methods to continuously monitor airflow at the bedside and to accurately distinguish apnea subtypes, especially in these most vulnerable infants in the clinical NICU.
    “Many of these babies are smaller than a stethoscope, so they are already technically challenging to monitor,” said Dr. Debra E. Weese-Mayer, a study co-author, chief of autonomic medicine at Ann & Robert H. Lurie Children’s Hospital of Chicago and the Beatrice Cummings Mayer Professor of Autonomic Medicine at Feinberg. “The beauty of these new acoustic devices is they can non-invasively monitor a baby continuously — during wakefulness and sleep — without disturbing them. These acoustic wearables provide the opportunity to safely and non-obtrusively determine each infant’s ‘signature’ pertinent to their air movement (in and out of airway and lungs), heart sounds and intestinal motility day and night, with attention to circadian rhythmicity. And these wearables simultaneously monitor ambient noise that might affect the internal acoustic ‘signature’ and/or introduce other stimuli that might affect healthy growth and development.”
    In collaborative studies conducted at the Montreal Children’s Hospital in Canada, health care workers placed the acoustic devices on babies just below the suprasternal notch at the base of the throat. Devices successfully detected the presence of airflow and chest movements and could estimate the degree of airflow obstruction with high reliability, therefore allowing identification and classification of all apnea subtypes.
    “When placed on the suprasternal notch, the enhanced ability to detect and classify apneas could lead to more targeted and personalized care, improved outcomes and reduced length of hospitalization and costs,” Shalish said. “When placed on the right and left chest of critically ill babies, the real-time feedback transmitted whenever the air entry is diminished on one side relative to the other could promptly alert clinicians of a possible pathology necessitating immediate intervention.”
    Tracking infant digestion
    In children and infants, cardiorespiratory and gastrointestinal problems are major causes of death during the first five years of life. Gastrointestinal issues, in particular, are accompanied by reduced bowel sounds, which could be used as an early warning sign of digestion issues, intestinal dysmotility and potential obstructions. So, as part of the pilot study in the NICU, the researchers used the devices to monitor these sounds.
    In the study, premature babies wore sensors at four locations across their abdomen. Early results aligned with measurements of adult intestinal motility using wire-based systems, which is the current standard of care.
    “When placed on the abdomen, the automatic detection of reduced bowel sounds could alert the clinician of an impending (sometimes life-threatening) gastrointestinal complication,” Shalish said. “While improved bowel sounds could indicate signs of bowel recovery, especially after a gastrointestinal surgery.”
    “Intestinal motility has its own acoustic patterns and tonal qualities,” Weese-Mayer said. “Once an individual patient’s acoustic ‘signature’ is characterized, deviations from that personalized signature have potential to alert the individual and health care team to impending ill health, while there is still time for intervention to restore health.”
    In addition to offering continuous monitoring, the devices also untethered NICU babies from the variety of sensors, wires and cables connected to bedside monitors.
    Mapping a single breath
    Accompanying the NICU study, researchers tested the devices on adults: 35 patients with chronic lung diseases and 20 healthy controls. In all subjects, the devices captured the distribution of lung sounds and body motions at various locations simultaneously, enabling researchers to analyze a single breath across a range of regions throughout the lungs.
    “As physicians, we often don’t understand how a specific region of the lungs is functioning,” Bharat said. “With these wireless sensors, we can capture different regions of the lungs and assess their specific performance and each region’s performance relative to one another.”
    In 2020, cardiovascular and respiratory diseases claimed nearly 800,000 lives in the U.S., making them the first and third leading causes of death in adults, according to the Centers for Disease Control and Prevention. With the goal of helping guide clinical decisions and improve outcomes, the researchers hope their new devices can slash these numbers to save lives.
    “Lungs can make all sorts of sounds, including crackling, wheezing, rippling and howling,” Bharat said. “It’s a fascinating microenvironment. By continuously monitoring these sounds in real time, we can determine if lung health is getting better or worse and evaluate how well a patient is responding to a particular medication or treatment. Then we can personalize treatments to individual patients.”
    The study, “Wireless broadband acousto-mechanical sensors as body area networks for continuous physiological monitoring,” was supported by the Querrey-Simpson Institute for Bioelectronics at Northwestern University. The paper’s co-first authors are Jae-Young Yoo of Northwestern, Seyong Oh of Hanyang University in Korea and Wissam Shalish of the McGill University Health Centre.

  • AI model can help predict survival outcomes for patients with cancer

    Investigators from the UCLA Health Jonsson Comprehensive Cancer Center have developed an artificial intelligence (AI) model based on epigenetic factors that is able to predict patient outcomes successfully across multiple cancer types.
    The researchers found that by examining the gene expression patterns of epigenetic factors — factors that influence how genes are turned on or off — in tumors, they could categorize them into distinct groups to predict patient outcomes across various cancer types better than traditional measures like cancer grade and stage.
    These findings, described in Communications Biology, also lay the groundwork for developing targeted therapies aimed at regulating epigenetic factors in cancer therapy, such as histone acetyltransferases and SWI/SNF chromatin remodelers.
    “Traditionally, cancer has been viewed as primarily a result of genetic mutations within oncogenes or tumor suppressors,” said co-senior author Hilary Coller, professor of molecular, cell, and developmental biology and a member of the UCLA Health Jonsson Comprehensive Cancer Center and the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA. “However, the emergence of advanced next-generation sequencing technologies has made more people realize that the state of the chromatin and the levels of epigenetic factors that maintain this state are important for cancer and cancer progression. There are different aspects of the state of the chromatin — like whether the histone proteins are modified, or whether the nucleic acid bases of the DNA contain extra methyl groups — that can affect cancer outcomes. Understanding these differences between tumors could help us learn more about why some patients respond differently to treatments and why their outcomes vary.”
    While previous studies have shown that mutations in the genes that encode epigenetic factors can affect an individual’s cancer susceptibility, little is known about how the levels of these factors impact cancer progression. This knowledge gap is crucial in fully understanding how epigenetics affects patient outcomes, noted Coller.
    To see if there was a relationship between epigenetic patterns and clinical outcomes, the researchers analyzed the expression patterns of 720 epigenetic factors to classify tumors from 24 different cancer types into distinct clusters.
    Out of the 24 adult cancer types, the team found that for 10 of the cancers, the clusters were associated with significant differences in patient outcomes, including progression-free survival, disease-specific survival, and overall survival.

    This was especially true for adrenocortical carcinoma, kidney renal clear cell carcinoma, brain lower grade glioma, liver hepatocellular carcinoma and lung adenocarcinoma, where the differences were significant for all the survival measurements.
    The clusters with poor outcomes tended to have higher cancer stage, larger tumor size, or more severe spread indicators.
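    The analysis pattern described above (cluster tumors on epigenetic-factor expression, then test whether the clusters differ in survival) can be sketched with scikit-learn and lifelines. The data below are random stand-ins, and the study's own clustering and statistics may differ.

    ```python
    # Hedged sketch: cluster tumors on epigenetic-factor expression, then
    # test whether the clusters differ in survival. Random stand-in data.
    import numpy as np
    from sklearn.cluster import KMeans
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(5)
    expr = rng.standard_normal((200, 720))      # 200 tumors x 720 factors
    surv_time = rng.exponential(24, 200)        # months (stand-in)
    observed = rng.random(200) < 0.7            # event vs. censored

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expr)
    a, b = labels == 0, labels == 1
    res = logrank_test(surv_time[a], surv_time[b],
                       event_observed_A=observed[a], event_observed_B=observed[b])
    print(f"log-rank p = {res.p_value:.3f}")    # small p => outcome differences
    ```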
    “We saw that the prognostic efficacy of an epigenetic factor was dependent on the tissue-of-origin of the cancer type,” said Mithun Mitra, co-senior author of the study and an associate project scientist in the Coller laboratory. “We even saw this link in the few pediatric cancer types we analyzed. This may be helpful in deciding the cancer-specific relevance of therapeutically targeting these factors.”
    The team then used epigenetic factor gene expression levels to train and test an AI model to predict patient outcomes. This model was specifically designed to predict what might happen for the five cancer types that had significant differences in survival measurements.
    The scientists found the model could successfully divide patients with these five cancer types into two groups: one with a significantly higher chance of better outcomes and another with a higher chance of poorer outcomes.
    They also saw that the genes that were most crucial for the AI model had a significant overlap with the cluster-defining signature genes.
    “The pan-cancer AI model is trained and tested on the adult patients from the TCGA cohort and it would be good to test this on other independent datasets to explore its broad applicability,” said Mitra. “Similar epigenetic factor-based models could be generated for pediatric cancers to see what factors influence the decision-making process compared to the models built on adult cancers.”
    “Our research helps provide a roadmap for similar AI models that can be generated through publicly-available lists of prognostic epigenetic factors,” said the study’s first author, Michael Cheng, a graduate student in the Bioinformatics Interdepartmental Program at UCLA. “The roadmap demonstrates how to identify certain influential factors in different types of cancer and contains exciting potential for predicting specific targets for cancer treatment.”
    The study was funded in part by grants from the National Cancer Institute, Cancer Research Institute, Melanoma Research Alliance, Melanoma Research Foundation, National Institutes of Health and the UCLA SPORE in Prostate Cancer.