More stories

  • AI transforms smartwatch ECG signals into a diagnostic tool for heart failure

    A study published in Nature Medicine reports the ability of a smartwatch ECG to accurately detect heart failure in nonclinical environments. Researchers at Mayo Clinic applied artificial intelligence (AI) to Apple Watch ECG recordings to identify patients with a weak heart pump. Participants in the study recorded their smartwatch ECGs remotely whenever they wanted, from wherever they were. Periodically, they uploaded the ECGs to their electronic health records automatically and securely via a smartphone app developed by Mayo Clinic’s Center for Digital Health.
    “Currently, we diagnose ventricular dysfunction — a weak heart pump — through an echocardiogram, CT scan or an MRI, but these are expensive, time consuming and at times inaccessible. The ability to diagnose a weak heart pump remotely, from an ECG that a person records using a consumer device, such as a smartwatch, allows a timely identification of this potentially life-threatening disease at massive scale,” says Paul Friedman, M.D., chair of the Department of Cardiovascular Medicine at Mayo Clinic in Rochester. Dr. Friedman is the senior author of the study.
    People with a weak heart pump might not have symptoms, but this common form of heart disease affects about 2% of the population and 9% of people over 60. When the heart cannot pump enough oxygen-rich blood, symptoms may develop, including shortness of breath, a rapid heart rate and swelling in the legs. Early diagnosis is important because once identified, there are numerous treatments to improve quality of life and decrease the risks of heart failure and death.
    Mayo researchers interpreted Apple Watch single-lead ECGs by modifying an earlier algorithm developed for 12-lead ECGs that is proven to detect a weak heart pump. The 12-lead algorithm for low ventricular ejection fraction is licensed to Anumana Inc., an AI-driven health technology company, co-created by nference and Mayo Clinic.
    While the data are early, the modified AI algorithm using single-lead ECG data had an area under the curve of 0.88 to detect a weak heart pump. By comparison, this measure of accuracy is as good as or slightly better than a medical treadmill diagnostic test.
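    For readers who want a concrete sense of what that accuracy figure means, the sketch below shows how an area under the ROC curve (AUC) is typically computed for a binary classifier. The labels and scores are invented placeholders; this is not the Mayo Clinic model or pipeline.

    ```python
    # Minimal sketch: computing the area under the ROC curve (AUC) for a
    # hypothetical binary "low ejection fraction" classifier applied to
    # single-lead ECG recordings. Labels and scores below are invented.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # y_true: 1 = low ejection fraction confirmed by echocardiogram, 0 = normal
    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
    # y_score: model-predicted probability of low ejection fraction per recording
    y_score = np.array([0.10, 0.35, 0.80, 0.22, 0.65, 0.15, 0.40, 0.90])

    auc = roc_auc_score(y_true, y_score)  # 1.0 = perfect, 0.5 = chance level
    print(f"Area under the ROC curve: {auc:.2f}")
    ```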
    “These data are encouraging because they show that digital tools allow convenient, inexpensive, scalable screening for important conditions. Through technology, we can remotely gather useful information about a patient’s heart in an accessible way that can meet the needs of people where they are,” says Zachi Attia, Ph.D., the lead AI scientist in the Department of Cardiovascular Medicine at Mayo Clinic. Dr. Attia is first author of the study.
    “Building the capability to ingest data from wearable consumer electronics and provide analytic capabilities to prevent disease or improve health remotely in the manner demonstrated by this study can revolutionize health care. Solutions like this not only enable prediction and prevention of problems, but also will eventually help diminish health disparities and the burden on health systems and clinicians,” says Bradley Leibovich, M.D., the medical director for the Mayo Clinic Center for Digital Health, and co-author on the study.
    All 2,454 study participants were Mayo Clinic patients from across the U.S. and 11 countries. They downloaded an app created by the Mayo Clinic Center for Digital Health to securely upload their Apple Watch ECGs to their electronic health records. Participants logged more than 125,000 previous and new Apple Watch ECGs to their electronic health records between August 2021 and February 2022. Clinicians had access to view all the ECG data on an AI dashboard built into the electronic health record, including the day and time it was recorded.
    Approximately 420 participants had an echocardiogram — a standard test using sound waves to produce images of the heart — within 30 days of logging an Apple Watch ECG in the app. Of those, 16 patients had low ejection fraction confirmed by the echocardiogram, which provided a comparison for accuracy.
    This study was funded by Mayo Clinic with no technical or financial support from Apple. Drs. Attia and Friedman, along with others, are co-inventors of the low ejection fraction algorithm licensed to Anumana and may benefit from its commercialization.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Terri Malloy.

  • New statistical method improves genomic analyses

    A new statistical method provides a more efficient way to uncover biologically meaningful changes in genomic data that span multiple conditions — such as cell types or tissues.
    Whole genome studies produce enormous amounts of data, ranging from millions of individual DNA sequences to information about where and how many of the thousands of genes are expressed to the location of functional elements across the genome. Because of the amount and complexity of the data, comparing different biological conditions or across studies performed by separate laboratories can be statistically challenging.
    “The difficulty when you have multiple conditions is how to analyze the data together in a way that can be both statistically powerful and computationally efficient,” said Qunhua Li, associate professor of statistics at Penn State. “Existing methods are computationally expensive or produce results that are difficult to interpret biologically. We developed a method called CLIMB that improves on existing methods, is computationally efficient, and produces biologically interpretable results. We test the method on three types of genomic data collected from hematopoietic cells — related to blood stem cells — but the method could also be used in analyses of other ‘omic’ data.”
    The researchers describe the CLIMB (Composite LIkelihood eMpirical Bayes) method in a paper appearing online Nov. 12 in the journal Nature Communications.
    “In experiments where there is so much information but from relatively few individuals, it helps to be able to use information as efficiently as possible,” said Hillary Koch, a graduate student at Penn State at the time of the research and now a senior statistician at Moderna. “There are statistical advantages to be able to look at everything together and even to use information from related experiments. CLIMB allows us to do just that.”
    The CLIMB method uses principles from two traditional techniques to analyze data across multiple conditions. One technique uses a series of pairwise comparisons between conditions but becomes increasingly challenging to interpret as additional conditions are added.
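    To make that scaling issue concrete, the short sketch below counts how the number of pairwise comparisons, and the number of possible per-condition association patterns, grows with the number of conditions. It illustrates the bookkeeping problem described above; it is not an implementation of CLIMB.

    ```python
    # Why condition-by-condition pairwise analysis becomes hard to interpret:
    # with n conditions there are n*(n-1)/2 pairwise comparisons, and each
    # genomic feature could in principle follow any of ~3**n association
    # patterns (e.g. down/null/up in each condition).
    from math import comb

    for n_conditions in (2, 5, 10, 20):
        n_pairs = comb(n_conditions, 2)      # number of pairwise comparisons
        n_patterns = 3 ** n_conditions       # candidate association patterns
        print(f"{n_conditions:>2} conditions -> {n_pairs:>3} pairwise comparisons, "
              f"{n_patterns:.2e} candidate patterns")
    ```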

  • Wasting muscles built back better

    Muscles waste as a result of not being exercised enough, as happens quickly with a broken limb that has been immobilized in a cast, and more slowly in people reaching an advanced age. Muscle atrophy, as clinicians refer to the phenomenon, is also a debilitating symptom in patients suffering from neurological disorders, such as amyotrophic lateral sclerosis (ALS) and multiple sclerosis (MS), and can be a systemic response to various other diseases, including cancer and diabetes.
    Mechanotherapy, a form of therapy given by manual or mechanical means, is thought to have broad potential for tissue repair. The best-known example is massage, which applies compressive stimulation to muscles for their relaxation. However, it has been much less clear whether stretching and contracting muscles by external means can also be a treatment. So far, two major challenges have prevented such studies: limited mechanical systems capable of evenly generating stretching and contraction forces along the length of muscles, and inefficient delivery of these mechanical stimuli to the surface and into the deeper layers of muscle tissue.
    Now, bioengineers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a mechanically active adhesive named MAGENTA, which functions as a soft robotic device and solves this two-fold problem. In an animal model, MAGENTA successfully prevented muscle atrophy and supported recovery from it. The team’s findings are published in Nature Materials.
    “With MAGENTA, we developed a new integrated multi-component system for the mechanostimulation of muscle that can be directly placed on muscle tissue to trigger key molecular pathways for growth,” said senior author and Wyss Founding Core Faculty member David Mooney, Ph.D. “While the study provides first proof-of-concept that externally provided stretching and contraction movements can prevent atrophy in an animal model, we think that the device’s core design can be broadly adapted to various disease settings where atrophy is a major issue.” Mooney leads the Wyss Institute’s Immuno-Materials Platform, and is also the Robert P. Pinkas Family Professor of Bioengineering at SEAS.
    An adhesive that can make muscles move
    One of MAGENTA’s major components is an engineered spring made from nitinol, a type of metal known as “shape memory alloy” (SMA) that enables MAGENTA’s rapid actuation when heated to a certain temperature. The researchers actuated the spring by electrically wiring it to a microprocessor unit that allows the frequency and duration of the stretching and contraction cycles to be programmed. The other components of MAGENTA are an elastomer matrix that forms the body of the device and insulates the heated SMA, and a “tough adhesive” that enables the device to be firmly adhered to muscle tissue. In this way, the device is aligned with the natural axis of muscle movement, transmitting the mechanical force generated by the SMA deep into the muscle. Mooney’s group is advancing MAGENTA, which stands for “mechanically active gel-elastomer-nitinol tissue adhesive,” as one of several Tough Gel Adhesives with functionalities tailored to various regenerative applications across multiple tissues.
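    The device hardware itself is not reproduced here, but the programmable cycling described above can be pictured with a simple duty-cycle controller: current heats the SMA spring to contract it, then switches off so the spring cools and relaxes. The sketch below is hypothetical; the function names, timing values and stand-in heater driver are illustrative, not the study’s firmware.

    ```python
    # Hypothetical stretch/contract cycle controller for a shape memory alloy
    # (SMA) actuator: heating current on -> spring contracts, current off ->
    # spring cools and relaxes. Frequency, duty cycle and the heater callback
    # are placeholders, not the MAGENTA study's actual control code.
    import time

    def run_actuation(set_heater, frequency_hz=1.0, duty_cycle=0.3, duration_s=60.0):
        """Toggle the SMA heating current at the programmed frequency and duty cycle."""
        period = 1.0 / frequency_hz
        on_time = period * duty_cycle          # heating phase -> contraction
        off_time = period - on_time            # cooling phase -> relaxation
        t_end = time.monotonic() + duration_s
        while time.monotonic() < t_end:
            set_heater(True)
            time.sleep(on_time)
            set_heater(False)
            time.sleep(off_time)

    if __name__ == "__main__":
        # Stand-in for whatever hardware call actually drives the heating current.
        run_actuation(lambda on: print("heater", "ON" if on else "OFF"),
                      frequency_hz=0.5, duty_cycle=0.25, duration_s=10.0)
    ```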

  • Researcher lauded for superb solution of algorithmic riddle from the 1950s

    For more than half a century, researchers around the world have been struggling with an algorithmic problem known as “the single source shortest path problem.” The problem is essentially about how to devise a mathematical recipe that best finds the shortest route between a node and all other nodes in a network, where there may be connections with negative weights.
    Sound complicated? Possibly. But in fact, this type of calculation is already used in a wide range of apps and technologies that we depend upon for finding our way around — as Google Maps guides us across landscapes and through cities, for example.
    Now, researchers from the University of Copenhagen’s Department of Computer Science have succeeded in solving the single source shortest path problem, a riddle that has stumped researchers and experts for decades.
    “We discovered an algorithm that solves the problem in virtually linear time, the fastest way possible. It is a fundamental algorithmic problem that has been studied since the 1950s and is taught around the world. This was one of the reasons that prompted us to solve it,” explains Associate Professor Christian Wulff-Nilsen, who clearly finds it tough to leave an unsolved algorithmic problem alone.
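    The article does not spell out the new algorithm, but the classical baseline is worth seeing: Bellman-Ford solves single source shortest paths with negative edge weights in O(n·m) time, where n is the number of nodes and m the number of connections. The sketch below shows that textbook method only; it is not the new near-linear-time algorithm.

    ```python
    # Classical Bellman-Ford: single-source shortest paths in a graph that may
    # contain negative edge weights, in O(n*m) time. This is the decades-old
    # textbook baseline, not the new near-linear-time algorithm.
    def bellman_ford(n, edges, source):
        """n nodes labelled 0..n-1; edges given as (u, v, weight) tuples."""
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):                 # relax every edge up to n-1 times
            updated = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    updated = True
            if not updated:
                break
        for u, v, w in edges:                  # any further improvement => negative cycle
            if dist[u] + w < dist[v]:
                raise ValueError("negative cycle reachable from the source")
        return dist

    edges = [(0, 1, 4), (0, 2, 2), (2, 1, -1), (1, 3, 3), (2, 3, 5)]
    print(bellman_ford(4, edges, source=0))    # [0, 1, 2, 4]
    ```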
    Quicker calculations for routing electric cars
    Last year, Wulff-Nilsen made another breakthrough in the same area, with a result that addressed how to find the shortest path in a network that changes over time. His solution to the recent riddle builds upon that work.

  • Lack of computer access linked to poorer mental health in young people during COVID-19 pandemic

    Cambridge researchers have highlighted how lack of access to a computer was linked to poorer mental health among young people and adolescents during COVID-19 lockdowns.
    The team found that the end of 2020 was the time when young people faced the most difficulties and that the mental health of those young people without access to a computer tended to deteriorate to a greater extent than that of their peers who did have access.
    The COVID-19 pandemic had a significant effect on young people’s mental health, with evidence of rising levels of anxiety, depression, and psychological distress. Adolescence is a period when people are particularly vulnerable to developing mental health disorders, which can have long-lasting consequences into adulthood. In the UK, the mental health of children and adolescents was already deteriorating before the pandemic, but the proportion of people in this age group likely to be experiencing a mental health disorder increased from 11% in 2017 to 16% in July 2020.
    The pandemic led to the closure of schools and an increase in online schooling, the impacts of which were not felt equally. Those adolescents without access to a computer faced the greatest disruption: in one study 30% of school students from middle-class homes reported taking part in live or recorded school lessons daily, while only 16% of students from working-class homes reported doing so.
    In addition to school closures, lockdown often meant that young people could not meet their friends in person. During these periods, online and digital forms of interaction with peers, such as through video games and social media, are likely to have helped reduce the impact of these social disruptions.
    Tom Metherell, who at the time of the study was an undergraduate student at Fitzwilliam College, University of Cambridge, said: “Access to computers meant that many young people were still able to ‘attend’ school virtually, carry on with their education to an extent and keep up with friends. But anyone who didn’t have access to a computer would have been at a significant disadvantage, which would only risk increasing their sense of isolation.”
    To examine in detail the impact of digital exclusion on the mental health of young people, Metherell and colleagues analysed data from 1,387 10-15-year-olds collected as part of Understanding Society, a large UK-wide longitudinal survey. They focused on access to computers rather than smartphones, as schoolwork is largely possible only on a computer, while at this age most social interactions occur in person at school.
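    As a purely illustrative sketch of that kind of comparison, the toy example below groups a long-format table of invented survey responses by computer access and survey wave; it is not the Understanding Society data or the authors’ statistical model.

    ```python
    # Toy illustration: average mental-health difficulty scores by survey wave,
    # split by home computer access. All values and variable names are invented.
    import pandas as pd

    df = pd.DataFrame({
        "participant":        [1, 1, 2, 2, 3, 3, 4, 4],
        "wave":               ["2019", "2020", "2019", "2020", "2019", "2020", "2019", "2020"],
        "has_computer":       [True, True, False, False, True, True, False, False],
        "difficulties_score": [10, 11, 12, 17, 9, 10, 13, 18],   # higher = worse
    })

    trend = (df.groupby(["has_computer", "wave"])["difficulties_score"]
               .mean()
               .unstack("wave"))
    print(trend)   # mean score per group and wave; the gap widens without access
    ```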

  • Scientists promote FAIR standards for managing artificial intelligence models

    New data standards have been created for AI models.
    Aspiring bakers are frequently called upon to adapt award-winning recipes based on differing kitchen setups. Someone might use an eggbeater instead of a stand mixer to make prize-winning chocolate chip cookies, for instance.
    Being able to reproduce a recipe in different situations and with varying setups is critical for both talented chefs and computational scientists, the latter of whom are faced with a similar problem of adapting and reproducing their own ​”recipes” when trying to validate and work with new AI models. These models have applications in scientific fields ranging from climate analysis to brain research.
    “When we talk about data, we have a practical understanding of the digital assets we deal with,” said Eliu Huerta, scientist and lead for Translational AI at the U.S. Department of Energy’s (DOE) Argonne National Laboratory. “With an AI model, it’s a little less clear; are we talking about data structured in a smart way, or is it computing, or software, or a mix?”
    In a new study, Huerta and his colleagues have articulated a new set of standards for managing AI models. Adapted from recent research on automated data management, these standards are called FAIR, which stands for findable, accessible, interoperable and reusable.
    “By making AI models FAIR, we no longer have to build each system from the ground up each time,” said Argonne computational scientist Ben Blaiszik. “It becomes easier to reuse concepts from different groups, helping to create cross-pollination across teams.”
    According to Huerta, the fact that many AI models are currently not FAIR poses a challenge to scientific discovery. “For many studies that have been done to date, it is difficult to gain access to and reproduce the AI models that are referenced in the literature,” he said. “By creating and sharing FAIR AI models, we can reduce the amount of duplication of effort and share best practices for how to use these models to enable great science.”
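    What FAIR-style bookkeeping for a model might look like in practice can be sketched as a simple metadata record: a persistent identifier makes the model findable, a resolvable URL makes it accessible, and an open format plus a pinned environment make it interoperable and reusable. The field names and values below are invented for illustration and are not the schema defined in the study.

    ```python
    # Illustrative FAIR-style metadata record for a trained model. Every field
    # and value here is a made-up placeholder, not the study's actual schema.
    import json

    model_record = {
        "identifier": "doi:10.xxxx/example-model",      # findable: persistent ID
        "title": "Example image classifier for scientific data",
        "access_url": "https://example.org/models/example-model",  # accessible
        "format": "ONNX",                                # interoperable: open format
        "framework": {"name": "pytorch", "version": "1.12"},
        "training_data": "doi:10.xxxx/example-dataset",
        "license": "CC-BY-4.0",                          # reusable: clear terms
        "environment": {
            "container": "example-registry/model-env:1.0",
            "hardware": "single GPU",
        },
    }
    print(json.dumps(model_record, indent=2))
    ```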

  • A dual boost for optical delay scanning

    Various applications of pulsed laser sources rely on the ability to produce a series of pulse pairs with a stepwise increasing delay between them. Implementing such optical delay scanning with high precision is demanding, in particular for long delays. Addressing this challenge, ETH physicists have developed a versatile ‘dual-comb’ laser that combines a wide scanning range with high power, low noise, stable operation, and ease of use — thereby offering bright prospects for practical uses.
    Ultrafast laser technology has enabled a trove of methods for precision measurements. These include in particular a broad class of pulsed-laser experiments in which a sample is excited and, after a variable amount of time, the response is measured. In such studies, the delay between the two pulses should typically cover the range from femtoseconds to nanoseconds. In practice, scanning the delay time over a range that broad in a repeatable and precise manner is a significant challenge. A team of researchers in the group of Prof. Ursula Keller in the Department of Physics at ETH Zurich, with main contributions from Dr. Justinas Pupeikis, Dr. Benjamin Willenberg and Dr. Christopher Phillips, has now taken a major step towards a solution that has the potential to be a game changer for a wide range of practical applications. Writing in Optica, they recently introduced and demonstrated a versatile laser design that offers both outstanding specifications and a low-complexity setup that runs stably over many hours.
    The long path to long delays
    The conceptually simplest solution to scanning optical delays is based on a laser whose output is split into two pulses. While one of them takes a fixed route to the target, the optical path for the second pulse is varied with linearly displacing mirrors. The longer the path between mirrors, the later the laser pulse arrives at the target and the longer is the delay relative to the first pulse. The problem, however, is that light travels at famously high speed, covering some 0.3 metres per nanosecond (in air). For mechanical delay lines this means that scanning to delays up to several nanoseconds requires large devices with intricate and typically slow mechanical constructions.
    An elegant way to avoid complex constructions of that kind is to use a pair of ultrashort pulse lasers that emit trains of pulses, each at slightly different repetition rates. If, say, the first pulses emerging from each of the lasers are perfectly synchronized, then the second pair has a delay between the pulses that corresponds to the difference in repetition times of the two lasers. The next pair of pulses has twice that delay between them, and so on. In this manner, a perfectly linear and fast scan of optical delays without moving parts is possible — at least in theory. The most refined type of a laser system generating two such pulse trains is known as a dual comb, in reference to the spectral structure of the output consisting of a pair of optical frequency combs.
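    The arithmetic behind that scheme is compact enough to write down: with repetition rates f1 and f2 = f1 + Δf, the n-th pulse pair is separated by n·(1/f1 − 1/f2), and the accessible delay wraps around after one full repetition period. In the sketch below, f1 is chosen to match the 12.5 ns scan range quoted later in the article (1/80 MHz), while the 500 Hz offset is used purely as an illustrative value.

    ```python
    # Dual-comb delay scanning arithmetic: two pulse trains whose repetition
    # rates differ by df build up delay in equal steps of (1/f1 - 1/f2) per
    # successive pulse pair, wrapping around after one repetition period.
    # f1 matches the 12.5 ns range quoted in the article; df is illustrative.
    f1 = 80e6              # repetition rate of comb 1 [Hz]
    df = 500.0             # repetition-rate difference [Hz] -> 500 scans per second
    f2 = f1 + df

    delay_step = 1.0 / f1 - 1.0 / f2       # extra delay per successive pulse pair
    scan_range = 1.0 / f1                  # maximum delay before wrap-around
    pairs_per_scan = f1 / df               # pulse pairs covering one full scan

    print(f"delay step per pulse pair: {delay_step * 1e15:.1f} fs")   # ~78 fs
    print(f"scan range:                {scan_range * 1e9:.2f} ns")    # 12.50 ns
    print(f"pulse pairs per scan:      {pairs_per_scan:.0f}")         # 160000
    ```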
    Whereas the promise of the dual-comb approach has long been clear, progress towards applications was hindered by challenges related to designing a readily deployable laser system that provides two simultaneously operating combs of the required quality and with high relative stability. Now, Pupeikis et al. made a breakthrough towards such a practical laser, and the key is a new way to generate the two frequency combs in one and the same laser cavity.
    Two from one
    The task at hand was to construct a laser source that consists of two coherent optical pulse trains that are basically identical in all properties except for that all-important difference in repetition rate. A natural route to achieve this is to create the two combs in the same laser cavity. Various approaches for realizing such laser-cavity multiplexing have been introduced in the past. But these typically require that additional components are placed inside the cavity. This introduces losses and different dispersion characteristics for the two combs, among other issues. The ETH physicists have overcome these issues while still ensuring that the two combs share all of the components inside the cavity.
    They achieved this by inserting into the cavity a ‘biprism’, a device with two separate angles on the surface from which light is reflected. The biprism splits the cavity mode into two parts, and the researchers show that by suitable design of the optical cavity the two combs can be spatially separated on the active intracavity components while still taking a very similar path otherwise. ‘Active components’ refers here to the gain medium, where lasing is induced, and to the so-called SESAM (semiconductor saturable absorber mirror) element, which enables mode-locking and pulse generation. The spatial separation of the modes at these stages means that two combs with distinct spacing can be generated, while most other properties are essentially duplicated. In particular, the two combs have highly correlated timing noise. That is, while imperfections in the temporal comb structure are unavoidably present, they are almost the same for the two combs, making it possible to deal with such noise.
    A gate to practical applications
    An outstanding feature of the novel single-cavity architecture now introduced is that it does not require compromises in laser design. Instead, cavity architectures that are optimal for single-comb operation can be readily adapted for dual-comb use. With that, the new design also represents a major simplification relative to commercial products and opens up a path for the production and deployment of this new class of ultrafast laser sources.
    The benchmarks achieved in the first demonstrations are highly encouraging. The researchers scanned an optical delay of 12.5 ns (equivalent to a distance of 3.75 m in air) with 2-fs precision (which is less than a micrometre in physical distance) at rates of up to 500 Hz and with record-high stability for a single-cavity dual-comb laser. The obtained performance — including the high power of more than 2.4 W for each comb, the short pulse durations of less than 140 fs, and the demonstrated coupling to an optical parametric oscillator (OPO) for converting the light into a different wavelength regime — underlines the practical potential of the approach for a wide spectrum of measurements, from precision optical ranging (the optical measurement of absolute distance) to high-resolution absorption spectroscopy and nonlinear spectroscopy for sampling ultrafast phenomena.
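    As a quick check of those figures, converting delay into optical path length with the speed of light reproduces the quoted distances (the refractive index of air changes the result only marginally):

    ```python
    # Delay-to-distance conversion for the quoted benchmark figures.
    c = 299_792_458.0                        # speed of light in vacuum [m/s]
    print(f"12.5 ns of delay  -> {12.5e-9 * c:.2f} m of optical path")   # ~3.75 m
    print(f"2 fs of precision -> {2e-15 * c * 1e6:.2f} micrometres")     # ~0.60 um
    ```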

  • Sharks face rising odds of extinction even as other big fish populations recover

    After decades of population declines, the future is looking brighter for several tuna and billfish species, such as southern bluefin tuna, black marlins and swordfish, thanks to years of successful fisheries management and conservation actions. But some sharks that live in these fishes’ open water habitats are still in trouble, new research suggests.

    These sharks, including oceanic whitetips and porbeagles, are often caught by accident within tuna and billfish fisheries. And a lack of dedicated management of these species has meant their chances of extinction continue to rise, researchers report in the Nov. 11 Science. 

    The analysis evaluates the extinction risk of 18 species of large ocean fish over nearly seven decades. It provides “a view of the open ocean that we have not had before,” says Colin Simpfendorfer, a marine biologist at James Cook University in Australia who was not involved in this research.

    “Most of this information was available for individual species, but the synthesis for all of the species provides a much broader picture of what is happening in this important ecosystem,” he says.

    In recent years, major global biodiversity assessments have documented declines in species and ecosystems across the globe, says Maria José Juan-Jordá, a fisheries ecologist at the Spanish Institute of Oceanography in Madrid. But these patterns are poorly understood in the oceans.

    To fill this gap, Juan-Jordá and her colleagues looked to the International Union for Conservation of Nature’s Red List, which evaluates changes in a species’ extinction risk, and to the Red List Index, which measures the risk of extinction of an entire group of species. The team specifically targeted tunas, billfishes and sharks — large predatory fishes that have influential roles in their open ocean ecosystems.

    Red List Index assessments occur every four to 10 years. In the new study, the researchers built on the Red List criteria to develop a way of tracking extinction risk continuously over time, rather than just within the IUCN intervals.

    Juan-Jordá and her colleagues did this by compiling data on species’ average age at reproductive maturity, changes in population biomass and abundance from fish stock assessments for seven tuna species, like the vulnerable bigeye and endangered southern bluefin; six billfish species, like black marlin and sailfish; and five shark species. The team combined the data to calculate extinction risk trends for these 18 species from 1950 to 2019.
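    The study’s continuous-time extension is not reproduced here, but the standard Red List Index it builds on has a simple closed form: each species’ IUCN category is given a weight (Least Concern 0, Near Threatened 1, Vulnerable 2, Endangered 3, Critically Endangered 4, Extinct 5), and the index is one minus the summed weights divided by five times the number of species. The categories in the sketch below are invented for illustration.

    ```python
    # Standard Red List Index: 1 means every species is Least Concern,
    # 0 means every species is extinct. The categories below are invented.
    WEIGHTS = {"LC": 0, "NT": 1, "VU": 2, "EN": 3, "CR": 4, "EX": 5}

    def red_list_index(categories):
        """categories: iterable of IUCN category codes, one per species."""
        total_weight = sum(WEIGHTS[c] for c in categories)
        return 1.0 - total_weight / (WEIGHTS["EX"] * len(categories))

    # Hypothetical assemblage of six species
    print(red_list_index(["LC", "LC", "NT", "VU", "EN", "CR"]))   # ~0.667
    ```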

    The team found that the extinction risk for tunas and billfishes increased throughout the last half of the 20th century, with the trend reversing for tunas starting in the 1990s and billfishes in the 2010s. These shifts are tied to known reductions in fishing deaths for these species that occurred at the same time.

    The results are positive for tunas and billfishes, Simpfendorfer says. But three of the seven tunas and three of the six billfishes that the researchers looked at are still considered near threatened, vulnerable or endangered. “Now is not the time for complacency in managing these species,” Simpfendorfer says.

    But shark species are floundering in these very same waters where tuna and billfish are fished, where the sharks are often caught as bycatch. 

    Many open ocean sharks, like the silky shark (Carcharhinus falciformis), continue to decline, often accidentally caught by fishers seeking other large fish. Image credit: Fabio Forget

    “While we are increasingly sustainably managing the commercially important, valuable target species of tunas and billfishes,” says Juan-Jordá, “shark populations continue to decline, therefore, the risk of extinction has continued to increase.”

    Some solutions going forward, says Juan-Jordá, include catch limits for some species and establishing sustainability goals within tuna and billfish fisheries beyond just the targeted species, addressing the issue of sharks that are incidentally caught. And it’s important to see if measures taken to reduce shark bycatch deaths are actually effective, she says. 

    “There is a clear need for significant improvement in shark-focused management, and organizations responsible for their management need to act quickly before it is too late,” Simpfendorfer says.