More stories

  • Computational model captures the elusive transition states of chemical reactions

    During a chemical reaction, molecules gain energy until they reach what’s known as the transition state — a point of no return from which the reaction must proceed. This state is so fleeting that it’s nearly impossible to observe it experimentally.
    The structures of these transition states can be calculated using techniques based on quantum chemistry, but that process is extremely time-consuming. A team of MIT researchers has now developed an alternative approach, based on machine learning, that can calculate these structures much more quickly — within a few seconds.
    Their new model could be used to help chemists design new reactions and catalysts to generate useful products like fuels or drugs, or to model naturally occurring chemical reactions such as those that might have helped to drive the evolution of life on Earth.
    “Knowing that transition state structure is really important as a starting point for thinking about designing catalysts or understanding how natural systems enact certain transformations,” says Heather Kulik, an associate professor of chemistry and chemical engineering at MIT, and the senior author of the study.
    Chenru Duan PhD ’22 is the lead author of a paper describing the work, which appears today in Nature Computational Science. Cornell University graduate student Yuanqi Du and MIT graduate student Haojun Jia are also authors of the paper.
    Fleeting transitions
    For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. The probability of any chemical reaction occurring is partly determined by how likely it is that the transition state will form.

    “The transition state helps to determine the likelihood of a chemical transformation happening. If we have a lot of something that we don’t want, like carbon dioxide, and we’d like to convert it to a useful fuel like methanol, the transition state and how favorable that is determines how likely we are to get from the reactant to the product,” Kulik says.
    Chemists can calculate transition states using a quantum chemistry method known as density functional theory. However, this method requires a huge amount of computing power and can take many hours or even days to calculate just one transition state.
    Recently, some researchers have tried to use machine-learning models to discover transition state structures. However, models developed so far require considering two reactants as a single entity in which the reactants maintain the same orientation with respect to each other. Any other possible orientations must be modeled as separate reactions, which adds to the computation time.
    “If the reactant molecules are rotated, then in principle, before and after this rotation they can still undergo the same chemical reaction. But in the traditional machine-learning approach, the model will see these as two different reactions. That makes the machine-learning training much harder, as well as less accurate,” Duan says.
    The MIT team developed a new computational approach that allowed them to represent two reactants in any arbitrary orientation with respect to each other, using a type of model known as a diffusion model, which can learn which types of processes are most likely to generate a particular outcome. As training data for their model, the researchers used structures of reactants, products, and transition states that had been calculated using quantum chemistry methods, for 9,000 different chemical reactions.
    “Once the model learns the underlying distribution of how these three structures coexist, we can give it new reactants and products, and it will try to generate a transition state structure that pairs with those reactants and products,” Duan says.
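    To make this concrete, the sketch below shows, in very simplified form, how a conditional diffusion model can be sampled to propose a transition-state geometry given reactant and product coordinates. The noise schedule, the placeholder predict_noise network, and the tensor shapes are illustrative assumptions, not the authors' actual architecture (which, among other things, handles arbitrary orientations of the reactants).

    ```python
    import numpy as np

    # Hypothetical stand-in for the trained denoising network: given noisy
    # transition-state coordinates, the diffusion step, and the reactant/product
    # coordinates it is conditioned on, it predicts the noise to remove.
    def predict_noise(ts_noisy, step, reactant, product):
        return np.zeros_like(ts_noisy)  # placeholder; a real network is learned from data

    def sample_transition_state(reactant, product, n_steps=1000, rng=None):
        """Draw one candidate transition-state geometry by reverse diffusion.

        reactant, product: (n_atoms, 3) Cartesian coordinates with matching atom order.
        Returns a (n_atoms, 3) array of proposed transition-state coordinates.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        betas = np.linspace(1e-4, 0.02, n_steps)       # simple linear noise schedule
        alphas = 1.0 - betas
        alpha_bars = np.cumprod(alphas)

        x = rng.standard_normal(reactant.shape)         # start from pure noise
        for t in reversed(range(n_steps)):
            eps = predict_noise(x, t, reactant, product)   # conditioned on the endpoints
            # Standard DDPM-style denoising update toward the clean structure.
            x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
            if t > 0:
                x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
        return x
    ```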

    The researchers tested their model on about 1,000 reactions that it hadn’t seen before, asking it to generate 40 possible solutions for each transition state. They then used a “confidence model” to predict which states were the most likely to occur. These solutions were accurate to within 0.08 angstroms (an angstrom is one hundred-millionth of a centimeter) when compared to transition state structures generated using quantum techniques. The entire computational process takes just a few seconds for each reaction.
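    The generate-and-rank step described above can be sketched as follows, reusing the sampler from the previous snippet; the confidence_score function and the DFT reference structure are hypothetical placeholders, and the RMSD check is included only to illustrate the 0.08-angstrom comparison.

    ```python
    import numpy as np

    def rmsd(a, b):
        """Root-mean-square deviation between two (n_atoms, 3) coordinate arrays."""
        return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

    def best_transition_state(reactant, product, confidence_score, n_candidates=40):
        """Generate several candidate transition states and keep the most confident one."""
        candidates = [sample_transition_state(reactant, product) for _ in range(n_candidates)]
        scores = [confidence_score(reactant, ts, product) for ts in candidates]
        return candidates[int(np.argmax(scores))]

    # Illustrative check against a (hypothetical) DFT-derived reference structure:
    # error = rmsd(best_ts, reference_ts)   # the article reports agreement within about 0.08 angstroms
    ```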
    “You can imagine that really scales to thinking about generating thousands of transition states in the time that it would normally take you to generate just a handful with the conventional method,” Kulik says.
    Modeling reactions
    Although the researchers trained their model primarily on reactions involving compounds with a relatively small number of atoms — up to 23 atoms for the entire system — they found that it could also make accurate predictions for reactions involving larger molecules.
    “Even if you look at bigger systems or systems catalyzed by enzymes, you’re getting pretty good coverage of the different types of ways that atoms are most likely to rearrange,” Kulik says.
    The researchers now plan to expand their model to incorporate other components such as catalysts, which could help them investigate how much a particular catalyst would speed up a reaction. This could be useful for developing new processes for generating pharmaceuticals, fuels, or other useful compounds, especially when the synthesis involves many chemical steps.
    “Traditionally all of these calculations are performed with quantum chemistry, and now we’re able to replace the quantum chemistry part with this fast generative model,” Duan says.
    Another potential application for this kind of model is exploring the interactions that might occur between gases found on other planets, or modeling the simple reactions that may have occurred during the early evolution of life on Earth, the researchers say.
    The research was funded by the U.S. Office of Naval Research and the National Science Foundation.

  • Ultrafast lasers map electrons ‘going ballistic’ in graphene, with implications for next-gen electronic devices

    Research appearing in ACS Nano, a premier journal on nanoscience and nanotechnology, reveals the ballistic movement of electrons in graphene in real time.
    The observations, made at the University of Kansas’ Ultrafast Laser Lab, could lead to breakthroughs in governing electrons in semiconductors, fundamental components in most information and energy technology.
    “Generally, electron movement is interrupted by collisions with other particles in solids,” said lead author Ryan Scott, a doctoral student in KU’s Department of Physics & Astronomy. “This is similar to someone running in a ballroom full of dancers. These collisions are rather frequent — about 10 to 100 billion times per second. They slow down the electrons, cause energy loss and generate unwanted heat. Without collisions, an electron would move uninterrupted within a solid, similar to cars on a freeway or ballistic missiles through air. We refer to this as ‘ballistic transport.'”
    Scott performed the lab experiments under the mentorship of Hui Zhao, professor of physics & astronomy at KU. They were joined in the work by former KU doctoral student Pavel Valencia-Acuna, now a postdoctoral researcher at the Pacific Northwest National Laboratory.
    Zhao said electronic devices utilizing ballistic transport could potentially be faster, more powerful and more energy efficient.
    “Current electronic devices, such as computers and phones, utilize silicon-based field-effect transistors,” Zhao said. “In such devices, electrons can only drift with a speed on the order of centimeters per second due to the frequent collisions they encounter. The ballistic transport of electrons in graphene can be utilized in devices with fast speed and low energy consumption.”
    The KU researchers observed the ballistic movement in graphene, a promising material for next-generation electronic devices. First isolated in 2004, in work recognized with the 2010 Nobel Prize in Physics, graphene is made of a single layer of carbon atoms forming a hexagonal lattice structure — somewhat like a soccer net.

    “Electrons in graphene move as if their ‘effective’ mass is zero, making them more likely to avoid collisions and move ballistically,” Scott said. “Previous electrical experiments, by studying electrical currents produced by voltages under various conditions, have revealed signs of ballistic transport. However, these techniques aren’t fast enough to trace the electrons as they move.”
    According to the researchers, electrons in graphene (or any other semiconductor) are like students sitting in a full classroom, where they can’t freely move around because the desks are full. Laser light can free an electron so that it momentarily vacates its desk; the empty desk left behind is what physicists call a ‘hole.’
    “Light can provide energy to an electron to liberate it so that it can move freely,” Zhao said. “This is similar to allowing a student to stand up and walk away from their seat. However, unlike a charge-neutral student, an electron is negatively charged. Once the electron has left its ‘seat,’ the seat becomes positively charged and quickly drags the electron back, resulting in no more mobile electrons — like the student sitting back down.”
    Because of this effect, the super-light electrons in graphene can only stay mobile for about one-trillionth of a second before falling back into their seats. This short time presents a severe challenge to observing the movement of the electrons. To address this problem, the KU researchers designed and fabricated a four-layer artificial structure with two graphene layers separated by two other single-layer materials, molybdenum disulphide and molybdenum diselenide.
    “With this strategy, we were able to guide the electrons to one graphene layer while keeping their ‘seats’ in the other graphene layer,” Scott said. “Separating them with two layers of molecules, with a total thickness of just 1.5 nanometers, forces the electrons to stay mobile for about 50-trillionths of a second, long enough for the researchers, equipped with lasers as fast as 0.1 trillionth of a second, to study how they move.”
    The researchers use a tightly focused laser spot to liberate some electrons in their sample. They trace these electrons by mapping out the “reflectance” of the sample, or the percentage of light it reflects.

    “We see most objects because they reflect light to our eyes,” Scott said. “Brighter objects have larger reflectance. On the other hand, dark objects absorb light, which is why dark clothes become hot in the summer. When a mobile electron moves to a certain location of the sample, it makes that location slightly brighter by changing how electrons in that location interact with light. The effect is very small — even with everything optimized, one electron only changes the reflectance by 0.1 part per million.”
    To detect such a small change, the researchers liberated 20,000 electrons at once, using a probe laser to reflect off the sample and measure this reflectance, repeating the process 80 million times for each data point. They found the electrons on average move ballistically for about 20-trillionths of a second with a speed of 22 kilometers per second before running into something that terminates their ballistic motion.
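    As a quick back-of-the-envelope check on those figures (not a calculation from the paper), the average ballistic distance implied by a 20-picosecond ballistic time at 22 kilometers per second, and the total reflectance change produced by 20,000 electrons at 0.1 part per million each, work out as follows.

    ```python
    # Rough arithmetic implied by the reported numbers (illustrative only).
    ballistic_time_s = 20e-12        # about 20 trillionths of a second
    speed_m_per_s = 22e3             # 22 kilometers per second
    distance_m = ballistic_time_s * speed_m_per_s
    print(f"average ballistic distance: {distance_m * 1e9:.0f} nm")      # ~440 nm

    delta_r_per_electron = 0.1e-6    # 0.1 part per million per electron
    n_electrons = 20_000
    total_change = delta_r_per_electron * n_electrons
    print(f"total reflectance change: {total_change:.1e}")               # ~2e-3, i.e. about 0.2 percent
    ```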
    The research was funded by a grant from the Department of Energy under the program of Physical Behavior of Materials.
    Zhao said his lab is currently working to refine the material design to guide electrons more efficiently to the desired graphene layer, and trying to find ways to make them move longer distances ballistically.

  • AI study reveals individuality of tongue’s surface

    Artificial Intelligence (AI) and 3D images of the human tongue have revealed that the surfaces of our tongues are unique to each of us, new findings suggest.
    The results offer an unprecedented insight into the biological make-up of our tongue’s surface and how our sense of taste and touch differ from person to person.
    The research has huge potential for discovering individual food preferences, developing healthy food alternatives and early diagnosis of oral cancers in the future, experts say.
    The human tongue is a highly sophisticated and complex organ. Its surface is made up of hundreds of small buds — known as papillae — that assist with taste, talking and swallowing.
    Of these numerous projections, the mushroom-shaped fungiform papillae hold our taste buds whereas the crown-shaped filiform papillae give the tongue its texture and sense of touch.
    The taste function of our fungiform papillae has been well researched but little is known about the difference in shape, size and pattern of both forms of papillae between individuals.
    A team of researchers led by the University of Edinburgh’s School of Informatics, in collaboration with the University of Leeds, trained AI computer models to learn from three-dimensional microscopic scans of the human tongue, showing the unique features of papillae.

    They fed the data from over two thousand detailed scans of individual papillae — taken from silicone moulds of fifteen people’s tongues — to the AI tool.
    The AI models were designed to gain a better understanding of the individual features of the participants’ papillae and to predict the age and gender of each volunteer.
    The team used small volumes of data to train the AI models about the different features of the papillae, combined with a significant use of topology — an area of mathematics which studies how certain spaces are structured and connected.
    This enabled the AI tool to predict the type of papillae with 85 per cent accuracy and to map the position of filiform and fungiform papillae on the tongue’s surface.
    Remarkably, the papillae were also found to be distinctive across all fifteen subjects and individuals could be identified with an accuracy of 48 per cent from a single papilla.
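    As an illustration of this kind of pipeline (not the authors' actual code), precomputed shape and topology descriptors for each papilla could be fed to standard classifiers, one predicting papilla type and one identifying the volunteer. The feature matrix and labels below are random placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder data: one row per scanned papilla, columns are hypothetical
    # geometric/topological descriptors (e.g. curvature summaries, persistence statistics).
    rng = np.random.default_rng(0)
    features = rng.normal(size=(2000, 32))        # ~2,000 papillae, 32 descriptors each
    papilla_type = rng.integers(0, 2, 2000)       # 0 = filiform, 1 = fungiform (placeholder labels)
    subject_id = rng.integers(0, 15, 2000)        # 15 volunteers (placeholder labels)

    # Task 1: predict papilla type (the study reports roughly 85 per cent accuracy).
    type_model = RandomForestClassifier(n_estimators=200, random_state=0)
    type_accuracy = cross_val_score(type_model, features, papilla_type, cv=5).mean()

    # Task 2: identify the individual from a single papilla (the study reports about
    # 48 per cent, far above the ~7 per cent chance level for 15 people).
    id_model = RandomForestClassifier(n_estimators=200, random_state=0)
    id_accuracy = cross_val_score(id_model, features, subject_id, cv=5).mean()
    print(type_accuracy, id_accuracy)
    ```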
    The findings have been published in the journal Scientific Reports.

    The study received funding from the United Kingdom Research and Innovation (UKRI) CDT in Biomedical AI and European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program.
    Senior author Professor Rik Sakar, Reader at the School of Informatics, University of Edinburgh, said:
    “This study brings us closer to understanding the complex architecture of tongue surfaces.
    “We were surprised to see how unique these micron-sized features are to each individual. Imagine being able to design personalized food customised to the conditions of specific people and vulnerable populations and thus ensure they can get proper nutrition whilst enjoying their food.”
    Professor Sakar added:
    “We are now planning to use this technique combining AI with geometry and topology to identify micron-sized features in other biological surfaces. This can help in early detection and diagnosis of unusual growths in human tissues.”
    Lead author, Rayna Andreeva, PhD student at the Centre for Doctoral Training (CDT) in Biomedical AI, University of Edinburgh, said:
    “It was remarkable that the features based on topology worked so well for most types of analysis, and they were the most distinctive across individuals. This needs further study not only for the papillae, but also for other kinds of biological surfaces and medical conditions.”

  • Interactive screen use reduces sleep time in kids

    While screen time is generally known to affect sleep, new research suggests that interactive engagement, such as texting friends or playing video games, delays and reduces the time spent asleep to a greater extent than passive screen time, like watching television — especially for teens.
    The research, published today (Dec. 13) in the Journal of Adolescent Health, demonstrates that adolescents at age 15 who used screens to communicate with friends or play video games in the hour before bed took 30 minutes longer to fall asleep than if they had refrained from interactive screen time. But it wasn’t just interactive screen time before bed that affected kids’ sleep, researchers said. For each hour during the day that kids spent playing video games beyond their usual amount, their sleep was delayed by about 10 minutes.
    “If teens typically play video games for an hour each day, but one day a new game comes out and they play for four hours, that’s three additional hours more than they typically play,” said David Reichenberger, postdoctoral scholar at Penn State and lead author on the study. “So, that means they could have 15 minutes of delayed sleep timing that night. For a child, losing 15 minutes of sleep at night is significant. It’s especially difficult when they have to get up in the morning for school; if they’re delaying their sleep, they can’t make up for it in the morning. Without adequate sleep, kids are at increased risk of obesity, as well as impaired cognition, emotion regulation and mental health.”
    The team assessed the daytime screen-based activities of 475 adolescents using daily surveys for three or more days. They asked the teens how many hours they had spent that day communicating with friends by email, instant messaging, texting on the phone or through social media sites. They also asked the kids how many hours they spent playing video games, surfing the internet and watching television or videos. Finally, the researchers asked if the adolescents had participated in any of these activities in the hour before bed.
    Next, the team used accelerometers to measure the adolescents’ sleep duration for one week. Reichenberger explained that the devices, typically worn on the wrist, measure a person’s movements. “When the participant is least active, we can infer that they are likely asleep,” Reichenberger said. “It’s more accurate than asking them how many hours they slept.”
    The researchers found that the teens spent an average of two hours per day communicating with friends via email, instant messaging, texting on the phone or through social media. They spent slightly less time — about 1.3 hours per day — playing video games, less than an hour per day surfing the internet and about 1.7 hours per day watching television or videos. In the hour before bed, the children communicated or played video games via a phone, computer or tablet 77% of the time and watched television or movies 69% of the time.
    Overall, the adolescents slept for an average of 7.8 hours per night. For every hour throughout the day that they used screens to communicate with friends, they fell asleep about 11 minutes later on average. For every hour that they used screens to play video games, they fell asleep about 9 minutes later. Those who talked, texted or played games on a device in the hour before bed lost the most sleep: their sleep onset was about 30 minutes later.
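    Taken at face value, those average associations can be turned into a rough, illustrative summary of how much later sleep onset might be on a given day; this is a simplification for readers, not the study’s statistical model, and the three figures were estimated separately rather than as additive terms.

    ```python
    # Average associations reported above, in minutes of later sleep onset.
    MIN_PER_HOUR_MESSAGING = 11      # per daytime hour spent messaging friends
    MIN_PER_HOUR_GAMING = 9          # per daytime hour spent playing video games
    MIN_IF_INTERACTIVE_BEFORE_BED = 30

    def sleep_delay_components(hours_messaging, hours_gaming, interactive_before_bed):
        """Return each reported association separately (they are not an additive model)."""
        return {
            "daytime messaging": MIN_PER_HOUR_MESSAGING * hours_messaging,
            "daytime gaming": MIN_PER_HOUR_GAMING * hours_gaming,
            "interactive use before bed": MIN_IF_INTERACTIVE_BEFORE_BED if interactive_before_bed else 0,
        }

    # Example using the study's average daily use (2 h messaging, 1.3 h gaming):
    print(sleep_delay_components(2, 1.3, True))
    ```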

    Interestingly, Reichenberger said, the team found no significant associations between passive screen-based activities, such as browsing the internet and watching television, videos and movies, and subsequent sleep.
    “It could be that these more passive activities are less mentally stimulating than interactive activities, like texting and video game playing,” said Anne-Marie Chang, associate professor of biobehavioral health and study co-author.
    What can parents do to help protect their teens’ sleep?
    “It’s a tricky situation,” Chang said. “These tools are really important to everyone nowadays, so it’s hard to put a limit on them, but if you’re really looking out for an adolescent’s health and well-being, then you might consider limiting the more interactive activities, especially in the hour before bed.”
    Other authors on the paper include Lindsay Master, researcher, Penn State; Orfeu Buxton, the Elizabeth Fenton Susman Professor of Biobehavioral Health, Penn State; Gina Marie Mathew, postdoctoral associate, Stony Brook University; Lauren Hale, professor of family, population and preventive medicine, Stony Brook University; and Cynthia Snyder, assistant professor of nursing, Pennsylvania College of Health Sciences.
    The National Institutes of Health and National Aeronautics and Space Administration supported this research.

  • Cognitive strategies for augmenting the body with a wearable, robotic arm

    Neuroengineer Silvestro Micera develops advanced technological solutions to help people regain sensory and motor functions that have been lost due to traumatic events or neurological disorders. Until now, he had never worked on enhancing the human body and cognition with the help of technology.
    Now in a study published in Science Robotics, Micera and his team report on how diaphragm movement can be monitored for successful control of an extra arm, essentially augmenting a healthy individual with a third — robotic — arm.
    “This study opens up new and exciting opportunities, showing that extra arms can be extensively controlled and that simultaneous control with both natural arms is possible,” says Micera, Bertarelli Foundation Chair in Translational Neuroengineering at EPFL, and professor of Bioelectronics at Scuola Superiore Sant’Anna.
    The study is part of the Third-Arm project, previously funded by the Swiss National Science Foundation (NCCR Robotics), that aims to provide a wearable robotic arm to assist in daily tasks or to help in search and rescue. Micera believes that exploring the cognitive limitations of third-arm control may actually provide gateways towards better understanding of the human brain.
    Micera continues, “The main motivation of this third arm control is to understand the nervous system. If you challenge the brain to do something that is completely new, you can learn if the brain has the capacity to do it and if it’s possible to facilitate this learning. We can then transfer this knowledge to develop, for example, assistive devices for people with disabilities, or rehabilitation protocols after stroke.”
    “We want to understand if our brains are hardwired to control what nature has given us, and we’ve shown that the human brain can adapt to coordinate new limbs in tandem with our biological ones,” explains Solaiman Shokur, co-PI of the study and EPFL Senior Scientist at the Neuro-X Institute. “It’s about acquiring new motor functions, enhancement beyond the existing functions of a given user, be it a healthy individual or a disabled one. From a nervous system perspective, it’s a continuum between rehabilitation and augmentation.”
    To explore the cognitive constraints of augmentation, the researchers first built a virtual environment to test a healthy user’s capacity to control a virtual arm using movement of his or her diaphragm. They found that diaphragm control does not interfere with actions like controlling one’s physiological arms, one’s speech or gaze.

    In this virtual reality setup, the user is equipped with a belt that measures diaphragm movement. Wearing a virtual reality headset, the user sees three arms: the right arm and hand, the left arm and hand, and a third arm between the two with a symmetric, six-fingered hand.
    “We made this hand symmetric to avoid any bias towards either the left or the right hand,” explains Giulia Dominijanni, PhD student at EPFL’s Neuro-X Institute.
    In the virtual environment, the user is then prompted to reach out with either the left hand, the right hand, or in the middle with the symmetric hand. In the real environment, the user holds onto an exoskeleton with both arms, which allows for control of the virtual left and right arms. Movement detected by the belt around the diaphragm is used for controlling the virtual middle, symmetric arm. The setup was tested on 61 healthy subjects in over 150 sessions.
    “Diaphragm control of the third arm is actually very intuitive, with participants learning to control the extra limb very quickly,” explains Dominijanni. “Moreover, our control strategy is inherently independent from the biological limbs and we show that diaphragm control does not impact a user’s ability to speak coherently.”
    The researchers also successfully tested diaphragm control with an actual robotic arm, a simplified one consisting of a rod that can be extended out and retracted back in. When the user contracts the diaphragm, the rod extends out. In an experiment similar to the VR environment, the user is asked to reach and hover over target circles with her left or right hand, or with the robotic rod.
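    A minimal sketch of this kind of control mapping is given below, assuming a calibrated belt reading for diaphragm expansion and a hypothetical rod with a set_extension interface; neither reflects the actual device API used in the study.

    ```python
    class DiaphragmRodController:
        """Map a diaphragm-expansion signal from a belt sensor to rod extension.

        Hypothetical interface for illustration: read_belt() returns the raw belt
        measurement and rod.set_extension(x) accepts a value between 0 (retracted)
        and 1 (fully extended).
        """

        def __init__(self, rest_level, max_level):
            # Calibration: belt readings at rest and at maximum voluntary contraction.
            self.rest_level = rest_level
            self.max_level = max_level

        def extension_from_belt(self, belt_value):
            # Normalize the belt signal to [0, 1]; contracting the diaphragm extends the rod.
            span = self.max_level - self.rest_level
            level = (belt_value - self.rest_level) / span
            return min(max(level, 0.0), 1.0)

    # Usage in a (hypothetical) control loop:
    # controller = DiaphragmRodController(rest_level=0.2, max_level=0.8)
    # while True:
    #     rod.set_extension(controller.extension_from_belt(read_belt()))
    ```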
    Besides the diaphragm, the researchers have also tested vestigial ear muscles for feasibility in performing new tasks, although those results are not reported in the study. In this approach, a user is equipped with ear sensors and trained to use fine ear muscle movement to control the displacement of a computer mouse.
    “Users could potentially use these ear muscles to control an extra limb,” says Shokur, emphasizing that these alternative control strategies may help one day for the development of rehabilitation protocols for people with motor deficiencies.
    Previous studies within the Third-Arm project on the control of robotic arms have focused on helping amputees. The latest Science Robotics study is a step beyond repairing the human body, towards augmentation.
    “Our next step is to explore the use of more complex robotic devices using our various control strategies, to perform real-life tasks, both inside and outside of the laboratory. Only then will we be able to grasp the real potential of this approach,” concludes Micera.

  • Deep neural networks show promise as models of human hearing

    Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.
    In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.
    The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.
    “What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
    MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.
    Models of hearing
    Deep neural networks are computational models that consist of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.

    “These models that are built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.
    When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
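    A common way to make such a comparison, and a reasonable sketch of the general approach (not necessarily the exact analysis in the paper), is to fit a regularized linear regression that predicts each voxel’s fMRI response from a model stage’s activations and then score the predictions on held-out sounds.

    ```python
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    def brain_prediction_score(model_activations, voxel_responses):
        """How well one model stage's activations linearly predict fMRI voxel responses.

        model_activations: (n_sounds, n_units) activations for one model stage.
        voxel_responses:   (n_sounds, n_voxels) measured responses to the same sounds.
        Returns the median held-out R^2 across voxels (higher = more brain-like).
        """
        X_train, X_test, y_train, y_test = train_test_split(
            model_activations, voxel_responses, test_size=0.2, random_state=0)
        reg = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
        y_pred = reg.predict(X_test)
        ss_res = np.sum((y_test - y_pred) ** 2, axis=0)
        ss_tot = np.sum((y_test - y_test.mean(axis=0)) ** 2, axis=0)
        return np.median(1.0 - ss_res / ss_tot)

    # Synthetic example: 120 sounds, 500 model units, 200 voxels (placeholder data).
    rng = np.random.default_rng(0)
    activations = rng.normal(size=(120, 500))
    voxels = activations[:, :200] * 0.1 + rng.normal(size=(120, 200))
    print(brain_prediction_score(activations, voxels))
    ```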
    In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.
    Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.
    For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, and identifying musical genre — while two of them were trained to perform multiple tasks.
    When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.

    “If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.
    Hierarchical processing
    The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.
    Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.
    “Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.
    McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.
    “A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.
    The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, and a Department of Energy Computational Science Graduate Fellowship.

  • Smartwatches can pick up abnormal heart rhythms in kids, study finds

    Smartwatches can help physicians detect and diagnose irregular heart rhythms in children, according to a new study from the Stanford School of Medicine.
    The finding comes from a survey of electronic medical records for pediatric cardiology patients receiving care at Stanford Medicine Children’s Health. The study will be published online Dec. 13 in Communications Medicine.
    Over a four-year period, patients’ medical records mentioned “Apple Watch” 145 times. Among patients whose medical records mentioned the smartwatch, 41 had abnormal heart rhythms confirmed by traditional diagnostic methods; of these, 29 children had their arrhythmias diagnosed for the first time.
    “I was surprised by how often our standard monitoring didn’t pick up arrhythmias and the watch did,” said senior study author Scott Ceresnak, MD, professor of pediatrics. Ceresnak is a pediatric cardiologist who treats patients at Stanford Medicine. “It’s awesome to see that newer technology can really make a difference in how we’re able to care for patients.”
    The study’s lead author is Aydin Zahedivash, MD, a clinical instructor in pediatrics.
    Most of the abnormal rhythms detected were not life-threatening, Ceresnak said. However, he added that the arrhythmias detected can cause distressing symptoms such as a racing heartbeat, dizziness and fainting.
    Skipping a beat, sometimes
    Doctors face two challenges in diagnosing children’s cardiac arrhythmias, or heart rhythm abnormalities.

    The first is that cardiac diagnostic devices, though they have improved in recent years, still aren’t ideal for kids. Ten to 20 years ago, a child had to wear, for 24 to 48 hours, a Holter monitor consisting of a device about the size of a smartphone attached by wires to five electrodes that were adhered to the child’s chest. Patients can now wear event monitors — in the form of a single sticker placed on the chest — for a few weeks. Although the event monitors are more comfortable and can be worn longer than a Holter monitor, they sometimes fall off early or cause problems such as skin irritation from adhesives.
    The second challenge is that even a few weeks of continuous monitoring may not capture the heart’s erratic behavior, as children experience arrhythmias unpredictably. Kids may go months between episodes, making it tricky for their doctors to determine what’s going on.
    Connor Heinz and his family faced both challenges when he experienced periods of a racing heartbeat starting at age 12: An adhesive monitor was too irritating, and he was having irregular heart rhythms only once every few months. Ceresnak thought he knew what was causing the racing rhythms, but he wanted confirmation. He suggested that Connor and his mom, Amy Heinz, could try using Amy’s smartwatch to record the rhythm the next time Connor’s heart began racing.
    Using smartwatches for measuring children’s heart rhythms is limited by the fact that existing smartwatch algorithms that detect heart problems have not been optimized for kids. Children have faster heartbeats than adults; they also tend to experience different types of abnormal rhythms than do adults who have cardiac arrhythmias.
    The paper showed that the smartwatches appear to help detect arrhythmias in kids, suggesting that it would be useful to design versions of the smartwatch algorithms based on real-world heart rhythm data from children.
    Evaluating medical records
    The researchers searched patients’ electronic medical records from 2018 to 2022 for the phrase “Apple Watch,” then checked to see which patients with this phrase in their records had submitted smartwatch data and received a diagnosis of a cardiac arrhythmia.

    Data from watches included alerts about patients’ heart rates and patient-initiated electrocardiograms, or ECGs, from an app that uses the electrical sensors in the watch. When patients activate the app, the ECG function records the heart’s electrical signals; physicians can use this pattern of electrical pulses to diagnose different types of heart problems.
    From 145 mentions of the smartwatch in patient records, 41 patients had arrhythmias confirmed. Of these, 18 patients had collected an ECG with their watches, and 23 patients had received a notification from the watch about a high heart rate.
    The information from the smartwatches prompted the children’s physicians to conduct medical workups, from which 29 children received new arrhythmia diagnoses. In 10 patients, the smartwatch diagnosed arrhythmias that traditional monitoring methods never picked up.
    One of those patients was Connor Heinz.
    “At a basketball tryout, he had another episode,” Amy Heinz recalled. “I put the watch on him and emailed a bunch of captures [of his heartbeat] to Dr. Ceresnak.” The information from the watch confirmed Ceresnak’s suspicion that Connor had supraventricular tachycardia.
    Most children with arrhythmias had the same condition as Connor, a pattern of racing heartbeats originating in the heart’s upper chambers.
    “These irregular heartbeats are not life-threatening, but they make kids feel terrible,” Ceresnak said. “They can be a problem and they’re scary, and if wearable devices can help us get to the bottom of what this arrhythmia is, that’s super helpful.”
    In many cases of supraventricular tachycardia, the abnormal heart rhythm is caused by a small short-circuit in the heart’s electrical circuitry. The problem can often be cured by a medical procedure called catheter ablation that destroys a small, precisely targeted region of heart cells causing the short circuit.
    Now 15, Connor has been successfully treated with catheter ablation and is playing basketball for his high school team in Menlo Park, California.
    The study also found smartwatch use noted in the medical records of 73 patients who did not ultimately receive diagnoses of arrhythmias.
    “A lot of kids have palpitations, a feeling of funny heartbeats, but the vast majority don’t have medically significant arrhythmias,” Ceresnak said. “In the future, I think this technology may help us rule out anything serious.”
    A new study
    The Stanford Medicine research team plans to conduct a study to further assess the utility of the Apple Watch for detecting children’s heart problems. The study will measure whether, in kids, heart rate and heart rhythm measurements from the watches match measurements from standard diagnostic devices.
    The study is open only to children who are already cardiology patients at Stanford Medicine Children’s Health.
    “The wearable market is exploding, and our kids are going to use them,” Ceresnak said. “We want to make sure the data we get from these devices is reliable and accurate for children. Down the road, we’d love to help develop pediatric-specific algorithms for monitoring heart rhythm.”
    The study was conducted without external funding. Apple was not involved in the work. Apple’s Investigator Support Program has agreed to donate watches for the next phase of the research.
    Apple’s Irregular Rhythm Notification and ECG app are cleared by the Food and Drug Administration for use by people 22 years of age or older. The high heart rate notification is available only to users 13 years of age or older.

  • Highly resolved precipitation maps based on AI

    Strong precipitation may cause natural disasters, such as flooding or landslides. Global climate models are required to forecast the frequency of these extreme events, which is expected to change as a result of climate change. Researchers at Karlsruhe Institute of Technology (KIT) have now developed a first method based on artificial intelligence (AI) that increases the resolution of the coarse precipitation fields generated by global climate models. The researchers succeeded in improving the spatial resolution of precipitation fields from 32 to two kilometers and the temporal resolution from one hour to ten minutes. This higher resolution is required to better forecast the more frequent occurrence of heavy local precipitation and the resulting natural disasters expected in the future.
    Many natural disasters, such as flooding or landslides, are directly caused by extreme precipitation. Researchers expect that increasing average temperatures will cause extreme precipitation events to further increase. To adapt to a changing climate and prepare for disasters at an early stage, precise local and global data on the current and future water cycle are indispensable. “Precipitation is highly variable in space and time and, hence, difficult to forecast, in particular on the local level,” says Dr. Christian Chwala from the Atmospheric Environmental Research Division of KIT’s Institute of Meteorology and Climate Research (IMK-IFU) at KIT’s Campus Alpine in Garmisch-Partenkirchen. “For this reason, we want to enhance the resolution of precipitation fields generated, for example, by global climate models and improve their classification with regard to possible threats, such as flooding.”
    Higher Resolution for More Precise Regional Climate Models
    Currently used global climate models are based on a grid that is not fine enough to precisely represent the variability of precipitation. Highly resolved precipitation maps can only be produced with computationally expensive and, hence, spatially or temporally limited models. “For this reason, we have developed an AI-based generative adversarial network (GAN) and trained it with high-resolution radar precipitation fields. In this way, the GAN learns how to generate realistic precipitation fields and derive their temporal sequence from coarsely resolved data,” says Luca Glawion from IMK-IFU. “The network is able to generate highly resolved radar precipitation films from very coarsely resolved maps.” These refined radar maps not only show how rain cells develop and move, but also precisely reconstruct local rain statistics and the corresponding extreme value distribution.
    “Our method serves as a basis to increase the resolution of coarsely grained precipitation fields, such that the high spatial and temporal variability of precipitation can be reproduced adequately and local effects can be studied,” says Julius Polz from IMK-IFU. “Our deep learning method is quicker by several orders of magnitude than the calculation of such highly resolved precipitation fields with numerical weather models usually applied to regionally refine data of global climate models.” The researchers point out that their method also generates an ensemble of different potential precipitation fields. This is important, as a multitude of physically plausible highly resolved solutions exists for each coarsely resolved precipitation field. Similar to a weather forecast, an ensemble allows for a more precise determination of the associated uncertainty.
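    To make the idea more concrete, here is a heavily simplified sketch of a generator network that upsamples a coarse precipitation field by a factor of 16 (roughly 32-kilometer to 2-kilometer grid spacing), in the spirit of the GAN described above. The architecture, sizes, and missing training loop are placeholders, not the KIT model.

    ```python
    import torch
    import torch.nn as nn

    class PrecipGenerator(nn.Module):
        """Toy super-resolution generator: coarse precipitation field -> fine field.

        Input:  (batch, 1, H, W) coarse field, e.g. 32 km grid spacing.
        Output: (batch, 1, 16*H, 16*W) refined field, e.g. 2 km grid spacing.
        A real GAN would also take a noise input (to produce an ensemble of
        plausible fields) and be trained against a discriminator on radar data.
        """
        def __init__(self, channels=64):
            super().__init__()
            layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU()]
            for _ in range(4):                      # 2^4 = 16x upsampling overall
                layers += [
                    nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                    nn.ReLU(),
                ]
            layers += [nn.Conv2d(channels, 1, kernel_size=3, padding=1), nn.ReLU()]  # rain rates >= 0
            self.net = nn.Sequential(*layers)

        def forward(self, coarse_field):
            return self.net(coarse_field)

    # Example: refine one 8x8 coarse field (about 256 km across) to a 128x128 field.
    coarse = torch.rand(1, 1, 8, 8)
    fine = PrecipGenerator()(coarse)
    print(fine.shape)   # torch.Size([1, 1, 128, 128])
    ```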
    Higher Resolution for Better Forecasts under Climate Change
    The results show that the AI model and methodology developed by the researchers will enable future use of neural networks to improve the spatial and temporal resolution of precipitation calculated by climate models. This will allow for a more precise analysis of the impacts and developments of precipitation in a changing climate.
    “In a next step, we will apply the method to global climate simulations that transfer specific large-scale weather situations to a future world with a changed climate, for example to the year 2100. The higher resolution of precipitation events simulated with our method will allow for a better estimation of the impacts that the weather conditions which caused the flooding of the river Ahr in 2021 would have had in a world warmer by 2 degrees,” Glawion explains. Such information is of decisive importance for developing climate adaptation methods.