More stories

  • Artificial intelligence identifies individuals at risk for heart disease complications

    For the first time, University of Utah Health scientists have shown that artificial intelligence could lead to better ways to predict the onset and course of cardiovascular disease. The researchers, working in conjunction with physicians from Intermountain Primary Children’s Hospital, developed unique computational tools to precisely measure the synergistic effects of existing medical conditions on the heart and blood vessels.
    The researchers say this comprehensive approach could help physicians foresee, prevent, or treat serious heart problems, perhaps even before a patient is aware of the underlying condition.
    Although the study only focused on cardiovascular disease, the researchers believe it could have far broader implications. In fact, they suggest that these findings could eventually lead to a new era of personalized, preventive medicine. Doctors would proactively contact patients to alert them to potential ailments and explain what can be done to alleviate them.
    “We can turn to AI to help refine the risk for virtually every medical diagnosis,” says Martin Tristani-Firouzi, M.D., the study’s corresponding author, a pediatric cardiologist at U of U Health and Intermountain Primary Children’s Hospital, and a scientist at the Nora Eccles Harrison Cardiovascular Research and Training Institute. “The risk of cancer, the risk of thyroid surgery, the risk of diabetes — any medical term you can imagine.”
    The study appears in the online journal PLOS Digital Health.
    Current methods for calculating the combined effects of various risk factors — such as demographics and medical history — on cardiovascular disease are often imprecise and subjective, according to Mark Yandell, Ph.D., senior author of the study, a professor of human genetics, H.A. and Edna Benning Presidential Endowed Chair at U of U Health, and co-founder of Backdrop Health. As a result, these methods fail to identify certain interactions that could have profound effects on the health of the heart and blood vessels.
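    As a loose illustration of how an interaction between two conditions can be quantified, the sketch below computes the relative excess risk due to interaction (RERI), a standard epidemiological measure, on synthetic records. This is not the method of the PLOS Digital Health paper; the condition names, prevalences and risks are all invented.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        n = 100_000

        # Synthetic patient records (all values invented for illustration)
        df = pd.DataFrame({
            "hypertension": rng.random(n) < 0.30,
            "diabetes": rng.random(n) < 0.15,
        })
        # Outcome risk with a built-in synergistic term for the comorbid group
        risk = (0.02 + 0.05 * df["hypertension"] + 0.04 * df["diabetes"]
                + 0.06 * (df["hypertension"] & df["diabetes"]))
        df["cvd"] = rng.random(n) < risk

        p00 = df.loc[~df.hypertension & ~df.diabetes, "cvd"].mean()  # baseline risk
        rr10 = df.loc[df.hypertension & ~df.diabetes, "cvd"].mean() / p00
        rr01 = df.loc[~df.hypertension & df.diabetes, "cvd"].mean() / p00
        rr11 = df.loc[df.hypertension & df.diabetes, "cvd"].mean() / p00

        # RERI > 0 means the comorbid pair raises risk beyond the sum of the two
        # separate effects, i.e. a synergistic interaction
        reri = rr11 - rr10 - rr01 + 1
        print(f"RERI = {reri:.2f}")   # ~3 here, reflecting the built-in synergy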

  • New way of gaining quantum control from loss

    Researchers at the Hong Kong University of Science and Technology (HKUST) have demonstrated a new way to control a quantum state through the loss of particles, a process that is usually avoided in quantum devices, offering a new route to the realization of unprecedented quantum states.
    Manipulating a quantum system requires subtle control of the quantum state with no imperfect operations; otherwise, the useful information encoded in the quantum states is scrambled. One of the most common detrimental processes is the loss of the particles that constitute the system. This issue has long been seen as an enemy of quantum control and was avoided by isolating the system. But now, researchers at HKUST have discovered a way to gain quantum control from loss in an atomic quantum system.
    The finding was published recently in Nature Physics.
    Prof. Gyu-Boong JO, lead researcher of the study and Hari Harilela Associate Professor of Physics at HKUST, said the result demonstrated loss as a potential knob for quantum control.
    “The textbook taught us that in quantum mechanics, the system of interest will not suffer from a loss of particles as it is well isolated from the environment,” said Prof. Jo. “However, an open system, ranging from classical to quantum ones, is ubiquitous. Such open systems, effectively described by non-Hermitian physics, exhibit various counter-intuitive phenomena that cannot be observed in Hermitian systems.”
    The idea of non-Hermitian physics with loss has been actively examined in classical systems, but such counter-intuitive phenomena were only recently realised and observed in genuine quantum systems. In the study, the HKUST researchers adjusted the system’s parameters so that they swept out a closed loop around a special point, known as an exceptional point, that occurs in non-Hermitian systems. They discovered that the direction of the loop (i.e. whether it goes clockwise or anti-clockwise) determines the final quantum state.
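    To see why a loop around an exceptional point matters, consider the textbook two-level non-Hermitian matrix H(z) = [[z, 1], [1, -z]], whose eigenvalues ±sqrt(z^2 + 1) coalesce at z = ±i. The toy sketch below, which is not the HKUST Hamiltonian and omits the dynamics responsible for the direction dependence, tracks one eigenvalue around a closed loop and shows the branch-point structure underlying the experiment: after one loop the two eigenvalues have swapped.

        import numpy as np

        def eigvals(z):
            # Toy non-Hermitian two-level matrix; exceptional points at z = +/- 1j
            return np.linalg.eigvals(np.array([[z, 1.0], [1.0, -z]]))

        thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
        loop = 1j + 0.5 * np.exp(1j * thetas)    # closed loop around z = +1j

        tracked = eigvals(loop[0])[0]            # follow one eigenvalue continuously
        start = tracked
        for z in loop[1:]:
            lam = eigvals(z)
            tracked = lam[np.argmin(np.abs(lam - tracked))]

        print(np.isclose(tracked, start))        # False: ended on the other branch
        print(np.isclose(tracked, -start))       # True: the eigenvalues swapped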
    Jensen LI, Professor of Physics at HKUST and the other leader of the team, said, “This chiral behavior of a directional quantum state transferring around an exceptional point can be an important ingredient in quantum control. We are at the starting point in controlling non-Hermitian quantum systems.”
    Another implication of the findings concerns the interplay of two seemingly unrelated mechanisms: non-Hermitian physics (induced by loss) and spin-orbit coupling. Spin-orbit coupling (SOC) is an essential mechanism behind intriguing quantum phenomena such as the topological insulator, a material that behaves as an insulator in its interior while electrons flow along its surface as in a conductor.
    Despite the major advances in non-Hermitian physics, SOC has been widely studied only in Hermitian systems, and much less is known experimentally about the role played by loss in spin-orbit-coupled quantum systems. A better understanding of such non-Hermitian SOC is of paramount importance to the development of novel materials, but it remains elusive in condensed matter physics.
    In this work, however, the researchers realized for the first time a dissipative spin-orbit-coupled system of ultracold atoms, fully characterizing its quantum state and demonstrating chiral quantum control in the context of non-Hermitian physics. This finding sets the stage for future exploration of spin-orbit-coupling physics in the non-Hermitian regime. It also highlights the remarkable capability of non-Hermitian quantum systems to realize, characterize, and harness two fundamental mechanisms, loss and SOC, providing a new approach for precisely simulating such competing mechanisms in a highly controllable quantum simulator built from ultracold atoms.
    The research was funded by the Research Grants Council of Hong Kong, the Croucher Foundation, and the Harilela Foundation.

  • New software may help neurology patients capture clinical data with their own smartphones

    New pose estimation software has the potential to help neurologists and their patients capture important clinical data using simple tools such as smartphones and tablets, according to a study by Johns Hopkins Medicine, the Kennedy Krieger Institute and the University of Maryland. Human pose estimation is a form of artificial intelligence that automatically detects and labels specific landmarks on the human body, such as elbows and fingers, from simple images or videos.
    To measure the speed, rhythm and range of a patient’s motor function, neurologists will often have the patient perform certain repetitive movements, such as tapping fingers or opening and closing hands. An objective assessment of these tests provides the most accurate insight into the severity of a patient’s condition, thus better informing treatment decisions. However, objective motion capture devices are often expensive or can measure only one type of movement. Therefore, most neurologists must make subjective assessments of their patients’ motor function, usually by simply watching patients as they carry out different tasks.
    The new Hopkins-led study sought to determine whether pose estimation software developed by the research team could track human motion as accurately as manual, frame-by-frame visual inspection of video recordings of patients performing movements.
    “Our goal was to develop a fast, inexpensive and easily accessible method to objectively measure a patient’s movements across multiple extremities,” says study lead author Ryan Roemmich, Ph.D., an assistant professor in the Department of Physical Medicine and Rehabilitation at the Johns Hopkins University School of Medicine and a human movement scientist at the Kennedy Krieger Institute.
    The research team had 10 healthy subjects between the ages of 24 and 33 record smartphone video of themselves performing five tasks often assigned to neurology patients during motor function assessments: finger taps, hand closures, toe taps, heel taps and hand rotations. The subjects performed each task at four different speeds. Their movements were tracked using a freely available human pose estimation algorithm, then fed into the team’s software for evaluation.
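    The release does not name the pose estimator the team used, so the sketch below stands in MediaPipe Hands, a freely available alternative, to show what such a pipeline can look like: extract a thumb-to-index-fingertip distance from each video frame, then count tap cycles as peaks in that signal. The file name and peak threshold are illustrative assumptions, not details from the study.

        import cv2
        import mediapipe as mp
        import numpy as np
        from scipy.signal import find_peaks

        cap = cv2.VideoCapture("finger_taps.mp4")   # hypothetical input video
        distances = []
        with mp.solutions.hands.Hands(max_num_hands=1) as hands:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if result.multi_hand_landmarks:
                    lm = result.multi_hand_landmarks[0].landmark
                    # Landmark 4 = thumb tip, 8 = index fingertip (MediaPipe convention)
                    distances.append(np.hypot(lm[8].x - lm[4].x, lm[8].y - lm[4].y))
        cap.release()

        # Each prominent maximum in the distance signal marks one tap cycle
        peaks, _ = find_peaks(np.asarray(distances), prominence=0.02)
        print(f"Detected {len(peaks)} finger taps")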
    The results showed that across all five tasks, the software detected more than 96% of the movements identified by the manual inspection method. These results held up across several variables, including location, type of smartphone used and method of recording: Some subjects placed their smartphone on a stable surface and hit “record,” while others had a family member or friend hold the device.
    With encouraging results from their sample of young, healthy people, the research team’s next step is to test the software on people who require neurological care. Currently, the team is collecting a large sample of videos of people with Parkinson’s disease doing the same five motor function tasks that the healthy subjects performed.
    “We want anyone with a smartphone or tablet to be able to record video that can be successfully analyzed by their physician,” says Roemmich. “With further development of this pose estimation software, motor assessments could eventually be performed and analyzed without the patient having to leave their home.”
    Story Source: Materials provided by Johns Hopkins Medicine.

  • Social media use tied to poor physical health

    Social media use has been linked to biological and psychological indicators associated with poor physical health among college students, according to the results of a new study by a University at Buffalo researcher.
    Research participants who used social media excessively were found to have higher levels of C-reactive protein (CRP), a biological marker of chronic inflammation that predicts serious illnesses, such as diabetes, certain cancers and cardiovascular disease. In addition to elevated CRP levels, results suggest higher social media use was also related to somatic symptoms, like headaches, chest and back pains, and more frequent visits to doctors and health centers for the treatment of illness.
    “Social media use has become an integral part of many young adults’ daily lives,” said David Lee, PhD, the paper’s first author and assistant professor of communication in the UB College of Arts and Sciences. “It’s critical that we understand how engagement across these platforms contributes to physical health.”
    The findings appear in the journal Cyberpsychology, Behavior, and Social Networking.
    For decades, researchers have devoted attention to how social media engagement relates to users’ mental health, but its effects on physical health have not been thoroughly investigated. Recent surveys indicate social media usage is particularly high for people in their late teens and early 20s, a population that spends about six hours a day texting, online or using social media. And though a few studies have found links between social media usage and physical health, that research relied largely on self-reports or examined the effects of usage on a single platform.
    “Our goal was to extend prior work by examining how social media use across several platforms is associated with physical health outcomes measured with biological, behavioral and self-report measures,” said Lee, an expert on health outcomes related to social interactions.

  • Harnessing noise in optical computing for AI

    Artificial intelligence and machine learning are currently affecting our lives in many small but impactful ways. For example, AI and machine learning applications recommend entertainment we might enjoy through streaming services such as Netflix and Spotify.
    In the near future, it’s predicted that these technologies will have an even larger impact on society through activities such as driving fully autonomous vehicles, enabling complex scientific research and facilitating medical discoveries.
    But the computers used for AI and machine learning demand a lot of energy. Currently, the need for computing power related to these technologies is doubling roughly every three to four months. And cloud computing data centers used by AI and machine learning applications worldwide are already devouring more electrical power per year than some small countries. It’s easy to see that this level of energy consumption is unsustainable.
    A research team led by the University of Washington has developed new optical computing hardware for AI and machine learning that is faster and much more energy efficient than conventional electronics. The research also addresses another challenge — the ‘noise’ inherent to optical computing that can interfere with computing precision.
    In a new paper, published Jan. 21 in Science Advances, the team demonstrates an optical computing system for AI and machine learning that not only mitigates this noise but actually uses some of it as input to help enhance the creative output of the artificial neural network within the system.
    “We’ve built an optical computer that is faster than a conventional digital computer,” said lead author Changming Wu, a UW doctoral student in electrical and computer engineering. “And also, this optical computer can create new things based on random inputs generated from the optical noise that most researchers tried to evade.”
    Optical computing noise essentially comes from stray light particles, or photons, that originate from the operation of lasers within the device and from background thermal radiation. To target noise, the researchers connected their optical computing core to a special type of machine learning network called a Generative Adversarial Network.
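    For readers unfamiliar with the architecture, here is a minimal PyTorch sketch of a generative adversarial network that makes the role of noise explicit: a random vector is the generator's only input. In the UW system the randomness comes from the optical hardware itself; the pseudo-random torch.randn below is just a stand-in, and the layer sizes are arbitrary.

        import torch
        import torch.nn as nn

        latent_dim = 64

        generator = nn.Sequential(               # maps a noise vector to an image
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Tanh(),  # e.g. a flattened 28x28 image
        )
        discriminator = nn.Sequential(           # scores real vs. generated images
            nn.Linear(28 * 28, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

        z = torch.randn(16, latent_dim)          # stand-in for optically generated noise
        fake_images = generator(z)               # "creative output" seeded by noise
        scores = discriminator(fake_images)
        print(fake_images.shape, scores.shape)   # torch.Size([16, 784]) torch.Size([16, 1])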

  • How robots learn to hike

    ETH Zurich researchers led by Marco Hutter have developed a new control approach that enables a legged robot, called ANYmal, to move quickly and robustly over difficult terrain. Thanks to machine learning, the robot can combine its visual perception of the environment with its sense of touch for the first time.
    Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-metre-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. But ANYmal, the quadrupedal robot from the Robotic Systems Lab at ETH Zurich, overcomes the 120 vertical metres effortlessly in a 31-minute hike. That’s 4 minutes faster than the estimated duration for human hikers — and with no falls or missteps.
    This is made possible by a new control technology, which researchers at ETH Zurich led by robotics professor Marco Hutter recently presented in the journal Science Robotics. “The robot has learned to combine visual perception of its environment with proprioception — its sense of touch — based on direct leg contact. This allows it to tackle rough terrain faster, more efficiently and, above all, more robustly,” Hutter says. In the future, ANYmal could be deployed anywhere that is too dangerous for humans or too impassable for other robots.
    Perceiving the environment accurately
    To navigate difficult terrain, humans and animals quite automatically combine the visual perception of their environment with the proprioception of their legs and hands. This allows them to easily handle slippery or soft ground and move around with confidence, even when visibility is low. Until now, legged robots have been able to do this only to a limited extent.
    “The reason is that the information about the immediate environment recorded by laser sensors and cameras is often incomplete and ambiguous,” explains Takahiro Miki, a doctoral student in Hutter’s group and lead author of the study. For example, tall grass, shallow puddles or snow appear as insurmountable obstacles or are partially invisible, even though the robot could actually traverse them. In addition, the robot’s view can be obscured in the field by difficult lighting conditions, dust or fog.
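    The release does not spell out the controller's architecture, but the fusion idea can be sketched as a policy network that concatenates a proprioceptive state vector with an encoding of an exteroceptive terrain scan, as below. The dimensions (a 187-point height scan, 12 joint targets) are illustrative assumptions, and the real controller is trained end-to-end with reinforcement learning rather than written by hand.

        import torch
        import torch.nn as nn

        class FusionPolicy(nn.Module):
            def __init__(self, proprio_dim=48, height_scan_dim=187, act_dim=12):
                super().__init__()
                self.vision_enc = nn.Sequential(   # exteroception: terrain height scan
                    nn.Linear(height_scan_dim, 128), nn.ELU(), nn.Linear(128, 64))
                self.policy = nn.Sequential(       # fuse with proprioception
                    nn.Linear(proprio_dim + 64, 256), nn.ELU(),
                    nn.Linear(256, act_dim))       # e.g. 12 joint targets

            def forward(self, proprio, height_scan):
                z = self.vision_enc(height_scan)
                return self.policy(torch.cat([proprio, z], dim=-1))

        policy = FusionPolicy()
        action = policy(torch.randn(1, 48), torch.randn(1, 187))
        print(action.shape)   # torch.Size([1, 12])

    Because the two sensory streams are fused inside one network, such a policy can learn to lean on its sense of touch whenever the terrain scan is noisy or misleading.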

  • AI light-field camera reads 3D facial expressions

    A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology.
    Unlike a conventional camera, the light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone, while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention as it can reconstruct images in a variety of ways including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications.
    However, optical crosstalk between the micro-lenses and shadows cast by external light sources in the environment has prevented existing light-field cameras from providing accurate image contrast and 3D reconstruction.
    The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. With an external light source shining on a face at 0-, 30-, and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a layer that absorbs visible and near-IR wavelengths between the micro-lens arrays, the team minimized optical crosstalk while increasing the image contrast by 2.1 times.
    Through this technique, the team could overcome the limitations of existing light-field cameras and was able to develop their NIR-based light-field camera (NIR-LFC), optimized for the 3D image reconstruction of facial expressions. Using the NIR-LFC, the team acquired high-quality 3D reconstruction images of facial expressions expressing various emotions regardless of the lighting conditions of the surrounding environment.
    The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85%, a statistically significant improvement over the accuracy achieved with 2D images. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in the 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.
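    The release does not say which classifier was used, so the sketch below stands in a support-vector machine trained on synthetic per-face feature vectors to show the shape of such an expression-classification experiment; the feature count, class count, and data are all invented.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_samples, n_features, n_classes = 300, 20, 5   # e.g. five expression classes

        # Synthetic stand-in for per-face 3D distance features (invented data)
        y = rng.integers(0, n_classes, size=n_samples)
        X = rng.normal(size=(n_samples, n_features)) + 0.5 * y[:, None]

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        print(cross_val_score(clf, X, y, cv=5).mean())  # mean held-out accuracy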
    Professor Ki-Hun Jeong said, “The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans.” To highlight the significance of this research, he added, “It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions.”
    This research was published in Advanced Intelligent Systems online on December 16, under the title, “Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images.” This research was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.

  • Intense drought or flash floods can shock the global economy

    Extremes in rainfall — whether intense drought or flash floods — can catastrophically slow the global economy, researchers report in the Jan. 13 Nature. And those impacts are most felt by wealthy, industrialized nations, the researchers found.

    A global analysis showed that episodes of intense drought led to the biggest shocks to economic productivity. But days with intense deluges — such as occurred in July 2021 in Europe — also produced strong shocks to the economic system (SN: 8/23/21). Most surprising, though, was that agricultural economies appeared to be relatively resilient against these types of shocks, says Maximilian Kotz, an environmental economist at the Potsdam Institute for Climate Impact Research in Germany. Instead, two other business sectors — manufacturing and services — were hit hardest.

    As a result, the nations most affected by rainfall extremes weren’t those that tended to be poorer, with agriculture-dependent societies, but the wealthiest nations, whose economies are tied more heavily to manufacturing and services, such as banking, health care and entertainment.

    It’s well established that rising temperatures can take a toll on economic productivity, for example by contributing to days lost at work or doctors’ visits (SN: 11/28/18). Extreme heat also has clear impacts on human behavior (SN: 8/18/21). But what effect climate change–caused shifts in rainfall might have on the global economy hasn’t been so straightforward.

    That’s in part because previous studies looking at a possible connection between rainfall and productivity have focused on changes in yearly precipitation, a timeframe that “is just too coarse to really describe what’s actually happening [in] the economy,” Kotz says. Such studies showed that more rain in a given year was basically beneficial, which makes sense in that having more water available is good for agriculture and other human activities, he adds. “But these findings were mainly focused on agriculturally dependent economies and poorer economies.”

    In the new study, Kotz and his colleagues looked at three timescales — annual, monthly and daily rainfall — and examined what happened to economic output during periods in which rainfall deviated from average historical values. In particular, Kotz says, they introduced two new measures not considered in previous studies: the number of rainy days that a region gets in a year and extreme daily rainfall. The team then examined these factors across 1,554 regions around the world — which included many subregions within 77 countries — from 1979 to 2019.
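    To illustrate the shape of such an analysis, the sketch below runs a fixed-effects panel regression of economic growth on the two new rainfall measures using synthetic data. The variable names, coefficients, and sample sizes are invented, and the study's actual econometric specification is more elaborate.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_regions, years = 50, np.arange(1979, 2020)    # cf. 1,554 regions, 1979-2019
        df = pd.DataFrame({
            "region": np.repeat(np.arange(n_regions), len(years)),
            "year": np.tile(years, n_regions),
        })
        df["wet_days"] = rng.normal(size=len(df))       # deviation from regional norm
        df["extreme_rain"] = rng.normal(size=len(df))   # extreme daily rainfall measure
        df["growth"] = (0.1 * df["wet_days"] - 0.3 * df["extreme_rain"]
                        + rng.normal(size=len(df)))     # synthetic growth rates

        # Region and year fixed effects absorb place- and time-specific shocks
        fit = smf.ols("growth ~ wet_days + extreme_rain + C(region) + C(year)",
                      data=df).fit()
        print(fit.params[["wet_days", "extreme_rain"]])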

    The disparity over which regions are hit hardest is “at odds with the conventional wisdom” — and with some previous studies — that agriculture is vulnerable to extreme rainfall, writes Xin-Zhong Liang, an atmospheric scientist at the University of Maryland in College Park, in a commentary in the same issue of Nature. Researchers may need to incorporate other factors in future assessments, such as growth stages of crops, land drainage or irrigation, in order to really understand how these extremes affect agriculture, Liang writes.

    “That was definitely surprising for us as well,” Kotz says. Although the study doesn’t specifically try to answer why manufacturing and services were so affected, it makes intuitive sense, he says. Flooding, for example, can damage infrastructure and disrupt transportation, effects that can then propagate along supply chains. “It’s feasible that these things might be most important in manufacturing, where infrastructure is very important, or in the services sectors, where the human experience is very much dictated by these daily aspects of weather and rainfall.”

    Including daily and monthly rainfall extremes in this type of analysis was “an important innovation” because it revealed new economic vulnerabilities, says Tamma Carleton, an environmental economist at the University of California, Santa Barbara, who was not involved in the new work. However, Carleton says, “the findings in the paper are not yet conclusive on who is most vulnerable and why, and instead raise many important questions for future research to unpack.”

    Extreme rainfall events, including both drought and deluge, will occur more frequently as global temperatures rise, the United Nations’ Intergovernmental Panel on Climate Change noted in August (SN: 8/9/21). The study’s findings, Kotz says, offer yet another stark warning to the industrialized, wealthy world: Human-caused climate change will have “large economic consequences.”