More stories

  • New software may help neurology patients capture clinical data with their own smartphones

    New pose estimation software has the potential to help neurologists and their patients capture important clinical data using simple tools such as smartphones and tablets, according to a study by Johns Hopkins Medicine, the Kennedy Krieger Institute and the University of Maryland. Human pose estimation is a form of artificial intelligence that automatically detects and labels specific landmarks on the human body, such as elbows and fingers, from simple images or videos.
    To measure the speed, rhythm and range of a patient’s motor function, neurologists will often have the patient perform certain repetitive movements, such as tapping fingers or opening and closing hands. An objective assessment of these tests provides the most accurate insight into the severity of a patient’s condition, thus better informing treatment decisions. However, objective motion capture devices are often expensive or can measure only one type of movement. Therefore, most neurologists must make subjective assessments of their patients’ motor function, usually by simply watching patients as they carry out different tasks.
    The new Hopkins-led study sought to determine whether pose estimation software developed by the research team could track human motion as accurately as manual, frame-by-frame visual inspection of video recordings of patients performing movements.
    “Our goal was to develop a fast, inexpensive and easily accessible method to objectively measure a patient’s movements across multiple extremities,” says study lead author Ryan Roemmich, Ph.D., an assistant professor in the Department of Physical Medicine and Rehabilitation at the Johns Hopkins University School of Medicine and a human movement scientist at the Kennedy Krieger Institute.
    The research team had 10 healthy subjects between the ages of 24 and 33 record smartphone video of themselves performing five tasks often assigned to neurology patients during motor function assessments: finger taps, hand closures, toe taps, heel taps and hand rotations. The subjects performed each task at four different speeds. Their movements were tracked using a freely available human pose estimation algorithm, then fed into the team’s software for evaluation.
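    The team’s own analysis software is not described in detail in this article, so the sketch below is only an illustration of the general approach, under the assumption that a pose estimator has already produced per-frame thumb-tip and index-fingertip coordinates; finger taps are then counted from the fingertip-to-thumb separation signal.
```python
# Illustration only: the study's analysis software is not described in this
# article. Assumes a pose estimator has already produced per-frame (x, y)
# coordinates for the thumb tip and index fingertip.
import numpy as np
from scipy.signal import find_peaks

def count_finger_taps(thumb_xy, index_xy, fps):
    """Count taps from the fingertip-to-thumb separation over time.

    thumb_xy, index_xy: arrays of shape (n_frames, 2) in pixel coordinates.
    fps: frame rate of the video, used to report the tapping rate.
    """
    dist = np.linalg.norm(index_xy - thumb_xy, axis=1)             # separation per frame
    dist = (dist - dist.min()) / (dist.max() - dist.min() + 1e-9)  # normalize to 0..1
    # Each tap shows up as a prominent local maximum in separation (fingers
    # apart) followed by closure; prominence filtering removes estimator jitter.
    peaks, _ = find_peaks(dist, prominence=0.3)
    duration_s = len(dist) / fps
    return len(peaks), len(peaks) / duration_s                     # tap count, taps per second

# Quick demo on synthetic data: 5 taps over 5 seconds at 30 frames per second.
t = np.linspace(0, 5, 150)
index_tip = np.stack([50 + 40 * np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
print(count_finger_taps(np.zeros((150, 2)), index_tip, fps=30))    # -> (5, 1.0)
```
    Speed, rhythm and range could be summarized from the same signal, for example from the inter-peak intervals and peak amplitudes.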
    The results showed that across all five tasks, the software accurately detected more than 96% of the movements detected by the manual inspection method. These results held up across several variables, including location, type of smartphone used and method of recording: Some subjects placed their smartphone on a stable surface and hit “record,” while others had a family member or friend hold the device.
    With encouraging results from their sample of young, healthy people, the research team’s next step is to test the software on people who require neurological care. Currently, the team is collecting a large sample of videos of people with Parkinson’s disease doing the same five motor function tasks that the healthy subjects performed.
    “We want anyone with a smartphone or tablet to be able to record video that can be successfully analyzed by their physician,” says Roemmich. “With further development of this pose estimation software, motor assessments could eventually be performed and analyzed without the patient having to leave their home.”
    Story Source:
    Materials provided by Johns Hopkins Medicine.

  • Social media use tied to poor physical health

    Social media use has been linked to biological and psychological indicators associated with poor physical health among college students, according to the results of a new study by a University at Buffalo researcher.
    Research participants who used social media excessively were found to have higher levels of C-reactive protein (CRP), a biological marker of chronic inflammation that predicts serious illnesses, such as diabetes, certain cancers and cardiovascular disease. In addition to elevated CRP levels, results suggest higher social media use was also related to somatic symptoms, like headaches, chest and back pains, and more frequent visits to doctors and health centers for the treatment of illness.
    “Social media use has become an integral part of many young adults’ daily lives,” said David Lee, PhD, the paper’s first author and assistant professor of communication in the UB College of Arts and Sciences. “It’s critical that we understand how engagement across these platforms contributes to physical health.”
    The findings appear in the journal Cyberpsychology, Behavior, and Social Networking.
    For decades, researchers have devoted attention to how social media engagement relates to users’ mental health, but its effects on physical health have not been thoroughly investigated. Recent surveys indicate social media usage is particularly high for people in their late teens and early 20s, a population that spends about six hours a day texting, online or using social media. And though a few studies have found links between social media usage and physical health, that research relied largely on self-reports or examined the effects of use on only a single platform.
    “Our goal was to extend prior work by examining how social media use across several platforms is associated with physical health outcomes measured with biological, behavioral and self-report measures,” said Lee, an expert on health outcomes related to social interactions.

  • Harnessing noise in optical computing for AI

    Artificial intelligence and machine learning are currently affecting our lives in many small but impactful ways. For example, AI and machine learning applications recommend entertainment we might enjoy through streaming services such as Netflix and Spotify.
    In the near future, it’s predicted that these technologies will have an even larger impact on society through activities such as driving fully autonomous vehicles, enabling complex scientific research and facilitating medical discoveries.
    But the computers used for AI and machine learning demand a lot of energy. Currently, the need for computing power related to these technologies is doubling roughly every three to four months. And cloud computing data centers used by AI and machine learning applications worldwide are already devouring more electrical power per year than some small countries. It’s easy to see that this level of energy consumption is unsustainable.
    A research team led by the University of Washington has developed new optical computing hardware for AI and machine learning that is faster and much more energy efficient than conventional electronics. The research also addresses another challenge — the ‘noise’ inherent to optical computing that can interfere with computing precision.
    In a new paper, published Jan. 21 in Science Advances, the team demonstrates an optical computing system for AI and machine learning that not only mitigates this noise but actually uses some of it as input to help enhance the creative output of the artificial neural network within the system.
    “We’ve built an optical computer that is faster than a conventional digital computer,” said lead author Changming Wu, a UW doctoral student in electrical and computer engineering. “And also, this optical computer can create new things based on random inputs generated from the optical noise that most researchers tried to evade.”
    Optical computing noise essentially comes from stray light particles, or photons, that originate from the operation of lasers within the device and background thermal radiation. To target noise, the researchers connected their optical computing core to a special type of machine learning network, called a Generative Adversarial Network.
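    The paper’s network architecture and optical hardware are not described here; the fragment below is only a generic sketch of the software idea, namely that a GAN generator turns a random noise vector into new outputs, with all layer sizes chosen arbitrarily for illustration.
```python
# Generic illustration of the idea described above, not the UW team's actual
# model: a GAN generator maps a random noise vector to new outputs. Layer
# sizes are arbitrary placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# In the optical system described above, the role of z would be played by
# measured optical noise rather than software-generated pseudorandom numbers.
z = torch.randn(16, 64)      # stand-in for sampled noise vectors
samples = Generator()(z)     # 16 generated outputs, values in [-1, 1]
```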

  • How robots learn to hike

    ETH Zurich researchers led by Marco Hutter have developed a new control approach that enables a legged robot, called ANYmal, to move quickly and robustly over difficult terrain. Thanks to machine learning, the robot can combine its visual perception of the environment with its sense of touch for the first time.
    Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-metre-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. But ANYmal, the quadrupedal robot from the Robotic Systems Lab at ETH Zurich, overcomes the 120 vertical metres effortlessly in a 31-minute hike. That’s 4 minutes faster than the estimated duration for human hikers — and with no falls or missteps.
    This is made possible by a new control technology, which researchers at ETH Zurich led by robotics professor Marco Hutter recently presented in the journal Science Robotics. “The robot has learned to combine visual perception of its environment with proprioception — its sense of touch — based on direct leg contact. This allows it to tackle rough terrain faster, more efficiently and, above all, more robustly,” Hutter says. In the future, ANYmal could be deployed anywhere that is too dangerous for humans or too impassable for other robots.
    Perceiving the environment accurately
    To navigate difficult terrain, humans and animals quite automatically combine the visual perception of their environment with the proprioception of their legs and hands. This allows them to easily handle slippery or soft ground and move around with confidence, even when visibility is low. Until now, legged robots have been able to do this only to a limited extent.
    “The reason is that the information about the immediate environment recorded by laser sensors and cameras is often incomplete and ambiguous,” explains Takahiro Miki, a doctoral student in Hutter’s group and lead author of the study. For example, tall grass, shallow puddles or snow appear as insurmountable obstacles or are partially invisible, even though the robot could actually traverse them. In addition, the robot’s view can be obscured in the field by difficult lighting conditions, dust or fog.
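    The controller in the ETH Zurich work is a learned policy whose exact architecture is not reproduced in this article. As a rough sketch only, the fragment below shows one common way such a policy could fuse exteroceptive features (a terrain height map from cameras and laser sensors) with proprioceptive features (joint states and leg contacts); all input and output dimensions are placeholders.
```python
# Rough sketch only: one common way to fuse exteroceptive and proprioceptive
# inputs in a learned locomotion policy. Dimensions and layers are placeholders,
# not the architecture from the Science Robotics paper.
import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    def __init__(self, height_map_dim=187, proprio_dim=48, action_dim=12):
        super().__init__()
        # Separate encoders for the terrain height map (from cameras/lidar)
        # and the proprioceptive state (joint positions, velocities, contacts).
        self.extero = nn.Sequential(nn.Linear(height_map_dim, 128), nn.ELU())
        self.proprio = nn.Sequential(nn.Linear(proprio_dim, 128), nn.ELU())
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ELU(),
                                  nn.Linear(128, action_dim))

    def forward(self, height_map, proprio_state):
        # Concatenate both feature streams before predicting leg commands.
        fused = torch.cat([self.extero(height_map), self.proprio(proprio_state)], dim=-1)
        return self.head(fused)

policy = FusionPolicy()
actions = policy(torch.randn(1, 187), torch.randn(1, 48))   # -> shape (1, 12)
```
    In the published system the combination itself is learned, so the robot can fall back on its sense of touch when vision is unreliable; this sketch only shows the input plumbing, not that training process.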

  • AI light-field camera reads 3D facial expressions

    A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology.
    Unlike a conventional camera, the light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone, while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention as it can reconstruct images in a variety of ways, including multi-view imaging, refocusing, and 3D image acquisition, giving rise to many potential applications.
    However, optical crosstalk between the micro-lenses and shadows cast by external light sources in the environment has prevented existing light-field cameras from providing accurate image contrast and 3D reconstruction.
    The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on ambient light. When an external light source was shone on a face at 0-, 30- and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team could minimize optical crosstalk while increasing image contrast by 2.1 times.
    Through this technique, the team could overcome the limitations of existing light-field cameras and developed a NIR-based light-field camera (NIR-LFC) optimized for the 3D image reconstruction of facial expressions. Using the NIR-LFC, the team acquired high-quality 3D reconstructions of facial expressions conveying various emotions, regardless of the lighting conditions of the surrounding environment.
    The facial expressions in the acquired 3D images were distinguished through machine learning with an average of 85% accuracy — a statistically significant figure compared to when 2D images were used. Furthermore, by calculating the interdependency of distance information that varies with facial expression in 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.
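    The article does not specify which classifier or which features the team used; purely as an illustration, expression recognition from 3D facial data might be set up as below, with pairwise distances between facial landmarks as features and random placeholder data standing in for the real NIR-LFC reconstructions.
```python
# Illustration only: the KAIST study's classifier and features are not detailed
# in this article. Pairwise distances between 3D facial landmarks are used as
# features, with random placeholder data standing in for real reconstructions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pairwise_distance_features(landmarks_3d):
    """landmarks_3d: (n_landmarks, 3) array -> vector of pairwise distances."""
    diffs = landmarks_3d[:, None, :] - landmarks_3d[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks_3d), k=1)
    return dists[iu]

rng = np.random.default_rng(0)
faces = rng.normal(size=(120, 10, 3))          # 120 fake 3D faces, 10 landmarks each
X = np.array([pairwise_distance_features(f) for f in faces])
y = rng.integers(0, 3, size=120)               # three hypothetical expression labels
# On real reconstructions the cross-validated accuracy would be meaningful;
# on random placeholder labels it simply hovers around chance.
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```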
    Professor Ki-Hun Jeong said, “The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans.” To highlight the significance of this research, he added, “It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions.”
    This research was published in Advanced Intelligent Systems online on December 16 under the title “Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images.” It was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.

  • Intense drought or flash floods can shock the global economy

    Extremes in rainfall — whether intense drought or flash floods — can catastrophically slow the global economy, researchers report in the Jan. 13 Nature. And those impacts are most felt by wealthy, industrialized nations, the researchers found.

    A global analysis showed that episodes of intense drought led to the biggest shocks to economic productivity. But days with intense deluges — such as occurred in July 2021 in Europe — also produced strong shocks to the economic system (SN: 8/23/21). Most surprising, though, was that agricultural economies appeared to be relatively resilient against these types of shocks, says Maximilian Kotz, an environmental economist at the Potsdam Institute for Climate Impact Research in Germany. Instead, two other business sectors — manufacturing and services — were the hardest hit.

    As a result, the nations most affected by rainfall extremes weren’t those that tended to be poorer, with agriculture-dependent societies, but the wealthiest nations, whose economies are tied more heavily to manufacturing and services, such as banking, health care and entertainment.

    It’s well established that rising temperatures can take a toll on economic productivity, for example by contributing to days lost at work or doctors’ visits (SN: 11/28/18). Extreme heat also has clear impacts on human behavior (SN: 8/18/21). But what effect climate change–caused shifts in rainfall might have on the global economy hasn’t been so straightforward.

    That’s in part because previous studies looking at a possible connection between rainfall and productivity have focused on changes in yearly precipitation, a timeframe that “is just too coarse to really describe what’s actually happening [in] the economy,” Kotz says. Such studies showed that more rain in a given year was basically beneficial, which makes sense in that having more water available is good for agriculture and other human activities, he adds. “But these findings were mainly focused on agriculturally dependent economies and poorer economies.”

    In the new study, Kotz and his colleagues looked at three timescales — annual, monthly and daily rainfall — and examined what happened to economic output for time periods in which the rainfall deviated from average historical values. In particular, Kotz says, they introduced two new measures not considered in previous studies: the number of rainy days that a region gets in a year and extreme daily rainfall. The team then examined these factors across 1,554 regions around the world — which included many subregions within 77 countries — from 1979 to 2019.
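
    The paper’s exact definitions and thresholds are not given in this article; the following is a minimal sketch of how the two new measures mentioned above, the yearly count of rainy days and extreme daily rainfall, could be computed from a daily precipitation series, with an assumed 1 mm wet-day threshold and an assumed high percentile defining extreme days.
```python
# Minimal sketch with assumed thresholds (1 mm for a wet day, 99.9th percentile
# for an extreme day); these are illustrations, not the paper's definitions.
import numpy as np
import pandas as pd

days = pd.date_range("1979-01-01", "2019-12-31", freq="D")
precip = pd.Series(np.random.default_rng(1).gamma(0.4, 6.0, len(days)), index=days)

# Measure 1: number of rainy (wet) days per year.
wet_days_per_year = (precip > 1.0).groupby(precip.index.year).sum()

# Measure 2: extreme daily rainfall, here the yearly total falling on days
# above a high historical percentile.
threshold = precip.quantile(0.999)
extreme_rain_per_year = precip.where(precip > threshold, 0.0).groupby(precip.index.year).sum()

print(wet_days_per_year.head())
print(extreme_rain_per_year.head())
```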

    The disparity over which regions are hit hardest is “at odds with the conventional wisdom” — and with some previous studies — that agriculture is vulnerable to extreme rainfall, writes Xin-Zhong Liang, an atmospheric scientist at the University of Maryland in College Park, in a commentary in the same issue of Nature. Researchers may need to incorporate other factors in future assessments, such as growth stages of crops, land drainage or irrigation, in order to really understand how these extremes affect agriculture, Liang writes.

    “That was definitely surprising for us as well,” Kotz says. Although the study doesn’t specifically try to answer why manufacturing and services were so affected, it makes intuitive sense, he says. Flooding, for example, can damage infrastructure and disrupt transportation, effects that can then propagate along supply chains. “It’s feasible that these things might be most important in manufacturing, where infrastructure is very important, or in the services sectors, where the human experience is very much dictated by these daily aspects of weather and rainfall.”

    Including daily and monthly rainfall extremes in this type of analysis was “an important innovation” because it revealed new economic vulnerabilities, says Tamma Carleton, an environmental economist at the University of California, Santa Barbara, who was not involved in the new work. However, Carleton says, “the findings in the paper are not yet conclusive on who is most vulnerable and why, and instead raise many important questions for future research to unpack.”

    Extreme rainfall events, including both drought and deluge, will occur more frequently as global temperatures rise, the United Nations’ Intergovernmental Panel on Climate Change noted in August (SN: 8/9/21). The study’s findings, Kotz says, offer yet another stark warning to the industrialized, wealthy world: Human-caused climate change will have “large economic consequences.”

  • Quantum dots boost perovskite solar cell efficiency and scalability

    Perovskites are hybrid compounds made from metal halides and organic constituents. They show great potential in a range of applications, e.g. LED lights, lasers, and photodetectors, but their major contribution is in solar cells, where they are poised to take over the market from their silicon counterparts.
    One of the obstacles facing the commercialization of perovskite solar cells is that their power-conversion efficiency and operational stability drop as they scale up, making it a challenge to maintain high performance in a complete solar cell.
    The problem is partly with the cell’s electron-transport layer, which ensures that the electrons produced when the cell absorbs light will transfer efficiently to the device’s electrode. In perovskite solar cells, the electron-transport layer is made with mesoporous titanium dioxide, which shows low electron mobility, and is also susceptible to adverse, photocatalytic events under ultraviolet light.
    In a new publication in Science, scientists led by Professor Michael Grätzel at EPFL and Dr Dong Suk Kim at the Korea Institute of Energy Research have found an innovative way to increase the performance and maintain it at a high level in perovskite solar cells even at large scales. The innovative idea was to replace the electron-transport layer with a thin layer of quantum dots.
    Quantum dots are nanometer-sized particles that act as semiconductors and emit light of specific wavelengths (colors) when they are illuminated. Their unique optical properties make quantum dots ideal for use in a variety of optical applications, including photovoltaic devices.
    The scientists replaced the titanium dioxide electron-transport layer of their perovskite cells with a thin layer of polyacrylic acid-stabilized tin(IV) oxide quantum dots, and found that it enhanced the devices’ light-capturing capacity while also suppressing nonradiative recombination, an efficiency-sapping phenomenon that sometimes takes place at the interface between the electron-transport layer and the actual perovskite layer.
    By using the quantum dot layer, the researchers found that perovskite solar cells of 0.08 square centimeters attained a record power-conversion efficiency of 25.7% (certified 25.4%) and high operational stability, while facilitating the scale-up. When the surface area of the solar cells was increased to 1, 20 and 64 square centimeters, the power-conversion efficiency measured 23.3%, 21.7% and 20.6%, respectively.
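    To put those figures in perspective, a quick back-of-the-envelope calculation converts each efficiency into electrical output, assuming standard test illumination of roughly 100 milliwatts per square centimeter (an assumption; the article does not state the measurement conditions).
```python
# Back-of-the-envelope only: assumes standard test illumination of 100 mW/cm^2,
# which the article does not state explicitly.
areas_cm2 = [0.08, 1, 20, 64]
efficiencies = [0.257, 0.233, 0.217, 0.206]
for area, eff in zip(areas_cm2, efficiencies):
    print(f"{area:>6} cm^2 at {eff:.1%} efficiency -> ~{area * 100 * eff:.0f} mW of output")
```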
    Other contributors: Ulsan National Institute of Science and Technology, University of Ulsan, Zurich University of Applied Sciences, and Uppsala University.
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou.

  • Advancing materials science with the help of biology and a dash of dish soap

    Compounds that form tiny crystals hold secrets that could advance renewable energy generation and semiconductor development. Revealing the arrangement of their atoms has already allowed for breakthroughs in materials science and solar cells. However, existing techniques for determining these structures can damage sensitive microcrystals.
    Now scientists have a new tool in their tool belts: a system for investigating microcrystals by the thousands with ultrafast pulses from an X-ray free-electron laser (XFEL), which can collect structural information before damage sets in. This approach, developed over the past decade to study proteins and other large biological molecules at the Department of Energy’s SLAC National Accelerator Laboratory, has now been applied for the first time to small molecules that are of interest to chemistry and materials science.
    Researchers from the University of Connecticut, SLAC, DOE’s Lawrence Berkeley National Laboratory and other institutions developed the new process, called small molecule serial femtosecond X-ray crystallography or smSFX, to determine the structures of three compounds that form microcrystal powders, including two that were previously unknown. The experiments took place at SLAC’s Linac Coherent Light Source (LCLS) XFEL and the SACLA XFEL in Japan.
    The new approach is likely to have a big impact since it should be “broadly applicable across XFEL and synchrotron radiation facilities equipped for serial crystallography,” the research team wrote in a paper published today in Nature.
    Disentangling metal compounds
    Researchers used the method to determine the structures of two metal-organic materials, thiorene and tethrene, for the first time. Both are potential candidates for use in next-generation field effect transistors, energy storage devices, and solar cells and panels. Mapping thiorene and tethrene allowed researchers to better understand why some other metal-organic materials glow bright blue under ultraviolet light, which the scientists compared to Frodo’s magical sword, Sting, in The Lord of the Rings.