More stories

  • New software may help neurology patients capture clinical data with their own smartphones

    New pose estimation software has the potential to help neurologists and their patients capture important clinical data using simple tools such as smartphones and tablets, according to a study by Johns Hopkins Medicine, the Kennedy Krieger Institute and the University of Maryland. Human pose estimation is a form of artificial intelligence that automatically detects and labels specific landmarks on the human body, such as elbows and fingers, from simple images or videos.
    To measure the speed, rhythm and range of a patient’s motor function, neurologists often have the patient perform certain repetitive movements, such as tapping fingers or opening and closing hands. An objective assessment of these tests provides the most accurate insight into the severity of a patient’s condition, thus better informing treatment decisions. However, objective motion-capture devices are often expensive or can measure only one type of movement. As a result, most neurologists must assess their patients’ motor function subjectively, usually by simply watching patients as they carry out different tasks.
    The new Hopkins-led study sought to determine whether pose estimation software developed by the research team could track human motion as accurately as manual, frame-by-frame visual inspection of video recordings of patients performing movements.
    “Our goal was to develop a fast, inexpensive and easily accessible method to objectively measure a patient’s movements across multiple extremities,” says study lead author Ryan Roemmich, Ph.D., an assistant professor in the Department of Physical Medicine and Rehabilitation at the Johns Hopkins University School of Medicine and a human movement scientist at the Kennedy Krieger Institute.
    The research team had 10 healthy subjects between the ages of 24 and 33 record smartphone video of themselves performing five tasks often assigned to neurology patients during motor function assessments: finger taps, hand closures, toe taps, heel taps and hand rotations. The subjects performed each task at four different speeds. Their movements were tracked using a freely available human pose estimation algorithm, then fed into the team’s software for evaluation.
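    As an illustration of this kind of pipeline (not the team’s actual software), the sketch below uses the freely available MediaPipe Hands model to extract fingertip landmarks from a smartphone video and counts finger taps from the thumb-to-index-fingertip distance signal; the 30 fps default and the 150 ms minimum spacing between taps are hypothetical choices.

    ```python
    import cv2                      # pip install opencv-python
    import mediapipe as mp          # pip install mediapipe
    import numpy as np
    from scipy.signal import find_peaks

    def thumb_index_distance(video_path: str) -> np.ndarray:
        """Per-frame thumb-tip to index-tip distance from a video."""
        distances = []
        cap = cv2.VideoCapture(video_path)
        with mp.solutions.hands.Hands(max_num_hands=1) as hands:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if result.multi_hand_landmarks:
                    lm = result.multi_hand_landmarks[0].landmark
                    thumb, index = lm[4], lm[8]   # fingertip landmark indices
                    distances.append(np.hypot(thumb.x - index.x, thumb.y - index.y))
                else:
                    distances.append(np.nan)      # hand not detected this frame
        cap.release()
        return np.asarray(distances)

    def count_finger_taps(distances: np.ndarray, fps: float = 30.0) -> int:
        """Count taps as valleys in the distance signal (fingers coming together)."""
        d = np.nan_to_num(distances, nan=float(np.nanmax(distances)))
        valleys, _ = find_peaks(-d, distance=int(0.15 * fps))  # taps >= 150 ms apart
        return len(valleys)
    ```

    The timestamps of the detected valleys would also yield the speed and rhythm measures described above, for example taps per second and the variability of inter-tap intervals.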
    The results showed that across all five tasks, the software accurately identified more than 96% of the movements detected by the manual inspection method. These results held up across several variables, including location, type of smartphone used and method of recording: Some subjects placed their smartphone on a stable surface and hit “record,” while others had a family member or friend hold the device.
    With encouraging results from their sample of young, healthy people, the research team’s next step is to test the software on people who require neurological care. Currently, the team is collecting a large sample of videos of people with Parkinson’s disease doing the same five motor function tasks that the healthy subjects performed.
    “We want anyone with a smartphone or tablet to be able to record video that can be successfully analyzed by their physician,” says Roemmich. “With further development of this pose estimation software, motor assessments could eventually be performed and analyzed without the patient having to leave their home.”
    Story Source:
    Materials provided by Johns Hopkins Medicine. Note: Content may be edited for style and length.

  • Social media use tied to poor physical health

    Social media use has been linked to biological and psychological indicators associated with poor physical health among college students, according to the results of a new study by a University at Buffalo researcher.
    Research participants who used social media excessively were found to have higher levels of C-reactive protein (CRP), a biological marker of chronic inflammation that predicts serious illnesses, such as diabetes, certain cancers and cardiovascular disease. In addition to elevated CRP levels, results suggest higher social media use was also related to somatic symptoms, like headaches, chest and back pains, and more frequent visits to doctors and health centers for the treatment of illness.
    “Social media use has become an integral part of many young adults’ daily lives,” said David Lee, PhD, the paper’s first author and assistant professor of communication in the UB College of Arts and Sciences. “It’s critical that we understand how engagement across these platforms contributes to physical health.”
    The findings appear in the journal Cyberpsychology, Behavior, and Social Networking.
    For decades, researchers have devoted attention to how social media engagement relates to users’ mental health, but its effects on physical health have not been thoroughly investigated. Recent surveys indicate social media usage is particularly high among people in their late teens and early 20s, a population that spends about six hours a day texting, online or using social media. And though a few studies have found links between social media usage and physical health, that research relied largely on self-reported measures or examined usage of only a single platform.
    “Our goal was to extend prior work by examining how social media use across several platforms is associated with physical health outcomes measured with biological, behavioral and self-report measures,” said Lee, an expert on health outcomes related to social interactions.

  • Harnessing noise in optical computing for AI

    Artificial intelligence and machine learning are currently affecting our lives in many small but impactful ways. For example, AI and machine learning applications recommend entertainment we might enjoy through streaming services such as Netflix and Spotify.
    In the near future, it’s predicted that these technologies will have an even larger impact on society through activities such as driving fully autonomous vehicles, enabling complex scientific research and facilitating medical discoveries.
    But the computers used for AI and machine learning demand a lot of energy. Currently, the need for computing power related to these technologies is doubling roughly every three to four months. And cloud computing data centers used by AI and machine learning applications worldwide are already devouring more electrical power per year than some small countries. It’s easy to see that this level of energy consumption is unsustainable.
    A research team led by the University of Washington has developed new optical computing hardware for AI and machine learning that is faster and much more energy efficient than conventional electronics. The research also addresses another challenge — the ‘noise’ inherent to optical computing that can interfere with computing precision.
    In a new paper, published Jan. 21 in Science Advances, the team demonstrates an optical computing system for AI and machine learning that not only mitigates this noise but actually uses some of it as input to help enhance the creative output of the artificial neural network within the system.
    “We’ve built an optical computer that is faster than a conventional digital computer,” said lead author Changming Wu, a UW doctoral student in electrical and computer engineering. “And also, this optical computer can create new things based on random inputs generated from the optical noise that most researchers tried to evade.”
    Optical computing noise essentially comes from stray light particles, or photons, that originate from the operation of lasers within the device and from background thermal radiation. To address this noise, the researchers connected their optical computing core to a special type of machine learning network called a generative adversarial network (GAN).
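    The optical hardware itself can’t be reproduced in a few lines of code, but the core idea, seeding a generative network with physically measured noise rather than pseudo-random numbers, can be sketched. The toy PyTorch example below is a hypothetical illustration: the noise buffer is stubbed with Gaussian samples so that it runs, where a real system would fill it with photodetector readings from the optical core.

    ```python
    import numpy as np
    import torch
    import torch.nn as nn

    LATENT_DIM = 64

    # Stand-in for noise recorded from the optical core; a real system would
    # fill this buffer with measured photodetector readings instead.
    noise_buffer = np.random.randn(100_000).astype(np.float32)

    def sample_latent(batch_size: int) -> torch.Tensor:
        """Draw GAN latent vectors from the recorded-noise buffer, not a PRNG."""
        starts = np.random.randint(0, len(noise_buffer) - LATENT_DIM, size=batch_size)
        z = np.stack([noise_buffer[s:s + LATENT_DIM] for s in starts])
        return torch.from_numpy(z)

    # Toy generator mapping a latent vector to a 28x28 image.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 28 * 28), nn.Tanh(),
    )

    fake_images = generator(sample_latent(16)).view(16, 1, 28, 28)
    ```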

  • How robots learn to hike

    ETH Zurich researchers led by Marco Hutter have developed a new control approach that enables a legged robot, called ANYmal, to move quickly and robustly over difficult terrain. Thanks to machine learning, the robot can, for the first time, combine visual perception of its environment with its sense of touch.
    Steep sections on slippery ground, high steps, scree and forest trails full of roots: the path up the 1,098-metre-high Mount Etzel at the southern end of Lake Zurich is peppered with numerous obstacles. But ANYmal, the quadrupedal robot from the Robotic Systems Lab at ETH Zurich, overcomes the 120 vertical metres effortlessly in a 31-minute hike. That’s 4 minutes faster than the estimated duration for human hikers — and with no falls or missteps.
    This is made possible by a new control technology, which researchers at ETH Zurich led by robotics professor Marco Hutter recently presented in the journal Science Robotics. “The robot has learned to combine visual perception of its environment with proprioception — its sense of touch — based on direct leg contact. This allows it to tackle rough terrain faster, more efficiently and, above all, more robustly,” Hutter says. In the future, ANYmal could be used anywhere that is too dangerous for humans or too impassable for other robots.
    Perceiving the environment accurately
    To navigate difficult terrain, humans and animals quite automatically combine the visual perception of their environment with the proprioception of their legs and hands. This allows them to easily handle slippery or soft ground and move around with confidence, even when visibility is low. Until now, legged robots have been able to do this only to a limited extent.
    “The reason is that the information about the immediate environment recorded by laser sensors and cameras is often incomplete and ambiguous,” explains Takahiro Miki, a doctoral student in Hutter’s group and lead author of the study. For example, tall grass, shallow puddles or snow appear as insurmountable obstacles or are partially invisible, even though the robot could actually traverse them. In addition, the robot’s view can be obscured in the field by difficult lighting conditions, dust or fog.
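    One way to picture the vision-plus-touch combination is as a learned gate that decides, feature by feature, how much to trust each sense: when camera and laser data are unreliable, the gate leans on proprioception. The PyTorch sketch below illustrates only this general idea; the layer sizes and gating scheme are assumptions for the example, not the architecture from the Science Robotics paper.

    ```python
    import torch
    import torch.nn as nn

    class GatedSensorFusion(nn.Module):
        """Toy fusion module: a learned gate blends exteroceptive (vision) features
        with proprioceptive (touch) features, so unreliable vision can be ignored."""

        def __init__(self, proprio_dim: int = 48, extero_dim: int = 128, hidden: int = 64):
            super().__init__()
            self.proprio_enc = nn.Linear(proprio_dim, hidden)
            self.extero_enc = nn.Linear(extero_dim, hidden)
            self.gate = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())

        def forward(self, proprio: torch.Tensor, extero: torch.Tensor) -> torch.Tensor:
            p = torch.tanh(self.proprio_enc(proprio))
            e = torch.tanh(self.extero_enc(extero))
            alpha = self.gate(torch.cat([p, e], dim=-1))  # 0 = trust touch, 1 = trust vision
            return alpha * e + (1.0 - alpha) * p          # fused features for the policy

    fusion = GatedSensorFusion()
    fused = fusion(torch.randn(1, 48), torch.randn(1, 128))
    ```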

  • AI light-field camera reads 3D facial expressions

    A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology.
    Unlike a conventional camera, a light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone while allowing it to acquire the spatial and directional information of light in a single shot. The technique has received attention because it can reconstruct images in a variety of ways, including multi-views, refocusing and 3D image acquisition, giving rise to many potential applications.
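    The refocusing capability has a classic software counterpart that makes the idea concrete: treat the light field as a grid of sub-aperture views, shift each view in proportion to its offset from the center, and average. The NumPy sketch below shows this generic shift-and-sum algorithm; it is not KAIST’s reconstruction pipeline, and the array shapes are illustrative.

    ```python
    import numpy as np

    def refocus(light_field: np.ndarray, slope: float) -> np.ndarray:
        """Shift-and-sum refocusing.

        light_field: sub-aperture views with shape (U, V, H, W), where (u, v)
                     indexes the view and (H, W) the pixels of each view.
        slope:       per-view shift in pixels; choosing it selects the focal depth.
        """
        U, V, H, W = light_field.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round(slope * (u - uc)))
                dx = int(round(slope * (v - vc)))
                out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    # Toy example: a 5x5 grid of 64x64 views, refocused at two synthetic depths.
    lf = np.random.rand(5, 5, 64, 64)
    near, far = refocus(lf, slope=1.0), refocus(lf, slope=-1.0)
    ```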
    However, optical crosstalk between the micro-lenses and shadows cast by external light sources in the environment has prevented existing light-field cameras from providing accurate image contrast and 3D reconstruction.
    The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source was shone on a face at 0-, 30- and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team could minimize optical crosstalk while increasing image contrast by 2.1 times.
    Through this technique, the team could overcome the limitations of existing light-field cameras and was able to develop their NIR-based light-field camera (NIR-LFC), optimized for the 3D image reconstruction of facial expressions. Using the NIR-LFC, the team acquired high-quality 3D reconstruction images of facial expressions expressing various emotions regardless of the lighting conditions of the surrounding environment.
    The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85%, a statistically significant improvement over classification based on 2D images. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in the 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.
    Professor Ki-Hun Jeong said, “The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans.” To highlight the significance of this research, he added, “It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions.”
    This research was published in Advanced Intelligent Systems online on December 16, under the title, “Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images.” This research was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.

  • Quantum dots boost perovskite solar cell efficiency and scalability

    Perovskites are hybrid compounds made from metal halides and organic constituents. They show great potential in a range of applications, such as LED lights, lasers and photodetectors, but their major contribution is in solar cells, where they are poised to take over the market from their silicon counterparts.
    One of the obstacles facing the commercialization of perovskite solar cells is that their power-conversion efficiency and operational stability drop as they scale up, making it a challenge to maintain high performance in a complete solar cell.
    The problem is partly with the cell’s electron-transport layer, which ensures that the electrons produced when the cell absorbs light will transfer efficiently to the device’s electrode. In perovskite solar cells, the electron-transport layer is made with mesoporous titanium dioxide, which shows low electron mobility and is also susceptible to adverse photocatalytic events under ultraviolet light.
    In a new publication in Science, scientists led by Professor Michael Grätzel at EPFL and Dr Dong Suk Kim at the Korea Institute of Energy Research have found an innovative way to increase the performance and maintain it at a high level in perovskite solar cells even at large scales. The innovative idea was to replace the electron-transport layer with a thin layer of quantum dots.
    Quantum dots are nanometer-sized particles that act as semiconductors and emit light of specific wavelengths (colors) when they are illuminated. Their unique optical properties make quantum dots ideal for use in a variety of optical applications, including photovoltaic devices.
    The scientists replaced the titanium dioxide electron-transport layer of their perovskite cells with a thin layer of polyacrylic acid-stabilized tin(IV) oxide quantum dots, and found that it enhanced the devices’ light-capturing capacity while also suppressing nonradiative recombination, an efficiency-sapping phenomenon that sometimes takes place at the interface between the electron-transport layer and the perovskite layer itself.
    By using the quantum dot layer, the researchers found that perovskite solar cells of 0.08 square centimeters attained a record power-conversion efficiency of 25.7% (certified 25.4%) and high operational stability, while facilitating scale-up. When the surface area of the solar cells was increased to 1, 20 and 64 square centimeters, the power-conversion efficiency measured 23.3%, 21.7% and 20.6%, respectively.
    Other contributors: Ulsan National Institute of Science and Technology, University of Ulsan, Zurich University of Applied Sciences and Uppsala University.
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou. Note: Content may be edited for style and length.

  • Advancing materials science with the help of biology and a dash of dish soap

    Compounds that form tiny crystals hold secrets that could advance renewable energy generation and semiconductor development. Revealing the arrangement of their atoms has already allowed for breakthroughs in materials science and solar cells. However, existing techniques for determining these structures can damage sensitive microcrystals.
    Now scientists have a new tool in their tool belts: a system for investigating microcrystals by the thousands with ultrafast pulses from an X-ray free-electron laser (XFEL), which can collect structural information before damage sets in. This approach, developed over the past decade to study proteins and other large biological molecules at the Department of Energy’s SLAC National Accelerator Laboratory, has now been applied for the first time to small molecules that are of interest to chemistry and materials science.
    Researchers from the University of Connecticut, SLAC, DOE’s Lawrence Berkeley National Laboratory and other institutions developed the new process, called small molecule serial femtosecond X-ray crystallography or smSFX, to determine the structures of three compounds that form microcrystal powders, including two that were previously unknown. The experiments took place at SLAC’s Linac Coherent Light Source (LCLS) XFEL and the SACLA XFEL in Japan.
    The new approach is likely to have a big impact since it should be “broadly applicable across XFEL and synchrotron radiation facilities equipped for serial crystallography,” the research team wrote in a paper published today in Nature.
    Disentangling metal compounds
    Researchers used the method to determine the structures of two metal-organic materials, thiorene and tethrene, for the first time. Both are potential candidates for use in next-generation field effect transistors, energy storage devices, and solar cells and panels. Mapping thiorene and tethrene allowed researchers to better understand why some other metal-organic materials glow bright blue under ultraviolet light, which the scientists compared to Frodo’s magical sword, Sting, in The Lord of the Rings.

  • Researchers simulate behavior of living 'minimal cell' in three dimensions

    Scientists report that they have built a living “minimal cell” with a genome stripped down to its barest essentials — and a computer model of the cell that mirrors its behavior. By refining and testing their model, the scientists say they are developing a system for predicting how changes to the genomes, living conditions or physical characteristics of live cells will alter how they function.
    They report their findings in the journal Cell.
    Minimal cells have pared-down genomes that carry the genes necessary to replicate their DNA, grow, divide and perform most of the other functions that define life, said Zaida (Zan) Luthey-Schulten, a chemistry professor at the University of Illinois Urbana-Champaign who led the work with graduate student Zane Thornburg. “What’s new here is that we developed a three-dimensional, fully dynamic kinetic model of a living minimal cell that mimics what goes on in the actual cell,” Luthey-Schulten said.
    The simulation maps out the precise location and chemical characteristics of thousands of cellular components in 3D space at an atomic scale. It tracks how long it takes for these molecules to diffuse through the cell and encounter one another, what kinds of chemical reactions occur when they do, and how much energy is required for each step.
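    A whole-cell model is far beyond a short snippet, but the kind of stochastic bookkeeping it performs can be illustrated with the classic Gillespie algorithm, which randomly chooses the time and identity of each reaction in proportion to its rate. The toy system below, mRNA production and decay with made-up rate constants, is purely illustrative and is not drawn from the actual model.

    ```python
    import random

    # Toy system: gene -> mRNA at rate k_tx; mRNA -> degraded at rate k_deg * n.
    k_tx, k_deg = 0.5, 0.01   # per-second rate constants (illustrative values)

    def gillespie(t_end: float = 7200.0):
        """Stochastically simulate mRNA copy number over a ~2-hour cell cycle."""
        t, n, trajectory = 0.0, 0, [(0.0, 0)]
        while t < t_end:
            a_tx, a_deg = k_tx, k_deg * n         # reaction propensities
            a_total = a_tx + a_deg
            t += random.expovariate(a_total)      # exponential waiting time
            if random.random() < a_tx / a_total:  # choose which reaction fires
                n += 1                            # transcription event
            else:
                n -= 1                            # degradation event
            trajectory.append((t, n))
        return trajectory

    print("final mRNA count:", gillespie()[-1][1])
    ```

    In this toy, the copy number settles near k_tx / k_deg = 50 over the simulated two-hour cycle; quantities such as the mRNA lifespans mentioned below fall out of exactly this sort of event-by-event accounting.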
    To build the minimal cell, scientists at the J. Craig Venter Institute in La Jolla, California, turned to the simplest living cells — the mycoplasmas, a genus of bacteria that parasitize other organisms. In previous studies, the JCVI team built a synthetic genome stripped of as many nonessential genes as possible and grew the cell in an environment enriched with all the nutrients and factors needed to sustain it. For the new study, the team added back a few genes to improve the cell’s viability. This cell is simpler than any naturally occurring cell, making it easier to model on a computer.
    Simulating something as enormous and complex as a living cell relies on data from decades of research, Luthey-Schulten said. To build the computer model, she and her colleagues at Illinois had to account for the physical and chemical characteristics of the cell’s DNA; lipids; amino acids; and gene-transcription, translation and protein-building machinery. They also had to model how each component diffused through the cell, keeping track of the energy required for each step in the cell’s life cycle. NVIDIA graphics processing units were used to perform the simulations.
    “We built a computer model based on what we knew about the minimal cell, and then we ran simulations,” Thornburg said. “And we checked to see if our simulated cell was behaving like the real thing.”
    The simulations gave the researchers insight into how the actual cell “balances the demands of its metabolism, genetic processes and growth,” Luthey-Schulten said. For example, the model revealed that the cell used the bulk of its energy to import essential ions and molecules across its cell membrane. This makes sense, Luthey-Schulten said, because mycoplasmas get most of what they need to survive from other organisms.
    The simulations also allowed Thornburg to calculate the natural lifespan of messenger RNAs, the genetic blueprints for building proteins. They also revealed a relationship between the rate at which lipids and membrane proteins were synthesized and changes in membrane surface area and cell volume.
    “We simulated all of the chemical reactions inside a minimal cell — from its birth until the time it divides two hours later,” Thornburg said. “From this, we get a model that tells us about how the cell behaves and how we can complexify it to change its behavior.”
    “We developed a three-dimensional, fully dynamic kinetic model of a living minimal cell,” Luthey-Schulten said. “Our model opens a window on the inner workings of the cell, showing us how all of the components interact and change in response to internal and external cues. This model — and other, more sophisticated models to come — will help us better understand the fundamental principles of life.”