More stories

  • AI-enabled EKGs find difference between numerical age and biological age significantly affects health

    You might be older — or younger — than you think. A new study found that differences between a person’s age in years and his or her biological age, as predicted by an artificial intelligence (AI)-enabled EKG, can provide measurable insights into health and longevity.
    The AI model accurately predicted the age of most subjects, with a mean age gap of 0.88 years between EKG age and actual age. However, a number of subjects had a gap that was much larger, either seemingly much older or much younger by EKG age.
    The likelihood of dying during follow-up was much higher among those seemingly older by EKG age than among those whose EKG age matched their chronological, or actual, age. The association was even stronger for death caused by heart disease. Conversely, those with a smaller age gap (considered younger by EKG) had decreased risk.
    “Our results validate and expand on our prior observations that EKG age using AI may detect accelerated aging by proving that those with older-than-expected age by EKG die sooner, particularly from heart disease. We know that mortality rate is one of the best ways to measure biological age, and our model proved that,” says Francisco Lopez-Jimenez, M.D., chair of the Division of Preventive Cardiology at Mayo Clinic. Dr. Lopez-Jimenez is senior author of the study.
    When researchers adjusted these data to consider multiple standard risk factors, the association between the age gap and cardiovascular mortality was even more pronounced. Subjects who were found to be oldest by EKG compared to their actual age had the greatest risk, even after accounting for medical conditions that would predict their survival, while those found the youngest compared to their actual age had lower cardiovascular risks.
    Mayo Clinic researchers evaluated the 12-lead EKG data of more than 25,000 subjects with an AI algorithm previously trained and validated to provide a biologic age prediction. Subjects with a positive age gap — an EKG age higher than their chronological or actual age — showed a clear connection to all-cause and cardiovascular mortality over time. The findings are published in European Heart Journal — Digital Health.
    Study subjects were selected through the Rochester Epidemiology Project, an index of health-related information from medical providers in Olmsted County, Minnesota. The subjects had a mean age of about 54 and were followed for approximately 12.5 years. The study excluded those with a baseline history of heart attacks, bypass surgery or stents, stroke or atrial fibrillation.
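    The core quantity in the study is simple arithmetic: the gap between EKG-predicted age and chronological age. A minimal sketch of that calculation, with hypothetical function names and a purely illustrative grouping threshold (the paper reports only a mean gap of 0.88 years, not a cutoff):

```python
def ekg_age_gap(ekg_age: float, chronological_age: float) -> float:
    """Positive gap means 'older' by EKG; negative means 'younger'."""
    return ekg_age - chronological_age

def risk_group(gap: float, threshold: float = 7.0) -> str:
    """Illustrative grouping only; the threshold is not from the study."""
    if gap > threshold:
        return "older-by-EKG (elevated mortality risk)"
    if gap < -threshold:
        return "younger-by-EKG (reduced risk)"
    return "concordant"

# Example: a 60-year-old whose EKG reads as 70 has a +10-year gap.
gap = ekg_age_gap(70.0, 60.0)
group = risk_group(gap)
```

In the study's terms, subjects in the first group showed higher all-cause and cardiovascular mortality, those in the second lower risk.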
    “Our findings open up a number of opportunities to help identify those who may benefit from preventive strategies the most. Now that the concept has been proven that EKG age relates to survival, it is time to think how we can incorporate this in clinical practice. More research will be needed to find the best ways to do it,” says Dr. Lopez-Jimenez.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Terri Malloy. Note: Content may be edited for style and length.

  • Brain stimulation evoking sense of touch improves control of robotic arm

    Most able-bodied people take their ability to perform simple daily tasks for granted — when they reach for a warm mug of coffee, they can feel its weight and temperature and adjust their grip accordingly so that no liquid is spilled. People with full sensory and motor control of their arms and hands can feel that they’ve made contact with an object the instant they touch or grasp it, allowing them to start moving or lifting it with confidence.
    But those tasks become much more difficult when a person operates a prosthetic arm, let alone a mind-controlled one.
    In a paper published today in Science, a team of bioengineers from the University of Pittsburgh Rehab Neural Engineering Labs describe how adding brain stimulation that evokes tactile sensations makes it easier for the operator to manipulate a brain-controlled robotic arm. In the experiment, supplementing vision with artificial tactile perception cut the time spent grasping and transferring objects in half, from a median time of 20.9 to 10.2 seconds.
    “In a sense, this is what we hoped would happen — but perhaps not to the degree that we observed,” said co-senior author Jennifer Collinger, Ph.D., associate professor in the Pitt Department of Physical Medicine and Rehabilitation. “Sensory feedback from limbs and hands is hugely important for doing normal things in our daily lives, and when that feedback is lacking, people’s performance is impaired.”
    Study participant Nathan Copeland, whose progress was described in the paper, is the first person in the world to be implanted with tiny electrode arrays not just in his brain’s motor cortex but in his somatosensory cortex as well — a region of the brain that processes sensory information from the body. The arrays allow him not only to control the robotic arm with his mind, but also to receive tactile sensory feedback, similar to how neural circuits operate when a person’s spinal cord is intact.
    “I was already extremely familiar with both the sensations generated by stimulation and performing the task without stimulation. Even though the sensation isn’t ‘natural’ — it feels like pressure and gentle tingle — that never bothered me,” said Copeland. “There wasn’t really any point where I felt like stimulation was something I had to get used to. Doing the task while receiving the stimulation just went together like PB&J.”
    After a car crash that left him with limited use of his arms, Copeland enrolled in a clinical trial testing the sensorimotor microelectrode brain-computer interface (BCI) and was implanted with four microelectrode arrays developed by Blackrock Microsystems (also commonly referred to as Utah arrays).
    This paper builds on an earlier study that described for the first time how stimulating sensory regions of the brain with tiny electrical pulses can evoke sensation in distinct regions of a person’s hand, even after feeling in the limbs has been lost to spinal cord injury. In this new study, the researchers combined reading information from the brain to control the movement of the robotic arm with writing information back in to provide sensory feedback.
    In a series of tests, where the BCI operator was asked to pick up and transfer various objects from a table to a raised platform, providing tactile feedback through electrical stimulation allowed the participant to complete tasks twice as fast compared to tests without stimulation.
    In the new paper, the researchers wanted to test the effect of sensory feedback in conditions that would resemble the real world as closely as possible.
    “We didn’t want to constrain the task by removing the visual component of perception,” said co-senior author Robert Gaunt, Ph.D., associate professor in the Pitt Department of Physical Medicine and Rehabilitation. “When even limited and imperfect sensation is restored, the person’s performance improved in a pretty significant way. We still have a long way to go in terms of making the sensations more realistic and bringing this technology to people’s homes, but the closer we can get to recreating the normal inputs to the brain, the better off we will be.”
    This work was supported by the Defense Advanced Research Projects Agency (DARPA) and Space and Naval Warfare Systems Center Pacific (SSC Pacific) under Contract No. N66001-16-C-4051 and the Revolutionizing Prosthetics program (Contract No. N66001-10-C-4056).
    Story Source:
    Materials provided by University of Pittsburgh. Note: Content may be edited for style and length.

  • A new form of carbon opens door to nanosized wires

    Carbon exists in various forms. In addition to diamond and graphite, there are recently discovered forms with astonishing properties. For example, graphene, with a thickness of just one atomic layer, is the thinnest known material, and its unusual properties make it an extremely exciting candidate for applications like future electronics and high-tech engineering. In graphene, each carbon atom is linked to three neighbours, forming hexagons arranged in a honeycomb network. Theoretical studies have shown that carbon atoms can also arrange in other flat network patterns, while still binding to three neighbours, but none of these predicted networks had been realized until now.
    Researchers at the University of Marburg in Germany and Aalto University in Finland have now discovered a new carbon network, which is atomically thin like graphene, but is made up of squares, hexagons, and octagons forming an ordered lattice. They confirmed the unique structure of the network using high-resolution scanning probe microscopy and interestingly found that its electronic properties are very different from those of graphene.
    In contrast to graphene and other known forms of carbon, the new material, named the biphenylene network, has metallic properties. Narrow stripes of the network, only 21 atoms wide, already behave like a metal, while graphene is a semiconductor at this size. “These stripes could be used as conducting wires in future carbon-based electronic devices,” says Professor Michael Gottfried of the University of Marburg, who leads the team that developed the idea. The lead author of the study, Qitang Fan from Marburg, continues, “This novel carbon network may also serve as a superior anode material in lithium-ion batteries, with a larger lithium storage capacity compared to that of the current graphene-based materials.”
    The team at Aalto University helped image the material and decipher its properties. The group of Professor Peter Liljeroth carried out the high-resolution microscopy that showed the structure of the material, while researchers led by Professor Adam Foster used computer simulations and analysis to understand the exciting electrical properties of the material.
    The new material is made by assembling carbon-containing molecules on an extremely smooth gold surface. These molecules first form chains, which consist of linked hexagons, and a subsequent reaction connects these chains together to form the squares and octagons. An important feature of the chains is that they are chiral, which means that they exist in two mirror-image forms, like left and right hands. Only chains of the same type aggregate on the gold surface, forming well-ordered assemblies, before they connect. This is critical for the formation of the new carbon material, because the reaction between two different types of chains leads only to graphene. “The new idea is to use molecular precursors that are tweaked to yield biphenylene instead of graphene,” explains Linghao Yan, who carried out the high-resolution microscopy experiments at Aalto University.
    For now, the teams are working to produce larger sheets of the material, so that its application potential can be further explored. However, “We are confident that this new synthesis method will lead to the discovery of other novel carbon networks,” said Professor Liljeroth.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Silicon chips combine light and ultrasound for better signal processing

    The continued growth of wireless and cellular data traffic relies heavily on light waves. Microwave photonics is the field of technology that is dedicated to the distribution and processing of electrical information signals using optical means. Compared with traditional solutions based on electronics alone, microwave photonic systems can handle massive amounts of data. Therefore, microwave photonics has become increasingly important as part of 5G cellular networks and beyond. A primary task of microwave photonics is the realization of narrowband filters: the selection of specific data, at specific frequencies, out of immense volumes that are carried over light.
    Many microwave photonic systems are built of discrete, separate components and long optical fiber paths. However, the cost, size, power consumption and production volume requirements of advanced networks call for a new generation of microwave photonic systems that are realized on a chip. Integrated microwave photonic filters, particularly in silicon, are highly sought after. There is, however, a fundamental challenge: Narrowband filters require that signals are delayed for comparatively long durations as part of their processing.
    “Since the speed of light is so fast,” says Prof. Avi Zadok from Bar-Ilan University, Israel, “we run out of chip space before the necessary delays are accommodated. The required delays may reach over 100 nanoseconds. Such delays may appear short by everyday standards; however, the optical paths that support them are over ten meters long! We cannot possibly fit such long paths as part of a silicon chip. Even if we could somehow fold over that many meters in a certain layout, the extent of optical power losses to go along with it would be prohibitive.”
    These long delays require a different type of wave, one that travels much more slowly. In a study recently published in the journal Optica, Zadok and his team from the Faculty of Engineering and Institute of Nanotechnology and Advanced Materials at Bar-Ilan University, and collaborators from the Hebrew University of Jerusalem and Tower Semiconductors, suggest a solution. They brought together light and ultrasonic waves to realize ultra-narrow filters of microwave signals, in silicon integrated circuits. The concept allows large freedom for filters design.
    Bar-Ilan University doctoral student Moshe Katzman explains: “We’ve learned how to convert the information of interest from the form of light waves to ultrasonic, surface acoustic waves, and then back to optics. The surface acoustic waves travel at a speed that is 100,000 times slower. We can accommodate the delays that we need as part of our silicon chip, within less than a millimeter, and with losses that are very reasonable.”
    Acoustic waves have been used for processing information for sixty years; however, their chip-level integration alongside light waves has proven tricky. Moshe Katzman continues: “Over the last decade we have seen landmark demonstrations of how light and ultrasound waves can be brought together on a chip device, to make up excellent microwave photonic filters. However, the platforms used were more specialized. Part of the appeal of the solution is in its simplicity. The fabrication of devices is based on routine protocols of silicon waveguides. We are not doing anything fancy here.” The realized filters are very narrowband: the spectral width of the filters’ passbands is only 5 MHz.
    In order to realize narrowband filters, the information-carrying surface acoustic wave is imprinted upon the output light wave multiple times. Doctoral student Maayan Priel elaborates: “The acoustic signal crosses the light path up to 12 times, depending on choice of layout. Each such event imprints a replica of our signal of interest on the optical wave. Due to the slow acoustic speed, these events are separated by long delays. Their overall summation is what makes the filters work.” As part of their research, the team reports complete control over each replica, towards the realization of arbitrary filter responses. Maayan Priel concludes: “The freedom to design the response of the filters is making the most out of the integrated, microwave-photonic platform.”
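    The summation Priel describes, of up to 12 delayed replicas of the signal, is in signal-processing terms a tapped delay line, and the narrowing of the passband with more taps can be checked numerically. A hedged sketch (the 100 ns tap spacing and other numbers here are illustrative, not the paper’s actual device parameters):

```python
import numpy as np

def tap_response(freqs_hz, n_taps, tap_delay_s):
    """Magnitude response of a uniform N-tap delay line,
    |sum_k exp(-j*2*pi*f*k*T)| / N: comb-like, with passbands
    that narrow as the number of taps N grows."""
    k = np.arange(n_taps)[:, None]
    phases = np.exp(-2j * np.pi *
                    np.asarray(freqs_hz, dtype=float)[None, :] * k * tap_delay_s)
    return np.abs(phases.sum(axis=0)) / n_taps

# Illustrative numbers only:
T = 100e-9                      # 100 ns delay between acoustic replicas
f0 = 1 / T                      # centre of the first passband
offsets = np.linspace(-5e6, 5e6, 1001)
resp_12 = tap_response(f0 + offsets, 12, T)  # 12 replicas, as in the study
resp_2 = tap_response(f0 + offsets, 2, T)    # 2 replicas, for comparison
# resp_12 falls off far faster away from the passband centre than resp_2.

# Delay-versus-length arithmetic from the text: a 100 ns delay needs
# roughly 8.6 m of silicon waveguide (v ~ c/3.5) but only ~0.35 mm of
# surface-acoustic-wave travel (v ~ 3,500 m/s, about 100,000x slower).
```

The same structure explains the team’s “complete control over each replica”: weighting the taps individually shapes the passband into an arbitrary filter response.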
    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

  • Ultra-sensitive light detector gives self-driving tech a jolt

    Realizing the potential of self-driving cars hinges on technology that can quickly sense and react to obstacles and other vehicles in real time. Engineers from The University of Texas at Austin and the University of Virginia created a first-of-its-kind light-detecting device that can amplify weak signals bouncing off faraway objects more accurately than current technology allows, giving autonomous vehicles a fuller picture of what’s happening on the road.
    The new device is more sensitive than other light detectors in that it also eliminates inconsistency, or noise, associated with the detection process. Such noise can cause systems to miss signals and put autonomous vehicle passengers at risk.
    “Autonomous vehicles send out laser signals that bounce off objects to tell you how far away you are. Not much light comes back, so if your detector is putting out more noise than the signal coming in you get nothing,” said Joe Campbell, professor of electrical and computer engineering at the University of Virginia School of Engineering.
    Researchers around the globe are working on devices, known as avalanche photodiodes, to meet these needs. But what makes this new device stand out is its staircase-like alignment. It includes physical steps in energy that electrons roll down, multiplying along the way and creating a stronger electrical current for light detection as they go.
    In 2015, the researchers created a single-step staircase device. In this new discovery, detailed in Nature Photonics, they’ve shown, for the first time, a staircase avalanche photodiode with multiple steps.
    “The electron is like a marble rolling down a flight of stairs,” said Seth Bank, professor in the Cockrell School’s Department of Electrical and Computer Engineering who led the research with Campbell, a former professor in the Cockrell School from 1989 to 2006 and UT Austin alumnus (B.S., Physics, 1969). “Each time the marble rolls off a step, it drops and crashes into the next one. In our case, the electron does the same thing, but each collision releases enough energy to actually free another electron. We may start with one electron, but falling off each step doubles the number of electrons: 1, 2, 4, 8, and so on.”
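    Bank’s marble analogy corresponds to an idealized gain of 2 per step. A toy calculation of that doubling (real devices have less-than-perfect step efficiency, so this is an upper bound, not a device model):

```python
def staircase_gain(n_steps: int) -> int:
    """Idealized multiplication gain of a staircase avalanche photodiode:
    each step doubles the electron count, so one injected electron
    becomes 2**n_steps after n_steps (1, 2, 4, 8, ...)."""
    electrons = 1
    for _ in range(n_steps):
        electrons *= 2  # each 'collision' frees one extra electron per carrier
    return electrons

# One electron entering a 3-step staircase ideally exits as 8.
gain = staircase_gain(3)
```

The appeal for lidar is that this deterministic, step-by-step doubling amplifies the signal without the randomness (excess noise) of conventional avalanche multiplication.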
    The new pixel-sized device is ideal for Light Detection and Ranging (lidar) receivers, which require high-resolution sensors that detect optical signals reflected from distant objects. Lidar is an important part of self-driving car technology, and it also has applications in robotics, surveillance and terrain mapping.

  • These cognitive exercises help young children boost their math skills, study shows

    Young children who practice visual working memory and reasoning tasks improve their math skills more than children who focus on spatial rotation exercises, according to a large study by researchers at Karolinska Institutet in Sweden. The findings support the notion that training spatial cognition can enhance academic performance and that when it comes to math, the type of training matters. The study is published in the journal Nature Human Behaviour.
    “In this large, randomized study we found that when it comes to enhancing mathematical learning in young children, the type of cognitive training performed plays a significant role,” says corresponding author Torkel Klingberg, professor in the Department of Neuroscience, Karolinska Institutet. “It is an important finding because it provides strong evidence that cognitive training transfers to an ability that is different from the one you practiced.”
    Numerous studies have linked spatial ability — that is the capacity to understand and remember dimensional relations among objects — to performance in science, technology, engineering and mathematics. As a result, some employers in these fields use spatial ability tests to vet candidates during the hiring process. This has also fueled an interest in spatial cognition training, which focuses on improving one’s ability to memorize and manipulate various shapes and objects and spot patterns in recurring sequences. Some schools today include spatial exercises as part of their tutoring.
    However, previous studies assessing the effect of spatial training on academic performance have had mixed results, with some showing significant improvement and others no effect at all. Thus, there is a need for large, randomized studies to determine if and to what extent spatial cognition training actually improves performance.
    In this study, more than 17,000 Swedish schoolchildren between the ages of six and eight completed cognitive training via an app for either 20 or 33 minutes per day over the course of seven weeks. In the first week, the children were given identical exercises, after which they were randomly split into one of five training plans. In all groups, children spent about half of their time on mathematical number line tasks. The remaining time was randomly allotted to different proportions of cognitive training in the form of rotation tasks (2D mental rotation and tangram puzzle), visual working memory tasks or non-verbal reasoning tasks (see examples below for details). The children’s math performance was tested in the first, fifth and seventh week.
    The researchers found that all groups improved in mathematical performance, but that reasoning training had the largest positive impact, followed by working memory tasks. Both reasoning and memory training significantly outperformed rotation training in terms of mathematical improvement. The researchers also observed that the benefits of cognitive training could differ threefold between individuals, which could explain the divergent results of previous studies, since the individual characteristics of participants tend to affect outcomes.
    The researchers note there were some limitations to the study, including the lack of a passive control group that would allow for an estimation of the absolute effect size. Also, this study did not include a group of students who received math training only.
    “While it is likely that for any given test, training on that particular skill is the most time-effective way to improve test results, our study offers a proof of principle that spatial cognitive training transfers to academic abilities,” Torkel Klingberg says. “Given the wide range of areas associated with spatial cognition, it is possible that training transfers to multiple areas and we believe this should be included in any calculation by teachers and policymakers of how time-efficient spatial training is relative to training for a particular test.”
    The researchers have received funding by the Swedish Research Council. Torkel Klingberg holds an unpaid position as chief scientific officer for Cognition Matters, the non-profit foundation that owns the cognition training app Vektor that was used in this study.
    Examples of training tasks in the study:
    • Number line task: identify the correct position of a number on a line bounded by a start and an end point. Difficulty is typically moderated by removing spatial cues, such as tick marks on the line, and progresses to include mathematical problems such as addition, subtraction and division.
    • Visual working memory task: recollect visual objects. In this study, the children reproduced a sequence of dots on a grid by touching the screen. Difficulty was increased by adding more items.
    • Non-verbal reasoning task: complete sequences of spatial patterns. In this study, the children chose the correct image to fill a blank space based on previous sequences. Difficulty was increased by adding new dimensions such as colors, shapes and dots.
    • Rotation task: figure out what an object would look like if rotated. In this study, the children rotated a 2D object to fit various angles. Difficulty was moderated by increasing the angle of rotation or the complexity of the object.
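    For concreteness, a standard score in the number line literature is percent absolute error: the distance between the child’s response and the target, as a percentage of the line’s length. A sketch of that metric (a common convention in the field, not necessarily the exact scoring used in the Vektor app):

```python
def number_line_error(target: float, response: float,
                      lo: float = 0.0, hi: float = 100.0) -> float:
    """Percent absolute error on a number line from lo to hi:
    |response - target| as a percentage of the line's length.
    Lower is better; 0 means a perfect placement."""
    return abs(response - target) / (hi - lo) * 100.0

# Placing a mark at 45 when asked for 50 on a 0-100 line gives 5% error.
err = number_line_error(target=50.0, response=45.0)
```

Removing tick marks, as described above, forces the child to estimate proportionally rather than count, which is what makes the task harder.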
    Story Source:
    Materials provided by Karolinska Institutet. Note: Content may be edited for style and length.

  • Walking in their shoes: Using virtual reality to elicit empathy in healthcare providers

    Research has shown empathy gives healthcare workers the ability to provide appropriate supports and make fewer mistakes. This helps increase patient satisfaction and enhance patient outcomes, resulting in better overall care. In an upcoming issue of the Journal of Medical Imaging and Radiation Sciences, published by Elsevier, multidisciplinary clinicians and researchers from Dalhousie University performed an integrative review to synthesize the findings regarding virtual reality (VR) as a pedagogical tool for eliciting empathetic behavior in medical radiation technologists (MRTs).
    Informally, empathy is often described as the capacity to put oneself in the shoes of another. Empathy is essential to patient-centered care and crucial to the development of therapeutic relationships between carers (healthcare providers, healthcare students, and informal caregivers such as parents, spouses, friends, family, clergy, social workers, and fellow patients) and care recipients. Currently, there is a need for tools and approaches that are standardizable, low-risk, safe-to-fail, easily repeatable, and able to assist in eliciting empathetic behavior.
    This research synthesis examined studies of VR experiences ranging from a single eight-minute session to sessions of 20-25 minutes delivered on two separate days. These included immersive VR environments, in which participants assumed the role of a care recipient, and non-immersive VR environments, in which participants assumed the role of a care provider in a simulated care setting. Together, the two types of studies helped researchers understand what it is like to have a specific disease or need, and what it is like to practice interacting with virtual care recipients.
    “Although the studies we looked at don’t definitively show VR can help sustain empathy behaviors over time, there is a lot of promise for research and future applications in this area,” explained lead author Megan Brydon, MSc, BHSc, RTNM, IWK Health Centre, Halifax, Nova Scotia, Canada.
    The authors conclude that VR may provide an effective and wide-ranging tool for learning care recipients’ perspectives, and that future studies should seek to determine which VR experiences are most effective in evoking empathetic behaviors. They recommend that these studies employ more rigorous designs that better control for bias.
    Story Source:
    Materials provided by Elsevier. Note: Content may be edited for style and length.

  • Envisioning safer cities with AI

    Artificial intelligence is providing new opportunities in a range of fields, from business to industrial design to entertainment. But how about civil engineering and city planning? How might machine- and deep-learning help us create safer, more sustainable, and resilient built environments?
    A team of researchers from the NSF NHERI SimCenter, a computational modeling and simulation center for the natural hazards engineering community based at the University of California, Berkeley, has developed a suite of tools called BRAILS — Building Recognition using AI at Large-Scale — that can automatically identify characteristics of buildings in a city and even detect the risks that a city’s structures would face in an earthquake, hurricane, or tsunami.
    Charles (Chaofeng) Wang, a postdoctoral researcher at the University of California, Berkeley, and the lead developer of BRAILS, says the project grew out of a need to quickly and reliably characterize the structures in a city.
    “We want to simulate the impact of hazards on all of the buildings in a region, but we don’t have a description of the building attributes,” Wang said. “For example, in the San Francisco Bay area, there are millions of buildings. Using AI, we are able to get the needed information. We can train neural network models to infer building information from images and other sources of data.”
    BRAILS uses machine learning, deep learning, and computer vision to extract information about the built environment. It is envisioned as a tool for architects, engineers and planning professionals to more efficiently plan, design, and manage buildings and infrastructure systems.
    The SimCenter recently released BRAILS version 2.0, which includes modules to predict a larger spectrum of building characteristics. These include occupancy class (commercial, single-family, or multi-family), roof type (flat, gabled, or hipped), foundation elevation, year built, number of floors, and whether a building has a “soft-story” — a civil engineering term for structures whose ground floors have large openings (like storefronts) that may be more prone to collapse during an earthquake.
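    A per-building record of the attributes listed above might look like the following sketch; the field names and the downstream triage rule are purely illustrative and are not the BRAILS API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BuildingAttributes:
    """Hypothetical record mirroring the attribute types BRAILS v2.0
    predicts from imagery; names are illustrative, not the BRAILS schema."""
    occupancy: str                            # "commercial", "single-family", "multi-family"
    roof_type: str                            # "flat", "gabled", "hipped"
    foundation_elevation_m: Optional[float] = None
    year_built: Optional[int] = None
    num_floors: Optional[int] = None
    soft_story: bool = False                  # ground floor with large openings

def flag_seismic_screening(b: BuildingAttributes) -> bool:
    """Toy triage rule for illustration only: flag soft-story buildings,
    or older multi-story buildings, for closer engineering review."""
    if b.soft_story:
        return True
    return (b.year_built is not None and b.year_built < 1980
            and (b.num_floors or 0) >= 2)

# Example: an AI-inferred record for one building in a regional inventory.
bldg = BuildingAttributes("multi-family", "flat",
                          year_built=1965, num_floors=3)
needs_review = flag_seismic_screening(bldg)
```

Regional simulations of the kind Wang describes would run hazard models over millions of such records, which is why inferring the attributes automatically from images matters.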