More stories


    New tool can diagnose strokes with a smartphone

    A new tool created by researchers at Penn State and Houston Methodist Hospital could diagnose a stroke based on abnormalities in a patient’s speech ability and facial muscular movements, and with the accuracy of an emergency room physician — all within minutes of an interaction with a smartphone.
    “When a patient experiences symptoms of a stroke, every minute counts,” said James Wang, professor of information sciences and technology at Penn State. “But when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist — a specialist who may not be immediately available — to perform clinical diagnostic tests.”
    Wang and his colleagues have developed a machine learning model to aid in, and potentially speed up, the diagnostic process by physicians in a clinical setting.
    “Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan,” said Wang. “We are trying to simulate or emulate this process by using our machine learning approach.”
    The team’s novel approach is the first to analyze the presence of stroke among actual emergency room patients with suspected stroke, using computational facial motion analysis and natural language processing to identify abnormalities in a patient’s face or voice, such as a drooping cheek or slurred speech.
    The results could help emergency room physicians to more quickly determine critical next steps for the patient. Ultimately, the application could be utilized by caregivers or patients to make self-assessments before reaching the hospital.


    “This is one of the first works that is enabling AI to help with stroke diagnosis in emergency settings,” added Sharon Huang, associate professor of information sciences and technology at Penn State.
    To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas. Each patient was asked to perform a speech test to analyze their speech and cognitive communication while being recorded on an Apple iPhone.
    “The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment,” said Huang.
    Testing the model on the Houston Methodist dataset, the researchers found that its performance achieved 79% accuracy — comparable to clinical diagnostics by emergency room doctors, who use additional tests such as CT scans. However, the model could help save valuable time in diagnosing a stroke, with the ability to assess a patient in as little as four minutes.
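    The classification step can be pictured as a simple feature-fusion model. The sketch below is hypothetical and uses entirely synthetic data; the study’s actual pipeline (computational facial motion analysis plus natural language processing) is not reproduced here. It only illustrates the general idea of fusing facial and speech features and fitting a linear classifier on top.

```python
# Hypothetical illustration, NOT the study's model: fuse facial-motion
# and speech features, then fit a simple linear classifier.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for features an upstream vision/NLP pipeline
# might extract (e.g. facial-asymmetry and speech-fluency scores).
facial = rng.normal(0.0, 1.0, (n, 4))
speech = rng.normal(0.0, 1.0, (n, 3))
labels = ((facial[:, 0] + speech[:, 0]) > 0).astype(float)  # toy labels

X = np.hstack([facial, speech, np.ones((n, 1))])  # fusion + bias column
w, *_ = np.linalg.lstsq(X, labels, rcond=None)    # least-squares fit
pred = (X @ w > 0.5).astype(float)
print(round((pred == labels).mean(), 2))          # training accuracy
```

In a real system the features, labels, and model would of course come from clinical data and validation, not from synthetic draws as here.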
    “There are millions of neurons dying every minute during a stroke,” said John Volpi, a vascular neurologist and co-director of the Eddy Scurlock Stroke Center at Houston Methodist Hospital. “In severe strokes it is obvious to our providers from the moment the patient enters the emergency department, but studies suggest that in the majority of strokes, which have mild to moderate symptoms, a diagnosis can be delayed by hours, and by then a patient may not be eligible for the best possible treatments.”
    “The earlier you can identify a stroke, the better options (we have) for the patients,” added Stephen T.C. Wong, John S. Dunn, Sr. Presidential Distinguished Chair in Biomedical Engineering at the Ting Tsung and Wei Fong Chao Center for BRAIN and Houston Methodist Cancer Center. “That’s what makes an early diagnosis essential.”


    Volpi said that physicians currently use a binary approach toward diagnosing strokes: They either suspect a stroke, sending the patient for a series of scans that could involve radiation; or they do not suspect a stroke, potentially overlooking patients who may need further assessment.
    “What we think in that triage moment is being either biased toward overutilization (of scans, which have risks and benefits) or underdiagnosis,” said Volpi, a co-author on the paper. “If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit.”
    He added, “We have great therapeutics, medicines and procedures for strokes, but we have very primitive and, frankly, inaccurate diagnostics.”
    Other collaborators on the project include Tongan Cai and Mingli Yu, graduate students working with Wang and Huang at Penn State; and Kelvin Wong, associate research professor of electronic engineering in oncology at Houston Methodist Hospital.

    Story Source:
    Materials provided by Penn State. Original written by Jessica Hallman. Note: Content may be edited for style and length.


    Reviewing multiferroics for future, low-energy data storage

    A new UNSW study comprehensively reviews the magnetic structure of the multiferroic material bismuth ferrite (BiFeO3 — BFO).
    The review advances FLEET’s search for low-energy electronics, bringing together current knowledge on the magnetic order in BFO films, and giving researchers a solid platform to further develop this material in low-energy magnetoelectric memories.
    BFO is unique in that it displays both magnetic and electronic ordering (i.e., is ‘multiferroic’) at room temperature, allowing for low-energy switching in data storage devices.
    MULTIFERROICS: COMBINED MAGNETIC AND ELECTRONIC ORDERING FOR LOW-ENERGY DATA STORAGE
    Multiferroics are materials that have more than one ‘order parameter’.
    For example, a magnetic material displays magnetic order: you can imagine that the material is made up of lots of neatly arranged (ordered), tiny magnets.


    [Figure: BFO cycloid diagram. Spin (magnetic order) in the multiferroic material bismuth ferrite ‘cycles’ through the crystal, offering potential application in emerging electronics fields such as magnonics.]
    Some materials display electronic order — a property referred to as ferroelectricity — which can be considered the electrical equivalent of magnetism.
    In a ferroelectric material, some atoms are positively charged, others are negatively charged, and the way these atoms are arranged in the material gives a specific order to the charge in the material.
    In nature, a small fraction of known materials possess both magnetic and ferroelectric order (as is the case for BFO) and are thus referred to as multiferroic materials.


    The coupling between magnetic and ferroelectric order in a multiferroic material unlocks interesting physics and opens the way for applications such as energy-efficient electronics, for example in non-volatile memory devices.
    Studies at FLEET focus on the potential use of such materials as a switching mechanism.
    Ferroelectric materials can be considered the electrical equivalent of a permanent magnet, possessing a spontaneous polarisation. This polarisation is switchable by an electric field.
    The storage of data on traditional hard disks relies on switching each bit’s magnetic state: from zero, to one, to zero. But it takes a relatively large amount of energy to generate the magnetic field required to accomplish this.
    In a ‘multiferroic memory,’ the coupling between the magnetic and ferroelectric order could allow ‘flipping’ of the state of a bit by electric field, rather than a magnetic field.
    Electric fields are a lot less energetically costly to generate than magnetic fields, so multiferroic memory would be a significant win for ultra-low-energy electronics, a key aim in FLEET.
    BFO: A UNIQUE MULTIFERROIC MATERIAL
    Bismuth ferrite (BFO) is unique among multiferroics: its magnetic and ferroelectric orders persist up to room temperature. Most multiferroics only exhibit both order parameters far below room temperature, making them impractical for low-energy electronics.
    (There’s no point in designing low-energy electronics if it costs you more energy to cool the system than you save in operation.)
    THE STUDY
    [Photo: Co-author Dr Dan Sando preparing materials for study at UNSW.]
    The new UNSW study reviews the magnetic structure of bismuth ferrite; in particular, when it is grown as a thin single crystal layer on a substrate.
    The paper examines BFO’s complicated magnetic order, and the many different experimental tools used to probe and help understand it.
    Multiferroics is a challenging topic; for researchers trying to enter the field, it is very difficult to get a full picture of the magnetism of BFO from any one reference.
    “So, we decided to write it,” says Dr Daniel Sando. “We were in the perfect position to do so, as we had all the information in our heads, Stuart wrote a literature review chapter, and we had the combined necessary physics background to explain the important concepts in a tutorial-style manner.”
    The result is a comprehensive, complete, and detailed review article that will attract significant attention from researchers and will serve as a useful reference for many.
    Co-lead author Dr Stuart Burns explains what new researchers to the field of multiferroics will gain from the article:
    “We structured the review as a build-your-own-experiment starter pack: readers will be taken through the chronology of BFO, a selection of techniques to utilize (alongside the advantages and pitfalls of each) and various interesting ways to modify the physics at play. With these pieces in place, experimentalists will know what to expect, and can focus on engineering new low-energy devices and memory architectures.”
    The other lead author, Oliver Paull, says “We hope that other researchers in our field will use this work to train their students, learn the nuances of the material, and have a one-stop reference article which contains all pertinent references — the latter in itself an extremely valuable contribution.”
    Prof Nagy Valanoor added “The most fulfilling aspect of this paper was its style as a textbook chapter. We left no stone unturned!”
    The paper’s discussion covers the incorporation of BFO into functional devices that exploit the cross-coupling between ferroelectricity and magnetism, as well as very new fields such as antiferromagnetic spintronics, in which the quantum mechanical spin of the electron can be used to process information.


    A wearable sensor to help ALS patients communicate

    People with amyotrophic lateral sclerosis (ALS) suffer from a gradual decline in their ability to control their muscles. As a result, they often lose the ability to speak, making it difficult to communicate with others.
    A team of MIT researchers has now designed a stretchable, skin-like device that can be attached to a patient’s face and can measure small movements such as a twitch or a smile. Using this approach, patients could communicate a variety of sentiments, such as “I love you” or “I’m hungry,” with small movements that are measured and interpreted by the device.
    The researchers hope that their new device would allow patients to communicate in a more natural way, without having to deal with bulky equipment. The wearable sensor is thin and can be camouflaged with makeup to match any skin tone, making it unobtrusive.
    “Not only are our devices malleable, soft, disposable, and light, they’re also visually invisible,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences at MIT and the leader of the research team. “You can camouflage it and nobody would think that you have something on your skin.”
    The researchers tested the initial version of their device in two ALS patients (one female and one male, for gender balance) and showed that it could accurately distinguish three different facial expressions — smile, open mouth, and pursed lips.
    MIT graduate student Farita Tasnim and former research scientist Tao Sun are the lead authors of the study, which appears today in Nature Biomedical Engineering. Other MIT authors are undergraduate Rachel McIntosh, postdoc Dana Solav, and research scientist Lin Zhang. Yuandong Gu of the A*STAR Institute of Microelectronics in Singapore and Nikta Amiri, Mostafa Tavakkoli Anbarani, and M. Amin Karami of the University at Buffalo are also authors.


    A skin-like sensor
    Dagdeviren’s lab, the Conformable Decoders group, specializes in developing conformable (flexible and stretchable) electronic devices that can adhere to the body for a variety of medical applications. She became interested in working on ways to help patients with neuromuscular disorders communicate after meeting Stephen Hawking in 2016, when the world-renowned physicist visited Harvard University and Dagdeviren was a junior fellow in Harvard’s Society of Fellows.
    Hawking, who passed away in 2018, suffered from a slow-progressing form of ALS. He was able to communicate using an infrared sensor that could detect twitches of his cheek, which moved a cursor across rows and columns of letters. While effective, this process could be time-consuming and required bulky equipment.
    Other ALS patients use similar devices that measure the electrical activity of the nerves that control the facial muscles. However, this approach also requires cumbersome equipment, and it is not always accurate.
    “These devices are very hard, planar, and boxy, and reliability is a big issue. You may not get consistent results, even from the same patients within the same day,” Dagdeviren says.


    Most ALS patients also eventually lose the ability to control their limbs, so typing is not a viable strategy to help them communicate. The MIT team set out to design a wearable interface that patients could use to communicate in a more natural way, without the bulky equipment required by current technologies.
    The device they created consists of four piezoelectric sensors embedded in a thin silicone film. The sensors, which are made of aluminum nitride, can detect mechanical deformation of the skin and convert it into an electric voltage that can be easily measured. All of these components are easy to mass-produce, so the researchers estimate that each device would cost around $10.
    The researchers used a process called digital image correlation on healthy volunteers to help them select the most useful locations to place the sensor. They painted a random black-and-white speckle pattern on the face and then took many images of the area with multiple cameras as the subjects performed facial motions such as smiling, twitching the cheek, or mouthing the shape of certain letters. The images were processed by software that analyzes how the small dots move in relation to each other, to determine the amount of strain experienced in a single area.
    “We had subjects doing different motions, and we created strain maps of each part of the face,” McIntosh says. “Then we looked at our strain maps and determined where on the face we were seeing a correct strain level for our device, and determined that that was an appropriate place to put the device for our trials.”
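    The strain-mapping idea can be illustrated in one dimension: track how marker points move between a reference state and a deformed state, and take the gradient of the displacement field as the strain. This is a toy numerical sketch of the principle behind digital image correlation, not the team’s image-processing software, and all positions are made up:

```python
# Toy 1D illustration of the DIC principle: strain = displacement gradient.
import numpy as np

x_ref = np.linspace(0.0, 10.0, 11)   # speckle positions at rest (a.u.)
x_def = 1.02 * x_ref + 0.1           # 2% uniform stretch + rigid shift

u = x_def - x_ref                    # displacement of each speckle
strain = np.gradient(u, x_ref)       # du/dx, the engineering strain

print(np.allclose(strain, 0.02))     # the rigid shift drops out
```

The real pipeline does this in two dimensions, recovering the dot displacements from camera images by correlation before differentiating.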
    The researchers also used the measurements of skin deformations to train a machine-learning algorithm to distinguish between a smile, open mouth, and pursed lips. Using this algorithm, they tested the devices with two ALS patients, and were able to achieve about 75 percent accuracy in distinguishing between these different movements. The accuracy rate in healthy subjects was 87 percent.
    Enhanced communication
    Based on these detectable facial movements, a library of phrases or words could be created to correspond to different combinations of movements, the researchers say.
    “We can create customizable messages based on the movements that you can do,” Dagdeviren says. “You can technically create thousands of messages that right now no other technology is available to do. It all depends on your library configuration, which can be designed for a particular patient or group of patients.”
    The information from the sensor is sent to a handheld processing unit, which analyzes it using the algorithm that the researchers trained to distinguish between facial movements. In the current prototype, this unit is wired to the sensor, but the connection could also be made wireless for easier use, the researchers say.
    The researchers have filed for a patent on this technology and they now plan to test it with additional patients. In addition to helping patients communicate, the device could also be used to track the progression of a patient’s disease, or to measure whether treatments they are receiving are having any effect, the researchers say.
    “There are a lot of clinical trials that are testing whether or not a particular treatment is effective for reversing ALS,” Tasnim says. “Instead of just relying on the patients to report that they feel better or they feel stronger, this device could give a quantitative measure to track the effectiveness.”
    The research was funded by the MIT Media Lab Consortium, the National Science Foundation, and the National Institute of Biomedical Imaging and Bioengineering.


    'Spooky' similarity in how brains and computers see

    The brain detects 3D shape fragments (bumps, hollows, shafts, spheres) in the beginning stages of object vision — a newly discovered strategy of natural intelligence that Johns Hopkins University researchers also found in artificial intelligence networks trained to recognize visual objects.
    A new paper in Current Biology details how neurons in area V4, the first stage specific to the brain’s object vision pathway, represent 3D shape fragments, not just the 2D shapes used to study V4 for the last 40 years. The Johns Hopkins researchers then identified nearly identical responses of artificial neurons, in an early stage (layer 3) of AlexNet, an advanced computer vision network. In both natural and artificial vision, early detection of 3D shape presumably aids interpretation of solid, 3D objects in the real world.
    “I was surprised to see strong, clear signals for 3D shape as early as V4,” said Ed Connor, a neuroscience professor and director of the Zanvyl Krieger Mind/Brain Institute. “But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels.”
    One of the long-standing challenges for artificial intelligence has been to replicate human vision. Deep (multilayer) networks like AlexNet have achieved major gains in object recognition, based on high-capacity graphics processing units (GPUs) developed for gaming and massive training sets fed by the explosion of images and videos on the Internet.
    Connor and his team applied the same tests of image responses to natural and artificial neurons and discovered remarkably similar response patterns in V4 and AlexNet layer 3. What explains what Connor describes as a “spooky correspondence” between the brain — a product of evolution and lifetime learning — and AlexNet — designed by computer scientists and trained to label object photographs?
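    The comparison logic can be sketched abstractly: record each unit’s responses to the same stimulus set in both systems, then correlate the tuning patterns. Everything below is synthetic and illustrative, not the lab’s analysis code; two noisy readouts of a shared underlying tuning stand in for a V4 neuron and an AlexNet layer-3 unit.

```python
# Illustrative sketch: correlate a biological unit's stimulus tuning
# with an artificial unit's tuning over a shared stimulus set.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 50

# Pretend both units are tuned to the same latent 3D-shape feature,
# each observed with independent noise.
latent = rng.normal(size=n_stimuli)
v4_unit = latent + 0.3 * rng.normal(size=n_stimuli)
alexnet_unit = latent + 0.3 * rng.normal(size=n_stimuli)

r = np.corrcoef(v4_unit, alexnet_unit)[0, 1]
print(r > 0.5)   # shared tuning shows up as a strong correlation
```

The actual study used image responses from recorded neurons and network activations in place of these synthetic vectors.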
    AlexNet and similar deep networks were actually designed in part based on the multi-stage visual networks in the brain, Connor said. He said the close similarities they observed may point to future opportunities to leverage correlations between natural and artificial intelligence.
    “Artificial networks are the most promising current models for understanding the brain. Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence,” Connor said.

    Story Source:
    Materials provided by Johns Hopkins University. Note: Content may be edited for style and length.


    For the first time: Realistic simulation of plasma edge instabilities in tokamaks

    Among the loads to which the plasma vessel in a fusion device may be exposed, so-called edge localized modes are particularly undesirable. Computer simulations have now explained the origin and course of this plasma-edge instability in detail for the first time.
    Edge Localised Modes, ELMs for short, are one of the disturbances of the plasma confinement that are caused by the interaction between the charged plasma particles and the confining magnetic field cage. During ELM events, the edge plasma loses its confinement for a short time and periodically throws plasma particles and energy outwards onto the vessel walls. Typically, one tenth of the total energy content can thus be ejected abruptly. While the present generation of medium-sized fusion devices can cope with this, large devices such as ITER or a future power plant would not be able to withstand this strain.
    Experimental methods to attenuate, suppress or avoid ELMs have already been successfully developed in current fusion devices (see PI 3/2020). After extensive previous work, it has now been possible for the first time by means of computational simulations to identify the trigger responsible for the explosive onset of these edge instabilities and to reconstruct the course of several ELM cycles — in good agreement with experimentally observed values. A publication accepted in the scientific journal Nuclear Fusion explains this important prerequisite for predicting and avoiding ELM instabilities in future fusion devices.
    The ELM instability builds up after a quiet phase of about 5 to 20 milliseconds — depending on the external conditions — until in half a millisecond between 5 and 15 percent of the energy stored in the plasma is flung onto the walls. Then the equilibrium is restored until the next ELM eruption follows.
    The plasma theorists around first author Andres Cathey of IPP, who come from several laboratories of the European fusion programme EUROfusion, were able to describe and explain the complex physical processes behind this phenomenon in detail: a non-linear interplay between destabilising effects — the steep rise in plasma pressure at the plasma edge and the increase in current density — and the stabilising plasma flow. When the heating power fed into the plasma is changed in the simulation, the calculated ELM repetition rate (frequency) responds in the same way as it does when the heating power is increased in a plasma experiment at the ASDEX Upgrade tokamak: experiment and simulation are in agreement.
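    The qualitative cycle lends itself to a crude relaxation-oscillator caricature: energy builds slowly under heating, then a fixed fraction is ejected abruptly once a stability threshold is crossed. The toy below is purely an illustration of that cycle, not the JOREK physics, which solves the full non-linear magnetohydrodynamic equations; all numbers are arbitrary. Raising the heating rate in the toy raises the crash frequency, mirroring the heating-power dependence noted above.

```python
# Toy ELM-like relaxation cycle (illustration only, not JOREK physics).
dt = 0.01          # time step, ms
heating = 1.0      # energy input rate (arbitrary units per ms)
threshold = 10.0   # stability limit of the stored pedestal energy

energy = 0.0
crashes = 0
for _ in range(int(50 / dt)):   # simulate 50 ms
    energy += heating * dt      # slow build-up between ELMs
    if energy >= threshold:     # instability triggered
        energy *= 0.9           # ~10% of stored energy ejected
        crashes += 1

print(crashes)
```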
    Although the processes take place in a very short time, their simulation requires a great deal of computing effort. This is because the simulation must resolve into small calculation steps both the short ELM crash and the long development phase between two ELMs — a calculation problem that could only be solved with one of the fastest supercomputers currently available.
    For the simulations the JOREK code was used, a non-linear code for the calculation of tokamak plasmas in realistic geometry, which is being developed in European and international cooperation with strong contributions from IPP.

    Story Source:
    Materials provided by Max-Planck-Institut für Plasmaphysik (IPP). Note: Content may be edited for style and length.


    Optical wiring for large quantum computers

    Hitting a specific point on a screen with a laser pointer during a presentation isn’t easy — even the tiniest nervous shaking of the hand becomes one big scrawl at a distance. Now imagine having to do that with several laser pointers at once. That is exactly the problem faced by physicists who try to build quantum computers using individual trapped atoms. They, too, need to aim laser beams — hundreds or even thousands of them in the same apparatus — precisely over several metres so as to hit regions only a few micrometres in size that contain the atoms. Any unwanted vibration will severely disturb the operation of the quantum computer.
    At ETH Zurich, Jonathan Home and his co-workers at the Institute for Quantum Electronics have now demonstrated a new method that allows them to deliver multiple laser beams precisely to the right locations from within a chip in such a stable manner that even the most delicate quantum operations on the atoms can be carried out.
    Aiming for the quantum computer
    To build quantum computers has been an ambitious goal of physicists for more than thirty years. Electrically charged atoms — ions — trapped in electric fields have turned out to be ideal candidates for the quantum bits or qubits, which quantum computers use for their calculations. So far, mini quantum computers containing around a dozen qubits have been realized in this way. “However, if you want to build quantum computers with several thousand qubits, which will probably be necessary for practically relevant applications, current implementations present some major hurdles,” says Karan Mehta, a postdoc in Home’s laboratory and first author of the study recently published in the scientific journal “Nature.” Essentially, the problem is how to send laser beams over several metres from the laser into a vacuum apparatus and eventually hit the bull’s eye inside a cryostat, in which the ion traps are cooled down to just a few degrees above absolute zero in order to minimize thermal disturbances.
    Optical setup as an obstacle
    “Already in current small-scale systems, conventional optics are a significant source of noise and errors — and that gets much harder to manage when trying to scale up,” Mehta explains. The more qubits one adds, the more complex the optics needed to control them becomes. “This is where our approach comes in,” adds Chi Zhang, a PhD student in Home’s group: “By integrating tiny waveguides into the chips that contain the electrodes for trapping the ions, we can send the light directly to those ions. In this way, vibrations of the cryostat or other parts of the apparatus produce far less disturbance.”
    The researchers commissioned a commercial foundry to produce chips which contain both gold electrodes for the ion traps and, in a deeper layer, waveguides for laser light. At one end of the chips, optical fibres feed the light into the waveguides, which are only 100 nanometres thick, effectively forming optical wiring within the chips. Each of those waveguides leads to a specific point on the chip, where the light is eventually deflected towards the trapped ions on the surface.
    Work from a few years ago (by some of the authors of the present study, together with researchers at MIT and MIT Lincoln Laboratory) had demonstrated that this approach works in principle. Now the ETH group has developed and refined the technique to the point where it is also possible to use it for implementing low-error quantum logic gates between different atoms, an important prerequisite for building quantum computers.
    High-fidelity logic gates
    In a conventional computer chip, logic gates are used to carry out logic operations such as AND or NOR. To build a quantum computer, one has to make sure that it can carry out such logic operations on the qubits. The problem with this is that logic gates acting on two or more qubits are particularly sensitive to disturbances. This is because they create fragile quantum mechanical states in which two ions are simultaneously in a superposition, also known as entangled states.
    In such a superposition, a measurement of one ion influences the result of a measurement on the other ion, without the two being in direct contact. How well the production of those superposition states works, and thus how good the logic gates are, is expressed by the so-called fidelity. “With the new chip we were able to carry out two-qubit logic gates and use them to produce entangled states with a fidelity that up to now could only be achieved in the very best conventional experiments,” says Maciej Malinowski, who was also involved in the experiment as a PhD student.
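    Fidelity here is the overlap F = ⟨ψ|ρ|ψ⟩ between the ideal entangled state |ψ⟩ and the state ρ the hardware actually produces. A minimal numerical sketch, using an illustrative depolarizing-error model rather than the experiment’s actual noise:

```python
# Fidelity of a noisy two-qubit state against an ideal Bell state.
import numpy as np

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)          # (|00> + |11>) / sqrt(2)

ideal = np.outer(bell, bell.conj())         # target density matrix
p = 0.02                                    # illustrative error rate
rho = (1 - p) * ideal + p * np.eye(4) / 4   # mix in depolarizing noise

fidelity = np.real(bell.conj() @ rho @ bell)
print(round(fidelity, 3))                   # 0.985 for p = 0.02
```

Perfect state preparation gives F = 1; the closer an experiment’s two-qubit gate fidelity gets to 1, the more operations can be chained before errors dominate.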
    The researchers have thus shown that their approach is interesting for future ion trap quantum computers as it is not just extremely stable, but also scalable. They are currently working with different chips that are intended to control up to ten qubits at a time. Furthermore, they are pursuing new designs for fast and precise quantum operations that are made possible by the optical wiring.

    Story Source:
    Materials provided by ETH Zurich. Original written by Oliver Morsch. Note: Content may be edited for style and length.


    Analyzing web searches can help experts predict, respond to COVID-19 hot spots

    Web-based analytics have demonstrated their value in predicting the spread of infectious disease, and a new study from Mayo Clinic indicates the value of analyzing Google web searches for keywords related to COVID-19.
    Strong correlations were found between keyword searches on the internet search engine Google Trends and COVID-19 outbreaks in parts of the U.S., according to a study published in Mayo Clinic Proceedings. These correlations were observed up to 16 days prior to the first reported cases in some states.
    “Our study demonstrates that there is information present in Google Trends that precedes outbreaks, and with predictive analysis, this data can be used for better allocating resources with regards to testing, personal protective equipment, medications and more,” says Mohamad Bydon, M.D., a Mayo Clinic neurosurgeon and principal investigator at Mayo’s Neuro-Informatics Laboratory.
    “The Neuro-Informatics team is focused on analytics for neural diseases and neuroscience. However, when the novel coronavirus emerged, my team and I directed resources toward better understanding and tracking the spread of the pandemic,” says Dr. Bydon, the study’s senior author. “Looking at Google Trends data, we found that we were able to identify predictors of hot spots, using keywords, that would emerge over a six-week timeline.”
    Several studies have noted the role of internet surveillance in early prediction of previous outbreaks such as H1N1 and Middle East respiratory syndrome. There are several benefits to using internet surveillance methods versus traditional methods, and this study says a combination of the two methods is likely the key to effective surveillance.
    The study tracked 10 keywords, chosen based on how commonly they were used and on emerging search patterns on the internet and in Google News at the time.

    advertisement

    The keywords were:
    COVID symptoms
    Coronavirus symptoms
    Sore throat+shortness of breath+fatigue+cough
    Coronavirus testing center
    Loss of smell
    Lysol
    Antibody
    Face mask
    Coronavirus vaccine
    COVID stimulus check
    Most of the keywords had moderate to strong correlations days before the first COVID-19 cases were reported in specific areas, with diminishing correlations following the first case.
    “Each of these keywords had varying strengths of correlation with case numbers,” says Dr. Bydon. “If we had looked at 100 keywords, we may have found even stronger correlations to cases. As the pandemic progresses, people will search for new and different information, so the search terms also need to evolve.”
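    The lead-lag relationship can be quantified by shifting one series against the other and finding the lag that maximizes correlation. The sketch below uses entirely synthetic series, with the search signal deliberately constructed to lead the case counts by ten days; it illustrates only the core idea, not the study’s methodology in detail.

```python
# Illustrative lead-lag analysis on synthetic search/case series.
import numpy as np

rng = np.random.default_rng(2)
days = 120

# Smoothed synthetic daily case counts.
cases = np.convolve(rng.poisson(5, days), np.ones(7) / 7, mode="same")
lead = 10                                       # searches lead by 10 days
searches = np.roll(cases, -lead) + rng.normal(0, 0.2, days)

def lag_corr(lag):
    """Correlation of searches against cases reported `lag` days later."""
    return np.corrcoef(searches[: days - lag], cases[lag:])[0, 1]

best = max(range(21), key=lag_corr)
print(best)                                     # recovers a ~10-day lead
```

On real data, the strength of the peak correlation and the lag at which it occurs are exactly the quantities reported in the study, observed up to 16 days before the first cases in some states.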
    The use of web search surveillance data is important as an adjunct for data science teams who are attempting to predict outbreaks and new hot spots in a pandemic. “Any delay in information could lead to missed opportunities to improve preparedness for an outbreak in a certain location,” says Dr. Bydon.
    Traditional surveillance, including widespread testing and public health reporting, can lag behind the incidence of infectious disease. The need for more testing, and more rapid and accurate testing, is paramount. Delayed or incomplete reporting of results can lead to inaccuracies when data is released and public health decisions are being made.
    “If you wait for the hot spots to emerge in the news media coverage, it will be too late to respond effectively,” Dr. Bydon says. “In terms of national preparedness, this is a great way of helping to understand where future hot spots will emerge.”
    Mayo Clinic recently introduced an interactive COVID-19 tracking tool that reports the latest data for every county in all 50 states, and in Washington, D.C., with insight on how to assess risk and plan accordingly. “Adding variables such as Google Trends data from Dr. Bydon’s team, as well as other leading indicators, has greatly enhanced our ability to forecast surges, plateaus and declines of cases across regions of the country,” says Henry Ting, M.D., Mayo Clinic’s chief value officer.
    Dr. Ting worked with Mayo Clinic data scientists to develop content sources, validate information and correlate expertise for the tracking tool, which is in Mayo’s COVID-19 resource center on mayoclinic.org.
    The study was conducted in collaboration with the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery. The authors report no conflicts of interest.

  • in

    Novel method for measuring spatial dependencies turns less data into more data

    The identification of human migration driven by climate change, the spread of COVID-19, agricultural trends, and socioeconomic problems in neighboring regions depends on data — the more complex the model, the more data is required to understand such spatially distributed phenomena. However, reliable data is often expensive and difficult to obtain, or too sparse to allow for accurate predictions.
    Maurizio Porfiri, Institute Professor of mechanical and aerospace, biomedical, and civil and urban engineering and a member of the Center for Urban Science and Progress (CUSP) at the NYU Tandon School of Engineering, devised a novel solution based on network and information theory that makes “little data” act big by applying mathematical techniques normally used for time series to spatial processes.
    The study, “An information-theoretic approach to study spatial dependencies in small datasets,” featured on the cover of Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, describes how, from a small sample of attributes in a limited number of locations, observers can make robust inferences of influences, including interpolations to intermediate areas or even distant regions that share similar key attributes.
    “Most of the time the data sets are poor,” Porfiri explained. “Therefore, we took a very basic approach, applying information theory to explore whether influence in the temporal sense could be extended to space, which allows us to work with a very small data set, between 25 and 50 observations,” he said. “We are taking one snapshot of the data and drawing connections — not based on cause-and-effect, but on interaction between the individual points — to see if there is some form of underlying, collective response in the system.”
    The method, developed by Porfiri and collaborator Manuel Ruiz Marín of the Department of Quantitative Methods, Law and Modern Languages, Technical University of Cartagena, Spain, involved:
    Consolidating a given data set into a small range of admissible symbols, similar to the way a machine learning system can identify a face with limited pixel data: a chin, cheekbones, forehead, etc.
    Applying an information-theory principle to create a test that is non-parametric (one that assumes no underlying model for the interaction between locations) to draw associations between events and to discover whether uncertainty at a particular location is reduced if one has knowledge about the uncertainty in another location.
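    The two steps can be sketched roughly as follows. This is a toy illustration under assumed details — equal-frequency binning, mutual information as the dependence measure, and a shuffle-based null — not the paper's actual test statistic, and the paired observations are invented.

    ```python
    import numpy as np

    def symbolize(values, n_symbols=3):
        """Step 1: consolidate raw observations into a small range of
        admissible symbols (equal-frequency bins: low / medium / high)."""
        cuts = np.quantile(values, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(values, cuts)

    def mutual_information(x, y):
        """Bits of uncertainty about y removed by knowing x (and vice versa)."""
        mi = 0.0
        for a in np.unique(x):
            for b in np.unique(y):
                p_ab = np.mean((x == a) & (y == b))
                if p_ab > 0:
                    mi += p_ab * np.log2(p_ab / (np.mean(x == a) * np.mean(y == b)))
        return mi

    def permutation_test(x, y, n_perm=999, seed=0):
        """Step 2: a non-parametric test -- shuffling y destroys any spatial
        association, giving a null distribution with no assumed model."""
        rng = np.random.default_rng(seed)
        observed = mutual_information(x, y)
        exceed = sum(mutual_information(x, rng.permutation(y)) >= observed
                     for _ in range(n_perm))
        return observed, (1 + exceed) / (1 + n_perm)

    # Hypothetical: 40 paired observations from two dependent locations.
    rng = np.random.default_rng(1)
    site_a = rng.normal(size=40)
    site_b = site_a + rng.normal(scale=0.3, size=40)
    mi, p_value = permutation_test(symbolize(site_a), symbolize(site_b))
    ```

    A small p-value indicates that uncertainty at one location is genuinely reduced by knowledge of the other, even with only a few dozen observations.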
    Porfiri explained that since a non-parametric approach posits no underlying structure for the influences between nodes, it confers flexibility in how nodes can be associated, or even how the concept of a neighbor is defined.
    “Because we abstract this concept of a neighbor, we can define it in the context of any quality that you like, for example, ideology. Ideologically, California can be a neighbor of New York, though they are not geographically co-located. They may share similar values.”
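    This abstract notion of a neighbor can be expressed by letting the adjacency relation be any similarity function the analyst chooses. A toy sketch (the state "ideology scores" below are invented for illustration):

    ```python
    def neighbors(attrs, threshold, distance=lambda a, b: abs(a - b)):
        """Two locations are 'neighbors' whenever they are close under the
        chosen similarity -- geographic, ideological, or anything else."""
        keys = sorted(attrs)
        return {(i, j) for i in keys for j in keys
                if i < j and distance(attrs[i], attrs[j]) <= threshold}

    # Invented ideology scores: CA and NY become neighbors even though
    # they are not geographically co-located.
    ideology = {"CA": 0.80, "NY": 0.75, "TX": 0.20}
    pairs = neighbors(ideology, threshold=0.10)
    ```

    Swapping in a geographic distance function instead would recover the conventional definition of adjacency, so the same analysis runs unchanged over either notion of neighborhood.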
    The team validated the system against two case studies: population migrations in Bangladesh due to sea level rise and motor vehicle deaths in the U.S., to derive a statistically principled insight into the mechanisms of important socioeconomic problems.
    “In the first case, we wanted to see if migration between locations could be predicted by geographic distance or the severity of the inundation of that particular district — whether knowledge of which district is close to another district or knowledge of the level of flooding will help predict the size of migration,” said Ruiz Marín.
    For the second case, they looked at the spatial distribution of alcohol-related automobile accidents in 1980, 1994, and 2009, comparing states with a high degree of such accidents to adjacent states and to states with similar legislative ideologies about drinking and driving.
    “We discovered a stronger relationship between states sharing borders than between states sharing legislative ideologies pertaining to alcohol consumption and driving.”
    Next, Porfiri and Ruiz Marín are planning to extend their method to the analysis of spatio-temporal processes, such as gun violence in the U.S. — a major research project recently funded by the National Science Foundation’s LEAP HI program — or epileptic seizures in the brain. Their work could help explain when and where gun violence may occur or seizures may initiate.