More stories

  • A NEAT reduction of complex neuronal models accelerates brain research

    Neurons, the fundamental units of the brain, are complex computers by themselves. They receive input signals on a tree-like structure — the dendrite. This structure does more than simply collect the input signals: it integrates and compares them to find those special combinations that are important for the neurons’ role in the brain. Moreover, the dendrites of neurons come in a variety of shapes and forms, indicating that distinct neurons may have separate roles in the brain.
    A simple yet faithful model
    In neuroscience, there has historically been a tradeoff between a model’s faithfulness to the underlying biological neuron and its complexity. Neuroscientists have constructed detailed computational models of many different types of dendrites. These models mimic the behavior of real dendrites to a high degree of accuracy. The tradeoff, however, is that such models are very complex. Thus, it is hard to exhaustively characterize all possible responses of such models and to simulate them on a computer. Even the most powerful computers can only simulate a small fraction of the neurons in any given brain area.
    Researchers from the Department of Physiology at the University of Bern have long sought to understand the role of dendrites in computations carried out by the brain. On the one hand, they have constructed detailed models of dendrites from experimental measurements, and on the other hand they have constructed neural network models with highly abstract dendrites to learn computations such as object recognition. A new study set out to find a computational method to make highly detailed models of neurons simpler, while retaining a high degree of faithfulness. This work emerged from the collaboration between experimental and computational neuroscientists from the research groups of Prof. Thomas Nevian and Prof. Walter Senn, and was led by Dr Willem Wybo. “We wanted the method to be flexible, so that it could be applied to all types of dendrites. We also wanted it to be accurate, so that it could faithfully capture the most important functions of any given dendrite. With these simpler models, neural responses can more easily be characterized and simulation of large networks of neurons with dendrites can be conducted,” Dr Wybo explains.
    This new approach exploits an elegant mathematical relation between the responses of detailed dendrite models and of simplified dendrite models. Due to this mathematical relation, the objective that is optimized is linear in the parameters of the simplified model. “This crucial observation allowed us to use the well-known linear least squares method to find the optimized parameters. This method is very efficient compared to methods that use non-linear parameter searches, but also achieves a high degree of accuracy,” says Prof. Senn.
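    Because the fitting problem is linear in the reduced model's parameters, standard linear-algebra routines suffice. A minimal, generic sketch of such a linear least-squares fit in Python/NumPy follows; the matrix A and target y are made-up stand-ins, not the impedance-based quantities the authors actually fit.

        import numpy as np

        # If a reduced model's response is linear in its parameters p, i.e.
        # response = A @ p, then the parameters that best reproduce a target
        # response y (from the detailed model) solve an ordinary least-squares
        # problem -- no costly non-linear parameter search is needed.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 5))                   # stand-in "design matrix" of basis responses
        p_true = np.array([1.0, -0.5, 2.0, 0.3, -1.2])  # parameters to recover
        y = A @ p_true + 0.01 * rng.normal(size=200)    # target response with a little noise

        p_fit, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("recovered parameters:", np.round(p_fit, 3))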
    Tools available for AI applications
    The main result of the work is the methodology itself: a flexible yet accurate way to construct reduced neuron models from experimental data and morphological reconstructions. “Our methodology shatters the perceived tradeoff between faithfulness and complexity, by showing that extremely simplified models can still capture much of the important response properties of real biological neurons,” Prof. Senn explains. “Which also provides insight into ‘the essential dendrite’, the simplest possible dendrite model that still captures all possible responses of the real dendrite from which it is derived,” Dr Wybo adds.
    Thus, in specific situations, hard bounds can be established on how much a dendrite can be simplified while retaining its important response properties. “Furthermore, our methodology greatly simplifies deriving neuron models directly from experimental data,” highlights Prof. Senn, who is also a member of the steering committee of the Center for Artificial Intelligence in Medicine (CAIM) of the University of Bern. The methodology has been compiled into NEAT (NEural Analysis Toolkit) — an open-source software toolbox that automates the simplification process. NEAT is publicly available on GitHub.
    The neurons used currently in AI applications are exceedingly simplistic compared to their biological counterparts, as they don’t include dendrites at all. Neuroscientists believe that including dendrite-like operations in artificial neural networks will lead to the next leap in AI technology. By enabling the inclusion of very simple, but very accurate dendrite models in neural networks, this new approach and toolkit provide an important step towards that goal.
    This work was supported by the Human Brain Project, by the Swiss National Science Foundation and by the European Research Council.

    Story Source:
    Materials provided by University of Bern. Note: Content may be edited for style and length.

  • Mira's last journey: Exploring the dark universe

    A team of physicists and computer scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory performed one of the five largest cosmological simulations ever. Data from the simulation will inform sky maps to aid leading large-scale cosmological experiments.
    The simulation, called the Last Journey, follows the distribution of mass across the universe over time — in other words, how gravity causes a mysterious invisible substance called “dark matter” to clump together to form larger-scale structures called halos, within which galaxies form and evolve.
    The scientists performed the simulation on Argonne’s supercomputer Mira. The same team of scientists ran a previous cosmological simulation called the Outer Rim in 2013, just days after Mira turned on. After running simulations on the machine throughout its seven-year lifetime, the team marked Mira’s retirement with the Last Journey simulation.
    The Last Journey demonstrates how far observational and computational technology has come in just seven years, and it will contribute data and insight to experiments such as the Stage-4 ground-based cosmic microwave background experiment (CMB-S4), the Legacy Survey of Space and Time (carried out by the Rubin Observatory in Chile), the Dark Energy Spectroscopic Instrument and two NASA missions, the Roman Space Telescope and SPHEREx.
    “We worked with a tremendous volume of the universe, and we were interested in large-scale structures, like regions of thousands or millions of galaxies, but we also considered dynamics at smaller scales,” said Katrin Heitmann, deputy division director for Argonne’s High Energy Physics (HEP) division.
    The code that constructed the cosmos
    The six-month span for the Last Journey simulation and major analysis tasks presented unique challenges for software development and workflow. The team adapted some of the same code used for the 2013 Outer Rim simulation with some significant updates to make efficient use of Mira, an IBM Blue Gene/Q system that was housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

    Specifically, the scientists used the Hardware/Hybrid Accelerated Cosmology Code (HACC) and its analysis framework, CosmoTools, to enable incremental extraction of relevant information at the same time as the simulation was running.
    “Running the full machine is challenging because reading the massive amount of data produced by the simulation is computationally expensive, so you have to do a lot of analysis on the fly,” said Heitmann. “That’s daunting, because if you make a mistake with analysis settings, you don’t have time to redo it.”
    The team took an integrated approach to carrying out the workflow during the simulation. HACC would run the simulation forward in time, determining the effect of gravity on matter during large portions of the history of the universe. Once HACC determined the positions of trillions of computational particles representing the overall distribution of matter, CosmoTools would step in to record relevant information — such as finding the billions of halos that host galaxies — to use for analysis during post-processing.
    “When we know where the particles are at a certain point in time, we characterize the structures that have formed by using CosmoTools and store a subset of data to make further use down the line,” said Adrian Pope, physicist and core HACC and CosmoTools developer in Argonne’s Computational Science (CPS) division. “If we find a dense clump of particles, that indicates the location of a dark matter halo, and galaxies can form inside these dark matter halos.”
    The scientists repeated this interwoven process — where HACC moves particles and CosmoTools analyzes and records specific data — until the end of the simulation. The team then used features of CosmoTools to determine which clumps of particles were likely to host galaxies. For reference, around 100 to 1,000 particles represent single galaxies in the simulation.
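    This interleaved pattern can be pictured with a short sketch. The Python snippet below is purely illustrative: step_gravity and find_halos are hypothetical stand-ins for HACC's gravity solver and CosmoTools' halo finder, which in reality run in parallel across the full machine.

        import numpy as np

        def step_gravity(positions, velocities, dt):
            """Stand-in for the gravity solver: advance particles one time step."""
            # A real solver would compute gravitational forces here.
            return positions + velocities * dt, velocities

        def find_halos(positions, linking_length=0.2):
            """Stand-in for the halo finder: return dense clumps of particles."""
            # A real analysis would run e.g. a friends-of-friends search.
            return []

        def run_simulation(positions, velocities, n_steps, analysis_every, dt=1.0):
            """Advance the particles, analyzing on the fly and storing only
            reduced data products instead of full particle snapshots."""
            stored_catalogs = []
            for step in range(n_steps):
                positions, velocities = step_gravity(positions, velocities, dt)
                if step % analysis_every == 0:
                    stored_catalogs.append(find_halos(positions))
            return stored_catalogs

        rng = np.random.default_rng(1)
        pos = rng.uniform(0.0, 100.0, size=(1000, 3))   # toy particle positions
        vel = rng.normal(0.0, 1.0, size=(1000, 3))
        catalogs = run_simulation(pos, vel, n_steps=100, analysis_every=10)
        print("stored", len(catalogs), "reduced catalogs instead of raw snapshots")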

    “We would move particles, do analysis, move particles, do analysis,” said Pope. “At the end, we would go back through the subsets of data that we had carefully chosen to store and run additional analysis to gain more insight into the dynamics of structure formation, such as which halos merged together and which ended up orbiting each other.”
    Using the optimized workflow with HACC and CosmoTools, the team ran the simulation in half the expected time.
    Community contribution
    The Last Journey simulation will provide data necessary for other major cosmological experiments to use when comparing observations or drawing conclusions about a host of topics. These insights could shed light on topics ranging from cosmological mysteries, such as the role of dark matter and dark energy in the evolution of the universe, to the astrophysics of galaxy formation across the universe.
    “This huge data set they are building will feed into many different efforts,” said Katherine Riley, director of science at the ALCF. “In the end, that’s our primary mission — to help high-impact science get done. When you’re able to not only do something cool, but to feed an entire community, that’s a huge contribution that will have an impact for many years.”
    The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing models and the development of new ones, impacting both ongoing and upcoming cosmological surveys.
    “We are not trying to match any specific structures in the actual universe,” said Pope. “Rather, we are making statistically equivalent structures, meaning that if we looked through our data, we could find locations where galaxies the size of the Milky Way would live. But we can also use a simulated universe as a comparison tool to find tensions between our current theoretical understanding of cosmology and what we’ve observed.”
    Looking to exascale
    “Thinking back to when we ran the Outer Rim simulation, you can really see how far these scientific applications have come,” said Heitmann, who performed Outer Rim in 2013 with the HACC team and Salman Habib, CPS division director and Argonne Distinguished Fellow. “It was awesome to run something substantially bigger and more complex that will bring so much to the community.”
    As Argonne works towards the arrival of Aurora, the ALCF’s upcoming exascale supercomputer, the scientists are preparing for even more extensive cosmological simulations. Exascale computing systems will be able to perform a billion billion calculations per second — 50 times faster than many of the most powerful supercomputers operating today.
    “We’ve learned and adapted a lot during the lifespan of Mira, and this is an interesting opportunity to look back and look forward at the same time,” said Pope. “When preparing for simulations on exascale machines and a new decade of progress, we are refining our code and analysis tools, and we get to ask ourselves what we weren’t doing because of the limitations we have had until now.”
    The Last Journey was a gravity-only simulation, meaning it did not consider interactions such as gas dynamics and the physics of star formation. Gravity is the major player in large-scale cosmology, but the scientists hope to incorporate other physics in future simulations to observe the differences they make in how matter moves and distributes itself through the universe over time.
    “More and more, we find tightly coupled relationships in the physical world, and to simulate these interactions, scientists have to develop creative workflows for processing and analyzing,” said Riley. “With these iterations, you’re able to arrive at your answers — and your breakthroughs — even faster.”

  • Smart algorithm cleans up images by searching for clues buried in noise

    To enter the world of the fantastically small, the main currency is either a beam of light or a beam of electrons.
    Strong beams, which yield clearer images, are damaging to specimens. On the other hand, weak beams can give noisy, low-resolution images.
    In a new study published in Nature Machine Intelligence, researchers at Texas A&M University describe a machine learning-based algorithm that can reduce graininess in low-resolution images and reveal new details that were otherwise buried within the noise.
    “Images taken with low-powered beams can be noisy, which can hide interesting and valuable visual details of biological specimens,” said Shuiwang Ji, associate professor in the Department of Computer Science and Engineering. “To solve this problem, we use a pure computational approach to create higher-resolution images, and we have shown in this study that we can improve the resolution up to an extent very similar to what you might obtain using a high beam.”
    Ji added that unlike other denoising algorithms that can only use information coming from a small patch of pixels within a low-resolution image, their smart algorithm can identify pixel patterns that may be spread across the entire noisy image, increasing its efficacy as a denoising tool.
    Instead of solely relying on microscope hardware to improve the images’ resolution, a technique known as augmented microscopy uses a combination of software and hardware to enhance the quality of images. Here, a regular image taken on a microscope is superimposed on a computer-generated digital image. This image processing method holds promise not just to cut down costs but also to automate medical image analysis and reveal details that the eye can sometimes miss.

    Currently, a type of software based on a machine-learning approach called deep learning has been shown to be effective at removing blurriness and noise from images. These algorithms can be visualized as consisting of many interconnected layers or processing steps that take in a low-resolution input image and generate a high-resolution output image.
    In conventional deep-learning-based image processing techniques, the number of layers and the connections between them decide how many pixels in the input image contribute to the value of a single pixel in the output image. This number is fixed once the deep-learning algorithm has been trained and is ready to denoise new images. However, Ji said that fixing the number of contributing input pixels, technically called the receptive field, limits the performance of the algorithm.
    “Imagine a piece of specimen having a repeating motif, like a honeycomb pattern. Most deep-learning algorithms only use local information to fill in the gaps in the image created by the noise,” Ji said. “But this is inefficient because the algorithm is, in essence, blind to the repeating pattern within the image since the receptive field is fixed. Instead, deep-learning algorithms need to have adaptive receptive fields that can capture the information in the overall image structure.”
    To overcome this hurdle, Ji and his students developed another deep-learning algorithm that can dynamically change the size of the receptive field. In other words, unlike earlier algorithms that can only aggregate information from a small number of pixels, their new algorithm, called global voxel transformer networks (GVTNets), can pool information from a larger area of the image if required.
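    The difference between a fixed local receptive field and image-wide aggregation can be illustrated with a toy NumPy example. The sketch below is not the GVTNets architecture; it simply contrasts a 3x3 mean filter with a crude, non-local, similarity-weighted average that may draw on pixels anywhere in the image.

        import numpy as np

        def local_denoise(img, k=1):
            """Fixed receptive field: each output pixel sees only a (2k+1)^2 patch."""
            padded = np.pad(img, k, mode="reflect")
            out = np.zeros_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    out[i, j] = padded[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()
            return out

        def global_denoise(img, patch=1, h2=0.5):
            """Adaptive aggregation: average similar patches from anywhere in the
            image (a toy non-local-means-style scheme, not the authors' network)."""
            padded = np.pad(img, patch, mode="reflect")
            H, W = img.shape
            patches = np.array([padded[i:i + 2 * patch + 1, j:j + 2 * patch + 1].ravel()
                                for i in range(H) for j in range(W)])
            flat, out = img.ravel(), np.zeros(H * W)
            for idx in range(H * W):
                d2 = ((patches - patches[idx]) ** 2).mean(axis=1)  # patch similarity
                w = np.exp(-d2 / h2)
                out[idx] = (w * flat).sum() / w.sum()
            return out.reshape(H, W)

        rng = np.random.default_rng(0)
        clean = np.indices((16, 16)).sum(axis=0) % 2 * 1.0   # repeating checkerboard motif
        noisy = clean + 0.4 * rng.normal(size=clean.shape)
        print("local error: ", round(np.abs(local_denoise(noisy) - clean).mean(), 3))
        print("global error:", round(np.abs(global_denoise(noisy) - clean).mean(), 3))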
    When they analyzed their algorithm’s performance against other deep-learning software, the researchers found that GVTNets required less training data and could denoise images better than other deep-learning algorithms. Furthermore, the high-resolution images obtained were comparable to those obtained using a high-energy light beam.
    The researchers noted that their new algorithm can easily be adapted to other applications in addition to denoising, such as label-free fluorescence imaging and 3D to 2D conversions for computer graphics.
    “Our research contributes to the emerging area of smart microscopy, where artificial intelligence is seamlessly integrated into the microscope,” Ji said. “Deep-learning algorithms such as ours will allow us to potentially transcend physical limits posed by light in a way that was not possible before. This can be extremely valuable for a myriad of applications, including clinical ones, like estimating the stage of cancer progression and distinguishing between cell types for disease prognosis.”
    This research is funded by the National Science Foundation, the National Institutes of Health and the Defense Advanced Research Projects Agency.

    Story Source:
    Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.

  • Pace of prehistoric human innovation could be revealed by 'linguistic thermometer'

    Multi-disciplinary researchers at The University of Manchester have helped develop a powerful physics-based tool to map the pace of language development and human innovation over thousands of years — even stretching into pre-history before records were kept.
    Tobias Galla, a professor in theoretical physics, and Dr Ricardo Bermúdez-Otero, a specialist in historical linguistics, from The University of Manchester, have come together as part of an international team to share their diverse expertise to develop the new model, revealed in a paper entitled ‘Geospatial distributions reflect temperatures of linguistic features’, authored by Henri Kauhanen, Deepthi Gopal, Tobias Galla and Ricardo Bermúdez-Otero, and published in the journal Science Advances.
    Professor Galla has applied statistical physics — usually used to map atoms or nanoparticles — to help build a mathematically-based model that responds to the evolutionary dynamics of language. Essentially, the forces that drive language change can operate across thousands of years and leave a measurable “geospatial signature,” determining how languages of different types are distributed over the surface of the Earth.
    Dr Bermúdez-Otero explained: “In our model each language has a collection of properties or features and some of those features are what we describe as ‘hot’ or ‘cold’.
    “So, if a language puts the object before the verb, then it is relatively likely to get stuck with that order for a long period of time — so that’s a ‘cold’ feature. In contrast, markers like the English article ‘the’ come and go a lot faster: they may be here in one historical period, and be gone in the next. In that sense, definite articles are ‘hot’ features.
    “The striking thing is that languages with ‘cold’ properties tend to form big clumps, whereas languages with ‘hot’ properties tend to be more scattered geographically.”
    This method therefore works like a thermometer, enabling researchers to retrospectively tell whether one linguistic property is more prone to change in historical time than another. This modelling could also provide a similar benchmark for the pace of change in other social behaviours or practices over time and space.
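    The intuition can be reproduced with a toy spatial simulation, which is not the published model: sites on a grid copy a binary feature from a random neighbour and occasionally change it spontaneously. A low change rate (a "cold" feature) leaves large clumps of agreement, whereas a high rate (a "hot" feature) leaves a scattered pattern.

        import numpy as np

        def simulate_feature(flip_rate, size=50, steps=150_000, seed=0):
            """Toy voter-model dynamics: a random site either copies a random
            neighbour or, with probability flip_rate, changes spontaneously."""
            rng = np.random.default_rng(seed)
            grid = rng.integers(0, 2, size=(size, size))
            moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
            for _ in range(steps):
                i, j = rng.integers(0, size, 2)
                if rng.random() < flip_rate:
                    grid[i, j] = 1 - grid[i, j]                          # spontaneous change
                else:
                    di, dj = moves[rng.integers(4)]
                    grid[i, j] = grid[(i + di) % size, (j + dj) % size]  # copy a neighbour
            return grid

        def clumpiness(grid):
            """Fraction of neighbouring site pairs that agree: higher means bigger clumps."""
            return 0.5 * ((grid == np.roll(grid, 1, 0)).mean() +
                          (grid == np.roll(grid, 1, 1)).mean())

        cold = simulate_feature(flip_rate=0.001)   # a feature that rarely changes
        hot = simulate_feature(flip_rate=0.2)      # a feature that changes often
        print("cold feature clumpiness:", round(clumpiness(cold), 3))
        print("hot feature clumpiness: ", round(clumpiness(hot), 3))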
    “For example, suppose that you have a map showing the spatial distribution of some variable cultural practice for which you don’t have any historical records — this could be anything, like different rules on marriage or on the inheritance of possessions,” added Dr Bermúdez-Otero.
    “Our method could, in principle, be used to ascertain whether one practice changes in the course of historical time faster than another, i.e. whether people are more innovative in one area than in another, just by looking at how the present-day variation is distributed in space.”
    The source data for the linguistic modelling comes from present-day languages, and the team relied on The World Atlas of Language Structures (WALS), which records information on 2,676 contemporary languages.
    Professor Galla explained: “We were interested in emergent phenomena, such as how large-scale effects, for example patterns in the distribution of language features, arise from relatively simple interactions. This is a common theme in complex systems research.
    “I was able to help with my expertise in the mathematical tools we used to analyse the language model and in simulation techniques. I also contributed to setting up the model in the first place, and by asking questions that a linguist would perhaps not ask in the same way.”

    Story Source:
    Materials provided by University of Manchester. Note: Content may be edited for style and length.

  • To find the right network model, compare all possible histories

    Two family members test positive for COVID-19 — how do we know who infected whom? In a perfect world, network science could provide a probable answer to such questions. It could also tell archaeologists how a shard of Greek pottery came to be found in Egypt, or help evolutionary biologists understand how a long-extinct ancestor metabolized proteins.
    As the world is, scientists rarely have the historical data they need to see exactly how nodes in a network became connected. But a new paper published in Physical Review Letters offers hope for reconstructing the missing information, using a new method to evaluate the rules that generate network models.
    “Network models are like impressionistic pictures of the data,” says physicist George Cantwell, one of the study’s authors and a postdoctoral researcher at the Santa Fe Institute. “And there have been a number of debates about whether the real networks look enough like these models for the models to be good or useful.”
    Normally when researchers try to model a growing network — say, a group of individuals infected with a virus — they build up the model network from scratch, following a set of mathematical instructions to add a few nodes at a time. Each node could represent an infected individual, and each edge a connection between those individuals. When the clusters of nodes in the model resemble the data drawn from the real-world cases, the model is considered to be representative — a problematic assumption when the same pattern can result from different sets of instructions.
    Cantwell and co-authors Guillaume St-Onge (Université Laval, Quebec) and Jean-Gabriel Young (University of Vermont) wanted to bring a shot of statistical rigor to the modeling process. Instead of comparing features from a snapshot of the network model against the features from the real-world data, they developed methods to calculate the probability of each possible history for a growing network. Given competing sets of rules, which could represent real-world processes such as contact, droplet, or airborne transmission, the authors can apply their new tool to determine the probability of specific rules resulting in the observed pattern.
    “Instead of just asking ‘does this picture look more like the real thing?'” Cantwell says, “We can now ask material questions like, ‘did it grow by these rules?'” Once the most likely network model is found, it can be rewound to answer questions such as who was infected first.
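    For a very small network, the idea of scoring competing growth rules by summing over possible histories can be brute-forced. The sketch below is not the authors' algorithm; it enumerates every arrival order of a tiny tree and totals the probability of the observed graph under two hypothetical rules, uniform attachment and (shifted) preferential attachment.

        from itertools import permutations

        def history_probability(order, adj, rule):
            """Probability of one growth history in which each new node attaches
            by a single edge to one node that is already present. Factors common
            to both rules (e.g. which label arrives next) are omitted."""
            present = [order[0]]
            degree = {order[0]: 0}
            prob = 1.0
            for v in order[1:]:
                earlier = [u for u in adj[v] if u in degree]
                if len(earlier) != 1:            # inconsistent with single-edge growth
                    return 0.0
                parent = earlier[0]
                if rule == "uniform":
                    prob *= 1.0 / len(present)
                else:                            # preferential, shifted by +1 to avoid zero degrees
                    prob *= (degree[parent] + 1) / sum(degree[u] + 1 for u in present)
                degree[v] = 1
                degree[parent] += 1
                present.append(v)
            return prob

        def graph_probability(adj, rule):
            """Total probability of the observed graph, summed over all histories."""
            return sum(history_probability(order, adj, rule)
                       for order in permutations(adj))

        star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}   # hub-and-spokes
        path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}   # chain
        for name, g in [("star", star), ("path", path)]:
            for rule in ["uniform", "preferential"]:
                print(name, rule, round(graph_probability(g, rule), 5))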
    In their current paper, the authors demonstrate their algorithm on three simple networks that correspond to previously-documented datasets with known histories. They are now working to apply the tool to more complicated networks, which could find applications across any number of complex systems.

    Story Source:
    Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

  • Cell 'bones' mystery solved with supercomputers

    Our cells are filled with ‘bones,’ in a sense. Thin, flexible protein strands called actin filaments help support and move around the bulk of the cells of eukaryotes, which includes all plants and animals. Always on the go, actin filaments constantly grow, shrink, bind with other things, and branch off when cells move.
    Supercomputer simulations have helped solve the mystery of how actin filaments polymerize, or chain together. This fundamental research could be applied to treatments to stop cancer spread, develop self-healing materials, and more.
    “The major findings of our paper explain a property of actin filaments that has been known for about 50 years or so,” said Vilmos Zsolnay, co-author of a study published in November 2020 in the Proceedings of the National Academy of Sciences.
    Researchers have known for decades that one end of the actin filament is very different from the other end, and that this polarization is needed for the actin filament to function like it needs to. The mystery has been how the filaments used this polarization to grow, shrink, and bind.
    “One end of the actin filament elongates much, much faster than the other at a physiological actin protein concentration,” Zsolnay said. “What our study shows is that there is a structural basis for the different polymerization kinetics.” Vilmos Zsolnay is a graduate student in the Department of Biophysical Sciences at the University of Chicago, developing simulations in the group of Gregory Voth.
    “Actin filaments are what give the cell its shape and many other properties,” said Voth, the corresponding author of the study and the Haig P. Papazian Distinguished Service Professor at the University of Chicago. Over time, a cell’s shape changes in a dynamic process.

    “When a cell wants to move forward, for instance, it will polymerize actin in a particular direction. Those actin filaments then push on the cell membrane, which allows the cell to move in that direction,” Voth said. What’s more, other proteins in the cell help align the actin ends that polymerize, or build up, more quickly to push in the exact same direction, directing the cell’s path.
    “We find that one end of the filament has a very loose connection between actin subunits,” Zsolnay said. “That’s the fast end. The loose connection within the actin polymer allows incoming actin monomers to have access to a binding site, such that it can make a new connection quickly and the polymerization reaction can continue.” He contrasted this to the slow end with very tight connections between actin subunits that block an incoming monomer’s ability to polymerize onto that end.
    Zsolnay developed the study’s all-atom molecular dynamics simulation with the Voth Group on the Midway2 computing cluster at the University of Chicago Research Computing Center. He used GROMACS and NAMD software to investigate the equilibrium conformations of the subunits at the filament ends. “This was one of my first projects using a high performance computing cluster,” he said.
    XSEDE, the NSF-funded Extreme Science and Engineering Discovery Environment, then awarded the scientists allocations on the Stampede2 supercomputer at the Texas Advanced Computing Center. “It was very straightforward to test the code on our local cluster here, and then drop a couple of files onto the machines at Stampede2 to start running again within a day,” Zsolnay said.
    “The high performance computing clusters of Stampede2 are really what allowed this work to take place,” he added. “They were able to reach the time and length scales in our simulations that we were interested in. Without the resources provided by XSEDE, we would not have been able to analyze as large of a dataset or have had as much confidence in our findings.”
    They ran nine simulations, each of roughly a million atoms propagated for about a microsecond. “There are three nucleotide states that we were interested in — the ATP, the ADP plus the gamma phosphate, and once phosphate is released, it’s in an ADP nucleotide state,” Zsolnay said.

    The simulations showed the smoking gun of the mystery — distinct equilibrium conformations between the barbed end and the pointed end subunits, which led to meaningful differences in the contacts between neighboring actin monomer subunits.
    An actin monomer in solution has a conformation that’s a little wider than when it’s part of a longer actin polymer. The previous model, said Zsolnay, assumed that the wide shape transitions into the flattened shape once it polymerizes, almost discretely.
    “What we saw when we started the filament with all of the subunits in the flattened state was that the ones at the end relaxed to resemble the monomeric state, characterized by a wider shape,” Zsolnay explained. “At both of the ends, that same mechanism of the widening of the actin molecule led to very different contacts between subunits.” At the fast, barbed end there was a separation between the two molecules, whereas at the pointed end there was a very tight connection between them.
    Research into actin filaments could find wide-ranging applications, such as improving therapeutics. “What’s in the news right now is coronavirus,” Voth said, referring to the role of the innate immune system. It involves white blood cells called neutrophils that gobble up bacteria or other pathogens in one’s blood stream. “What’s critical to their ability to sniff out and seek pathogens is their ability to move through an environment and find the pathogens wherever they are. In the immune response, it’s very important,” he added.
    And then there’s metastatic cancer, where one or a couple of tumor cells will start to migrate, spreading to other parts of the body. “If you could disrupt that in some way, or make it so that it’s not as reliable for your cancerous cells, then you could make a cancer treatment based off of that information,” Voth said.
    “One angle that Prof. Voth and I find particularly interesting is from a materials science standpoint,” said Zsolnay. The amino acids in the actin molecule are roughly the same throughout plants, animals, and yeasts. “That gives a hint to us that there’s something special about the material properties of actin molecules that can’t be reproduced using a different set of amino acids,” he added.
    This understanding could help advance development of biomimetic materials that repair themselves. “You can imagine, in the future, a new type of material that heals itself. For instance, if a bucket gets a hole in it, the material could sense that a wound has occurred and heal itself, just like human tissue would,” Zsolnay added.
    Said Voth: “People are really very keen on biomimetic materials — things that behave like these polymers. Our work is explaining a critical thing, which is the polarization of actin filaments.”

  • AI used to predict early symptoms of schizophrenia in relatives of patients

    University of Alberta researchers have taken another step forward in developing an artificial intelligence tool to predict schizophrenia by analyzing brain scans.
    In recently published research, the tool was used to analyze functional magnetic resonance images of 57 healthy first-degree relatives (siblings or children) of schizophrenia patients. It accurately identified the 14 individuals who scored highest on a self-reported schizotypal personality trait scale.
    Schizophrenia, which affects 300,000 Canadians, can cause delusions, hallucinations, disorganized speech, trouble with thinking and lack of motivation, and is usually treated with a combination of drugs, psychotherapy and brain stimulation. First-degree relatives of patients have up to a 19 per cent risk of developing schizophrenia during their lifetime, compared with the general population risk of less than one per cent.
    “Our evidence-based tool looks at the neural signature in the brain, with the potential to be more accurate than diagnosis by the subjective assessment of symptoms alone,” said lead author Sunil Kalmady Vasu, senior machine learning specialist in the Faculty of Medicine & Dentistry.
    Kalmady Vasu noted that the tool is designed to be a decision support tool and would not replace diagnosis by a psychiatrist. He also pointed out that while having schizotypal personality traits may cause people to be more vulnerable to psychosis, it is not certain that they will develop full-blown schizophrenia.
    “The goal is for the tool to help with early diagnosis, to study the disease process of schizophrenia and to help identify symptom clusters,” said Kalmady Vasu, who is also a member of the Alberta Machine Intelligence Institute.
    The tool, dubbed EMPaSchiz (Ensemble algorithm with Multiple Parcellations for Schizophrenia prediction), was previously used to predict a diagnosis of schizophrenia with 87 per cent accuracy by examining patient brain scans. It was developed by a team of researchers from U of A and the National Institute of Mental Health and Neurosciences in India. The team also includes three members of the U of A’s Neuroscience and Mental Health Institute — computing scientist and Canada CIFAR AI Chair Russ Greiner from the Faculty of Science, and psychiatrists Andrew Greenshaw and Serdar Dursun, who are authors on the latest paper as well.
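    The general flavour of an ensemble over several parcellations can be sketched with scikit-learn. The snippet below uses random stand-in feature matrices and a simple soft-voting average of per-parcellation classifiers; the published EMPaSchiz pipeline is considerably more elaborate.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical stand-in data: one feature matrix per brain parcellation
        # scheme (e.g. region-wise fMRI features) for the same set of subjects.
        rng = np.random.default_rng(0)
        n_subjects = 100
        features = {
            "parcellation_A": rng.normal(size=(n_subjects, 50)),
            "parcellation_B": rng.normal(size=(n_subjects, 120)),
            "parcellation_C": rng.normal(size=(n_subjects, 200)),
        }
        labels = rng.integers(0, 2, size=n_subjects)   # toy labels: 1 = patient, 0 = control

        # Fit one classifier per parcellation, then average predicted probabilities.
        models = {name: make_pipeline(StandardScaler(),
                                      LogisticRegression(max_iter=1000)).fit(X, labels)
                  for name, X in features.items()}
        probas = np.mean([models[name].predict_proba(X)[:, 1]
                          for name, X in features.items()], axis=0)
        predictions = (probas >= 0.5).astype(int)
        print("ensemble accuracy on the toy training data:",
              round((predictions == labels).mean(), 2))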
    Kalmady Vasu said the next steps for the research will be to test the tool’s accuracy on non-familial individuals with schizotypal traits, and to track assessed individuals over time to learn whether they develop schizophrenia later in life.
    Kalmady Vasu is also using the same principles to develop algorithms to predict outcomes such as mortality and readmissions for heart failure in cardiovascular patients through the Canadian VIGOUR Centre.
    “Severe mental illness and cardiovascular problems cause functional disability and impair quality of life,” Kalmady Vasu said. “It is very important to develop objective, evidence-based tools for these complex disorders that afflict humankind.”

    Story Source:
    Materials provided by University of Alberta Faculty of Medicine & Dentistry. Original written by Gillian Rutherford. Note: Content may be edited for style and length.

  • Anonymous cell phone data can quantify behavioral changes for flu-like illnesses

    Cell phone data that is routinely collected by telecommunications providers can reveal changes of behavior in people who are diagnosed with a flu-like illness, while also protecting their anonymity, a new study finds. The Proceedings of the National Academy of Sciences (PNAS) published the research, led by computer scientists at Emory University and based on data drawn from a 2009 outbreak of H1N1 flu in Iceland.
    “To our knowledge, our project is the first major, rigorous study to individually link passively-collected cell phone metadata with actual public health data,” says Ymir Vigfusson, assistant professor in Emory University’s Department of Computer Science and a first author of the study. “We’ve shown that it’s possible to do so without compromising privacy and that our method could potentially provide a useful tool to help monitor and control infectious disease outbreaks.”
    The researchers collaborated with a major cell phone service provider in Iceland, along with public health officials of the island nation. They analyzed data for more than 90,000 encrypted cell phone numbers, which represents about a quarter of Iceland’s population. They were permitted to link the encrypted cell phone metadata to 1,400 anonymous individuals who received a clinical diagnosis of a flu-like illness during the H1N1 outbreak.
    “The individual linkage is key,” Vigfusson says. “Many public-health applications for smartphone data have emerged during the COVID-19 pandemic but tend to be based around correlations. In contrast, we can definitively measure the differences in routine behavior between the diagnosed group and the rest of the population.”
    The results showed, on average, those who received a flu-like diagnosis changed their cell phone usage behavior a day before their diagnosis and the two-to-four days afterward: They made fewer calls, from fewer unique locations. On average, they also spent longer time than usual on the calls that they made on the day following their diagnosis.
    The study, which began long before the COVID-19 pandemic, took 10 years to complete. “We were going into new territory and we wanted to make sure we were doing good science, not just fast science,” Vigfusson says. “We worked hard and carefully to develop protocols to protect privacy and conducted rigorous analyses of the data.”
    Vigfusson is an expert on data security and on developing software and algorithms that work at scale.

    He shares first authorship of the study with two of his former students: Thorgeir Karlsson, a graduate student at Reykjavik University who spent a year at Emory working on the project, and Derek Onken, a Ph.D. student in the Computer Science department. Senior author Leon Danon — from the University of Bristol, and the Alan Turing Institute of the British Library — conceived of the study.
    While only about 40 percent of humanity has access to the Internet, cell phone ownership is ubiquitous, even in lower and middle-income countries, Vigfusson notes. And cell phone service providers routinely collect billing data that provide insights into the routine behaviors of a population, he adds.
    “The COVID pandemic has raised awareness of the importance of monitoring and measuring the progression of an infectious disease outbreak, and how it is essentially a race against time,” Vigfusson says. “More people also realize that there will likely be more pandemics during our lifetimes. It is vital to have the right tools to give us the best possible information quickly about the state of an epidemic outbreak.”
    Privacy concerns are a major reason why cell phone data has not been linked to public health data in the past. For the PNAS paper, the researchers developed a painstaking protocol to minimize these concerns.
    The cell phone numbers were encrypted, and their owners were not identified by name, but by a unique numerical identifier not revealed to the researchers. These unique identifiers were used to link the cell phone data to de-identified health records.

    “We were able to maintain anonymity for individuals throughout the process,” Vigfusson says. “The cell phone provider did not learn about any individual’s health diagnosis and the health department did not learn about any individual’s phone behaviors.”
    The study encompassed 1.5 billion call-record data points, including calls made, the dates of the calls, the cell tower location where each call originated and the duration of the calls. The researchers linked these data to clinical diagnoses of a flu-like illness made by health providers and recorded in a central database. Laboratory confirmation of influenza was not required.
    The analyses of the data focused on 29 days surrounding each clinical diagnosis, and looked at changes in mobility, the number of calls made and the duration of the calls. They measured these same factors during the same time period for location-matched controls.
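    The kind of per-day aggregation described here can be sketched with pandas. The column names and toy records below are assumptions for illustration only, not the study's actual schema or data.

        import pandas as pd

        # Hypothetical call records: one row per call, with an anonymized user id,
        # timestamp, originating tower id and call duration in seconds.
        calls = pd.DataFrame({
            "user_id": ["u1", "u1", "u1", "u1", "u2", "u2"],
            "timestamp": pd.to_datetime([
                "2009-11-01 09:12", "2009-11-01 18:40", "2009-11-03 10:05",
                "2009-11-04 20:30", "2009-11-02 08:15", "2009-11-05 13:45",
            ]),
            "tower_id": ["t1", "t2", "t1", "t3", "t7", "t7"],
            "duration_s": [60, 300, 45, 120, 90, 30],
        })
        # Hypothetical de-identified diagnosis dates, linked by the same identifier.
        diagnoses = pd.DataFrame({
            "user_id": ["u1", "u2"],
            "diagnosis_date": pd.to_datetime(["2009-11-03", "2009-11-04"]),
        })

        df = calls.merge(diagnoses, on="user_id")
        df["day_offset"] = (df["timestamp"].dt.normalize() - df["diagnosis_date"]).dt.days

        # Daily behaviour per user, indexed relative to the diagnosis day (day 0).
        daily = df.groupby(["user_id", "day_offset"]).agg(
            n_calls=("duration_s", "size"),
            unique_locations=("tower_id", "nunique"),
            mean_duration_s=("duration_s", "mean"),
        )
        print(daily)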
    “Even though individual cell phones generated only a few data points per day, we were able to see a pattern where the population was behaving differently near the time they were diagnosed with a flu-like illness,” Vigfusson says.
    While the findings are significant, they represent only a first step for the possible broader use of the method, Vigfusson adds. The current work was limited to the unique environment of Iceland: An island with only one port of entry and a fairly homogenous, affluent and small population. It was also limited to a single infectious disease, H1N1, and those who received a clinical diagnosis for a flu-like illness.
    “Our work contributes to the discussion of what kinds of anonymous data linkages might be useful for public health monitoring purposes,” Vigfusson says. “We hope that others will build on our efforts and study whether our method can be adapted for use in other places and for other infectious diseases.”
    The work was funded by the Icelandic Center for Research, Emory University, the National Science Foundation, the Leverhulme Trust, the Alan Turing Institute, the Medical Research Council and a hardware donation from NVIDIA Corporation.