More stories

  • ‘Liking’ an article online may mean less time spent reading it

    When people have the option to click “like” on a media article they encounter online, they spend less time actually reading the text, a new study suggests.
    In a lab experiment, researchers found that people spent about 7 percent less time reading articles on controversial topics when they had the opportunity to upvote or downvote them than if there was no interactive element.
    The finding was strongest when an article agreed with the reader’s point of view.
    The results suggest that the ability to interact with online content may change how we consume it, said Daniel Sude, who led the work while earning a doctoral degree in communication at The Ohio State University.
    “When people are voting whether they like or dislike an article, they’re expressing themselves. They are focused on their own thoughts and less on the content in the article,” Sude said.
    “It is like the old phrase, ‘If you’re talking, you’re not listening.’ People were talking back to the articles without listening to what they had to say.”
    In another finding, people’s existing views on controversial topics like gun control or abortion became stronger after voting on articles that agreed with their views, even when they spent less time reading them.
    “Just having the ability to like an article you agreed with was enough to amplify your attitude,” said study co-author Silvia Knobloch-Westerwick, professor of communication at Ohio State.
    “You didn’t need to read the article carefully, you didn’t have to learn anything new, but you are more committed to what you already believed.”
    The study, also co-authored by former Ohio State doctoral student George Pearson, was published online recently in the journal Computers in Human Behavior and will appear in the January 2021 print edition.
    The study involved 235 college students. Before the study, the researchers measured their views on four controversial topics used in the experiment: abortion, welfare benefits, gun control and affirmative action.
    Participants were then shown four versions of an online news website created for the study, each on one of the controversial topics. Each webpage showed headlines and first paragraphs for four articles, two with a conservative slant and two with a liberal slant. Participants could click on the headlines to read the full stories.
    Two versions of the websites had a banner that said, “Voting currently enabled for this topic,” and each article had an up arrow or down arrow that participants could click on to express their opinion.
    The other two websites had a banner that said, “Voting currently disabled for this topic.”
    Participants were given three minutes to browse each website as they wished, although they were not told about the time limit. The researchers measured the time participants spent on each story and whether they voted if they had the opportunity.
    As expected, for each website, participants spent more time reading articles that agreed with their views (about 1.5 minutes) than opposing views (less than a minute).
    But they spent about 12 seconds less time reading the articles they agreed with if they could vote.
    In addition, people voted on about 12 percent of the articles without ever clicking through to read them, the study showed.
    “Rather than increasing engagement with website content, having the ability to interact may actually distract from it,” Sude said.
    The researchers measured the participants’ views on the four topics again after they read the websites to see if their attitudes had changed at all.
    Results showed that when participants were not able to vote, time spent reading articles that agreed with their original views strengthened these views. The more time they spent reading, the stronger their views became.
    When participants were able to vote, their voting behavior was as influential as their reading time. Even if they stopped reading and upvoted an article, their attitudes still became stronger.
    “It is important that people’s views still became stronger by just having the opportunity to vote,” Knobloch-Westerwick said.
    “When they had the opportunity to vote on the articles, their attitudes were getting more extreme with limited or no input from the articles themselves. They were in an echo chamber of one.”
    Sude said there is a better way to interact with online news.
    “Don’t just click the like button. Read the article and leave thoughtful comments that are more than just a positive or negative rating,” he said.
    “Say why you liked or disliked the article. The way we express ourselves is important and can influence the way we think about an issue.”

  • The secretive networks used to move money offshore

    In 2016, the world’s largest-ever data leak, dubbed the “Panama Papers,” exposed a vast global network of people, including celebrities and world leaders, who used offshore tax havens, anonymous transactions through intermediaries and shell corporations to hide their wealth, grow their fortunes and avoid taxes.
    Researchers at the USC Viterbi School of Engineering have now conducted a deep analysis of the entities, and their interrelationships, originally revealed in the 11.5 million files leaked to the International Consortium of Investigative Journalists. They found uniquely fragmented network behavior, vastly different from that of traditional social or organizational networks, which helps explain why these systems of transactions and associations are so robust and difficult to infiltrate or take down. The work was published in Applied Network Science.
    Lead author Mayank Kejriwal, an assistant professor in the Daniel J. Epstein Department of Industrial and Systems Engineering and at USC’s Information Sciences Institute, studies complex (typically social) systems such as online trafficking markets using computational methods and network science. He said the team’s aim was to study the Panama Papers network as a whole, much as one might study a social network like Facebook, to understand what its behavior reveals about how money is moved.
    “In general, in any social network like LinkedIn or Facebook, there is something called ‘Small World Phenomenon’, which means that you’re only ever around six people away from anyone in the world,” Kejriwal said.
    “For instance, if you want to get from yourself to Bill Gates, on average you would be around six connections away,” he said.
    However, the team discovered that the Panama Papers network was about as far removed from this traditional social or organizational network behavior as it could possibly be. Instead of a network of highly integrated connections, the researchers discovered a series of secretive disconnected fragments, with entities, intermediaries and individuals involved in transactions and corporations exhibiting very few connections with other entities in the system.
    “It was really unusual. The degree of fragmentation is something I have never seen before,” said Kejriwal. “I’m not aware of any other network that has this kind of fragmentation.”
    “So (without any documentation or leak), if you wanted to find the chain between one organization and another organization, you would not be able to find it, because the chances are that there is no chain — it’s completely disconnected,” Kejriwal said.
    Most social, friendship or organizational networks contain many triangular structures, a pattern known as the ‘friend of a friend’ phenomenon.
    “The simple notion is that a friend of a friend is also a friend,” Kejriwal said. “And we can measure that by counting the number of triangles in the network.”
    However, the team discovered that this triangular structure was not a feature of the Panama Papers network.
    “It turns out that not only is it not prevalent, but it’s far less prevalent than even in a random network,” Kejriwal said. “If you literally randomly connect things in a haphazard fashion and then you count the triangles in that network, this network is even sparser than that.” He added, “Compared to a random network, in this type of network, links between financial entities are scrambled until they are essentially meaningless (so that anyone can be transacting with anyone else).”
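    The triangle counts and clustering the researchers describe can be measured with standard network-science tooling. The sketch below is illustrative only (invented data, not the study’s code or the ICIJ dataset): it builds a deliberately fragmented graph of star-shaped components, the shape one intermediary linked to a few entities would produce, and compares its triangles, components and clustering against a random graph of the same size and edge count.

    ```python
    # Illustrative comparison of a fragmented "offshore-style" graph with a
    # random graph of equal size and density, using networkx. At this sparsity
    # both triangle counts are tiny; the study's point was that the real
    # network fell below even the random baseline.
    import networkx as nx

    # Many small star components: one intermediary linked to a few entities,
    # with no links between components.
    fragmented = nx.Graph()
    for k in range(300):
        for j in range(4):
            fragmented.add_edge(f"intermediary_{k}", f"entity_{k}_{j}")

    n, m = fragmented.number_of_nodes(), fragmented.number_of_edges()
    random_graph = nx.gnm_random_graph(n, m, seed=42)  # same nodes and edges

    for name, g in [("fragmented", fragmented), ("random", random_graph)]:
        triangles = sum(nx.triangles(g).values()) // 3
        print(f"{name}: {triangles} triangles, "
              f"{nx.number_connected_components(g)} components, "
              f"avg clustering = {nx.average_clustering(g):.4f}")
    ```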
    It is precisely this disconnectedness that makes the system of secret global financial dealings so robust. Because there was no way to trace relationships between entities, the network could not be easily compromised.
    “So what this suggests is that secrecy is built into the system and you cannot penetrate it,” Kejriwal said.
    “In an interconnected world, we don’t expect anyone to be impenetrable. Everyone has a weak link,” Kejriwal said. “But not in this network. The fact it is so fragmented actually protects them.”
    Kejriwal said the network behavior demonstrates that those involved in the Panama Papers network of offshore entities and transactions were very sophisticated, knowing exactly how to move money around in a way that it becomes untraceable and they are not vulnerable through their connections to others in the system. Because it is a global network, there are few options for national or international bodies to intervene in order to recoup taxes and investigate corruption and money laundering.
    “I don’t know how anyone would try to bring this down, and I’m not sure that they would be able to. The system seems unattackable,” Kejriwal said.

  • App analyzes coronavirus genome on a smartphone

    A new mobile app has made it possible to analyse the genome of the SARS-CoV-2 virus on a smartphone in less than half an hour.
    Cutting-edge nanopore devices have enabled scientists to read, or ‘sequence’, the genetic material in a biological sample outside a laboratory; until now, however, analysing the raw data has required access to high-end computing power.
    The app Genopo, developed by the Garvan Institute of Medical Research, in collaboration with the University of Peradeniya in Sri Lanka, makes genomics more accessible to remote or under-resourced regions, as well as the hospital bedside.
    “Not everyone has access to the high-power computing resources that are required for DNA and RNA analysis, but most people have access to a smartphone,” says co-senior author Dr Ira Deveson, who heads the Genomic Technologies Group at Garvan’s Kinghorn Centre for Clinical Genomics.
    “Fast, real-time genomic analysis is more crucial today than ever, as a central method for tracking the spread of coronavirus. Our app makes genomic analysis more accessible, literally placing the technology into the pockets of scientists around the world.”
    The researchers report the app Genopo in the journal Communications Biology.
    Taking genome analysis off-line
    Genomic sequencing no longer requires a sophisticated lab setup.
    The size of a USB stick, portable devices such as the Oxford Nanopore Technologies MinION sequencer can rapidly generate genomic sequences from a sample in the field or the clinic. The technology has been used for Ebola surveillance in West Africa, to profile microbial communities in the Arctic, and to track coronavirus evolution during the current pandemic.
    However, analysing genome sequencing data requires powerful computation. Scientists need to piece the many strings of genetic letters from the raw data into a single sequence and pinpoint the instances of genetic variation that shed light on how a virus evolves.
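    As a rough illustration of that first step, the toy sketch below joins overlapping reads into a single sequence by greedy overlap merging. It is a teaching sketch only; real nanopore pipelines, including the tools Genopo bundles, use far more sophisticated alignment and consensus methods on noisy reads.

    ```python
    # Toy "assembly": repeatedly merge reads that overlap the growing contig.
    from typing import List, Optional

    def merge(a: str, b: str, min_overlap: int = 3) -> Optional[str]:
        """Append b to a if a suffix of a equals a prefix of b."""
        for k in range(min(len(a), len(b)), min_overlap - 1, -1):
            if a.endswith(b[:k]):
                return a + b[k:]
        return None

    def assemble(reads: List[str]) -> str:
        contig, remaining = reads[0], reads[1:]
        while remaining:
            for r in remaining:
                merged = merge(contig, r) or merge(r, contig)
                if merged:
                    contig = merged
                    remaining.remove(r)
                    break
            else:
                break  # no remaining read overlaps the contig
        return contig

    reads = ["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]
    print(assemble(reads))  # ATTAGACCTGCCGGAATAC
    ```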
    “Until now, genomic analysis has required the processing power of high-end server computers or cloud services. We set out to change that,” explains co-senior author Hasindu Gamaarachchi, Genomics Computing Systems Engineer at the Garvan Institute.
    “To enable in situ genomic sequencing and analysis, in real time and without major laboratory infrastructure, we developed an app that could execute bioinformatics workflows on nanopore sequencing datasets that are downloaded to a smartphone. The reengineering process, spearheaded by first author Hiruna Samarakoon, required overcoming a number of technical challenges due to various resource constraints in smartphones. The app Genopo combines a number of available bioinformatics tools into a single Android application, ‘miniaturised’ to work on the processing power of a consumer Android device.”
    Coronavirus testing
    The researchers tested Genopo on the raw sequencing data of virus samples isolated from nine Sydney patients infected with SARS-CoV-2, which involved extracting and amplifying the virus RNA from a swab sample, sequencing the amplified DNA with a MinION device and analysing the data on a smartphone. The researchers tested their app on different Android devices, including models from Nokia, Huawei, LG and Sony.
    The Genopo app took an average 27 minutes to determine the complete SARS-CoV-2 genome sequence from the raw data, which the researchers say opens the possibility to do genomic analysis at the point of care, in real time. The researchers also showed that Genopo can be used to profile DNA methylation — a modification which changes gene activity — in a sample of the human genome.
    “This illustrates a flexible, efficient architecture that is suitable to run many popular bioinformatics tools and accommodate small or large genomes,” says Dr Deveson. “We hope this will make genomics much more accessible to researchers to unlock the information in DNA or RNA to the benefit of human health, including in the current pandemic.”
    Genopo is a free, open-source application available through the Google Play store (https://play.google.com/store/apps/details?id=com.mobilegenomics.genopo&hl=en).
    This project was supported by a Medical Research Future Fund (grant APP1173594), a Cancer Institute NSW Early Career Fellowship and The Kinghorn Foundation. Garvan is affiliated with St Vincent’s Hospital Sydney and UNSW Sydney.

  • Driving behavior less 'robotic' thanks to new model

    Researchers from TU Delft have now developed a new model that describes driving behaviour on the basis of one underlying ‘human’ principle: managing the risk below a threshold level. This model can accurately predict human behaviour during a wide range of driving tasks. In time, the model could be used in intelligent cars, to make them feel less ‘robotic’. The research conducted by doctoral candidate Sarvesh Kolekar and his supervisors Joost de Winter and David Abbink will be published in Nature Communications on Tuesday 29 September 2020.
    Risk threshold
    Driving behaviour is usually described using models that predict an optimum path. But this is not how people actually drive. ‘You don’t always adapt your driving behaviour to stick to one optimum path,’ says researcher Sarvesh Kolekar from the Department of Cognitive Robotics. ‘People don’t drive continuously in the middle of their lane, for example: as long as they are within the acceptable lane limits, they are fine with it.’
    Models that predict an optimum path are not only popular in research, but also in vehicle applications. ‘The current generation of intelligent cars drive very neatly. They continuously search for the safest path: i.e. one path at the appropriate speed. This leads to a “robotic” style of driving,’ continues Kolekar. ‘To get a better understanding of human driving behaviour, we tried to develop a new model that used the human risk threshold as the underlying principle.’
    Driver’s Risk Field
    To get to grips with this concept, Kolekar introduced the so-called Driver’s Risk Field (DRF). This is an ever-changing two-dimensional field around the car that indicates how high the driver considers the risk to be at each point. Kolekar devised these risk assessments in previous research. The gravity of the consequences of the risk in question is then taken into account in the DRF. For example, having a cliff on one side of the road boundary is much more dangerous than having grass. ‘The DRF was inspired by a concept from psychology, put forward a long time ago (in 1938) by Gibson and Crooks. These authors claimed that car drivers ‘feel’ the risk field around them, as it were, and base their traffic manoeuvres on these perceptions.’ Kolekar managed to turn this theory into a computer algorithm.
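    A minimal numerical sketch of that idea: a Gaussian ‘risk field’ around the car is weighted by a cost map of what occupies each point, and the driver corrects course only when the summed risk crosses a threshold. The grid, costs, field width and threshold below are invented for illustration; this is not Kolekar’s published algorithm.

    ```python
    # Risk-threshold driving in one dimension: a lateral slice across the road.
    import numpy as np

    # Cost of what occupies each cell: barrier=100, grass=2, asphalt=0, cliff=500.
    cost = np.array([100.0, 2.0, 0.0, 0.0, 0.0, 2.0, 500.0])

    def perceived_risk(car_pos: float, width: float = 0.8) -> float:
        cells = np.arange(len(cost))
        field = np.exp(-((cells - car_pos) ** 2) / (2.0 * width ** 2))  # Gaussian DRF
        return float(np.sum(field * cost))

    THRESHOLD = 25.0  # act only when perceived risk exceeds this
    for pos in [3.0, 2.0, 4.5, 5.2]:
        risk = perceived_risk(pos)
        print(f"lateral position {pos}: risk {risk:7.2f} -> "
              f"{'steer back' if risk > THRESHOLD else 'fine with it'}")
    ```

    In this toy, driving dead-centre (position 3.0) and drifting toward the grass verge (2.0) both stay under the threshold, matching the ‘as long as they are within the acceptable lane limits, they are fine with it’ behaviour, while drifting toward the cliff side (4.5, 5.2) triggers a correction.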
    Predictions
    Kolekar then tested the model in seven scenarios, including overtaking and avoiding an obstacle. ‘We compared the predictions made by the model with experimental data on human driving behaviour taken from the literature. Luckily, a lot of information is already available. It turned out that our model only needs a small amount of data to “get” the underlying human driving behaviour and could even predict reasonable human behaviour in previously unseen scenarios.’ Thus, driving behaviour rolls out more or less automatically; it is ‘emergent’.
    Elegant
    This elegant description of human driving behaviour has huge predictive and generalising value. Apart from the academic value, the model can also be used in intelligent cars. ‘If intelligent cars were to take real human driving habits into account, they would have a better chance of being accepted. The car would behave less like a robot.’

    Story Source:
    Materials provided by Delft University of Technology. Note: Content may be edited for style and length.

  • Machine learning homes in on catalyst interactions to accelerate materials development

    A machine learning technique rapidly rediscovered rules governing catalysts that took humans years of difficult calculations to reveal — and even explained a deviation. The University of Michigan team that developed the technique believes other researchers will be able to use it to make faster progress in designing materials for a variety of purposes.
    “This opens a new door, not just in understanding catalysis, but also potentially for extracting knowledge about superconductors, enzymes, thermoelectrics, and photovoltaics,” said Bryan Goldsmith, an assistant professor of chemical engineering, who co-led the work with Suljo Linic, a professor of chemical engineering.
    The key to all of these materials is how their electrons behave. Researchers would like to use machine learning techniques to develop recipes for the material properties that they want. For superconductors, the electrons must move without resistance through the material. Enzymes and catalysts need to broker exchanges of electrons, enabling new medicines or cutting chemical waste, for instance. Thermoelectrics and photovoltaics absorb light and generate energetic electrons, thereby generating electricity.
    Machine learning algorithms are typically “black boxes,” meaning that they take in data and spit out a mathematical function that makes predictions based on that data.
    “Many of these models are so complicated that it’s very difficult to extract insights from them,” said Jacques Esterhuizen, a doctoral student in chemical engineering and first author of the paper in the journal Chem. “That’s a problem because we’re not only interested in predicting material properties, we also want to understand how the atomic structure and composition map to the material properties.”
    But a new breed of machine learning algorithm lets researchers see the connections that the algorithm is making, identifying which variables are most important and why. This is critical information for researchers trying to use machine learning to improve material designs, including for catalysts.
    A good catalyst is like a chemical matchmaker. It needs to be able to grab onto the reactants, or the atoms and molecules that we want to react, so that they meet. Yet, it must do so loosely enough that the reactants would rather bind with one another than stick with the catalyst.
    In this particular case, they looked at metal catalysts that have a layer of a different metal just below the surface, known as a subsurface alloy. That subsurface layer changes how the atoms in the top layer are spaced and how available the electrons are for bonding. By tweaking the spacing, and hence the electron availability, chemical engineers can strengthen or weaken the binding between the catalyst and the reactants.
    Esterhuizen started by running quantum mechanical simulations at the National Energy Research Scientific Computing Center. These formed the data set, showing how common subsurface alloy catalysts, including metals such as gold, iridium and platinum, bond with common reactants such as oxygen, hydroxide and chlorine.
    The team used the algorithm to look at eight material properties and conditions that might be important to the binding strength of these reactants. It turned out that three mattered most. The first was whether the atoms on the catalyst surface were pulled apart from one another or compressed together by the different metal beneath. The second was how many electrons were in the electron orbital responsible for bonding, the d-orbital in this case. And the third was the size of that d-electron cloud.
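    The kind of analysis described, ranking candidate descriptors by how much they drive the prediction, can be sketched with off-the-shelf tools. The snippet below uses synthetic stand-in data in which only three of eight features carry signal, and recovers that ranking with permutation importance; the study’s actual algorithm, descriptors and data differ, and the feature names here are illustrative labels only.

    ```python
    # Rank eight candidate descriptors by permutation importance (scikit-learn).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    names = ["surface_strain", "d_band_filling", "d_band_extent",
             "prop4", "prop5", "prop6", "prop7", "prop8"]  # illustrative labels
    X = rng.normal(size=(500, 8))
    # Synthetic target: only the first three descriptors carry signal.
    y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.1, 500)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i in np.argsort(imp.importances_mean)[::-1]:
        print(f"{names[i]:>15}: {imp.importances_mean[i]:.3f}")
    ```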
    The resulting predictions for how different alloys bind with different reactants mostly reflected the “d-band” model, which was developed over many years of quantum mechanical calculations and theoretical analysis. However, they also explained a deviation from that model due to strong repulsive interactions, which occur when electron-rich reactants bind on metals with mostly filled electron orbitals.

    Story Source:
    Materials provided by University of Michigan. Original written by Kate McAlpine. Note: Content may be edited for style and length.

  • Brain circuitry shaped by competition for space as well as genetics

    Complex brain circuits in rodents can organise themselves with genetics playing only a secondary role, according to a new computer modelling study published today in eLife.
    The findings help answer a key question about how the brain wires itself during development. They suggest that simple interactions between nerve cells contribute to the development of complex brain circuits, so that a precise genetic blueprint for brain circuitry is unnecessary. This discovery may help scientists better understand disorders that affect brain development and inform new ways to treat conditions that disrupt brain circuits.
    The circuits that help rodents process sensory information collected by their whiskers are a great example of the complexity of brain wiring. These circuits are organised into cylindrical clusters or ‘whisker barrels’ that closely match the pattern of whiskers on the animal’s face.
    “The brain cells within one whisker barrel become active when its corresponding whisker is touched,” explains lead author Sebastian James, Research Associate at the Department of Psychology, University of Sheffield, UK. “This precise mapping between the individual whisker and its brain representation makes the whisker-barrel system ideal for studying brain wiring.”
    James and his colleagues used computer modelling to determine if this pattern of brain wiring could emerge without a precise genetic blueprint. Their simulations showed that, in the cramped quarters of the developing rodent brain, strong competition for space between nerve fibers originating from different whiskers can cause them to concentrate into whisker-specific clusters. The arrangement of these clusters to form a map of the whiskers is assisted by simple patterns of gene expression in the brain tissue.
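    The flavour of such a simulation can be conveyed with a toy model, sketched below: fibers from each ‘whisker’ start with only a weak positional bias (standing in for simple gene-expression patterns), and local attraction to same-whisker fibers plus repulsion from competitors sorts them into ordered, whisker-specific clusters. This is an invented one-dimensional illustration, not the authors’ published model.

    ```python
    # Toy 1D clustering by competition: attraction within a whisker's fibers,
    # repulsion between fibers of different whiskers.
    import numpy as np

    rng = np.random.default_rng(1)
    n_whiskers, per = 5, 100
    ident = np.repeat(np.arange(n_whiskers), per)
    # Weak "genetic" bias toward an ordered layout, heavily blurred by noise.
    pos = np.repeat(np.linspace(0.1, 0.9, n_whiskers), per)
    pos = np.clip(pos + rng.normal(0.0, 0.2, pos.size), 0.0, 1.0)

    for _ in range(40):                         # relaxation sweeps
        for i in rng.permutation(pos.size):
            d = pos - pos[i]
            near = (np.abs(d) < 0.08) & (np.abs(d) > 0.0)
            same = near & (ident == ident[i])
            other = near & (ident != ident[i])
            step = 0.2 * d[same].mean() if same.any() else 0.0
            if other.any():                     # competition for space
                step -= 0.02 * np.sign(d[other].sum())
            pos[i] = np.clip(pos[i] + step, 0.0, 1.0)

    for w in range(n_whiskers):  # clusters: ordered means, small spread
        p = pos[ident == w]
        print(f"whisker {w}: mean {p.mean():.2f}, spread {p.std():.3f}")
    ```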
    The team also tested their model by seeing if it could recreate the results of experiments that track the effects of a rat losing a whisker on its brain development. “Our simulations demonstrated that the model can be used to accurately test how factors inside and outside of the brain can contribute to the development of cortical fields,” says co-author Leah Krubitzer, Professor of Psychology at the University of California, Davis, US.
    The authors suggest that this and similar computational models could be adapted to study the development of larger, more complex brains, including those of humans.
    “Many of the basic mechanisms of development in the rodent barrel cortex are thought to translate to development in the rest of cortex, and may help inform research into various neurodevelopmental disorders and recovery from brain injuries,” concludes senior author Stuart Wilson, Lecturer in Cognitive Neuroscience at the University of Sheffield. “As well as reducing the number of animal experiments needed to understand cortical development, exploring the parameters of computational models like ours can offer new insights into how development and evolution interact to shape the brains of mammals, including ourselves.”

    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

  • Understanding ghost particle interactions

    Scientists often refer to the neutrino as the “ghost particle.” Neutrinos were one of the most abundant particles at the origin of the universe and remain so today. Fusion reactions in the sun produce vast armies of them, which pour down on the Earth every day. Trillions pass through our bodies every second, then fly through the Earth as though it were not there.
    “While first postulated almost a century ago and first detected 65 years ago, neutrinos remain shrouded in mystery because of their reluctance to interact with matter,” said Alessandro Lovato, a nuclear physicist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory.
    Lovato is a member of a research team from four national laboratories that has constructed a model to address one of the many mysteries about neutrinos: how they interact with atomic nuclei, complicated systems made of protons and neutrons (“nucleons”) bound together by the strong force. This knowledge is essential to unraveling an even bigger mystery: why, during their journey through space or matter, neutrinos magically morph from one of three possible types, or “flavors,” into another.
    To study these oscillations, two sets of experiments have been undertaken at DOE’s Fermi National Accelerator Laboratory (MiniBooNE and NOvA). In these experiments, scientists generate an intense stream of neutrinos in a particle accelerator, then send them into particle detectors over a long period of time (MiniBooNE) or five hundred miles from the source (NOvA).
    Knowing the original distribution of neutrino flavors, the experimentalists then gather data related to the interactions of the neutrinos with the atomic nuclei in the detectors. From that information, they can calculate any changes in the neutrino flavors over time or distance. In the case of the MiniBooNE and NOvA detectors, the nuclei are from the isotope carbon-12, which has six protons and six neutrons.
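    In the simplest two-flavor vacuum approximation, the quantity being fit has a closed form: the probability that a neutrino of energy E has changed flavor after traveling a baseline L is P = sin^2(2θ) · sin^2(1.27 Δm² L / E), with Δm² in eV², L in km and E in GeV. The sketch below evaluates it at a NOvA-like 810 km baseline with round illustrative parameters, not the experiments’ fitted values.

    ```python
    # Two-flavor vacuum oscillation probability (textbook approximation).
    import math

    def oscillation_probability(dm2_eV2: float, sin2_2theta: float,
                                L_km: float, E_GeV: float) -> float:
        return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # NOvA-like baseline: roughly 810 km from source to far detector.
    for E in [1.0, 1.5, 2.0, 3.0]:
        p = oscillation_probability(dm2_eV2=2.5e-3, sin2_2theta=0.1,
                                    L_km=810.0, E_GeV=E)
        print(f"E = {E:.1f} GeV: P = {p:.3f}")
    ```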
    “Our team came into the picture because these experiments require a very accurate model of the interactions of neutrinos with the detector nuclei over a large energy range,” said Noemi Rocco, a postdoc in Argonne’s Physics division and Fermilab. Given the elusiveness of neutrinos, achieving a comprehensive description of these reactions is a formidable challenge.
    The team’s nuclear physics model of neutrino interactions with a single nucleon and a pair of them is the most accurate so far. “Ours is the first approach to model these interactions at such a microscopic level,” said Rocco. “Earlier approaches were not so fine grained.”
    One of the team’s important findings, based on calculations carried out on the now-retired Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), was that the nucleon pair interaction is crucial to model neutrino interactions with nuclei accurately. The ALCF is a DOE Office of Science User Facility.
    “The larger the nuclei in the detector, the greater the likelihood the neutrinos will interact with them,” said Lovato. “In the future, we plan to extend our model to data from bigger nuclei, namely, those of oxygen and argon, in support of experiments planned in Japan and the U.S.”
    Rocco added, “For those calculations, we will rely on even more powerful ALCF computers: the existing Theta system and the upcoming exascale machine, Aurora.”
    Scientists hope that, eventually, a complete picture will emerge of flavor oscillations for both neutrinos and their antiparticles, called “antineutrinos.” That knowledge may shed light on why the universe is built from matter instead of antimatter — one of the fundamental questions about the universe.

    Story Source:
    Materials provided by DOE/Argonne National Laboratory. Original written by Joseph E. Harmon. Note: Content may be edited for style and length.

  • New artificial intelligence platform uses deep learning to diagnose dystonia with high accuracy in less than one second

    Researchers at Mass Eye and Ear have developed a unique diagnostic tool that can detect dystonia from MRI scans, the first technology of its kind to provide an objective diagnosis of the disorder. Dystonia is a potentially disabling neurological condition that causes involuntary muscle contractions, leading to abnormal movements and postures. It is often misdiagnosed and can take people up to 10 years to get a correct diagnosis.
    In a new study published September 28 in Proceedings of the National Academy of Sciences, researchers developed an AI-based deep learning platform — called DystoniaNet — to compare brain MRIs of 612 people, including 392 patients with three different forms of isolated focal dystonia and 220 healthy individuals. The platform diagnosed dystonia with 98.8 percent accuracy. During the process, the researchers identified a new microstructural neural network biological marker of dystonia. With further testing and validation, they believe DystoniaNet can be easily integrated into clinical decision-making.
    “There is currently no biomarker of dystonia and no ‘gold standard’ test for its diagnosis. Because of this, many patients have to undergo unnecessary procedures and see different specialists until other diseases are ruled out and the diagnosis of dystonia is established,” said senior study author Kristina Simonyan, MD, PhD, Dr med, Director of Laryngology Research at Mass Eye and Ear, Associate Neuroscientist at Massachusetts General Hospital, and Associate Professor of Otolaryngology-Head and Neck Surgery at Harvard Medical School. “There is a critical need to develop, validate and incorporate objective testing tools for the diagnosis of this neurological condition, and our results show that DystoniaNet may fill this gap.”
    A disorder notoriously difficult to diagnose
    About 35 out of every 100,000 people have isolated or primary dystonia — prevalence that is likely underestimated due to the current challenges in diagnosing this disorder. In some cases, dystonia can be a result of a neurological event, such as Parkinson’s disease or a stroke. However, the majority of isolated dystonia cases have no known cause and affect a single muscle group in the body. These so-called focal dystonias can lead to disability and problems with physical and emotional quality of life.
    The study included three of the most common types of focal dystonia: laryngeal dystonia (also called spasmodic dysphonia), characterized by involuntary movements of the vocal cords that can cause difficulties with speech; cervical dystonia, which causes the neck muscles to spasm and the neck to tilt in an unusual manner; and blepharospasm, a focal dystonia of the eyelid that causes involuntary twitching and forceful eyelid closure.
    Traditionally, a dystonia diagnosis is made based on clinical observations, said Dr. Simonyan. Previous studies have found that the agreement on dystonia diagnosis between clinicians based on purely clinical assessments is as low as 34 percent and have reported that about 50 percent of the cases go misdiagnosed or underdiagnosed at a first patient visit.
    DystoniaNet could be integrated into medical decision-making
    DystoniaNet utilizes deep learning, a particular type of AI algorithm, to analyze data from individual MRIs and identify subtle differences in brain structure. The platform is able to detect clusters of abnormal structures in several regions of the brain that are known to control processing and motor commands. These small changes cannot be seen by the naked eye on an MRI; the patterns are only evident through the platform’s ability to take 3D brain images and zoom into their microstructural details.
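    As a schematic of what the core of such a platform might look like, the sketch below defines a tiny 3D convolutional network in PyTorch that takes a volumetric scan and returns class probabilities, i.e. a predicted diagnosis with a confidence. It is purely illustrative: DystoniaNet’s actual architecture, input pipeline and training are proprietary and differ.

    ```python
    # Minimal 3D CNN over a volumetric "scan" (illustrative, not DystoniaNet).
    import torch
    import torch.nn as nn

    class Tiny3DNet(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),                     # halve each spatial dim
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),             # global average pool
            )
            self.classify = nn.Linear(16, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classify(self.features(x).flatten(1))

    net = Tiny3DNet()
    scan = torch.randn(1, 1, 64, 64, 64)  # one single-channel toy volume
    probs = net(scan).softmax(dim=1)      # e.g. [healthy, dystonia] probabilities
    print(probs)
    ```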
    “Our study suggests that the implementation of the DystoniaNet platform for dystonia diagnosis would be transformative for the clinical management of this disorder,” said the first study author Davide Valeriani, PhD, a postdoctoral research fellow in the Dystonia and Speech Motor Control Laboratory at Mass Eye and Ear and Harvard Medical School. “Importantly, our platform was designed to be efficient and interpretable for clinicians, by providing the patient’s diagnosis, the confidence of the AI in that diagnosis, and information about which brain structures are abnormal.”
    DystoniaNet is a patent-pending proprietary platform developed by Dr. Simonyan and Dr. Valeriani, in conjunction with Mass General Brigham Innovation. The technology interprets an MRI scan for the microstructural biomarker in 0.36 seconds. DystoniaNet was trained using the Amazon Web Services computational cloud platform. The researchers believe the technology can easily be translated into the clinical setting, for example by being integrated into an electronic medical record or directly into the MRI scanner software. If DystoniaNet finds a high probability of dystonia in the MRI, a physician can use this information to help confirm the diagnosis confidently, decide on next steps, and suggest a course of treatment without delay. Dystonia cannot be cured, but some treatments can help reduce the incidence of dystonia-related spasms.
    Future studies will look at more types of dystonia and will include trials at multiple hospitals to further validate the DystoniaNet platform in a larger number of patients.
    This research was funded and supported by the National Institutes of Health (R01DC011805, R01DC012545, R01NS088160), Amazon Web Services through the Machine Learning Research Award, and a charitable gift by Keith and Bobbi Richardson.