More stories

  • Advocating a new paradigm for electron simulations

    Although most of the fundamental mathematical equations that describe electronic structures have long been known, they are too complex to be solved in practice. This has hampered progress in physics, chemistry and the materials sciences. Thanks to modern high-performance computing clusters and the establishment of the simulation method density functional theory (DFT), researchers were able to change this situation. However, even with these tools the modelled processes are in many cases still drastically simplified. Now, physicists at the Center for Advanced Systems Understanding (CASUS) and the Institute of Radiation Physics at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have succeeded in significantly improving the DFT method. This opens up new possibilities for experiments with ultra-high intensity lasers, as the group explains in the Journal of Chemical Theory and Computation.
    In the new publication, Young Investigator Group Leader Dr. Tobias Dornheim, lead author Dr. Zhandos Moldabekov (both CASUS, HZDR) and Dr. Jan Vorberger (Institute of Radiation Physics, HZDR) take on one of the most fundamental challenges of our time: accurately describing how billions of quantum particles such as electrons interact. These so-called quantum many-body systems are at the heart of many research fields within physics, chemistry, materials science, and related disciplines. Indeed, most material properties are determined by the complex quantum mechanical behavior of interacting electrons. While the fundamental mathematical equations that describe electronic structures are, in principle, long known, they are too complex to be solved in practice. As a result, the actual understanding of, for example, elaborately designed materials has remained very limited.
    This unsatisfactory situation has changed with the advent of modern high-performance computing clusters, which have given rise to the new field of computational quantum many-body theory. Here, a particularly successful tool is density functional theory (DFT), which has provided unprecedented insights into the properties of materials. DFT is currently considered one of the most important simulation methods in physics, chemistry, and the materials sciences, and it is especially adept at describing many-electron systems. Indeed, the number of scientific publications based on DFT calculations has grown exponentially over the last decade, and companies have used the method to calculate material properties more accurately than ever before.
    Overcoming a drastic simplification
    Many such properties that can be calculated using DFT are obtained in the framework of linear response theory. This concept is also used in many experiments in which the (linear) response of the system of interest to an external perturbation such as a laser is measured. In this way, the system can be diagnosed and essential parameters like density or temperature can be obtained. Linear response theory often renders experiment and theory feasible in the first place and is nearly ubiquitous throughout physics and related disciplines. However, it is still a drastic simplification of the processes and a strong limitation.
    In their latest publication, the researchers are breaking new ground by extending the DFT method beyond the simplified linear regime. Thus, non-linear effects in quantities like density waves, stopping power, and structure factors can be calculated and compared to experimental results from real materials for the first time.
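    To make the distinction concrete, the density response of the electrons to an external perturbation (for instance a laser field) can be written as a perturbation expansion: linear response theory keeps only the first term, while the extended DFT approach also targets the higher-order contributions. The notation below is a generic textbook sketch, not taken from the paper itself.
    ```latex
    % Perturbation expansion of the induced electron density \delta n in powers of
    % the external perturbation v_ext; \chi^{(1)} is the linear response function,
    % \chi^{(2)} the lowest-order non-linear (quadratic) response function.
    \begin{align*}
    \delta n(\mathbf{r},t) = {}& \int \mathrm{d}\mathbf{r}'\,\mathrm{d}t'\;
        \chi^{(1)}(\mathbf{r},\mathbf{r}';t-t')\, v_{\mathrm{ext}}(\mathbf{r}',t') \\
     & + \int \mathrm{d}\mathbf{r}'\,\mathrm{d}t' \int \mathrm{d}\mathbf{r}''\,\mathrm{d}t''\;
        \chi^{(2)}(\mathbf{r},\mathbf{r}',\mathbf{r}'';t-t',t-t'')\,
        v_{\mathrm{ext}}(\mathbf{r}',t')\, v_{\mathrm{ext}}(\mathbf{r}'',t'')
        + \mathcal{O}\!\left(v_{\mathrm{ext}}^{3}\right)
    \end{align*}
    ```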
    Prior to this publication, these non-linear effects could only be reproduced with elaborate calculation methods, namely quantum Monte Carlo simulations. Although it delivers exact results, this approach is restricted to a narrow range of system parameters because it demands enormous computational power, so there has been a great need for faster simulation methods. “The DFT approach we present in our paper is 1,000 to 10,000 times faster than quantum Monte Carlo calculations,” says Zhandos Moldabekov. “Moreover, we were able to demonstrate, across temperature regimes ranging from ambient to extreme conditions, that this does not come at the expense of accuracy. The DFT-based methodology for the non-linear response characteristics of quantum-correlated electrons opens up the enticing possibility of studying new non-linear phenomena in complex materials.”
    More opportunities for modern free electron lasers
    “We see that our new methodology fits very well with the capabilities of modern experimental facilities like the Helmholtz International Beamline for Extreme Fields, which is co-operated by HZDR and went into operation only recently,” explains Jan Vorberger. “With high-power lasers and free-electron lasers we can create exactly the non-linear excitations we can now study theoretically, and examine them with unprecedented temporal and spatial resolution. Theoretical and experimental tools are ready to study new effects in matter under extreme conditions that have not been accessible before.”
    “This paper is a great example of the direction my recently established group is heading in,” says Tobias Dornheim, who leads the Young Investigator Group “Frontiers of Computational Quantum Many-Body Theory,” launched in early 2022. “We have been mainly active in the high energy density physics community in the past years. Now, we are devoted to pushing the frontiers of science by providing computational solutions to quantum many-body problems in many different contexts. We believe that the present advance in electronic structure theory will be useful for researchers in a number of research fields.”

  • Machine-learning algorithms can help health care staff correctly diagnose alcohol-associated hepatitis, acute cholangitis

    Acute cholangitis is a potentially life-threatening bacterial infection that often is associated with gallstones. Symptoms include fever, jaundice, right upper quadrant pain, and elevated liver enzymes.
    While these may seem like distinctive, telltale symptoms, unfortunately, they are similar to those of a much different condition: alcohol-associated hepatitis. This challenges emergency department staff and other health care professionals who need to diagnose and treat patients with liver enzyme abnormalities and systemic inflammatory responses.
    New Mayo Clinic research finds that machine-learning algorithms can help health care staff distinguish the two conditions. In an article published in Mayo Clinic Proceedings, researchers show how algorithms may be effective predictive tools using a few simple variables and routinely available structured clinical information.
    “This study was motivated by seeing many medical providers in the emergency department or ICU struggle to distinguish acute cholangitis and alcohol-associated hepatitis, which are very different conditions that can present similarly,” says Joseph Ahn, M.D., a third-year gastroenterology and hepatology fellow at Mayo Clinic in Rochester. Dr. Ahn is first author of the study.
    “We developed and trained machine-learning algorithms to distinguish the two conditions using some of the routinely available lab values that all of these patients should have,” Dr. Ahn says. “The machine-learning algorithms demonstrated excellent performances for discriminating the two conditions, with over 93% accuracy.”
    The researchers analyzed electronic health records of 459 patients older than age 18 who were admitted to Mayo Clinic in Rochester between Jan. 1, 2010, and Dec. 31, 2019. The patients were diagnosed with acute cholangitis or alcohol-associated hepatitis.
    Ten routinely available laboratory values were collected at the time of admission. After removal of patients whose data were incomplete, 260 patients with alcohol-associated hepatitis and 194 with acute cholangitis remained. These data were used to train eight machine-learning algorithms.
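    For readers curious what such a pipeline can look like in practice, here is a minimal, hypothetical sketch in the spirit of the study: it trains a few off-the-shelf classifiers on ten admission lab values and reports cross-validated accuracy. The file name, column names, and model choices are placeholders and are not taken from the paper.
    ```python
    # Illustrative sketch only -- not the authors' code. The file name, the lab
    # feature names and the three model choices are placeholders; the study trained
    # eight algorithms on ten routinely available admission lab values.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    LAB_COLUMNS = ["wbc", "hemoglobin", "platelets", "ast", "alt", "alk_phos",
                   "total_bilirubin", "albumin", "creatinine", "inr"]   # hypothetical names

    df = pd.read_csv("admission_labs.csv")                              # placeholder file
    X = df[LAB_COLUMNS]
    y = (df["diagnosis"] == "acute_cholangitis").astype(int)            # 1 = cholangitis, 0 = hepatitis

    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "gradient_boosting": GradientBoostingClassifier(random_state=0),
    }

    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
    ```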
    The researchers also externally validated the results using a cohort of ICU patients who were seen at Beth Israel Deaconess Medical Center in Boston between 2001 and 2012. The algorithms also outperformed physicians who participated in an online survey, which is described in the article.
    “The study highlights the potential for machine-learning algorithms to assist in clinical decision-making in cases of uncertainty,” says Dr. Ahn. “There are many instances of gastroenterologists receiving consults for urgent endoscopic retrograde cholangiopancreatography in patients who initially deny a history of alcohol use but later turn out to have alcohol-associated hepatitis. In some situations, the inability to obtain a reliable history from patients with altered mental status or lack of access to imaging modalities in underserved areas may force providers to make the determination based on a limited amount of objective data.”
    If the machine-learning algorithms can be made easily accessible with an online calculator or smartphone app, they may help health care staff who are urgently presented with an acutely ill patient with abnormal liver enzymes, according to the study.
    “For patients, this would lead to improved diagnostic accuracy and reduce the number of additional tests or inappropriate ordering of invasive procedures, which may delay the correct diagnosis or subject patients to the risk of unnecessary complications,” Dr. Ahn says.
    The authors are from the Division of Gastroenterology and Hepatology and the Division of Internal Medicine at Mayo Clinic in Rochester, and from the Department of Computer Science at Hanyang University in Seoul, South Korea. Co-author Yung-Kyun Noh was supported in this research by Samsung Research Funding and Incubation Center of Samsung Electronics. The authors report no competing interests.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Jay Furst.

  • Virtual reality technology could strengthen effects of traditional rehabilitation for multiple sclerosis

    In a recent article, Kessler Foundation scientists advocated for the incorporation of virtual reality (VR) technology in cognitive rehabilitation research in multiple sclerosis (MS). They presented a conceptual framework supporting VR as an adjuvant to traditional cognitive rehabilitation and exercise training for MS, theorizing that VR could strengthen the effects of traditional rehabilitative therapies by increasing sensory input and promoting multisensory integration and processing.
    MS and exercise researchers Carly L.A. Wender, PhD, John DeLuca, PhD, and Brian M. Sandroff, PhD, authored the review, “Developing the rationale for including virtual reality in cognitive rehabilitation and exercise training approaches for managing cognitive dysfunction in MS,” which was published open access on April 3, 2022 by NeuroSci as part of the Special Issue Cognitive Impairment and Neuropsychiatric Dysfunctions in Multiple Sclerosis.
    Current pharmacological therapies for MS are not effective for cognitive dysfunction, a common consequence of MS that affects the daily lives of many individuals. This lack of efficacy underscores the need to consider other approaches to managing these disabling cognitive deficits.
    The inclusion of VR technology in rehabilitation research and care for MS has the potential not only to improve cognition but to facilitate the transfer of those cognitive gains to improvements in everyday function, according to Brian Sandroff, PhD, senior research scientist in the Center for Neuropsychology and Neuroscience Research at Kessler Foundation. “With VR, we can substantially increase engagement and the volume of sensory input,” he foresees. “And by promoting multisensory integration and processing, VR can augment the effects of the two most promising nonpharmacological treatments — cognitive rehabilitation and exercise.”
    Virtual environments are flexible and varied, enabling investigators to control the range and progression of cognitive challenges, with the potential for greater adaptations and stronger intervention effects. VR also allows for the incorporation of cognitive rehabilitation strategies into exercise training sessions, which may support a more direct approach to improving specific cognitive domains through exercise prescriptions. The application of VR to stroke research has shown more improvement in motor outcomes compared with traditional therapy, as well as greater neural activation in the affected area of the brain, suggesting that greater gains may persist over time.
    Dr. Sandroff emphasized that the advantages of using VR to treat cognitive dysfunction in individuals with MS are, at this stage, largely conceptual. “More clinical research is needed to explore the efficacy of combining VR with cognitive rehabilitation and/or exercise training, and the impact on everyday functioning in individuals with MS,” Dr. Sandroff concluded. “The conceptual framework we outline includes examples of ways immersive and interactive VR can be incorporated into MS clinical trials that will form the basis for larger randomized clinical trials.”
    Story Source:
    Materials provided by Kessler Foundation.

  • Building explainability into the components of machine-learning models

    Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction.
    But if those features are so complex or convoluted that the user can’t understand them, does the explanation method do any good?
    MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.
    “We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.
    To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model’s prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.
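    As a purely illustrative sketch of what such a transformation might look like (the feature names, units, and mappings below are invented for this example and do not come from the paper), a model creator could convert engineered features back into plain-language values before showing them in an explanation:
    ```python
    # Invented example of turning model-ready features back into forms a layperson
    # can read before they appear in an explanation; names, units and mappings are
    # placeholders, not from the MIT paper.

    def to_interpretable(raw_features: dict) -> dict:
        """Map engineered features to human-readable descriptions."""
        readable = {}
        # A one-hot flag collapses back into a plain categorical statement.
        if raw_features.get("admission_type_emergency") == 1:
            readable["admission type"] = "emergency admission"
        # A z-scored vital sign is re-expressed in its original units.
        readable["heart rate"] = f"{raw_features['heart_rate_z'] * 12 + 75:.0f} beats per minute"
        # A log-transformed count is converted back to a plain count.
        readable["prior visits"] = round(10 ** raw_features["log_prior_visits"] - 1)
        return readable

    example = {"admission_type_emergency": 1, "heart_rate_z": 1.4, "log_prior_visits": 0.78}
    print(to_interpretable(example))
    ```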
    They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

  • Breaking AIs to make them better

    Today’s artificial intelligence systems used for image recognition are incredibly powerful with massive potential for commercial applications. Nonetheless, current artificial neural networks — the deep learning algorithms that power image recognition — suffer one massive shortcoming: they are easily broken by images that are even slightly modified.
    This lack of ‘robustness’ is a significant hurdle for researchers hoping to build better AIs. However, exactly why this phenomenon occurs, and the underlying mechanisms behind it, remain largely unknown.
    Aiming to one day overcome these flaws, researchers at Kyushu University’s Faculty of Information Science and Electrical Engineering have published in PLOS ONE a method called ‘Raw Zero-Shot’ that assesses how neural networks handle elements unknown to them. The results could help researchers identify common features that make AIs ‘non-robust’ and develop methods to rectify their problems.
    “There is a range of real-world applications for image recognition neural networks, including self-driving cars and diagnostic tools in healthcare,” explains Danilo Vasconcellos Vargas, who led the study. “However, no matter how well trained the AI, it can fail with even a slight change in an image.”
    In practice, image recognition AIs are ‘trained’ on many sample images before being asked to identify one. For example, if you want an AI to identify ducks, you would first train it on many pictures of ducks.
    Nonetheless, even the best-trained AIs can be misled. In fact, researchers have found that an image can be manipulated such that — while it may appear unchanged to the human eye — an AI cannot accurately identify it. Even a single-pixel change in the image can cause confusion.
    To better understand why this happens, the team began investigating different image recognition AIs with the hope of identifying patterns in how they behave when faced with samples that they had not been trained with, i.e., elements unknown to the AI.
    “If you give an image to an AI, it will try to tell you what it is, no matter if that answer is correct or not. So, we took the twelve most common AIs today and applied a new method called ‘Raw Zero-Shot Learning,'” continues Vargas. “Basically, we gave the AIs a series of images with no hints or training. Our hypothesis was that there would be correlations in how they answered. They would be wrong, but wrong in the same way.”
    What they found was just that. In all cases, the image recognition AI would produce an answer, and the answers — while wrong — would be consistent, that is to say they would cluster together. The density of each cluster would indicate how the AI processed the unknown images based on its foundational knowledge of different images.
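    A toy reading of that idea, using a single off-the-shelf pretrained classifier rather than the twelve networks and exact protocol from the paper (the model choice and image folder are placeholders): show the model images of something it was never trained on, record its necessarily wrong answers, and measure how tightly they cluster.
    ```python
    # Toy version of the clustering idea: give a pretrained ImageNet classifier
    # images of a class it was never trained on, record its (wrong) answers, and
    # see how tightly they concentrate. The model choice and the image folder are
    # placeholders; this is not the published protocol.
    from collections import Counter
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
    decode = tf.keras.applications.mobilenet_v2.decode_predictions

    unknown_images = tf.keras.utils.image_dataset_from_directory(
        "images_of_unknown_class/", labels=None, image_size=(224, 224), batch_size=32)

    answers = []
    for batch in unknown_images:
        preds = model.predict(preprocess(tf.cast(batch, tf.float32)), verbose=0)
        answers += [decode(preds, top=1)[i][0][1] for i in range(len(preds))]

    counts = Counter(answers)
    density = max(counts.values()) / len(answers)   # fraction giving the most common answer
    print("most common (wrong) answer:", counts.most_common(1)[0])
    print(f"answer 'cluster density': {density:.2f}")
    ```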
    “If we understand what the AI was doing and what it learned when processing unknown images, we can use that same understanding to analyze why AIs break when faced with images with single-pixel changes or slight modifications,” Vargas states. “Utilization of the knowledge we gained trying to solve one problem by applying it to a different but related problem is known as Transferability.”
    The team observed that Capsule Networks, also known as CapsNet, produced the densest clusters, giving it the best transferability amongst neural networks. They believe it might be because of the dynamical nature of CapsNet.
    “While today’s AIs are accurate, they lack the robustness for further utility. We need to understand what the problem is and why it’s happening. In this work, we showed a possible strategy to study these issues,” concludes Vargas. “Instead of focusing solely on accuracy, we must investigate ways to improve robustness and flexibility. Then we may be able to develop a true artificial intelligence.”
    Story Source:
    Materials provided by Kyushu University.

  • Algorithm predicts crime a week in advance, but reveals bias in police response

    Advances in machine learning and artificial intelligence have sparked interest from governments that would like to use these tools for predictive policing to deter crime. Early efforts at crime prediction have been controversial, however, because they do not account for systemic biases in police enforcement and its complex relationship with crime and society.
    Data and social scientists from the University of Chicago have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. The model can predict future crimes one week in advance with about 90% accuracy.
    In a separate model, the research team also studied the police response to crime by analyzing the number of arrests following incidents and comparing those rates among neighborhoods with different socioeconomic status. They saw that when crime rose in wealthier areas, arrests there increased while arrest rates in disadvantaged neighborhoods dropped; crime in poor neighborhoods did not lead to more arrests, suggesting bias in police response and enforcement.
    “What we’re seeing is that when you stress the system, it requires more resources to arrest more people in response to crime in a wealthy area and draws police resources away from lower socioeconomic status areas,” said Ishanu Chattopadhyay, PhD, Assistant Professor of Medicine at UChicago and senior author of the new study, which was published this week in Nature Human Behaviour.
    The tool was tested and validated using historical data from the City of Chicago around two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts). These data were used because they were most likely to be reported to police in urban areas where there is historical distrust and lack of cooperation with law enforcement. Such crimes are also less prone to enforcement bias, as is the case with drug crimes, traffic stops, and other misdemeanor infractions.
    Previous efforts at crime prediction often use an epidemic or seismic approach, where crime is depicted as emerging in “hotspots” that spread to surrounding areas. These tools miss out on the complex social environment of cities, however, and don’t consider the relationship between crime and the effects of police enforcement.
    “Spatial models ignore the natural topology of the city,” said sociologist and co-author James Evans, PhD, Max Palevsky Professor at UChicago and the Santa Fe Institute. “Transportation networks respect streets, walkways, train and bus lines. Communication networks respect areas of similar socio-economic background. Our model enables discovery of these connections.”
    The new model isolates crime by looking at the time and spatial coordinates of discrete events and detecting patterns to predict future events. It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas instead of relying on traditional neighborhood or political boundaries, which are also subject to bias. The model performed just as well with data from seven other U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
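    As a rough illustration of the event-binning step described above (the column names, the feet-based projected coordinates, and the naive one-week baseline are all placeholder assumptions, not the published model):
    ```python
    # Rough sketch: snap reported incidents onto a grid of roughly 1,000-foot tiles,
    # count events per tile per week, and carry last week's count forward as a naive
    # baseline forecast. Column names and coordinates are placeholder assumptions.
    import pandas as pd

    TILE_FEET = 1000

    events = pd.read_csv("reported_crimes.csv", parse_dates=["timestamp"])   # placeholder file
    events["tile_x"] = (events["x_feet"] // TILE_FEET).astype(int)
    events["tile_y"] = (events["y_feet"] // TILE_FEET).astype(int)
    events["week"] = events["timestamp"].dt.to_period("W")

    weekly_counts = (events.groupby(["tile_x", "tile_y", "week"])
                           .size()
                           .rename("n_events")
                           .reset_index())

    # Naive baseline: last week's count in the same tile stands in for this week's
    # forecast. The published model instead learns event-level spatio-temporal patterns.
    weekly_counts = weekly_counts.sort_values("week")
    weekly_counts["naive_forecast"] = (weekly_counts.groupby(["tile_x", "tile_y"])["n_events"]
                                                    .shift(1))
    print(weekly_counts.head())
    ```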
    “We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which generates a fresh view on neighborhoods in the city, allows us to ask novel questions, and lets us evaluate police action in new ways,” Evans said.
    Chattopadhyay is careful to note that the tool’s accuracy does not mean that it should be used to direct law enforcement, with police departments using it to swarm neighborhoods proactively to prevent crime. Instead, it should be added to a toolbox of urban policies and policing strategies to address crime.
    “We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what’s going to happen in the future. It’s not magical, there are limitations, but we validated it and it works really well,” Chattopadhyay said. “Now you can use this as a simulation tool to see what happens if crime goes up in one area of the city, or there is increased enforcement in another area. If you apply all these different variables, you can see how the system evolves in response.”
    The study, “Event-level Prediction of Urban Crime Reveals Signature of Enforcement Bias in U.S. Cities,” was supported by the Defense Advanced Research Projects Agency and the Neubauer Collegium for Culture and Society. Additional authors include Victor Rotaru, Yi Huang, and Timmy Li from the University of Chicago.

  • Common gene used to profile microbial communities

    Part of a gene is better than none when identifying a species of microbe. But for Rice University computer scientists, part was not nearly enough in their pursuit of a program to identify all the species in a microbiome.
    Emu, their microbial community profiling software, effectively identifies bacterial species by leveraging long DNA sequences that span the entire length of the gene under study.
    The Emu project led by computer scientist Todd Treangen and graduate student Kristen Curry of Rice’s George R. Brown School of Engineering facilitates the analysis of a key gene microbiome researchers use to sort out species of bacteria that could be harmful — or helpful — to humans and the environment.
    Their target, the 16S rRNA (ribosomal ribonucleic acid) gene, encodes a component of the ribosome’s small subunit; its use for identifying microbes was pioneered by Carl Woese in 1977. The gene is highly conserved in bacteria and archaea yet also contains variable regions that are critical for separating distinct genera and species.
    “It’s commonly used for microbiome analysis because it’s present in all bacteria and most archaea,” said Curry, in her third year in the Treangen group. “Because of that, there are regions that have been conserved over the years that make it easy to target. In DNA sequencing, we need parts of it to be the same in all bacteria so we know what to look for, and then we need parts to be different so we can tell bacteria apart.”
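    The conserved-versus-variable idea can be illustrated with a toy calculation: score each column of a small alignment by how dominant its most common base is. The fragments below are invented for the example; real profilers such as Emu work on full-length 16S reads.
    ```python
    # Toy illustration of conserved vs. variable positions: score each column of a
    # small alignment by the fraction of sequences agreeing with the majority base.
    # The fragments are invented; real tools such as Emu use full-length 16S reads.
    from collections import Counter

    aligned_fragments = [          # hypothetical, pre-aligned 16S fragments
        "AGAGTTTGATCCTGGCTCAG",
        "AGAGTTTGATCATGGCTCAG",
        "AGAGTTTGATCCTGGCTTAG",
        "AGAGTTTGATCTTGGCTCAG",
    ]

    def column_conservation(seqs):
        """Fraction of sequences matching the most common base at each position."""
        scores = []
        for column in zip(*seqs):
            majority_count = Counter(column).most_common(1)[0][1]
            scores.append(majority_count / len(seqs))
        return scores

    scores = column_conservation(aligned_fragments)
    conserved = [i for i, s in enumerate(scores) if s == 1.0]
    variable = [i for i, s in enumerate(scores) if s < 1.0]
    print("fully conserved positions:", conserved)
    print("variable positions (useful for telling taxa apart):", variable)
    ```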
    The Rice team’s study, with collaborators in Germany and at the Houston Methodist Research Institute, Baylor College of Medicine and Texas Children’s Hospital, appears in the journal Nature Methods.

  • Capturing an elusive shadow: State-by-state gun ownership

    Policy-makers are faced with an exceptional challenge: how to reduce harm caused by firearms while maintaining citizens’ right to bear arms and protect themselves. This is especially true as the Supreme Court has hobbled New York State regulations restricting who can carry a concealed weapon.
    While meaningful legislation requires an understanding of how access to firearms is associated with different outcomes of harm, this knowledge also calls for accurate, highly-resolved data on firearm possession, data that is presently unavailable due to a lack of a comprehensive national firearm ownership registry.
    Newly published research from data scientist and firearm proliferation researcher Maurizio Porfiri, Institute Professor at the NYU Tandon School of Engineering, and co-authors Roni Barak Ventura, a post-doctoral researcher at Porfiri’s Dynamical Systems Lab, and Manuel Ruiz Marin of the Universidad Politécnica de Cartagena, Spain, describes a spatio-temporal model to predict trends in firearm prevalence on a state-by-state level by fusing data from two available proxies — background checks per capita and suicides committed with a firearm in a given state. The study, “A spatiotemporal model of firearm ownership in the United States,” published in the Cell Press journal Patterns, details how, by calibrating their results with yearly survey data, the team determined that the two proxies can be considered simultaneously to draw precise information regarding firearm ownership.
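    A minimal sketch of the proxy-fusion idea, assuming a hypothetical state-by-year table of the two proxies plus occasional survey estimates (the published model is spatio-temporal and considerably richer; the column names, the plain average, and the linear calibration are placeholders):
    ```python
    # Minimal sketch of fusing the two proxies into one ownership index and
    # calibrating it against survey data. Column names, the plain average and the
    # linear calibration are placeholder assumptions, not the published model.
    import numpy as np
    import pandas as pd

    panel = pd.read_csv("state_year_proxies.csv")   # placeholder file with hypothetical columns:
    # state, year, background_checks_per_capita, firearm_suicide_rate, survey_ownership_rate

    for proxy in ["background_checks_per_capita", "firearm_suicide_rate"]:
        panel[proxy + "_z"] = (panel[proxy] - panel[proxy].mean()) / panel[proxy].std()

    panel["ownership_index"] = panel[["background_checks_per_capita_z",
                                      "firearm_suicide_rate_z"]].mean(axis=1)

    # Calibrate the unitless index against the yearly survey estimates where available.
    calib = panel.dropna(subset=["survey_ownership_rate"])
    slope, intercept = np.polyfit(calib["ownership_index"], calib["survey_ownership_rate"], deg=1)
    panel["estimated_ownership_rate"] = intercept + slope * panel["ownership_index"]

    print(panel[["state", "year", "estimated_ownership_rate"]].head())
    ```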
    Porfiri, who in 2020 received one of the first newly authorized NSF federal grants for $2 million to study the “firearm ecosystem” in the U.S., has spent the last few years exploring gun acquisition trends and how they relate to and are influenced by a number of factors, from media coverage of mass shootings, to the influence of the sitting President.
    “There is very limited knowledge on when and where guns are acquired in the country, and even less is known regarding future ownership trends,” said Porfiri, professor of mechanical and aerospace, biomedical, and civil and urban engineering and incoming director of the Center for Urban Science and Progress (CUSP) at NYU Tandon. “Prior studies have largely relied on the use of a single, select proxy to make some inference of gun prevalence, typically within simple correlation schemes. Our results show that there is a need to combine proxies of sales and violence to draw precise inferences on firearm prevalence.” He added that most research aggregates the measure counts within states and does not consider interference between states or spill-over effects.
    Their study shows how their model can be used to better understand the relationships between media coverage, mass shootings, and firearm ownership, uncovering causal associations that are masked when the proxies are used individually.
    The researchers found, for example, that media coverage of firearm control is causally associated with firearm ownership. They also discovered that a strong firearm ownership profile generated by their model for a given state was a strong predictor of mass shootings in that state.
    “The potential link between mass shootings and firearm purchases is a unique contribution of our model,” said Ruiz Marin. “Such a link can only be detected by scratching the surface on the exact gun counts in the country.”
    “We combined publicly available data variables into one measure of ownership. Because it has a spatial component, we could also track gun flow from one state to another based on political and cultural similarities,” said Barak-Ventura, adding that the spatial component of the work is novel. “Prior studies looked at a correlation of two variables such as increasing background checks and an increase in gun violence.”
    Barak-Ventura said the team is now using their model to explore which policies are effective in reducing death by guns in a state and surrounding regions, and how the relationship between gun ownership and violent outcomes is disrupted by different legislation.
    The research was supported by the National Science Foundation and by RAND’s National Collaborative on Gun Violence Research through a postdoctoral fellowship award. Roni Barak Ventura’s work was supported by a scholarship from the Mitsui USA Foundation. This study was also part of the collaborative activities carried out under the programs of the region of Murcia (Spain) as well as the Ministerio de Ciencia, Innovación y Universidades.