More stories

  • Feeling out of equilibrium in a dual geometric world

    Losing energy is rarely a good thing, but now, researchers in Japan have shown how to extend the applicability of thermodynamics to systems that are not in equilibrium. By encoding the energy dissipation relationships in a geometric way, they were able to cast the physical constraints in a generalized geometric space. This work may significantly improve our understanding of chemical reaction networks, including those that underlie the metabolism and growth of living organisms.
    Thermodynamics is the branch of physics dealing with the processes by which energy is transferred between entities. Its predictions are crucial for both chemistry and biology when determining whether certain chemical reactions, or interconnected networks of reactions, will proceed spontaneously. However, while thermodynamics aims to establish a general description of macroscopic systems, difficulties often arise when the system is out of equilibrium. Successful attempts to extend the framework to nonequilibrium situations have usually been limited to specific systems and models.
    In two recently published studies, researchers from the Institute of Industrial Science at The University of Tokyo demonstrated that complex nonlinear chemical reaction processes could be described by transforming the problem into a geometrical dual representation. “With our structure, we can extend theories of nonequilibrium systems with quadratic dissipation functions to more general cases, which are important for studying chemical reaction networks,” says first author Tetsuya J. Kobayashi.
    In physics, duality is a central concept. Some physical entities are easier to interpret when transformed into a different, but mathematically equivalent, representation. As an example, a wave in the time domain can be transformed into its representation in the frequency domain, which is its dual form. When dealing with chemical processes, thermodynamic force and flux are the nonlinearly related dual representations, and their product gives the rate at which energy is lost to dissipation. In a geometric space induced by this duality, the scientists were able to show how thermodynamic relationships can be generalized even to nonequilibrium cases.
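    To make the time/frequency analogy concrete, here is a minimal numerical sketch (using NumPy's fast Fourier transform; purely illustrative and not part of the published work) showing how one signal has two mathematically equivalent representations:
      import numpy as np

      # A signal sampled in the time domain: two sine waves at 5 Hz and 12 Hz.
      t = np.linspace(0.0, 1.0, 1000, endpoint=False)
      signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

      # Its dual representation in the frequency domain, obtained by Fourier transform.
      spectrum = np.fft.rfft(signal)
      freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

      # The two dominant peaks recover the original components: 5 Hz and 12 Hz.
      print(sorted(freqs[np.argsort(np.abs(spectrum))[-2:]]))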
    “Most previous studies of chemical reaction networks relied on assumptions about the kinetics of the system. We showed how they can be handled more generally in the nonequilibrium case by employing the duality and associated geometry,” says last author Yuki Sughiyama. Possessing a more universal understanding of thermodynamic systems, and extending the applicability of nonequilibrium thermodynamics to more disciplines, can provide a better vantage point for analyzing or designing complex reaction networks, such as those used in living organisms or industrial manufacturing processes.
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  • Users trust AI as much as humans for flagging problematic content

    Social media users may trust artificial intelligence — AI — as much as human editors to flag hate speech and harmful content, according to researchers at Penn State.
    The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.
    The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
    “There’s this dire need for content moderation on social media and more generally, online media,” said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences. “In traditional media, we have news editors who serve as gatekeepers. But online, the gates are so wide open, and gatekeeping is not necessarily feasible for humans to perform, especially with the volume of information being generated. So, with the industry increasingly moving towards automated solutions, this study looks at the difference between human and automated content moderators, in terms of how people respond to them.”
    Both human and AI editors have advantages and disadvantages. Humans tend to assess more accurately whether content is harmful, such as when it is racist or could potentially provoke self-harm, according to Maria D. Molina, assistant professor of advertising and public relations at Michigan State University, who is first author of the study. People, however, are unable to process the large amounts of content that are now being generated and shared online.
    On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms’ ability to make accurate recommendations and fear that the information could be censored.

  • Beyond AlphaFold: A.I. excels at creating new proteins

    Over the past two years, machine learning has revolutionized protein structure prediction. Now, three papers in Science describe a similar revolution in protein design.
    In the new papers, biologists at the University of Washington School of Medicine show that machine learning can be used to create protein molecules much more accurately and quickly than previously possible. The scientists hope this advance will lead to many new vaccines, treatments, tools for carbon capture, and sustainable biomaterials.
    “Proteins are fundamental across biology, but we know that all the proteins found in every plant, animal, and microbe make up far less than one percent of what is possible. With these new software tools, researchers should be able to find solutions to long-standing challenges in medicine, energy, and technology,” said senior author David Baker, professor of biochemistry at the University of Washington School of Medicine and recipient of a 2021 Breakthrough Prize in Life Sciences.
    Proteins are often referred to as the “building blocks of life” because they are essential for the structure and function of all living things. They are involved in virtually every process that takes place inside cells, including growth, division, and repair. Proteins are made up of long chains of chemicals called amino acids. The sequence of amino acids in a protein determines its three-dimensional shape. This intricate shape is crucial for the protein to function.
    Recently, powerful machine learning algorithms including AlphaFold and RoseTTAFold have been trained to predict the detailed shapes of natural proteins based solely on their amino acid sequences. Machine learning is a type of artificial intelligence that allows computers to learn from data without being explicitly programmed. Machine learning can be used to model complex scientific problems that are too difficult for humans to understand.
    To go beyond the proteins found in nature, Baker’s team members broke down the challenge of protein design into three parts and used new software solutions for each.

  • 'Digital mask' could protect patients' privacy in medical records

    Scientists have created a ‘digital mask’ that will allow facial images to be stored in medical records while preventing potentially sensitive personal biometric information from being extracted and shared.
    In research published today in Nature Medicine, a team led by scientists from the University of Cambridge and Sun Yat-sen University in Guangzhou, China, used three-dimensional (3D) reconstruction and deep learning algorithms to erase identifiable features from facial images while retaining disease-relevant features needed for diagnosis.
    Facial images can be useful for identifying signs of disease. For example, features such as deep forehead wrinkles and wrinkles around the eyes are significantly associated with coronary heart disease, while abnormal changes in eye movement can indicate poor visual function and visual cognitive developmental problems. However, facial images also inevitably record other biometric information about the patient, including their race, sex, age and mood.
    With the increasing digitalisation of medical records comes the risk of data breaches. While most patient data can be anonymised, facial data is more difficult to anonymise while retaining essential information. Common methods, including blurring and cropping identifiable areas, may lose important disease-relevant information, yet even these methods cannot fully evade face recognition systems.
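    As an illustration of one such common method (not the ‘digital mask’ described here), a face region can be located and blurred with a few lines of OpenCV; the input file name, cascade choice and blur strength below are assumptions for the sketch:
      import cv2

      # Load an image and a stock face detector shipped with OpenCV.
      image = cv2.imread("patient_photo.jpg")  # hypothetical input file
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      detector = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      # Blur every detected face region; this hides identity but also
      # destroys disease-relevant detail such as features around the eyes.
      for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
          image[y:y + h, x:x + w] = cv2.GaussianBlur(
              image[y:y + h, x:x + w], (51, 51), 0)

      cv2.imwrite("blurred_photo.jpg", image)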
    Due to privacy concerns, people often hesitate to share their medical data for public medical research or electronic health records, hindering the development of digital medical care.
    Professor Haotian Lin from Sun Yat-sen University said: “During the COVID-19 pandemic, we had to turn to consultations over the phone or by video link rather than in person. Remote healthcare for eye diseases requires patients to share a large amount of digital facial information. Patients want to know that their potentially sensitive information is secure and that their privacy is protected.”
    Professor Lin and colleagues developed a ‘digital mask’ that takes an original video of a patient’s face and, using a deep learning algorithm and 3D reconstruction, outputs a video that discards as much of the patient’s personal biometric information as possible and from which it is not possible to identify the individual.

  • New tool overcomes major hurdle in clinical AI design

    Harvard Medical School scientists and colleagues at Stanford University have developed an artificial intelligence diagnostic tool that can detect diseases on chest X-rays directly from natural-language descriptions contained in accompanying clinical reports.
    The step is deemed a major advance in clinical AI design because most current AI models require laborious human annotation of vast reams of data before the labeled data are fed into the model to train it.
    A report on the work, published Sept. 15 in Nature Biomedical Engineering, shows that the model, called CheXzero, performed on par with human radiologists in its ability to detect pathologies on chest X-rays.
    The team has made the code for the model publicly available for other researchers.
    Most AI models require labeled datasets during their “training” so they can learn to correctly identify pathologies. This process is especially burdensome for medical image-interpretation tasks since it involves large-scale annotation by human clinicians, which is often expensive and time-consuming. For instance, to label a chest X-ray dataset, expert radiologists would have to look at hundreds of thousands of X-ray images one by one and explicitly annotate each one with the conditions detected. While more recent AI models have tried to address this labeling bottleneck by learning from unlabeled data in a “pre-training” stage, they eventually require fine-tuning on labeled data to achieve high performance.
    By contrast, the new model is self-supervised, in the sense that it learns more independently, without the need for hand-labeled data before or after training. The model relies solely on chest X-rays and the English-language notes found in accompanying X-ray reports.
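    For readers unfamiliar with this kind of self-supervision, a minimal PyTorch sketch of a contrastive image-text objective (in the spirit of CLIP-style training; the function and parameter names below are illustrative assumptions, not the published CheXzero code) looks like this:
      import torch
      import torch.nn.functional as F

      def contrastive_loss(image_emb, text_emb, temperature=0.07):
          """Pull matching X-ray/report pairs together, push mismatched pairs apart."""
          image_emb = F.normalize(image_emb, dim=-1)   # unit-length image embeddings
          text_emb = F.normalize(text_emb, dim=-1)     # unit-length report embeddings
          logits = image_emb @ text_emb.t() / temperature
          targets = torch.arange(logits.size(0))       # matching pairs lie on the diagonal
          # Symmetric cross-entropy: each image must pick its own report and vice versa.
          return (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets)) / 2
    A model trained this way can then score a new X-ray against short text prompts (for example, “pneumonia” versus “no pneumonia”) without ever seeing hand-labeled examples.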

  • Talk with your hands? You might think with them too!

    How do we understand words? Scientists don’t fully understand what happens when a word pops into your brain. A research group led by Professor Shogo Makioka at the Graduate School of Sustainable System Sciences, Osaka Metropolitan University, wanted to test the idea of embodied cognition. Embodied cognition proposes that people understand the words for objects through how they interact with them, so the researchers devised a test to observe semantic processing of words when the ways that the participants could interact with objects were limited.
    Words are expressed in relation to other words; a “cup,” for example, can be a “container, made of glass, used for drinking.” However, you can only use a cup if you understand that to drink from a cup of water, you hold it in your hand and bring it to your mouth, or that if you drop the cup, it will smash on the floor. Without understanding this, it would be difficult to create a robot that can handle a real cup. In artificial intelligence research, this is known as the symbol grounding problem: the question of how symbols are mapped onto the real world.
    How do humans achieve symbol grounding? Cognitive psychology and cognitive science propose the concept of embodied cognition, where objects are given meaning through interactions with the body and the environment.
    To test embodied cognition, the researchers conducted experiments to see how the participants’ brains responded to words that describe objects that can be manipulated by hand, when the participants’ hands could move freely compared to when they were restrained.
    “It was very difficult to establish a method for measuring and analyzing brain activity. The first author, Ms. Sae Onishi, worked persistently to come up with a task, in a way that we were able to measure brain activity with sufficient accuracy,” Professor Makioka explained.
    In the experiment, two words such as “cup” and “broom” were presented to participants on a screen. They were asked to compare the relative sizes of the objects those words represented and to verbally answer which object was larger — in this case, “broom.” Comparisons were made between words describing two types of objects: hand-manipulable objects, such as “cup” or “broom,” and nonmanipulable objects, such as “building” or “lamppost,” to observe how each type was processed.
    During the tests, the participants placed their hands on a desk, where they were either free or restrained by a transparent acrylic plate. When the two words were presented on the screen, to answer which one represented a larger object, the participants needed to think of both objects and compare their sizes, forcing them to process each word’s meaning.
    Brain activity was measured with functional near-infrared spectroscopy (fNIRS), which has the advantage of taking measurements without imposing further physical constraints. The measurements focused on the intraparietal sulcus and the inferior parietal lobule (supramarginal gyrus and angular gyrus) of the left brain, which are responsible for semantic processing related to tools. The speed of the verbal response was measured to determine how quickly the participant answered after the words appeared on the screen.
    The results showed that the activity of the left brain in response to hand-manipulable objects was significantly reduced by hand restraints. Verbal responses were also affected by hand constraints. These results indicate that constraining hand movement affects the processing of object-meaning, which supports the idea of embodied cognition. These results suggest that the idea of embodied cognition could also be effective for artificial intelligence to learn the meaning of objects. The paper was published in Scientific Reports.
    Story Source:
    Materials provided by Osaka Metropolitan University. Note: Content may be edited for style and length.

  • Using artificial intelligence to improve tuberculosis treatments

    Imagine you have 20 new compounds that have shown some effectiveness in treating a disease like tuberculosis (TB), which affects 10 million people worldwide and kills 1.5 million each year. For effective treatment, patients will need to take a combination of three or four drugs for months or even years because the TB bacteria behave differently in different environments in cells — and in some cases evolve to become drug-resistant. Twenty compounds in three- and four-drug combinations offer nearly 6,000 possible combinations. How do you decide which drugs to test together?
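    As a quick check of that arithmetic, the number of possible cocktails can be computed with the Python standard library:
      from math import comb

      # Distinct three-drug and four-drug cocktails that can be formed from 20 compounds.
      print(comb(20, 3) + comb(20, 4))  # 1140 + 4845 = 5985, i.e. nearly 6,000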
    In a recent study, published in the September issue of Cell Reports Medicine, researchers from Tufts University used data from large studies that contained laboratory measurements of two-drug combinations of 12 anti-tuberculosis drugs. Using mathematical models, the team discovered a set of rules that drug pairs need to satisfy to be potentially good treatments as part of three- and four-drug cocktails.
    Measuring drug pairs rather than full three- and four-drug combinations cuts down significantly on the amount of testing that needs to be done before moving a drug combination into further study.
    “Using the design rules we’ve established and tested, we can substitute one drug pair for another drug pair and know with a high degree of confidence that the drug pair should work in concert with the other drug pair to kill the TB bacteria in the rodent model,” says Bree Aldridge, associate professor of molecular biology and microbiology at Tufts University School of Medicine and of biomedical engineering at the School of Engineering, and an immunology and molecular microbiology program faculty member at the Graduate School of Biomedical Sciences. “The selection process we developed is both more streamlined and more accurate in predicting success than prior processes, which necessarily considered fewer combinations.”
    The lab of Aldridge, who is corresponding author on the paper and also associate director of Tufts’ Stuart B. Levy Center for Integrated Management of Antimicrobial Resistance, previously developed and uses DiaMOND, or diagonal measurement of n-way drug interactions, a method to systematically study pairwise and high-order drug combination interactions to identify shorter, more efficient treatment regimens for TB and potentially other bacterial infections. With the design rules established in this new study, researchers believe they can increase the speed at which scientists determine which drug combinations will most effectively treat tuberculosis, the second leading infectious killer in the world.
    Story Source:
    Materials provided by Tufts University. Note: Content may be edited for style and length.

  • Dense liquid droplets act as cellular computers

    An emerging field explores how groups of molecules condense together inside cells, the way oil droplets assemble and separate from water in a vinaigrette.
    In human cells, “liquid-liquid phase separation” occurs because similar, large molecules glom together into dense droplets separated from the more diluted parts of the fluid cell interior. Past work had suggested that evolution harnessed the natural formation of these “condensates” to organize cells, providing, for instance, isolated spaces for the building of cellular machines.
    Furthermore, abnormal, condensed — also called “tangled” — groups of molecules in droplets are nearly always present in the cells of patients with neurodegenerative conditions, including Alzheimer’s disease. While no one knows why such condensates form, one new theory argues that the biophysical properties of cell interiors change as people age — driven in part by “molecular crowding” that packs more molecules into the same spaces to affect phase separation.
    Researchers compare condensates to microprocessors, computers built into circuits, because both recognize and calculate responses based on incoming information. Despite the suspected impact of physical changes on these liquid processors, the field has struggled to clarify the mechanisms connecting phase separation, condensate formation, and computation based on chemical signals, which occur at a much smaller scale, researchers say. This is because natural condensates have so many functions that experiments struggle to delineate them.
    To address this challenge, researchers at NYU Grossman School of Medicine and the German Center for Neurodegenerative Diseases built an artificial system that revealed how the formation of condensates changes the action at the molecular level of enzymes called kinases, an example of chemical computation. Kinases are protein switches that influence cellular processes by phosphorylating target molecules, that is, by attaching a molecule called a phosphate group to them.
    The new analysis, published online September 14 in Molecular Cell, found that the formation of engineered condensates during phase separation offered more “sticky” regions where medically important kinases and their targets could interact and trigger phosphorylation signals.