More stories

  • Even the smartest AI models don't match human visual processing

    Deep convolutional neural networks (DCNNs) don’t see objects the way humans do — using configural shape perception — and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study published today.
    Published in the Cell Press journal iScience, the study, “Deep learning models fail to capture the configural nature of human shape perception,” is a collaboration between Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Nicholas Baker, an assistant professor of psychology at Loyola College in Chicago and a former VISTA postdoctoral fellow at York.
    The study employed novel visual stimuli called “Frankensteins” to explore how the human brain and DCNNs process holistic, configural object properties.
    “Frankensteins are simply objects that have been taken apart and put back together the wrong way around,” says Elder. “As a result, they have all the right local features, but in the wrong places.”
    The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not — revealing an insensitivity to configural object properties.
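    The gist of the paradigm can be illustrated with a toy probe (our illustration, not the study's actual stimuli or networks): scramble an image's parts so the local features survive but their arrangement does not, then check whether a pretrained classifier even notices. The filename and the choice of ResNet-50 are placeholders; if the network reports the same category for both versions, it is, in this crude sense, blind to configuration.

    ```python
    # Toy "Frankenstein" probe (illustrative only): swap an image's quadrants
    # so local features are preserved but their configuration is destroyed,
    # then compare a pretrained CNN's predictions on both versions.
    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    def frankenstein(img: Image.Image) -> Image.Image:
        """Swap the four quadrants diagonally: right parts, wrong places."""
        w, h = img.size
        boxes = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
                 (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
        quads = [img.crop(b) for b in boxes]
        out = Image.new(img.mode, (w, h))
        for quad, (left, top, _, _) in zip(reversed(quads), boxes):
            out.paste(quad, (left, top))
        return out

    def top1(img: Image.Image) -> str:
        with torch.no_grad():
            idx = model(preprocess(img).unsqueeze(0)).argmax(1).item()
        return weights.meta["categories"][idx]

    img = Image.open("object.jpg")  # placeholder: any photo of a single object
    print("intact:", top1(img), "| frankenstein:", top1(frankenstein(img)))
    ```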
    “Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain,” Elder says. “These deep models tend to take ‘shortcuts’ when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners,” Elder points out.
    One such application is traffic video safety systems: “The objects in a busy traffic scene — the vehicles, bicycles and pedestrians — obstruct each other and arrive at the eye of a driver as a jumble of disconnected fragments,” explains Elder. “The brain needs to correctly group those fragments to identify the correct categories and locations of the objects. An AI system for traffic safety monitoring that is only able to perceive the fragments individually will fail at this task, potentially misunderstanding risks to vulnerable road users.”
    According to the researchers, modifications to training and architecture aimed at making networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. “We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition,” notes Elder.
    Story Source:
    Materials provided by York University. Note: Content may be edited for style and length.

  • The magneto-optic modulator

    Many state-of-the-art technologies work at incredibly low temperatures. Superconducting microprocessors and quantum computers promise to revolutionize computation, but scientists need to keep them just above absolute zero (-459.67° Fahrenheit) to protect their delicate states. Still, ultra-cold components have to interface with room-temperature systems, providing both a challenge and an opportunity for engineers.
    An international team of scientists, led by UC Santa Barbara’s Paolo Pintus, has designed a device to help cryogenic computers talk with their fair-weather counterparts. The mechanism uses a magnetic field to convert data from electrical current to pulses of light. The light can then travel via fiber-optic cables, which can transmit more information than regular electrical cables while minimizing the heat that leaks into the cryogenic system. The team’s results appear in the journal Nature Electronics.
    “A device like this could enable seamless integration with cutting-edge technologies based on superconductors, for example,” said Pintus, a project scientist in UC Santa Barbara’s Optoelectronics Research Group. Superconductors can carry electrical current without any energy loss, but typically require temperatures below -450° Fahrenheit to work properly.
    Right now, cryogenic systems use standard metal wires to connect with room-temperature electronics. Unfortunately, these wires transfer heat into the cold circuits and can only transmit a small amount of data at a time.
    Pintus and his collaborators wanted to address both these issues at once. “The solution is using light in an optical fiber to transfer information instead of using electrons in a metal cable,” he said.
    Fiber optics are standard in modern telecommunications. These thin glass cables carry information as pulses of light far faster than metal wires can carry electrical charges. As a result, fiber-optic cables can relay 1,000 times more data than conventional wires over the same time span. And glass is a good insulator, meaning it will transfer far less heat to the cryogenic components than a metal wire.
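    To make the idea concrete, here is a toy numerical model (ours, with made-up constants, not the device parameters from the Nature Electronics paper) of how a current-controlled magnetic field can turn electrical data into light pulses: the field rotates the light's polarization via the Faraday effect, and a polarizer converts that rotation into intensity.

    ```python
    # Toy model of magneto-optic modulation: drive current -> magnetic field
    # -> Faraday rotation of polarization -> intensity after a polarizer
    # (Malus's law). All constants are illustrative, not published values.
    import numpy as np

    VERDET_TIMES_LENGTH = np.pi / 4  # rotation per tesla of field (toy value)
    FIELD_PER_AMP = 1.0              # tesla per ampere of current (toy value)
    BIAS = np.pi / 4                 # polarizer bias angle for max sensitivity

    def transmitted_intensity(current: np.ndarray) -> np.ndarray:
        """Optical power after the polarizer: I = cos^2(bias + rotation)."""
        rotation = VERDET_TIMES_LENGTH * FIELD_PER_AMP * current
        return np.cos(BIAS + rotation) ** 2

    bits = np.array([1, 0, 1, 1, 0, 1])
    current = 0.5 * (2 * bits - 1)           # encode bits as +/-0.5 A pulses
    light = transmitted_intensity(current)   # what travels down the fiber
    decoded = (light < 0.5).astype(int)      # threshold at zero-current level
    print(bits, light.round(2), decoded)     # decoded matches the input bits
    ```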

  • Data science reveals universal rules shaping cells' power stations

    Mitochondria are compartments — so-called “organelles” — in our cells that provide the chemical energy supply we need to move, think, and live. Chloroplasts are organelles in plants and algae that capture sunlight and perform photosynthesis. At first glance, they might look worlds apart. But an international team of researchers, led by the University of Bergen, have used data science and computational biology to show that the same “rules” have shaped how both organelles — and more — have evolved throughout life’s history.
    Both types of organelle were once independent organisms, with their own full genomes. Billions of years ago, those organisms were captured and imprisoned by other cells — the ancestors of modern species. Since then, the organelles have lost most of their genomes, with only a handful of genes remaining in modern-day mitochondrial and chloroplast DNA. These remaining genes are essential for life and important in many devastating diseases, but why they stay in organelle DNA — when so many others have been lost — has been debated for decades.
    For a fresh perspective on this question, the scientists took a data-driven approach. They gathered data on all the organelle DNA that has been sequenced across life. They then used modelling, biochemistry, and structural biology to represent a wide range of different hypotheses about gene retention as a set of numbers associated with each gene. Using tools from data science and statistics, they asked which ideas could best explain the patterns of retained genes in the data they had compiled, testing the results against unseen data to check their predictive power.
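    In spirit, the analysis resembles the following sketch (synthetic data and invented feature names; the study's real features, models, and data are far richer): each gene becomes a feature vector encoding the competing hypotheses, a simple classifier is fit, and its predictions are checked on held-out genes.

    ```python
    # Schematic of the data-driven approach: score each gene on features that
    # encode different retention hypotheses, fit a classifier, and evaluate
    # on unseen data. Everything below is synthetic and purely illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_genes = 300
    # Invented per-gene features: product hydrophobicity, centrality of the
    # subunit in its complex ("middle of the jigsaw"), and binding strength
    # of the encoding nucleotides.
    X = rng.normal(size=(n_genes, 3))
    signal = 1.2 * X[:, 0] + 2.0 * X[:, 1] + 0.7 * X[:, 2]
    y = (signal + rng.normal(size=n_genes) > 0).astype(int)  # 1 = retained

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"held-out AUC: {auc:.2f}")                # power on unseen genes
    print("feature weights:", model.coef_.round(2))  # which hypotheses matter
    ```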
    “Some clear patterns emerged from the modelling,” explains Kostas Giannakis, a postdoctoral researcher at Bergen and joint first author on the paper. “Lots of these genes encode subunits of larger cellular machines, which are assembled like a jigsaw. Genes for the pieces in the middle of the jigsaw are most likely to stay in organelle DNA.”
    The team believe that this is because keeping local control over the production of such central subunits helps the organelle quickly respond to change — a version of the so-called “CoRR” model. They also found support for other existing, debated, and new ideas. For example, if a gene product is hydrophobic — and hard to import into the organelle from outside — the data show that it is often retained there. Genes that are themselves encoded using stronger-binding chemical groups are also more often retained — perhaps because they are more robust in the harsh environment of the organelle.
    “These different hypotheses have usually been thought of as competing in the past,” says Iain Johnston, a professor at Bergen and leader of the team. “But actually no single mechanism can explain all the observations — it takes a combination. A strength of this unbiased, data-driven approach is that it can show that lots of ideas are partly right, but none exclusively so — perhaps explaining the long debate on these topics.”
    To their surprise, the team also found that their models trained to describe mitochondrial genes also predicted the retention of chloroplast genes, and vice versa. They also found that the same genetic features shaping mitochondrial and chloroplast DNA also appear to play a role in the evolution of other endosymbionts — organisms which have been more recently captured by other hosts, from algae to insects.
    “That was a wow moment,” says Johnston. “We — and others — have had this idea that similar pressures might apply to the evolution of different organelles. But to see this universal, quantitative link — data from one organelle precisely predicting patterns in another, and in more recent endosymbionts — was really striking.”
    The research is part of a broader project funded by the European Research Council, and the team are now working on a parallel question — how different organisms maintain the organelle genes that they do retain. Mutations in mitochondrial DNA can cause devastating inherited diseases; the team are using modelling, statistics, and experiments to explore how these mutations are dealt with in humans, plants, and more.
    Story Source:
    Materials provided by The University of Bergen. Note: Content may be edited for style and length.

  • Feeling out of equilibrium in a dual geometric world

    Losing energy is rarely a good thing, but now, researchers in Japan have shown how to extend the applicability of thermodynamics to systems that are not in equilibrium. By encoding the energy dissipation relationships in a geometric way, they were able to cast the physical constraints in a generalized geometric space. This work may significantly improve our understanding of chemical reaction networks, including those that underlie the metabolism and growth of living organisms.
    Thermodynamics is the branch of physics dealing with the processes by which energy is transferred between entities. Its predictions are crucial for both chemistry and biology when determining whether certain chemical reactions, or interconnected networks of reactions, will proceed spontaneously. However, while thermodynamics aims at a general description of macroscopic systems, it often runs into difficulty with systems that are out of equilibrium. Successful attempts to extend the framework to nonequilibrium situations have usually been limited to specific systems and models.
    In two recently published studies, researchers from the Institute of Industrial Science at The University of Tokyo demonstrated that complex nonlinear chemical reaction processes could be described by transforming the problem into a geometrical dual representation. “With our structure, we can extend theories of nonequilibrium systems with quadratic dissipation functions to more general cases, which are important for studying chemical reaction networks,” says first author Tetsuya J. Kobayashi.
    In physics, duality is a central concept. Some physical entities are easier to interpret when transformed into a different, but mathematically equivalent, representation. For example, a wave in the time domain can be transformed into its representation in the frequency domain, which is its dual form. When dealing with chemical processes, thermodynamic force and flux are nonlinearly related dual representations, and their product gives the rate at which energy is lost to dissipation. In a geometric space induced by this duality, the scientists were able to show how thermodynamic relationships can be generalized to nonequilibrium cases.
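    In symbols (our notation, sketching only the standard setup the researchers generalize): the dissipation rate is the pairing of force and flux, the quadratic case gives a linear force-flux relation, and the Legendre-Fenchel transform supplies the dual description in the general, nonlinear case.

    ```latex
    % Dissipation rate as the pairing of thermodynamic force F and flux J:
    \sigma = \langle F, J \rangle \ge 0 .
    % Quadratic dissipation function (the classical, linear-response case):
    \Phi(J) = \tfrac{1}{2} \langle J, M J \rangle
    \quad\Longrightarrow\quad F = \nabla_J \Phi(J) = M J .
    % General case: the Legendre--Fenchel transform defines the dual
    % function, and force and flux remain dual even when nonlinearly related,
    % as in chemical reaction networks:
    \Phi^{*}(F) = \sup_{J} \bigl( \langle F, J \rangle - \Phi(J) \bigr),
    \qquad J = \nabla_F \Phi^{*}(F), \qquad F = \nabla_J \Phi(J).
    ```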
    “Most previous studies of chemical reaction networks relied on assumptions about the kinetics of the system. We showed how they can be handled more generally in the nonequilibrium case by employing the duality and associated geometry,” says last author Yuki Sughiyama. Possessing a more universal understanding of thermodynamic systems, and extending the applicability of nonequilibrium thermodynamics to more disciplines, can provide a better vantage point for analyzing or designing complex reaction networks, such as those used in living organisms or industrial manufacturing processes.
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  • Users trust AI as much as humans for flagging problematic content

    Social media users may trust artificial intelligence — AI — as much as human editors to flag hate speech and harmful content, according to researchers at Penn State.
    The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.
    The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
    “There’s this dire need for content moderation on social media and more generally, online media,” said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences. “In traditional media, we have news editors who serve as gatekeepers. But online, the gates are so wide open, and gatekeeping is not necessarily feasible for humans to perform, especially with the volume of information being generated. So, with the industry increasingly moving towards automated solutions, this study looks at the difference between human and automated content moderators, in terms of how people respond to them.”
    Both human and AI editors have advantages and disadvantages. Humans tend to assess more accurately whether content is harmful, such as when it is racist or could potentially provoke self-harm, according to Maria D. Molina, assistant professor of advertising and public relations at Michigan State University, who is first author of the study. People, however, are unable to process the large amounts of content now being generated and shared online.
    On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms to make accurate recommendations and fear that the information could be censored.

  • Beyond AlphaFold: A.I. excels at creating new proteins

    Over the past two years, machine learning has revolutionized protein structure prediction. Now, three papers in Science describe a similar revolution in protein design.
    In the new papers, biologists at the University of Washington School of Medicine show that machine learning can be used to create protein molecules much more accurately and quickly than previously possible. The scientists hope this advance will lead to many new vaccines, treatments, tools for carbon capture, and sustainable biomaterials.
    “Proteins are fundamental across biology, but we know that all the proteins found in every plant, animal, and microbe make up far less than one percent of what is possible. With these new software tools, researchers should be able to find solutions to long-standing challenges in medicine, energy, and technology,” said senior author David Baker, professor of biochemistry at the University of Washington School of Medicine and recipient of a 2021 Breakthrough Prize in Life Sciences.
    Proteins are often referred to as the “building blocks of life” because they are essential for the structure and function of all living things. They are involved in virtually every process that takes place inside cells, including growth, division, and repair. Proteins are made up of long chains of chemicals called amino acids. The sequence of amino acids in a protein determines its three-dimensional shape. This intricate shape is crucial for the protein to function.
    Recently, powerful machine learning algorithms including AlphaFold and RoseTTAFold have been trained to predict the detailed shapes of natural proteins based solely on their amino acid sequences. Machine learning is a type of artificial intelligence that allows computers to learn from data without being explicitly programmed. Machine learning can be used to model complex scientific problems that are too difficult for humans to understand.
    To go beyond the proteins found in nature, Baker’s team members broke down the challenge of protein design into three parts and used new software solutions for each.
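    One strand of this work, often called “hallucination,” inverts a structure predictor: start from a random amino-acid sequence and keep mutations that make the network more confident in a folded result. A minimal sketch of that loop follows; the scoring function here is a deterministic stand-in, whereas the real pipelines score candidates with networks such as AlphaFold or RoseTTAFold.

    ```python
    # Minimal "hallucination"-style loop: random sequence, random point
    # mutations, accept changes a confidence score likes (Metropolis
    # criterion). The scorer is a stand-in for a structure predictor.
    import math
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def confidence(seq: str) -> float:
        """Placeholder in [0, 1] for a structure predictor's confidence."""
        return random.Random(seq).random()  # deterministic per sequence

    def hallucinate(length: int = 60, steps: int = 2000,
                    temp: float = 0.02) -> str:
        seq = "".join(random.choice(AMINO_ACIDS) for _ in range(length))
        score = confidence(seq)
        for _ in range(steps):
            pos = random.randrange(length)
            mutant = seq[:pos] + random.choice(AMINO_ACIDS) + seq[pos + 1:]
            new_score = confidence(mutant)
            # Always accept improvements; occasionally accept small
            # regressions so the search can escape local optima.
            if (new_score > score
                    or random.random() < math.exp((new_score - score) / temp)):
                seq, score = mutant, new_score
        return seq

    designed = hallucinate()
    print(designed, f"confidence={confidence(designed):.3f}")
    ```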

  • 'Digital mask' could protect patients' privacy in medical records

    Scientists have created a ‘digital mask’ that will allow facial images to be stored in medical records while preventing potentially sensitive personal biometric information from being extracted and shared.
    In research published today in Nature Medicine, a team led by scientists from the University of Cambridge and Sun Yat-sen University in Guangzhou, China, used three-dimensional (3D) reconstruction and deep learning algorithms to erase identifiable features from facial images while retaining disease-relevant features needed for diagnosis.
    Facial images can be useful for identifying signs of disease. For example, features such as deep forehead wrinkles and wrinkles around the eyes are significantly associated with coronary heart disease, while abnormal changes in eye movement can indicate poor visual function and visual cognitive developmental problems. However, facial images also inevitably record other biometric information about the patient, including their race, sex, age and mood.
    With the increasing digitalisation of medical records comes the risk of data breaches. While most patient data can be anonymised, facial data is more difficult to anonymise while retaining essential information. Common methods, including blurring and cropping identifiable areas, may lose important disease-relevant information, and even then cannot fully evade face-recognition systems.
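    For reference, the simplest of those common methods takes a few lines of OpenCV (the filename below is a placeholder): the heavier the blur, the less identifiable the face, but also the less diagnostic detail, such as wrinkles and the eye region, survives.

    ```python
    # Baseline anonymisation the digital mask improves on: detect the face
    # and blur it. Identity is hidden only as far as the blur is strong, and
    # fine, disease-relevant features are lost along with it.
    import cv2

    img = cv2.imread("patient_photo.jpg")  # placeholder filename
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        face = img[y:y + h, x:x + w]
        img[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)

    cv2.imwrite("patient_photo_blurred.jpg", img)
    ```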
    Due to privacy concerns, people often hesitate to share their medical data for public medical research or electronic health records, hindering the development of digital medical care.
    Professor Haotian Lin from Sun Yat-sen University said: “During the COVID-19 pandemic, we had to turn to consultations over the phone or by video link rather than in person. Remote healthcare for eye diseases requires patients to share a large amount of digital facial information. Patients want to know that their potentially sensitive information is secure and that their privacy is protected.”
    Professor Lin and colleagues developed a ‘digital mask’: using a deep learning algorithm and 3D reconstruction, it takes an original video of a patient’s face as input and outputs a video that discards as much of the patient’s personal biometric information as possible, so that the individual cannot be identified from it.

  • New tool overcomes major hurdle in clinical AI design

    Harvard Medical School scientists and colleagues at Stanford University have developed an artificial intelligence diagnostic tool that can detect diseases on chest X-rays directly from natural-language descriptions contained in accompanying clinical reports.
    The work is considered a major advance in clinical AI design because most current AI models require laborious human annotation of vast reams of data before the labeled data are fed into the model to train it.
    A report on the work, published Sept. 15 in Nature Biomedical Engineering, shows that the model, called CheXzero, performed on par with human radiologists in its ability to detect pathologies on chest X-rays.
    The team has made the code for the model publicly available for other researchers.
    Most AI models require labeled datasets during their “training” so they can learn to correctly identify pathologies. This process is especially burdensome for medical image-interpretation tasks since it involves large-scale annotation by human clinicians, which is often expensive and time-consuming. For instance, to label a chest X-ray dataset, expert radiologists would have to look at hundreds of thousands of X-ray images one by one and explicitly annotate each one with the conditions detected. While more recent AI models have tried to address this labeling bottleneck by learning from unlabeled data in a “pre-training” stage, they eventually require fine-tuning on labeled data to achieve high performance.
    By contrast, the new model is self-supervised, in the sense that it learns more independently, without the need for hand-labeled data before or after training. The model relies solely on chest X-rays and the English-language notes found in accompanying X-ray reports.
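    CheXzero builds on contrastive language-image pre-training, in which each X-ray is pulled toward the embedding of its own report and pushed away from the other reports in a batch. A compressed sketch of that objective follows (random tensors stand in for encoder outputs; see the authors' released code for the real model).

    ```python
    # CLIP-style contrastive objective: the i-th image embedding should match
    # the i-th report embedding and no other. Random tensors stand in for
    # encoder outputs; the released CheXzero code contains the actual model.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = image_emb @ text_emb.t() / temperature  # pairwise similarity
        targets = torch.arange(len(logits))   # matching pairs on the diagonal
        # Symmetric cross-entropy: images -> reports and reports -> images.
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

    image_emb = torch.randn(8, 512)  # stand-in for X-ray encoder outputs
    text_emb = torch.randn(8, 512)   # stand-in for report encoder outputs
    print(contrastive_loss(image_emb, text_emb))
    ```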