More stories

  • The magneto-optic modulator

    Many state-of-the-art technologies work at incredibly low temperatures. Superconducting microprocessors and quantum computers promise to revolutionize computation, but scientists need to keep them just above absolute zero (-459.67° Fahrenheit) to protect their delicate states. Still, ultra-cold components have to interface with room temperature systems, providing both a challenge and an opportunity for engineers.
    An international team of scientists, led by UC Santa Barbara’s Paolo Pintus, has designed a device to help cryogenic computers talk with their fair-weather counterparts. The mechanism uses a magnetic field to convert data from electrical current to pulses of light. The light can then travel via fiber-optic cables, which can transmit more information than regular electrical cables while minimizing the heat that leaks into the cryogenic system. The team’s results appear in the journal Nature Electronics.
    “A device like this could enable seamless integration with cutting-edge technologies based on superconductors, for example,” said Pintus, a project scientist in UC Santa Barbara’s Optoelectronics Research Group. Superconductors can carry electrical current without any energy loss, but typically require temperatures below -450° Fahrenheit to work properly.
    Right now, cryogenic systems use standard metal wires to connect with room-temperature electronics. Unfortunately, these wires transfer heat into the cold circuits and can only transmit a small amount of data at a time.
    Pintus and his collaborators wanted to address both these issues at once. “The solution is using light in an optical fiber to transfer information instead of using electrons in a metal cable,” he said.
    Fiber optics are standard in modern telecommunications. These thin glass cables carry information as pulses of light far faster than metal wires can carry electrical charges. As a result, fiber-optic cables can relay 1,000 times more data than conventional wires over the same time span. And glass is a good insulator, meaning it will transfer far less heat to the cryogenic components than a metal wire.

  • Data science reveals universal rules shaping cells' power stations

    Mitochondria are compartments — so-called “organelles” — in our cells that provide the chemical energy supply we need to move, think, and live. Chloroplasts are organelles in plants and algae that capture sunlight and perform photosynthesis. At first glance, they might look worlds apart. But an international team of researchers, led by the University of Bergen, have used data science and computational biology to show that the same “rules” have shaped how both organelles — and more — have evolved throughout life’s history.
    Both types of organelle were once independent organisms, with their own full genomes. Billions of years ago, those organisms were captured and imprisoned by other cells — the ancestors of modern species. Since then, the organelles have lost most of their genomes, with only a handful of genes remaining in modern-day mitochondrial and chloroplast DNA. These remaining genes are essential for life and important in many devastating diseases, but why they stay in organelle DNA — when so many others have been lost — has been debated for decades.
    For a fresh perspective on this question, the scientists took a data-driven approach. They gathered data on all the organelle DNA that has been sequenced across life. They then used modelling, biochemistry, and structural biology to represent a wide range of different hypotheses about gene retention as a set of numbers associated with each gene. Using tools from data science and statistics, they asked which ideas could best explain the patterns of retained genes in the data they had compiled — testing the results with unseen data to check their power.
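    The flavor of this analysis can be sketched as fitting a predictive model to per-gene features and scoring it on held-out genes. Below is a minimal sketch in Python, with hypothetical features and synthetic labels standing in for the study's far richer biochemical and structural data:

    ```python
    # Minimal sketch: predict organelle gene retention from per-gene features.
    # Features and labels here are synthetic stand-ins for the study's data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_genes = 500

    # Hypothetical per-gene features: hydrophobicity of the gene product,
    # centrality in the assembled complex ("middle of the jigsaw"), and
    # binding strength of the encoding nucleotides.
    X = rng.normal(size=(n_genes, 3))
    # Synthetic retention labels, just so the example runs end to end.
    y = (X @ np.array([1.0, 1.5, 0.8]) + rng.normal(size=n_genes) > 0).astype(int)

    # Hold out unseen genes to test predictive power, as the study did.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    print("feature weights:", model.coef_)  # which hypotheses carry weight
    ```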
    “Some clear patterns emerged from the modelling,” explains Kostas Giannakis, a postdoctoral researcher at Bergen and joint first author on the paper. “Lots of these genes encode subunits of larger cellular machines, which are assembled like a jigsaw. Genes for the pieces in the middle of the jigsaw are most likely to stay in organelle DNA.”
    The team believe that this is because keeping local control over the production of such central subunits helps the organelle respond quickly to change — a version of the so-called “CoRR” model. They also found support for other existing, debated, and new ideas. For example, if a gene product is hydrophobic — and hard to import to the organelle from outside — the data shows that it is often retained there. Genes that are themselves encoded using stronger-binding chemical groups are also more often retained — perhaps because they are more robust in the harsh environment of the organelle.
    “These different hypotheses have usually been thought of as competing in the past,” says Iain Johnston, a professor at Bergen and leader of the team. “But actually no single mechanism can explain all the observations — it takes a combination. A strength of this unbiased, data-driven approach is that it can show that lots of ideas are partly right, but none exclusively so — perhaps explaining the long debate on these topics.”
    To their surprise, the team also found that the models they trained to describe mitochondrial genes also predicted the retention of chloroplast genes, and vice versa. The same genetic features shaping mitochondrial and chloroplast DNA also appear to play a role in the evolution of other endosymbionts — organisms that have been more recently captured by other hosts, from algae to insects.
    “That was a wow moment,” says Johnston. “We — and others — have had this idea that similar pressures might apply to the evolution of different organelles. But to see this universal, quantitative link — data from one organelle precisely predicting patterns in another, and in more recent endosymbionts — was really striking.”
    The research is part of a broader project funded by the European Research Council, and the team are now working on a parallel question — how different organisms maintain the organelle genes that they do retain. Mutations in mitochondrial DNA can cause devastating inherited diseases; the team are using modelling, statistics, and experiments to explore how these mutations are dealt with in humans, plants, and more.
    Story Source:
    Materials provided by the University of Bergen.

  • Feeling out of equilibrium in a dual geometric world

    Losing energy is rarely a good thing, but now, researchers in Japan have shown how to extend the applicability of thermodynamics to systems that are not in equilibrium. By encoding the energy dissipation relationships in a geometric way, they were able to cast the physical constraints in a generalized geometric space. This work may significantly improve our understanding of chemical reaction networks, including those that underlie the metabolism and growth of living organisms.
    Thermodynamics is the branch of physics dealing with the processes by which energy is transferred between entities. Its predictions are crucial for both chemistry and biology when determining if certain chemical reactions, or interconnected networks of reactions, will proceed spontaneously. However, while thermodynamics aims at a general description of macroscopic systems, it often runs into difficulty with systems that are out of equilibrium. Successful attempts to extend the framework to nonequilibrium situations have usually been limited to specific systems and models.
    In two recently published studies, researchers from the Institute of Industrial Science at The University of Tokyo demonstrated that complex nonlinear chemical reaction processes could be described by transforming the problem into a geometrical dual representation. “With our structure, we can extend theories of nonequilibrium systems with quadratic dissipation functions to more general cases, which are important for studying chemical reaction networks,” says first author Tetsuya J. Kobayashi.
    In physics, duality is a central concept. Some physical entities are easier to interpret when transformed into a different, but mathematically equivalent, representation. For example, a wave in the time domain can be transformed into its representation in the frequency domain, which is its dual form. When dealing with chemical processes, thermodynamic force and flux are the nonlinearly related dual representations; their product gives the rate at which energy is lost to dissipation. In a geometric space induced by this duality, the scientists were able to show how thermodynamic relationships can be generalized to nonequilibrium cases.
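    As textbook illustrations of the two dualities described here (standard forms, not the papers' generalized geometric construction):

    ```latex
    % Fourier duality: a time-domain signal f(t) and its
    % frequency-domain representation (its dual form)
    \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, \mathrm{d}t

    % Entropy production (dissipation) rate as the product of
    % thermodynamic fluxes J_i and their dual forces X_i
    \dot{\Sigma} = \sum_i J_i X_i \geq 0
    ```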
    “Most previous studies of chemical reaction networks relied on assumptions about the kinetics of the system. We showed how they can be handled more generally in the nonequilibrium case by employing the duality and associated geometry,” says last author Yuki Sughiyama. Possessing a more universal understanding of thermodynamic systems, and extending the applicability of nonequilibrium thermodynamics to more disciplines, can provide a better vantage point for analyzing or designing complex reaction networks, such as those used in living organisms or industrial manufacturing processes.
    Story Source:
    Materials provided by the Institute of Industrial Science, The University of Tokyo.

  • Users trust AI as much as humans for flagging problematic content

    Social media users may trust artificial intelligence — AI — as much as human editors to flag hate speech and harmful content, according to researchers at Penn State.
    The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.
    The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
    “There’s this dire need for content moderation on social media and more generally, online media,” said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences. “In traditional media, we have news editors who serve as gatekeepers. But online, the gates are so wide open, and gatekeeping is not necessarily feasible for humans to perform, especially with the volume of information being generated. So, with the industry increasingly moving towards automated solutions, this study looks at the difference between human and automated content moderators, in terms of how people respond to them.”
    Both human and AI editors have advantages and disadvantages. Humans tend to more accurately assess whether content is harmful, such as when it is racist or could potentially provoke self-harm, according to Maria D. Molina, assistant professor of advertising and public relations at Michigan State, who is first author of the study. People, however, are unable to process the large amounts of content now being generated and shared online.
    On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms’ ability to make accurate recommendations and fear that the information could be censored.

  • Beyond AlphaFold: A.I. excels at creating new proteins

    Over the past two years, machine learning has revolutionized protein structure prediction. Now, three papers in Science describe a similar revolution in protein design.
    In the new papers, biologists at the University of Washington School of Medicine show that machine learning can be used to create protein molecules much more accurately and quickly than previously possible. The scientists hope this advance will lead to many new vaccines, treatments, tools for carbon capture, and sustainable biomaterials.
    “Proteins are fundamental across biology, but we know that all the proteins found in every plant, animal, and microbe make up far less than one percent of what is possible. With these new software tools, researchers should be able to find solutions to long-standing challenges in medicine, energy, and technology,” said senior author David Baker, professor of biochemistry at the University of Washington School of Medicine and recipient of a 2021 Breakthrough Prize in Life Sciences.
    Proteins are often referred to as the “building blocks of life” because they are essential for the structure and function of all living things. They are involved in virtually every process that takes place inside cells, including growth, division, and repair. Proteins are made up of long chains of chemicals called amino acids. The sequence of amino acids in a protein determines its three-dimensional shape. This intricate shape is crucial for the protein to function.
    Recently, powerful machine learning algorithms including AlphaFold and RoseTTAFold have been trained to predict the detailed shapes of natural proteins based solely on their amino acid sequences. Machine learning is a type of artificial intelligence that allows computers to learn from data without being explicitly programmed. Machine learning can be used to model complex scientific problems that are too difficult for humans to understand.
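    As a toy illustration of the kind of numeric input such sequence-based models consume (not AlphaFold's or RoseTTAFold's actual input pipeline), an amino acid sequence can be one-hot encoded for a learning algorithm:

    ```python
    # Toy sketch: turn an amino acid sequence into a numeric array, the
    # kind of representation a structure-prediction network might consume.
    # Illustrative only; not the real AlphaFold/RoseTTAFold featurization.
    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
    AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def one_hot_encode(sequence: str) -> np.ndarray:
        """One-hot encode a protein sequence as a (length, 20) array."""
        encoding = np.zeros((len(sequence), len(AMINO_ACIDS)))
        for pos, aa in enumerate(sequence):
            encoding[pos, AA_INDEX[aa]] = 1.0
        return encoding

    # Example: a short hypothetical peptide.
    print(one_hot_encode("MKTAYIA").shape)  # -> (7, 20)
    ```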
    To go beyond the proteins found in nature, Baker’s team members broke down the challenge of protein design into three parts and used new software solutions for each.

  • 'Digital mask' could protect patients' privacy in medical records

    Scientists have created a ‘digital mask’ that will allow facial images to be stored in medical records while preventing potentially sensitive personal biometric information from being extracted and shared.
    In research published today in Nature Medicine, a team led by scientists from the University of Cambridge and Sun Yat-sen University in Guangzhou, China, used three-dimensional (3D) reconstruction and deep learning algorithms to erase identifiable features from facial images while retaining disease-relevant features needed for diagnosis.
    Facial images can be useful for identifying signs of disease. For example, features such as deep forehead wrinkles and wrinkles around the eyes are significantly associated with coronary heart disease, while abnormal changes in eye movement can indicate poor visual function and visual cognitive developmental problems. However, facial images also inevitably record other biometric information about the patient, including their race, sex, age and mood.
    With the increasing digitalisation of medical records comes the risk of data breaches. While most patient data can be anonymised, facial data is more difficult to anonymise while retaining essential information. Common methods, including blurring and cropping identifiable areas, can lose important disease-relevant information, yet even then cannot fully evade face recognition systems.
    Due to privacy concerns, people often hesitate to share their medical data for public medical research or electronic health records, hindering the development of digital medical care.
    Professor Haotian Lin from Sun Yat-sen University said: “During the COVID-19 pandemic, we had to turn to consultations over the phone or by video link rather than in person. Remote healthcare for eye diseases requires patients to share a large amount of digital facial information. Patients want to know that their potentially sensitive information is secure and that their privacy is protected.”
    Professor Lin and colleagues developed a ‘digital mask’ that takes an original video of a patient’s face as input and, using a deep learning algorithm and 3D reconstruction, outputs a video that discards as much of the patient’s personal biometric information as possible — one from which it is not possible to identify the individual.

  • New tool overcomes major hurdle in clinical AI design

    Harvard Medical School scientists and colleagues at Stanford University have developed an artificial intelligence diagnostic tool that can detect diseases on chest X-rays directly from natural-language descriptions contained in accompanying clinical reports.
    The step is deemed a major advance in clinical AI design because most current AI models require laborious human annotation of vast reams of data before the labeled data are fed into the model to train it.
    A report on the work, published Sept. 15 in Nature Biomedical Engineering, shows that the model, called CheXzero, performed on par with human radiologists in its ability to detect pathologies on chest X-rays.
    The team has made the code for the model publicly available for other researchers.
    Most AI models require labeled datasets during their “training” so they can learn to correctly identify pathologies. This process is especially burdensome for medical image-interpretation tasks since it involves large-scale annotation by human clinicians, which is often expensive and time-consuming. For instance, to label a chest X-ray dataset, expert radiologists would have to look at hundreds of thousands of X-ray images one by one and explicitly annotate each one with the conditions detected. While more recent AI models have tried to address this labeling bottleneck by learning from unlabeled data in a “pre-training” stage, they eventually require fine-tuning on labeled data to achieve high performance.
    By contrast, the new model is self-supervised, in the sense that it learns more independently, without the need for hand-labeled data before or after training. The model relies solely on chest X-rays and the English-language notes found in accompanying X-ray reports.
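    The underlying idea resembles contrastive image-text pre-training: pull matching X-ray/report pairs together in a shared embedding space and push mismatched pairs apart. A minimal sketch under that assumption follows; the encoders and dimensions are hypothetical placeholders, not CheXzero's released code:

    ```python
    # Sketch of contrastive image-report pre-training (CLIP-style), the
    # general family of methods this model belongs to. Encoders here are
    # placeholders, not the actual architecture.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
        """Symmetric InfoNCE loss over a batch of matched image/text pairs."""
        img_emb = F.normalize(img_emb, dim=-1)
        txt_emb = F.normalize(txt_emb, dim=-1)
        logits = img_emb @ txt_emb.t() / temperature   # pairwise similarities
        targets = torch.arange(len(logits))            # i-th image matches i-th report
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    # Example with random embeddings standing in for encoder outputs.
    batch, dim = 8, 512
    img_emb = torch.randn(batch, dim)  # from an image encoder (placeholder)
    txt_emb = torch.randn(batch, dim)  # from a text encoder (placeholder)
    print(contrastive_loss(img_emb, txt_emb))
    ```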

  • Talk with your hands? You might think with them too!

    How do we understand words? Scientists don’t fully understand what happens when a word pops into your brain. A research group led by Professor Shogo Makioka at the Graduate School of Sustainable System Sciences, Osaka Metropolitan University, wanted to test the idea of embodied cognition. Embodied cognition proposes that people understand the words for objects through how they interact with them, so the researchers devised a test to observe semantic processing of words when participants’ ability to interact with the corresponding objects was limited.
    Words are expressed in relation to other words; a “cup,” for example, can be a “container, made of glass, used for drinking.” However, you can only use a cup if you understand that to drink from a cup of water, you hold it in your hand and bring it to your mouth, or that if you drop the cup, it will smash on the floor. Without understanding this, it would be difficult to create a robot that can handle a real cup. In artificial intelligence research, this is known as the symbol grounding problem: the challenge of mapping symbols onto the real world.
    How do humans achieve symbol grounding? Cognitive psychology and cognitive science propose the concept of embodied cognition, where objects are given meaning through interactions with the body and the environment.
    To test embodied cognition, the researchers conducted experiments to see how the participants’ brains responded to words that describe objects that can be manipulated by hand, when the participants’ hands could move freely compared to when they were restrained.
    “It was very difficult to establish a method for measuring and analyzing brain activity. The first author, Ms. Sae Onishi, worked persistently to come up with a task, in a way that we were able to measure brain activity with sufficient accuracy,” Professor Makioka explained.
    In the experiment, two words such as “cup” and “broom” were presented to participants on a screen. They were asked to compare the relative sizes of the objects those words represented and to verbally answer which object was larger — in this case, “broom.” Comparisons were made between words describing two types of objects: hand-manipulable objects, such as “cup” or “broom,” and nonmanipulable objects, such as “building” or “lamppost,” to observe how each type was processed.
    During the tests, the participants placed their hands on a desk, where they were either free or restrained by a transparent acrylic plate. When the two words were presented on the screen, to answer which one represented a larger object, the participants needed to think of both objects and compare their sizes, forcing them to process each word’s meaning.
    Brain activity was measured with functional near-infrared spectroscopy (fNIRS), which has the advantage of taking measurements without imposing further physical constraints. The measurements focused on the intraparietal sulcus and the inferior parietal lobule (supramarginal gyrus and angular gyrus) of the left brain, which are responsible for semantic processing related to tools. The speed of the verbal response was measured to determine how quickly the participant answered after the words appeared on the screen.
    The results showed that the activity of the left brain in response to hand-manipulable objects was significantly reduced by hand restraints. Verbal responses were also affected by hand constraints. These results indicate that constraining hand movement affects the processing of object meaning, which supports the idea of embodied cognition. They also suggest that embodied cognition could help artificial intelligence learn the meaning of objects. The paper was published in Scientific Reports.
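    As a rough illustration of the kind of paired comparison described above, here is a minimal sketch with synthetic response-time data (the study's actual measurements and statistics are in the paper):

    ```python
    # Sketch: paired comparison of verbal response times, free vs restrained
    # hands, using synthetic data standing in for the real measurements.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_participants = 20

    # Hypothetical per-participant mean response times (seconds), with a
    # small slowdown under restraint for manipulable-object words.
    rt_free = rng.normal(1.20, 0.15, n_participants)
    rt_restrained = rt_free + rng.normal(0.05, 0.05, n_participants)

    # Paired t-test: did restraint change response times within participants?
    t_stat, p_value = stats.ttest_rel(rt_restrained, rt_free)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    ```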
    Story Source:
    Materials provided by Osaka Metropolitan University.