More stories

  • Most complex protein knots

    Theoretical physicists at Johannes Gutenberg University Mainz have put Google’s artificial intelligence AlphaFold to the test and, in the process, found the most complex protein knots identified to date.
    The question of how the chemical composition of a protein, the amino acid sequence, determines its 3D structure has been one of the biggest challenges in biophysics for more than half a century. This knowledge about the so-called “folding” of proteins is in great demand, as it contributes significantly to the understanding of various diseases and their treatment, among other things. For these reasons, Google’s DeepMind research team has developed AlphaFold, an artificial intelligence that predicts 3D structures.
    A team consisting of researchers from Johannes Gutenberg University Mainz (JGU) and the University of California, Los Angeles, has now taken a closer look at these structures and examined them with respect to knots. We know knots primarily from shoelaces and cables, but they also occur on the nanoscale in our cells. Knotted proteins can not only be used to assess the quality of structure predictions but also raise important questions about folding mechanisms and the evolution of proteins.
    The most complex knots as a test for AlphaFold
    “We investigated numerically all — that is some 100,000 — predictions of AlphaFold for new protein knots,” said Maarten A. Brems, a PhD student in the group of Dr. Peter Virnau at Mainz University. The goal was to identify rare, high-quality structures containing complex and previously unknown protein knots to provide a basis for experimental verification of AlphaFold’s predictions. The study not only discovered the most complex knotted protein to date but also the first composite knots in proteins. The latter can be thought of as two separate knots on the same string. “These new discoveries also provide insight into the evolutionary mechanisms behind such rare proteins,” added Robert Runkel, a theoretical physicist also involved in the project. The results of this study were recently published in Protein Science.
    Dr. Peter Virnau is pleased with the results: “We have already established a collaboration with our colleague Todd Yeates from UCLA to confirm these structures experimentally. This line of research will shape the biophysics community’s view of artificial intelligence — and we are fortunate to have an expert like Dr. Yeates involved.”
    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  • Virtual reality app trial shown to reduce common phobias

    Results from a University of Otago, Christchurch trial offer fresh hope for the estimated one in twelve people worldwide who suffer from a fear of flying, needles, heights, spiders and dogs.
    The trial, led by Associate Professor Cameron Lacey, from the Department of Psychological Medicine, studied phobia patients using a headset and a smartphone app treatment programme — a combination of Virtual Reality (VR) 360-degree video exposure therapy and cognitive behavioural therapy (CBT).
    Participants downloaded a fully self-guided smartphone app called “oVRcome,” developed by Christchurch tech entrepreneur Adam Hutchinson, aimed at treating patients with phobia and anxiety.
    The app was paired with a headset to immerse participants in virtual environments to help treat their phobia.
    The results from the trial, just published in the Australian and New Zealand Journal of Psychiatry, showed a 75 per cent reduction in phobia symptoms after six weeks of the treatment programme.
    “The improvements they reported suggest there’s great potential for the use of VR and mobile phone apps as a means of self-guided treatment for people struggling with often-crippling phobias,” Associate Professor Lacey says.

  • Machine learning identifies gun purchasers at risk of suicide

    A new study from the Violence Prevention Research Program (VPRP) at UC Davis suggests machine learning, a type of artificial intelligence, may help identify handgun purchasers who are at high risk of suicide. It also identified individual and community characteristics that are predictive of firearm suicide. The study was published in JAMA Network Open.
    Previous research has shown the risk of suicide is particularly high immediately after purchase, suggesting that acquisition itself is an indicator of elevated suicide risk.
    Risk factors identified by the algorithm to be predictive of firearm suicide included: older age, first-time firearm purchaser, white race, living in close proximity to the gun dealer, and purchasing a revolver.
    “While limiting access to firearms among individuals at increased risk for suicide presents a critical opportunity to save lives, accurately identifying those at risk remains a key challenge. Our results suggest the potential utility of handgun records in identifying high-risk individuals to aid suicide prevention,” said Hannah S. Laqueur, an assistant professor in the Department of Emergency Medicine and lead author of the study.
    In 2020, almost 48,000 Americans died by suicide, and more than 24,000 of those deaths were firearm suicides. Firearms are by far the most lethal method of suicide. Access to firearms has been identified as a major risk factor for suicide and is a potential focus for suicide prevention.
    Methodology
    To see if an algorithm could identify gun purchasers at risk of firearm suicide, the researchers looked at data from almost five million firearm transactions from the California Dealer Record of Sale database (DROS). The records, which spanned from 1996 to 2015, represented almost two million individuals. They also looked at firearm suicide data from California death records between 1996 and 2016.
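    For readers curious what this kind of analysis looks like in practice, the sketch below trains a simple classifier on synthetic purchase records and scores held-out purchasers by predicted risk. It is only an illustration of the general approach: the feature names, the synthetic outcome labels, and the choice of a random forest are assumptions, not the study’s actual variables or model.

        # Illustrative only: a toy risk classifier on synthetic purchase records.
        # Feature names, labels, and model choice are assumptions, not the study's pipeline.
        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 10_000
        records = pd.DataFrame({
            "purchaser_age": rng.integers(21, 85, n),           # hypothetical features
            "first_time_purchaser": rng.integers(0, 2, n),
            "distance_to_dealer_km": rng.exponential(15.0, n),
            "is_revolver": rng.integers(0, 2, n),
        })
        # Synthetic stand-in for a rare outcome ("firearm suicide during follow-up").
        base = 0.005 + 0.02 * (records["purchaser_age"] > 60) + 0.01 * records["is_revolver"]
        outcome = rng.random(n) < base

        X_tr, X_te, y_tr, y_te = train_test_split(records, outcome, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))

    In a real setting, ranking purchasers by the predicted probability would be the step that flags a small high-risk group for prevention outreach, which is the use the authors describe.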

  • Researchers use quantum-inspired approach to increase lidar resolution

    Researchers have shown that a quantum-inspired technique can be used to perform lidar imaging with a much higher depth resolution than is possible with conventional approaches. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is usually best suited for imaging large objects such as topographical features or built structures due to its limited depth resolution.
    “Although lidar can be used to image the overall shape of a person, it typically doesn’t capture finer details such as facial features,” said research team leader Ashley Lyons from the University of Glasgow in the United Kingdom. “By adding extra depth resolution, our approach could capture enough detail to not only see facial features but even someone’s fingerprints.”
    In the Optica Publishing Group journal Optics Express, Lyons and first author Robbie Murray describe the new technique, which they call imaging two-photon interference lidar. They show that it can distinguish reflective surfaces less than 2 millimeters apart and create high-resolution 3D images with micron-scale resolution.
    “This work could lead to much higher resolution 3D imaging than is possible now, which could be useful for facial recognition and tracking applications that involve small features,” said Lyons. “For practical use, conventional lidar could be used to get a rough idea of where an object might be and then the object could be carefully measured with our method.”
    Using classically entangled light
    The new technique uses “quantum inspired” interferometry, which extracts information from the way that two light beams interfere with each other. Entangled pairs of photons — or quantum light — are often used for this type of interferometry, but approaches based on photon entanglement tend to perform poorly in situations with high levels of light loss, which is almost always the case for lidar. To overcome this problem, the researchers applied what they’ve learned from quantum sensing to classical (non-quantum) light.
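    As a toy numerical illustration of why interference helps (not the authors’ analysis), the sketch below models a two-photon-interference-style dip in coincidence counts as a function of a scanned delay and reads a surface-depth offset from the dip position. The pulse width, 50% visibility (the classical limit), noise level, and reflection geometry used here are assumed values.

        # Toy model: locate an interference dip and convert its delay to a depth offset.
        import numpy as np

        C = 3.0e8             # speed of light, m/s
        SIGMA = 100e-15       # assumed correlation/pulse width, s

        def dip(delay, true_delay, visibility=0.5):
            """Coincidence rate vs. scanned delay; ~50% visibility is the classical limit."""
            return 1.0 - visibility * np.exp(-((delay - true_delay) / SIGMA) ** 2)

        true_depth = 0.15e-3                       # assumed 0.15 mm offset between surfaces
        true_delay = 2 * true_depth / C            # round-trip delay difference
        delays = np.linspace(-2e-12, 2e-12, 4001)  # scanned reference delays, s
        counts = dip(delays, true_delay) + np.random.default_rng(1).normal(0, 0.01, delays.size)

        est_delay = delays[np.argmin(counts)]      # crude estimate: position of the minimum
        print(f"estimated offset: {C * est_delay / 2 * 1e3:.3f} mm")

    Even with a dip roughly 100 femtoseconds wide, its minimum can be located to a small fraction of that width, which is the basic reason interference-based schemes can resolve depth far below the pulse-length limit of conventional time-of-flight lidar.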

  • Researchers develop computer model to predict whether a pesticide will harm bees

    Researchers in the Oregon State University College of Engineering have harnessed the power of artificial intelligence to help protect bees from pesticides.
    Cory Simon, assistant professor of chemical engineering, and Xiaoli Fern, associate professor of computer science, led the project, which involved training a machine learning model to predict whether any proposed new herbicide, fungicide or insecticide would be toxic to honey bees based on the compound’s molecular structure.
    The findings, featured on the cover of The Journal of Chemical Physics in a special issue, “Chemical Design by Artificial Intelligence,” are important because many fruit, nut, vegetable and seed crops rely on bee pollination.
    Without bees to transfer the pollen needed for reproduction, almost 100 commercial crops in the United States would vanish. Bees’ global economic impact is estimated to exceed $100 billion annually.
    “Pesticides are widely used in agriculture, which increase crop yield and provide food security, but pesticides can harm off-target species like bees,” Simon said. “And since insects, weeds, etc. eventually evolve resistance, new pesticides must continually be developed, ones that don’t harm bees.”
    Graduate students Ping Yang and Adrian Henle used honey bee toxicity data from pesticide exposure experiments, involving nearly 400 different pesticide molecules, to train an algorithm to predict if a new pesticide molecule would be toxic to honey bees.
    “The model represents pesticide molecules by the set of random walks on their molecular graphs,” Yang said.
    A random walk is a mathematical concept that describes any meandering path, such as on the complicated chemical structure of a pesticide, where each step along the path is decided by chance, as if by coin tosses.
    Imagine, Yang explains, that you’re out for an aimless stroll along a pesticide’s chemical structure, making your way from atom to atom via the bonds that hold the compound together. You travel in random directions but keep track of your route, the sequence of atoms and bonds that you visit. Then you go out on a different molecule, comparing the series of twists and turns to what you’ve done before.
    “The algorithm declares two molecules similar if they share many walks with the same sequence of atoms and bonds,” Yang said. “Our model serves as a surrogate for a bee toxicity experiment and can be used to quickly screen proposed pesticide molecules for their toxicity.”
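    The quoted description maps naturally onto a small piece of code. The sketch below is a toy version of the idea, not the authors’ model: it samples fixed-length random walks on two tiny hand-built molecular graphs and scores similarity by how many atom-and-bond label sequences the walks share.

        # Toy illustration of random-walk similarity on molecular graphs (not the study's model).
        import random

        def sample_walks(atoms, bonds, n_walks=2000, length=4, seed=0):
            """Sample fixed-length random walks; record their atom/bond label sequences."""
            rng = random.Random(seed)
            neighbors = {i: [] for i in range(len(atoms))}
            for (i, j), label in bonds.items():
                neighbors[i].append((j, label))
                neighbors[j].append((i, label))
            walks = set()
            for _ in range(n_walks):
                node = rng.randrange(len(atoms))
                seq = [atoms[node]]
                for _ in range(length):
                    nxt, bond_label = rng.choice(neighbors[node])
                    seq += [bond_label, atoms[nxt]]
                    node = nxt
                walks.add(tuple(seq))
            return walks

        # Two tiny hand-built molecular graphs (ethanol and dimethyl ether), for illustration.
        ethanol = (["C", "C", "O"], {(0, 1): "-", (1, 2): "-"})
        ether   = (["C", "O", "C"], {(0, 1): "-", (1, 2): "-"})

        w1, w2 = sample_walks(*ethanol), sample_walks(*ether)
        print(f"shared-walk similarity: {len(w1 & w2) / len(w1 | w2):.2f}")

    In the published work, a kernel built from walk overlaps like this feeds a model that predicts bee toxicity directly from a candidate pesticide’s structure, replacing a slow laboratory assay with a fast screening step.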
    The National Science Foundation supported this research.
    Story Source:
    Materials provided by Oregon State University. Original written by Steve Lundeberg. Note: Content may be edited for style and length.

  • A robot learns to imagine itself

    As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.
    We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published in Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
    Robot watches itself like an infant exploring itself in a hall of mirrors
    The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
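    As a rough sketch of what such a learned self-model might look like, the code below defines a small network that takes a motor (joint) command and a 3D query point and predicts whether the body occupies that point. The architecture, sizes, and synthetic training data here are placeholders; the published model differs in its details.

        # Minimal sketch of a query-based self-model (assumed architecture, not the paper's).
        import torch
        import torch.nn as nn

        class SelfModel(nn.Module):
            def __init__(self, n_joints=4, hidden=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_joints + 3, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, joint_angles, query_xyz):
                # joint_angles: (B, n_joints), query_xyz: (B, 3) -> occupancy probability
                x = torch.cat([joint_angles, query_xyz], dim=-1)
                return torch.sigmoid(self.net(x)).squeeze(-1)

        model = SelfModel()
        loss_fn = nn.BCELoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        # One illustrative training step on synthetic data standing in for camera-derived labels.
        angles = torch.rand(64, 4) * 3.14
        points = torch.rand(64, 3) * 2 - 1
        occupied = (points.norm(dim=-1) < 0.5).float()   # placeholder occupancy labels
        loss = loss_fn(model(angles, points), occupied)
        opt.zero_grad(); loss.backward(); opt.step()
        print(float(loss))

    Once trained on enough (command, point, occupied) examples gathered from the cameras, a model of this kind can be queried densely to recover the volume the body would occupy for any planned motion, which is what makes it usable for planning and obstacle avoidance.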
    “We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.
    Self-modeling robots will lead to more self-reliant autonomous systems
    The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.
    “We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”
    Self-awareness in robots
    The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness. “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”
    The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”

  • Turning white blood cells into medicinal microrobots with light

    Medicinal microrobots could help physicians better treat and prevent diseases. But most of these devices are made with synthetic materials that trigger immune responses in vivo. Now, for the first time, researchers reporting in ACS Central Science have used lasers to precisely control neutrophils — a type of white blood cell — as a natural, biocompatible microrobot in living fish. The “neutrobots” performed multiple tasks, showing they could someday deliver drugs to precise locations in the body.
    Microrobots currently in development for medical applications would require injections or the consumption of capsules to get them inside an animal or person. But researchers have found that these microscopic objects often trigger immune reactions in small animals, resulting in the removal of microrobots from the body before they can perform their jobs. Using cells already present in the body, such as neutrophils, could be a less invasive alternative for drug delivery that wouldn’t set off the immune system. These white blood cells already naturally pick up nanoparticles and dead red blood cells and can migrate through blood vessels into adjacent tissues, so they are good candidates for becoming microrobots. Previously, researchers have guided neutrophils with lasers in lab dishes, moving them around as “neutrobots.” However, information on whether this approach will work in living animals was lacking. So, Xianchuang Zheng, Baojun Li and colleagues wanted to demonstrate the feasibility of light-driven neutrobots in animals using live zebrafish.
    The researchers manipulated and maneuvered neutrophils in zebrafish tails, using focused laser beams as remote optical tweezers. The light-driven microrobots could be moved at velocities of up to 1.3 µm/s, three times faster than a neutrophil naturally moves. In their experiments, the researchers used the optical tweezers to precisely and actively control the functions that neutrophils conduct as part of the immune system. For instance, a neutrobot was moved through a blood vessel wall into the surrounding tissue. Another one picked up and transported a plastic nanoparticle, showing its potential for carrying medicine. And when a neutrobot was pushed toward red blood cell debris, it engulfed the pieces. Surprisingly, at the same time, a different neutrophil, which wasn’t controlled by a laser, tried to naturally remove the cellular debris. Because they successfully controlled neutrobots in vivo, the researchers say this study advances the possibilities for targeted drug delivery and precise treatment of diseases.
    The authors acknowledge funding from the National Natural Science Foundation of China, the Basic and Applied Basic Research Foundation of Guangdong Province, and the Science and Technology Program of Guangzhou.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Incidental pulmonary embolism on chest CT: AI vs. clinical reports

    According to ARRS’ American Journal of Roentgenology (AJR), an AI tool for the detection of incidental pulmonary embolus (iPE) on conventional contrast-enhanced chest CT examinations had a high negative predictive value (NPV) and a moderate positive predictive value (PPV), even finding some iPEs missed by radiologists.
    “Potential applications of the AI tool include serving as a second reader to help detect additional iPEs or as a worklist triage tool to allow earlier iPE detection and intervention,” wrote lead investigator Kiran Batra from the University of Texas Southwestern Medical Center in Dallas. “Various explanations of misclassifications by the AI tool (both false positives and false negatives) were identified, to provide targets for model improvement.”
    Batra and colleagues’ retrospective study included 2,555 patients (1,340 women, 1,215 men; mean age, 53.6 years) who underwent 3,003 conventional contrast-enhanced chest CT examinations between September 2019 and February 2020 at Parkland Health in Dallas, TX. An FDA-approved, commercially available AI tool (Aidoc, New York, NY) was used to detect acute iPE on the images, and a vendor-supplied natural language processing algorithm (RepScheme, Tel Aviv, Israel) was then applied to the clinical reports to identify examinations interpreted as positive for iPE.
    Ultimately, the commercial AI tool had NPV of 99.8% and PPV of 86.7% for detection of iPE on conventional contrast-enhanced chest CT examinations (i.e., not using CT pulmonary angiography protocols). Of 40 iPEs present in the team’s study sample, 7 were detected only by the clinical reports, and 4 were detected only by AI.
    Noting that both the AI tool and clinical reports detected iPEs missed by the other method, “the diagnostic performance of the AI tool did not show significant variation across study subgroups,” the authors of this AJR article added.
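    For readers less familiar with these metrics, PPV and NPV are simple ratios over the confusion matrix: PPV = TP / (TP + FP) and NPV = TN / (TN + FN). The counts in the snippet below are hypothetical, chosen only to show how figures in the reported range arise; they are not the study’s actual confusion matrix.

        # Hypothetical counts, for illustration only (not the study's confusion matrix).
        def ppv_npv(tp, fp, tn, fn):
            return tp / (tp + fp), tn / (tn + fn)

        ppv, npv = ppv_npv(tp=33, fp=5, tn=2958, fn=7)
        print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # PPV ≈ 86.8%, NPV ≈ 99.8%

    Because iPE is rare, even a handful of missed cases barely moves the NPV, while a few false positives noticeably lower the PPV; that asymmetry is why a tool like this is framed as a second reader or triage aid rather than a stand-alone diagnostic.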
    Story Source:
    Materials provided by American Roentgen Ray Society. Note: Content may be edited for style and length.