More stories

  • Using robotic assistance to make colonoscopy kinder and easier

    Scientists have made a breakthrough in their work to develop semi-autonomous colonoscopy, using a robot to guide a medical device into the body.
    The milestone brings closer the prospect of an intelligent robotic system being able to guide instruments to precise locations in the body to take biopsies or allow internal tissues to be examined.
    A doctor or nurse would still be on hand to make clinical decisions but the demanding task of manipulating the device is offloaded to a robotic system.
    The latest findings — ‘Enabling the future of colonoscopy with intelligent and autonomous magnetic manipulation’ — are the culmination of 12 years of research by an international team of scientists led by the University of Leeds.
    The research is published today (Monday, 12 October) in the scientific journal Nature Machine Intelligence. 
    Patient trials using the system could begin next year or in early 2022.

    Pietro Valdastri, Professor of Robotics and Autonomous Systems at Leeds, is supervising the research. He said: “Colonoscopy gives doctors a window into the world hidden deep inside the human body and it provides a vital role in the screening of diseases such as colorectal cancer. But the technology has remained relatively unchanged for decades.
    “What we have developed is a system that is easier for doctors or nurses to operate and is less painful for patients. It marks an important step in the move to make colonoscopy much more widely available — essential if colorectal cancer is to be identified early.”
    Because the system is easier to use, the scientists hope this can increase the number of providers who can perform the procedure and allow for greater patient access to colonoscopy.
    A colonoscopy is a procedure to examine the rectum and colon. Conventional colonoscopy is carried out using a semi-flexible tube which is inserted into the anus, a process some patients find so painful they require an anaesthetic.
    Magnetic flexible colonoscope
    The research team has developed a smaller, capsule-shaped device which is tethered to a narrow cable and is inserted into the anus and then guided into place — not by the doctor or nurse pushing the colonoscope but by a magnet on a robotic arm positioned over the patient.

    The robotic arm moves around the patient as it manoeuvres the capsule. The system is based on the principle that magnetic forces attract and repel.
    The magnet on the outside of the patient interacts with tiny magnets in the capsule inside the body, navigating it through the colon. The researchers say it will be less painful than having a conventional colonoscopy.
    Guiding the robotic arm can be done manually but it is a technique that is difficult to master. In response, the researchers have developed different levels of robotic assistance. This latest research evaluated how effective the different levels of robotic assistance were in aiding non-specialist staff to carry out the procedure.
    Levels of robotic assistance
    Direct robot control. This is where the operator has direct control of the robot via a joystick. In this case, there is no assistance.
    Intelligent endoscope teleoperation. The operator focuses on where they want the capsule to be located in the colon, leaving the robotic system to calculate the movements of the robotic arm necessary to get the capsule into place.
    Semi-autonomous navigation. The robotic system autonomously navigates the capsule through the colon, using computer vision — although this can be overridden by the operator (see the sketch below).
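    As a purely illustrative sketch (hypothetical Python, not the Leeds team’s control software), the three assistance levels can be thought of as deciding who, at each control step, computes the motion of the robotic arm:

        from enum import Enum, auto

        class AssistanceLevel(Enum):
            DIRECT_ROBOT_CONTROL = auto()        # operator drives the arm directly via joystick
            INTELLIGENT_TELEOPERATION = auto()   # operator gives a desired capsule pose; system moves the arm
            SEMI_AUTONOMOUS_NAVIGATION = auto()  # system plans capsule motion from vision; operator may override

        def solve_arm_motion_for(target_pose):
            # Placeholder for a magnetic-manipulation solver that maps a desired capsule pose
            # to an external-magnet (robot-arm) motion; the real controller is far richer.
            return {"move_arm_towards": target_pose}

        def next_arm_command(level, joystick_input, desired_capsule_pose, vision_plan, operator_override=None):
            """Return the next robotic-arm command under the chosen assistance level (illustration only)."""
            if level is AssistanceLevel.DIRECT_ROBOT_CONTROL:
                return joystick_input                              # no assistance: raw operator input
            if level is AssistanceLevel.INTELLIGENT_TELEOPERATION:
                return solve_arm_motion_for(desired_capsule_pose)  # system computes the arm motion
            if operator_override is not None:                      # semi-autonomous, but operator can take over
                return operator_override
            return solve_arm_motion_for(vision_plan)               # system navigates autonomously
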
    During a laboratory simulation, 10 non-expert staff were asked to get the capsule to a point within the colon within 20 minutes. They did that five times, using the three different levels of assistance.
    Using direct robot control, the participants had a 58% success rate. That increased to 96% using intelligent endoscope teleoperation — and 100% using semi-autonomous navigation.
    In the next stage of the experiment, two participants were asked to navigate a conventional colonoscope into the colon of two anaesthetised pigs — and then to repeat the task with the magnet-controlled robotic system using the different levels of assistance. A vet was in attendance to ensure the animals were not harmed.
    The participants were scored on the NASA Task Load Index, a measure of how taxing a task was, both physically and mentally.
    The NASA Task Load Index results revealed that participants found it easier to operate the colonoscope with robotic assistance. Frustration was a major factor when operating the conventional colonoscope and when participants had direct control of the robot.
    James Martin, a PhD researcher from the University of Leeds who co-led the study, said: “Operating the robotic arm is challenging. It is not very intuitive and that has put a brake on the development of magnetic flexible colonoscopes.
    “But we have demonstrated for the first time that it is possible to offload that function to the robotic system, leaving the operator to think about the clinical task they are undertaking — and it is making a measurable difference in human performance.”
    The techniques developed to conduct colonoscopy examinations could be applied to other endoscopic devices, such as those used to inspect the upper digestive tract or lungs.
    Dr Bruno Scaglioni, a Postdoctoral Research Fellow at Leeds and co-leader of the study, added: “Robot-assisted colonoscopy has the potential to revolutionize the way the procedure is carried out. It means people conducting the examination do not need to be experts in manipulating the device.
    “That will hopefully make the technique more widely available, where it could be offered in clinics and health centres rather than hospitals.”

  • Liquid metals come to the rescue of semiconductors

    Moore’s law is the empirical observation that the number of transistors in integrated circuits (ICs) doubles roughly every two years. However, Moore’s law has started to fail, because transistors are now so small that current silicon-based technologies offer no further room for shrinking them.
    One possible way to overcome this limit is to resort to two-dimensional semiconductors. These materials are so thin that the free charge carriers (the electrons and holes that carry information in transistors) propagate along an ultra-thin plane. Confining the charge carriers in this way can potentially make the semiconductor very easy to switch. It also creates directional pathways along which the carriers can move without scattering, giving the transistors a vanishingly small resistance. In theory, two-dimensional materials could therefore yield transistors that waste essentially no energy during on/off switching: they could switch very fast and pass essentially no current in their off, non-operational state. Sounds ideal, but life is not ideal! In reality, many technological barriers still have to be overcome to create such perfect ultra-thin semiconductors. One barrier with current technologies is that the deposited ultra-thin films are full of grain boundaries that scatter the charge carriers, increasing resistive losses.
    One of the most exciting ultra-thin semiconductors is molybdenum disulphide (MoS2), whose electronic properties have been investigated for the past two decades. However, obtaining very large-scale, two-dimensional MoS2 without any grain boundaries has proven a real challenge. With current large-scale deposition technologies, the grain-boundary-free MoS2 that is essential for making ICs has yet to be achieved with acceptable maturity. Now, however, researchers at the School of Chemical Engineering, University of New South Wales (UNSW) have developed a deposition approach that eliminates such grain boundaries.
    “This unique capability was achieved with the help of gallium metal in its liquid state. Gallium is an amazing metal with a low melting point of only 29.8 °C. This means that at normal office temperature it is solid, while it turns into a liquid when placed in the palm of someone’s hand. It is a melted metal, so its surface is atomically smooth. It is also a conventional metal, which means that its surface provides a large number of free electrons for facilitating chemical reactions,” said Ms Yifang Wang, the first author of the paper.
    “By bringing the sources of molybdenum and sulphur near the surface of gallium liquid metal, we were able to realize chemical reactions that form the molybdenum-sulphur bonds to establish the desired MoS2. The formed two-dimensional material is templated onto an atomically smooth surface of gallium, so it is naturally nucleated and grain-boundary free. This means that, with a second annealing step, we were able to obtain very large-area MoS2 with no grain boundaries. This is a very important step for scaling up this fascinating ultra-smooth semiconductor,” said Prof Kourosh Kalantar-Zadeh, the lead author of the work.
    The researchers at UNSW are now planning to extend their method to other two-dimensional semiconductors and dielectric materials, in order to create a suite of materials that can be used as the different parts of transistors.

    Story Source:
    Materials provided by ARC Centre of Excellence in Future Low-Energy Electronics Technologies. Note: Content may be edited for style and length.

  • New virtual reality software allows scientists to 'walk' inside cells

    Virtual reality software which allows researchers to ‘walk’ inside and analyse individual cells could be used to understand fundamental problems in biology and develop new treatments for disease.
    The software, called vLUME, was created by scientists at the University of Cambridge and 3D image analysis software company Lume VR Ltd. It allows super-resolution microscopy data to be visualised and analysed in virtual reality, and can be used to study everything from individual proteins to entire cells. Details are published in the journal Nature Methods.
    Super-resolution microscopy, which was awarded the Nobel Prize for Chemistry in 2014, makes it possible to obtain images at the nanoscale by using clever tricks of physics to get around the limits imposed by light diffraction. This has allowed researchers to observe molecular processes as they happen. However, a problem has been the lack of ways to visualise and analyse this data in three dimensions.
    “Biology occurs in 3D, but up until now it has been difficult to interact with the data on a 2D computer screen in an intuitive and immersive way,” said Dr Steven F. Lee from Cambridge’s Department of Chemistry, who led the research. “It wasn’t until we started seeing our data in virtual reality that everything clicked into place.”
    The vLUME project started when Lee and his group met with the Lume VR founders at a public engagement event at the Science Museum in London. While Lee’s group had expertise in super-resolution microscopy, the team from Lume specialised in spatial computing and data analysis, and together they were able to develop vLUME into a powerful new tool for exploring complex datasets in virtual reality.
    “vLUME is revolutionary imaging software that brings humans into the nanoscale,” said Alexandre Kitching, CEO of Lume. “It allows scientists to visualise, question and interact with 3D biological data, in real time all within a virtual reality environment, to find answers to biological questions faster. It’s a new tool for new discoveries.”
    Viewing data in this way can stimulate new initiatives and ideas. For example, Anoushka Handa — a PhD student from Lee’s group — used the software to image an immune cell taken from her own blood, and then stood inside her own cell in virtual reality. “It’s incredible — it gives you an entirely different perspective on your work,” she said.
    The software allows multiple datasets with millions of data points to be loaded in and finds patterns in the complex data using in-built clustering algorithms. These findings can then be shared with collaborators worldwide using image and video features in the software.
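    The clustering step might look something like the following hypothetical sketch (generic scikit-learn code applied to made-up 3D localisation points, not the vLUME implementation itself):

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Made-up stand-in for super-resolution localisation data: points with x, y, z coordinates (nanometres).
        rng = np.random.default_rng(42)
        cluster_a = rng.normal(loc=(0, 0, 0), scale=30, size=(500, 3))
        cluster_b = rng.normal(loc=(400, 200, 100), scale=30, size=(500, 3))
        background = rng.uniform(-200, 600, size=(100, 3))
        points = np.vstack([cluster_a, cluster_b, background])

        # Density-based clustering: points with at least `min_samples` neighbours within `eps` nm
        # form a cluster; everything else is labelled -1 (noise).
        labels = DBSCAN(eps=50, min_samples=10).fit_predict(points)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        print(f"found {n_clusters} clusters and {np.sum(labels == -1)} noise points")
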
    “Data generated from super-resolution microscopy is extremely complex,” said Kitching. “For scientists, running analysis on this data can be very time consuming. With vLUME, we have managed to vastly reduce that wait time allowing for more rapid testing and analysis.”
    The team are mostly using vLUME with biological datasets, such as neurons, immune cells or cancer cells. For example, Lee’s group has been studying how antigen cells trigger an immune response in the body. “Through segmenting and viewing the data in vLUME, we’ve quickly been able to rule out certain hypotheses and propose new ones,” said Lee. “This software allows researchers to explore, analyse, segment and share their data in new ways. All you need is a VR headset.”

    Story Source:
    Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • Multi-state data storage leaving binary behind

    Electronic data is being produced at a breath-taking rate.
    The total amount of data stored in data centres around the globe is of the order of ten zettabytes (a zettabyte is a trillion gigabytes), and we estimate that amount doubles every couple of years.
    With 8% of global electricity already being consumed in information and communication technology (ICT), low-energy data-storage is a key priority.
    To date there is no clear winner in the race for next-generation memory that is non-volatile, highly energy-efficient, low-cost and high-density, offers great endurance, and allows fast access.
    A joint international team comprehensively reviews ‘multi-state memory’ data storage, which steps ‘beyond binary’ to store more data than just 0s and 1s.
    MULTI-STATE MEMORY: MORE THAN JUST ZEROES AND ONES
    Multi-state memory is an extremely promising technology for future data storage, with the ability to store more than a single bit (i.e., 0 or 1) per cell, allowing much higher storage density (the amount of data stored per unit area).
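    To make the density argument concrete, here is a minimal, illustrative Python sketch (the state counts are arbitrary example values, not figures from the review): a cell that can hold more than two distinguishable states stores correspondingly more bits in the same physical area.

        import math

        def bits_per_cell(num_states: int) -> float:
            # A cell that reliably distinguishes `num_states` levels stores log2(num_states) bits.
            return math.log2(num_states)

        for states in (2, 4, 8, 16):
            bits = bits_per_cell(states)
            gain = bits / bits_per_cell(2)   # density gain relative to a conventional binary cell
            print(f"{states:2d} states per cell -> {bits:.0f} bits per cell ({gain:.0f}x binary density)")
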

    This circumvents the plateauing of benefits historically offered by ‘Moore’s Law’, where component size halved about every two years. In recent years, the long-predicted plateauing of Moore’s Law has been observed, with charge leakage and spiralling research and fabrication costs putting the nail in the Moore’s Law coffin.
    Non-volatile, multi-state memory (NMSM) offers energy efficiency, non-volatility, fast access, and low cost.
    Storage density is dramatically enhanced without scaling down the dimensions of the memory cell, making memory devices more efficient and less expensive.
    NEUROMORPHIC COMPUTER MIMICKING THE HUMAN BRAIN
    Multi-state memory also enables the proposed future technology of neuromorphic computing, which would mirror the structure of the human brain. This radically different, brain-inspired computing regime could potentially provide the economic impetus for adoption of a novel technology such as NMSM.
    NMSMs allow analog calculation, which could be vital to intelligent, neuromorphic networks, as well as potentially helping us finally unravel the working mechanism of the human brain itself.
    THE STUDY
    The paper reviews device architectures, working mechanisms, material innovation, challenges, and recent progress for leading NMSM candidates, including:
    Flash memory
    magnetic random-access memory (MRAM)
    resistive random-access memory (RRAM)
    ferroelectric random-access memory (FeRAM)
    phase-change memory (PCM)

  • New project to build nano-thermometers could revolutionize temperature imaging

    Cheaper refrigerators? Stronger hip implants? A better understanding of human disease? All of these and more could be possible someday, thanks to an ambitious new project underway at the National Institute of Standards and Technology (NIST).
    NIST researchers are in the early stages of a massive undertaking to design and build a fleet of tiny ultra-sensitive thermometers. If they succeed, their system will be the first to make real-time measurements of temperature on the microscopic scale in an opaque 3D volume — which could include medical implants, refrigerators, and even the human body.
    The project is called Thermal Magnetic Imaging and Control (Thermal MagIC), and the researchers say it could revolutionize temperature measurements in many fields: biology, medicine, chemical synthesis, refrigeration, the automotive industry, plastic production — “pretty much anywhere temperature plays a critical role,” said NIST physicist Cindi Dennis. “And that’s everywhere.”
    The NIST team has now finished building its customized laboratory spaces for this unique project and has begun the first major phase of the experiment.
    Thermal MagIC will work by using nanometer-sized objects whose magnetic signals change with temperature. The objects would be incorporated into the liquids or solids being studied — the melted plastic that might be used as part of an artificial joint replacement, or the liquid coolant being recirculated through a refrigerator. A remote sensing system would then pick up these magnetic signals, meaning the system being studied would be free from wires or other bulky external objects.
    The final product could make temperature measurements that are 10 times more precise than state-of-the-art techniques, acquired in one-tenth the time in a volume 10,000 times smaller. This equates to measurements accurate to within 25 millikelvin (thousandths of a kelvin) in as little as a tenth of a second, in a volume just a hundred micrometers (millionths of a meter) on a side. The measurements would be “traceable” to the International System of Units (SI); in other words, its readings could be accurately related to the fundamental definition of the kelvin, the world’s basic unit of temperature.

    The system aims to measure temperatures over the range from 200 to 400 kelvin (K), which is about -99 to 260 degrees Fahrenheit (F). This would cover most potential applications — at least the ones the Thermal MagIC team envisions will be possible within the next 5 years. Dennis and her colleagues see potential for a much larger temperature range, stretching from 4 K to 600 K, which would encompass everything from supercooled superconductors to molten lead. But that is not a part of current development plans.
    “This is a big enough sea change that we expect that if we can develop it — and we have confidence that we can — other people will take it and really run with it and do things that we currently can’t imagine,” Dennis said.
    Potential applications are mostly in research and development, but Dennis said the increase in knowledge would likely trickle down to a variety of products, possibly including 3D printers, refrigerators, and medicines.
    What Is It Good For?
    Whether it’s the thermostat in your living room or a high-precision standard instrument that scientists use for laboratory measurements, most thermometers used today can only measure relatively big areas — on a macroscopic as opposed to microscopic level. These conventional thermometers are also intrusive, requiring sensors to penetrate the system being measured and to connect to a readout system by bulky wires.

    Infrared thermometers, such as the forehead instruments used at many doctors’ offices, are less intrusive. But they still only make macroscopic measurements and cannot see beneath surfaces.
    Thermal MagIC should let scientists get around both these limitations, Dennis said.
    Engineers could use Thermal MagIC to study, for the first time, how heat transfer occurs within different coolants on the microscale, which could aid their quest to find cheaper, less energy-intensive refrigeration systems.
    Doctors could use Thermal MagIC to study diseases, many of which are associated with temperature increases — a hallmark of inflammation — in specific parts of the body.
    And manufacturers could use the system to better control 3D printing machines that melt plastic to build custom objects such as medical implants and prostheses. Without the ability to measure temperature on the microscale, 3D printing developers are missing crucial information about what’s going on inside the plastic as it solidifies into an object. More knowledge could improve the strength and quality of 3D-printed materials someday, by giving engineers more control over the 3D printing process.
    Giving It OOMMF
    The first step in making this new thermometry system is creating nano-sized magnets that will give off strong magnetic signals in response to temperature changes. To keep particle concentrations as low as possible, the magnets will need to be 10 times more sensitive to temperature changes than any objects that currently exist.
    To get that kind of signal, Dennis said, researchers will likely need to use multiple magnetic materials in each nano-object. A core of one substance will be surrounded by other materials like the layers of an onion.
    The trouble is that there are practically endless combinations of properties that can be tweaked, including the materials’ composition, size, shape, the number and thickness of the layers, or even the number of materials. Going through all of these potential combinations and testing each one for its effect on the object’s temperature sensitivity could take multiple lifetimes to accomplish.
    To help them get there in months instead of decades, the team is turning to sophisticated software: the Object Oriented MicroMagnetic Framework (OOMMF), a widely used modeling program developed by NIST researchers Mike Donahue and Don Porter.
    The Thermal MagIC team will use this program to create a feedback loop. NIST chemists Thomas Moffat, Angela Hight Walker and Adam Biacchi will synthesize new nano-objects. Then Dennis and her team will characterize the objects’ properties. And finally, Donahue will help them feed that information into OOMMF, which will make predictions about what combinations of materials they should try next.
    “We have some very promising results from the magnetic nano-objects side of things, but we’re not quite there yet,” Dennis said.
    Each Dog Is a Voxel
    So how do they measure the signals given out by tiny concentrations of nano-thermometers inside a 3D object in response to temperature changes? They do it with a machine called a magnetic particle imager (MPI), which surrounds the sample and measures a magnetic signal coming off the nanoparticles.
    Effectively, they measure changes to the magnetic signal coming off one small volume of the sample, called a “voxel” — basically a 3D pixel — and then scan through the entire sample one voxel at a time.
    But it’s hard to focus a magnetic field, said NIST physicist Solomon Woods. So they achieve their goal in reverse.
    Consider a metaphor. Say you have a dog kennel, and you want to measure how loud each individual dog is barking. But you only have one microphone. If multiple dogs are barking at once, your mic will pick up all of that sound, but with only one mic you won’t be able to distinguish one dog’s bark from another’s.
    However, if you could quiet each dog somehow — perhaps by occupying its mouth with a bone — except for a single cocker spaniel in the corner, then your mic would still be picking up all the sounds in the room, but the only sound would be from the cocker spaniel.
    In theory, you could do this with each dog in sequence — first the cocker spaniel, then the mastiff next to it, then the labradoodle next in line — each time leaving just one dog bone-free.
    In this metaphor, each dog is a voxel.
    Basically, the researchers max out the ability of all but one small volume of their sample to respond to a magnetic field. (This is the equivalent of stuffing each dog’s mouth with a delicious bone.) Then, measuring the change in magnetic signal from the entire sample effectively lets you measure just that one little section.
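    The selective-saturation trick can be illustrated with a small, hypothetical numerical sketch (toy numbers, not NIST code): every voxel contributes to a single total measurement, but if all voxels except one are driven to a known saturated response, subtracting that known baseline isolates the remaining voxel’s signal.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical temperature-dependent signal from each voxel (arbitrary units).
        n_voxels = 5
        voxel_signal = rng.uniform(0.8, 1.2, size=n_voxels)   # unknown values we want to recover
        saturation_signal = 2.0                               # known response of a fully saturated voxel

        recovered = np.empty(n_voxels)
        for target in range(n_voxels):
            # Saturate every voxel except the target, then take one whole-sample measurement.
            total = sum(saturation_signal if v != target else voxel_signal[v] for v in range(n_voxels))
            # All non-target voxels contribute the known saturated value, so subtract it out.
            recovered[target] = total - saturation_signal * (n_voxels - 1)

        print(np.allclose(recovered, voxel_signal))  # True: each voxel's signal is isolated in turn
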
    MPI systems similar to this exist but are not sensitive enough to measure the kind of tiny magnetic signal that would come from a small change in temperature. The challenge for the NIST team is to boost the signal significantly.
    “Our instrumentation is very similar to MPI, but since we have to measure temperature, not just measure the presence of a nano-object, we essentially need to boost our signal-to-noise ratio over MPI by a thousand or 10,000 times,” Woods said.
    They plan to boost the signal using state-of-the-art technologies. For example, Woods may use superconducting quantum interference devices (SQUIDs), cryogenic sensors that measure extremely subtle changes in magnetic fields, or atomic magnetometers, which detect how energy levels of atoms are changed by an external magnetic field. Woods is working on which are best to use and how to integrate them into the detection system.
    The final part of the project is making sure the measurements are traceable to the SI, a project led by NIST physicist Wes Tew. That will involve measuring the nano-thermometers’ magnetic signals at different temperatures that are simultaneously being measured by standard instruments.
    Other key NIST team members include Thinh Bui, Eric Rus, Brianna Bosch Correa, Mark Henn, Eduardo Correa and Klaus Quelhas.
    Before finishing their new laboratory space, the researchers were able to complete some important work. In a paper published last month in the International Journal on Magnetic Particle Imaging, the group reported that they had found and tested a “promising” nanoparticle material made of iron and cobalt, with temperature sensitivities that varied in a controllable way depending on how the team prepared the material. Adding an appropriate shell material to encase this nanoparticle “core” would bring the team closer to creating a working temperature-sensitive nanoparticle for Thermal MagIC.
    In the past few weeks, the researchers have made further progress testing combinations of materials for the nanoparticles.
    “Despite the challenge of working during the pandemic, we have had some successes in our new labs,” Woods said. “These achievements include our first syntheses of multi-layer nanomagnetic systems for thermometry, and ultra-stable magnetic temperature measurements using techniques borrowed from atomic clock research.”

  • 'Universal law of touch' will enable new advances in virtual reality

    Seismic waves, commonly associated with earthquakes, have been used by scientists to develop a universal scaling law for the sense of touch. A team, led by researchers at the University of Birmingham, used Rayleigh waves to create the first scaling law for touch sensitivity. The results are published in Science Advances.
    The researchers are part of a European consortium (H-Reality) that is already using the theory to develop new Virtual Reality technologies that incorporate the sense of touch.
    Rayleigh waves are created by impact between objects and are commonly thought to travel only along surfaces. The team discovered that, when it comes to touch, the waves also travel through layers of skin and bone and are picked up by the body’s touch receptor cells.
    Using mathematical modelling of these touch receptors, the researchers showed how the receptors were located at depths that allowed them to respond to Rayleigh waves. The interaction of these receptors with the Rayleigh waves will vary across species, but the ratio of receptor depth to wavelength remains the same, enabling the universal law to be defined.
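    As a purely illustrative calculation (the wave speed, vibration frequency and receptor depth below are assumed example values, not figures from the study), the depth-to-wavelength ratio on which the scaling law rests can be computed like this:

        # Illustrative only: assumed example values, not parameters from the Science Advances paper.
        rayleigh_wave_speed = 5.0      # metres per second, assumed Rayleigh wave speed in soft tissue
        vibration_frequency = 250.0    # hertz, assumed frequency of a touch-induced vibration
        receptor_depth = 2.0e-3        # metres, assumed depth of a touch receptor below the skin surface

        # A Rayleigh wave of speed c and frequency f has wavelength c / f.
        wavelength = rayleigh_wave_speed / vibration_frequency

        # The universal scaling law concerns this dimensionless ratio, which the study reports
        # stays roughly constant across species even as absolute sizes change.
        depth_to_wavelength = receptor_depth / wavelength
        print(f"wavelength = {wavelength * 1000:.1f} mm, depth/wavelength = {depth_to_wavelength:.2f}")
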
    The mathematics used by the researchers to develop the law is based on approaches first developed over a hundred years ago to model earthquakes. The law supports predictions made by the Nobel-Prize-winning physicist Georg von Békésy who first suggested the mathematics of earthquakes could be used to explore connections between Rayleigh waves and touch.
    The team also found that the interaction of the waves and receptors remained even when the stiffness of the outermost layer of skin changed. The ability of the receptors to respond to Rayleigh waves remained unchanged despite the many variations in this outer layer caused by age, gender, profession, or even hydration.
    Dr Tom Montenegro-Johnson, of the University of Birmingham’s School of Mathematics, led the research. He explains: “Touch is a primordial sense, as important to our ancient ancestors as it is to modern day mammals, but it’s also one of the most complex and therefore least understood. While we have universal laws to explain sight and hearing, for example, this is the first time that we’ve been able to explain touch in this way.”
    James Andrews, co-author of the study at the University of Birmingham, adds: “The principles we’ve defined enable us to better understand the different experiences of touch among a wide range of species. For example, if you indent the skin of a rhinoceros by 5mm, they would have the same sensation as a human with a similar indentation — it’s just that the forces required to produce the indentation would be different. This makes a lot of sense in evolutionary terms, since it’s connected to relative danger and potential damage.”
    The work was funded by the European Union’s Horizon 2020 research and innovation programme, under collaborative project “H-Reality.” The other institutions involved in the project are Ultraleap Ltd. (UK), Actronika (France), TU Delft (The Netherlands), and CNRS (France).

    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Researchers use artificial intelligence language tools to decode molecular movements

    By applying natural language processing tools to the movements of protein molecules, University of Maryland scientists created an abstract language that describes the multiple shapes a protein molecule can take and how and when it transitions from one shape to another.
    A protein molecule’s function is often determined by its shape and structure, so understanding the dynamics that control shape and structure can open a door to understanding everything from how a protein works to the causes of disease and the best way to design targeted drug therapies. This is the first time a machine learning algorithm has been applied to biomolecular dynamics in this way, and the method’s success provides insights that can also help advance artificial intelligence (AI). A research paper on this work was published on October 9, 2020, in the journal Nature Communications.
    “Here we show the same AI architectures used to complete sentences when writing emails can be used to uncover a language spoken by the molecules of life,” said the paper’s senior author, Pratyush Tiwary, an assistant professor in UMD’s Department of Chemistry and Biochemistry and Institute for Physical Science and Technology. “We show that the movement of these molecules can be mapped into an abstract language, and that AI techniques can be used to generate biologically truthful stories out of the resulting abstract words.”
    Biological molecules are constantly in motion, jiggling around in their environment. Their shape is determined by how they are folded and twisted. They may remain in a given shape for seconds or days before suddenly springing open and refolding into a different shape or structure. The transition from one shape to another occurs much like the stretching of a tangled coil that opens in stages. As different parts of the coil release and unfold, the molecule assumes different intermediary conformations.
    But the transition from one form to another occurs in picoseconds (trillionths of a second) or faster, which makes it difficult for experimental methods such as high-powered microscopes and spectroscopy to capture exactly how the unfolding happens, what parameters affect the unfolding and what different shapes are possible. The answers to those questions form the biological story that Tiwary’s new method can reveal.
    Tiwary and his team combined Newton’s laws of motion — which can predict the movement of atoms within a molecule — with powerful supercomputers, including UMD’s Deepthought2, to develop statistical physics models that simulate the shape, movement and trajectory of individual molecules.
    Then they fed those models into a machine learning algorithm, like the one Gmail uses to automatically complete sentences as you type. The algorithm approached the simulations as a language in which each molecular movement forms a letter that can be strung together with other movements to make words and sentences. By learning the rules of syntax and grammar that determine which shapes and movements follow one another and which don’t, the algorithm predicts how the protein untangles as it changes shape and the variety of forms it takes along the way.
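    As a rough, hypothetical sketch of this kind of next-“letter” model (generic PyTorch code with an invented state alphabet and invented dimensions, not the authors’ implementation), a recurrent network can be trained to predict the next discretised molecular state from the sequence that precedes it:

        import torch
        import torch.nn as nn

        NUM_STATES = 8          # size of the molecular "alphabet" (illustrative)
        EMBED_DIM, HIDDEN_DIM = 16, 32

        class NextStateLSTM(nn.Module):
            def __init__(self):
                super().__init__()
                self.embed = nn.Embedding(NUM_STATES, EMBED_DIM)
                self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
                self.head = nn.Linear(HIDDEN_DIM, NUM_STATES)

            def forward(self, seq):                  # seq: (batch, time) integer state indices
                out, _ = self.lstm(self.embed(seq))
                return self.head(out)                # (batch, time, NUM_STATES) logits for the next state

        model = NextStateLSTM()
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # Toy trajectory of discretised states; real input would come from the physics simulations.
        trajectory = torch.randint(0, NUM_STATES, (1, 200))
        inputs, targets = trajectory[:, :-1], trajectory[:, 1:]

        for _ in range(100):
            logits = model(inputs)
            loss = loss_fn(logits.reshape(-1, NUM_STATES), targets.reshape(-1))
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
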
    To demonstrate that their method works, the team applied it to a small biomolecule called a riboswitch, which had been previously analyzed using spectroscopy. The results, which revealed the various forms the riboswitch could take as it was stretched, matched the results of the spectroscopy studies.
    “One of the most important uses of this, I hope, is to develop drugs that are very targeted,” Tiwary said. “You want to have potent drugs that bind very strongly, but only to the thing that you want them to bind to. We can achieve that if we can understand the different forms that a given biomolecule of interest can take, because we can make drugs that bind only to one of those specific forms at the appropriate time and only for as long as we want.”
    An equally important part of this research is the knowledge gained about the language processing system Tiwary and his team used, which is generally called a recurrent neural network, and in this specific instance a long short-term memory network. The researchers analyzed the mathematics underpinning the network as it learned the language of molecular motion. They found that the network used a kind of logic that was similar to an important concept from statistical physics called path entropy. Understanding this opens opportunities for improving recurrent neural networks in the future.
    “It is natural to ask if there are governing physical principles making AI tools successful,” Tiwary said. “Here we discover that, indeed, it is because the AI is learning path entropy. Now that we know this, it opens up more knobs and gears we can tune to do better AI for biology and perhaps, ambitiously, even improve AI itself. Anytime you understand a complex system such as AI, it becomes less of a black-box and gives you new tools for using it more effectively and reliably.”

  • New model may explain rarity of certain malaria-blocking mutations

    A new computational model suggests that certain mutations that block infection by the most dangerous species of malaria have not become widespread in people because of the parasite’s effects on the immune system. Bridget Penman of the University of Warwick, U.K., and Sylvain Gandon of the CNRS and Montpellier University, France, present these findings in the open-access journal PLOS Computational Biology.
    Malaria is a potentially lethal, mosquito-borne disease caused by parasites of the Plasmodium genus. Several protective adaptations to malaria have spread widely among humans, such as the sickle-cell mutation. Laboratory experiments suggest that certain other mutations could be highly protective against the most dangerous human-infecting malaria species, Plasmodium falciparum. However, despite being otherwise benign, these mutations have not become widespread.
    To help clarify why some protective mutations may remain rare, Penman and colleagues developed a computational model that simulates the epidemiology of malaria infection, as well as the evolution of protective mutations. Importantly, the model also incorporates mechanisms of adaptive immunity, in which the immune system “learns” to recognize and attack specific pathogens, such as P. falciparum.
    Analysis of the model’s predictions suggests that if people rapidly gain adaptive immunity to the severe effects of P. falciparum malaria, mutations capable of blocking P. falciparum infection are unlikely to spread among the population. The fewer the number of infections it takes for people to become immune to the severe effects of malaria, the less likely it is that malaria infection-blocking mutations will arise.
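    The qualitative finding can be mimicked with a deliberately simple, hypothetical toy model (not the authors’ model; every parameter below is invented for illustration): the fewer infections it takes to acquire immunity to severe disease, the smaller the fitness advantage of an infection-blocking allele, and the less its frequency grows.

        import random

        def final_blocking_allele_frequency(infections_to_immunity, generations=100, pop_size=2000,
                                            infections_per_generation=4, severe_cost=0.03,
                                            start_freq=0.01, seed=1):
            """Toy haploid model: carriers of the blocking allele avoid P. falciparum infection entirely,
            while non-carriers pay a fitness cost for severe disease only until adaptive immunity is
            acquired after `infections_to_immunity` infections. Returns the final allele frequency."""
            rng = random.Random(seed)
            freq = start_freq
            for _ in range(generations):
                at_risk = min(infections_per_generation, infections_to_immunity)
                fitness_blocker = 1.0
                fitness_nonblocker = max(0.0, 1.0 - severe_cost * at_risk)
                mean_fitness = freq * fitness_blocker + (1 - freq) * fitness_nonblocker
                freq = freq * fitness_blocker / mean_fitness            # deterministic selection
                drift_sd = (freq * (1 - freq) / pop_size) ** 0.5        # a little genetic drift
                freq = min(1.0, max(0.0, freq + rng.gauss(0, drift_sd)))
            return freq

        for k in (1, 2, 5):   # infections needed before immunity to severe malaria is acquired
            print(f"immunity after {k} infection(s): final blocking-allele frequency "
                  f"= {final_blocking_allele_frequency(k):.3f}")
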
    “Understanding why a potential human malaria adaptation has not succeeded could be just as important as understanding those which have succeeded,” Penman says. “Our results highlight the need for further detailed genetic studies of populations living in regions impacted by malaria in order to better understand malaria-human interactions.”
    Ultimately, understanding how humans have adapted to malaria could help open up new avenues for treatment.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.