More stories

  • Protein storytelling to address the pandemic

    In the last five decades, we’ve learned a lot about the secret lives of proteins — how they work, what they interact with, the machinery that makes them function — and the pace of discovery is accelerating.
The first three-dimensional protein structures were solved in the late 1950s and 1960s. Today, the Protein Data Bank, a worldwide repository of the 3D structures of large biological molecules, holds well over 100,000 of them. Just this week, the company DeepMind shocked the protein structure world with its accurate, AI-driven predictions.
But the 3D structure is often not enough to truly understand what a protein is up to, explains Ken Dill, director of the Laufer Center for Physical and Quantitative Biology at Stony Brook University and a member of the National Academy of Sciences. “It’s like somebody asking how an automobile works, and a mechanic opening the hood of a car and saying, ‘see, there’s the engine, that’s how it works.’”
    In the intervening decades, computer simulations have built upon and added to the understanding of protein behavior by setting these 3D molecular machines in motion. Analyzing their energy landscapes, interactions, and dynamics has taught us even more about these prime movers of life.
    “We’re really trying to ask the question: how does it work? Not just, how does it look?” Dill said. “That’s the essence of why you want to know protein structures in the first place, and one of the biggest applications of this is for drug discovery.”
    Writing in Science magazine in November 2020, Dill and his Stony Brook colleagues Carlos Simmerling and Emiliano Brini shared their perspectives on the evolution of the field.

    “Computational Molecular Physics is an increasingly powerful tool for telling the stories of protein molecule actions,” they wrote. “Systematic improvements in forcefields, enhanced sampling methods, and accelerators have enabled [computational molecular physics] to reach timescales of important biological actions…. At this rate, in the next quarter century, we’ll be telling stories of protein molecules over the whole lifespan, tens of minutes, of a bacterial cell.”
    Speeding Simulations
Decades after the first dynamic models of proteins, however, computational biophysicists still face major challenges. To be useful, simulations need to be accurate; and to be accurate, they must progress atom by atom and femtosecond (10^-15 seconds) by femtosecond. To match the timescales that matter, simulations must extend over microseconds or milliseconds — that is, billions or even trillions of time steps.
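The arithmetic behind that bottleneck is simple to check. A back-of-the-envelope sketch (illustrative numbers only) shows how many femtosecond steps it takes to reach biologically relevant timescales:

```python
# Back-of-the-envelope: how many femtosecond time steps does it take
# to reach biologically relevant timescales?
femtosecond = 1e-15  # seconds; a typical molecular dynamics time step
microsecond = 1e-6   # seconds; fast biological motions
millisecond = 1e-3   # seconds; slow conformational changes

steps_to_microsecond = microsecond / femtosecond
steps_to_millisecond = millisecond / femtosecond

print(f"{steps_to_microsecond:.0e} steps to reach a microsecond")  # 1e+09
print(f"{steps_to_millisecond:.0e} steps to reach a millisecond")  # 1e+12
```

Even at tens of millions of steps per day, a single millisecond trajectory would take years of wall-clock time, which is why acceleration methods matter so much.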
    “Computational molecular physics has developed at a fast clip relatively speaking, but not enough to get us into the time and size and motion range we need to see,” he said.
    One of the main methods researchers use to understand proteins in this way is called molecular dynamics. Since 2015, with support from the National Institutes of Health and the National Science Foundation, Dill and his team have been working to speed up molecular dynamics simulations. Their method, called MELD, accelerates the process by providing vague but important information about the system being studied.

Dill likens the method to a treasure hunt. Instead of asking someone to find a treasure that could be anywhere, they provide a map with clues, saying: ‘it’s either near Chicago or Idaho.’ In the case of actual proteins, that might mean telling the simulation that one part of a chain of amino acids is near another part of the chain. This narrowing of the search field can speed up simulations significantly — sometimes by a factor of more than 1,000 — enabling novel studies and providing new insights.
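In code, a “clue” of this kind is typically expressed as a restraint energy added on top of the force field. The toy flat-bottom restraint below is only an illustration of the idea, not MELD’s actual implementation (MELD runs on GPU molecular dynamics engines and manages collections of competing restraints):

```python
# Illustrative flat-bottom distance restraint (not the real MELD API).
# It contributes zero energy while two residues stay inside a plausible
# distance window, and a quadratic penalty outside it: "vague but
# important" information that steers sampling without dictating the answer.
def restraint_energy(distance_nm, low=0.4, high=0.8, k=250.0):
    """Restraint energy (kJ/mol) for a pairwise distance in nanometers."""
    if distance_nm < low:
        return 0.5 * k * (low - distance_nm) ** 2
    if distance_nm > high:
        return 0.5 * k * (distance_nm - high) ** 2
    return 0.0  # inside the allowed window: no bias at all

print(restraint_energy(0.6))  # inside the window -> 0.0
print(restraint_energy(1.0))  # outside -> positive penalty
```

Because the penalty is flat inside the window, the simulation remains free to explore any conformation consistent with the clue.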
    Protein Structure Predictions for COVID-19
One of the most important uses of biophysical modeling in our daily lives is drug discovery and development. 3D models of viruses or bacteria help identify weak spots in their defenses, and molecular dynamics simulations determine which small molecules may bind to those attackers and gum up their works, without having to test every possibility in the lab.
    Dill’s Laufer Center team is involved in a number of efforts to find drugs and treatments for COVID-19, with support from the White House-organized COVID-19 HPC Consortium, an effort among Federal government, industry, and academic leaders to provide access to the world’s most powerful high-performance computing resources in support of COVID-19 research.
    “Everyone dropped other things to work on COVID-19,” Dill recalled.
The first step the team took was to use MELD to determine the 3D shape of the coronavirus’s unknown proteins. Only three of the virus’s 29 proteins have been definitively resolved so far. “Most structures are not known, which is not a good beginning for drug discovery,” he said. “Can we predict structures that are not known? That’s the primary thing that we used Frontera for.”
    The Frontera supercomputer at the Texas Advanced Computing Center (TACC) — the fastest at any university in the world — allowed Dill and his team to make structure predictions for 19 additional proteins. Each of these could serve as an avenue for new drug developments. They have made their structure predictions publicly available and are working with teams to experimentally test their accuracy.
While it seems like the vaccine race is already close to declaring a winner, the first round of vaccines, drugs, and treatments is only the starting point for a recovery. As with HIV, it is likely that the first drugs developed will not work for all people, or will be surpassed by more effective ones with fewer side effects in the future.
    Dill and his Laufer Center team are playing the long game, hoping to find targets and mechanisms that are more promising than those already being developed.
    Repurposing Drugs and Exploring New Approaches
    A second project by the Laufer Center group uses Frontera to scan millions of commercially available small molecules for efficacy against COVID-19, in collaboration with Dima Kozakov’s group at Stony Brook University.
    “By focusing on the repurposing of commercially available molecules it’s possible, in principle, to shorten the time it takes to find a new drug,” he said. “Kozakov’s group has the ability to quickly screen thousands of molecules to identify the best hundred ones. We use our physics modeling to filter this pool of candidates even further, narrowing the options experimentalists need to test.”
A third project is studying an interesting class of engineered molecules known as PROTACs (proteolysis-targeting chimeras), which direct the “trash collector proteins” of human cells to pick up specific target proteins that they would not usually remove.
“Our cells have smart ways to identify proteins that need to be destroyed. The cell gets next to a protein, puts a sticker on it, and the proteins that collect trash take it away,” he explained. “Initially, PROTAC molecules were used to target cancer-related proteins. Now there is a push to transfer this concept to target SARS-CoV-2 proteins.”
Collaborating with Stony Brook chemist Peter Tonge, they are working to simulate the interaction of novel PROTACs with the COVID-19 virus. “These are some of our most ambitious simulations, both in terms of the size of the systems we are tackling and in terms of the chemical complexity,” he said. “Frontera is a crucial resource to give us sufficient turnaround times. For one simulation we need 30 GPUs and four to five days of continuous calculations.”
    The team is developing and testing their protocols on a non-COVID test system to benchmark their predictions. Once they settle on a protocol, they will apply this design procedure to COVID systems.
Every protein has a story to tell, and Dill, Brini, and their collaborators are building and applying the tools that help elucidate these stories. “There are some problems in protein science where we believe the real challenge is getting the physics and math right,” Dill concluded. “We’re testing that hypothesis on COVID-19.”

  • Unlocking the secrets of chemical bonding with machine learning

    A new machine learning approach offers important insights into catalysis, a fundamental process that makes it possible to reduce the emission of toxic exhaust gases or produce essential materials like fabric.
    In a report published in Nature Communications, Hongliang Xin, associate professor of chemical engineering at Virginia Tech, and his team of researchers developed a Bayesian learning model of chemisorption, or Bayeschem for short, aiming to use artificial intelligence to unlock the nature of chemical bonding at catalyst surfaces.
    “It all comes down to how catalysts bind with molecules,” said Xin. “The interaction has to be strong enough to break some chemical bonds at reasonably low temperatures, but not too strong that catalysts would be poisoned by reaction intermediates. This rule is known as the Sabatier principle in catalysis.”
Understanding how catalysts interact with different intermediates and determining how to control their bond strengths so that they fall within that “Goldilocks zone” is the key to designing efficient catalytic processes, Xin said. The research provides a tool for that purpose.
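This trade-off is often drawn as a “volcano” plot in catalysis. The schematic below (made-up numbers, not data for any real catalyst) captures the logic: activity is capped by weak binding on one side and by surface poisoning on the other, so it peaks at an intermediate bond strength:

```python
# Schematic Sabatier "volcano": activity vs. binding strength.
# Numbers are illustrative only, not measurements on a real catalyst.
def activity(binding_energy):
    """Toy rate limited by the weaker of two competing branches."""
    weak_limited = binding_energy          # stronger binding aids activation...
    strong_limited = 2.0 - binding_energy  # ...until intermediates poison the surface
    return min(weak_limited, strong_limited)

energies = [0.2 * i for i in range(11)]  # 0.0 .. 2.0 (arbitrary units)
best = max(energies, key=activity)
print(f"optimum near {best:.1f} (the intermediate binding strength)")
```

Too-weak and too-strong binding both yield low activity; the peak of the volcano is exactly the regime catalyst designers aim for.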
    Bayeschem works using Bayesian learning, a specific machine learning algorithm for inferring models from data. “Suppose you have a domain model based on well-established physical laws, and you want to use it to make predictions or learn something new about the world,” explained Siwen Wang, a former chemical engineering doctoral student. “The Bayesian approach is to learn the distribution of model parameters given our prior knowledge and the observed, often scarce, data, while providing uncertainty quantification of model predictions.”
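As a minimal illustration of that Bayesian logic (a toy conjugate update, not Bayeschem’s actual chemisorption model), consider inferring a probability from scarce binary data: prior knowledge enters as pseudo-counts, and the spread of the posterior quantifies the uncertainty of the prediction:

```python
# Toy Bayesian update (Beta-Bernoulli), illustrating the approach in
# spirit only: Bayeschem applies the same logic to d-band model
# parameters rather than coin-flip probabilities.
def beta_posterior(successes, failures, prior_a=2.0, prior_b=2.0):
    """Posterior Beta(a, b) after binary observations; returns (mean, std)."""
    a = prior_a + successes          # prior pseudo-counts + observed data
    b = prior_b + failures
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5          # posterior mean and its uncertainty

mean, std = beta_posterior(3, 1)     # scarce data: 3 successes, 1 failure
print(f"posterior estimate {mean:.3f} +/- {std:.3f}")
```

With more data the posterior tightens, so the model reports not just a prediction but how much it should be trusted, which is the point Wang emphasizes.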
    The d-band theory of chemisorption used in Bayeschem is a theory describing chemical bonding at solid surfaces involving d-electrons that are usually shaped like a four-leaf clover. The model explains how d-orbitals of catalyst atoms are overlapping and attracted to adsorbate valence orbitals that have a spherical or dumbbell-like shape. It has been considered the standard model in heterogeneous catalysis since its development by Hammer and Nørskov in the 1990s, and though it has been successful in explaining bonding trends of many systems, Xin said the model fails at times due to the intrinsic complexity of electronic interactions.
According to Xin, Bayeschem brings the d-band theory to a new level for quantifying those interaction strengths and possibly tuning some knobs, such as structure and composition, to design better materials. The approach advances the d-band theory of chemisorption by extending its ability to predict and interpret adsorption properties, both of which are crucial in catalyst discovery. However, compared with black-box machine learning models trained on large amounts of data, the prediction accuracy of Bayeschem still has room for improvement, said Hemanth Pillai, a chemical engineering doctoral student in Xin’s group who contributed equally to the study.
    “The opportunity to come up with highly accurate and interpretable models that build on deep learning algorithms and the theory of chemisorption is highly rewarding for achieving the goals of artificial intelligence in catalysis,” said Xin.

    Story Source:
Materials provided by Virginia Tech. Original written by Tina Russell. Note: Content may be edited for style and length.

  • Using a video game to understand the origin of emotions

    Emotions are complex phenomena that influence our minds, bodies and behaviour. A number of studies have sought to connect given emotions, such as fear or pleasure, to specific areas of the brain, but without success. Some theoretical models suggest that emotions emerge through the coordination of multiple mental processes triggered by an event. These models involve the brain orchestrating adapted emotional responses via the synchronisation of motivational, expressive and visceral mechanisms. To investigate this hypothesis, a research team from the University of Geneva (UNIGE) studied brain activity using functional MRI. They analysed the feelings, expressions and physiological responses of volunteers while they were playing a video game that had been specially developed to arouse different emotions depending on the progress of the game. The results, published in the journal PLOS Biology, show that different emotional components recruit several neural networks in parallel distributed throughout the brain, and that their transient synchronisation generates an emotional state. The somatosensory and motor pathways are two of the areas involved in this synchronisation, thereby validating the idea that emotion is grounded in action-oriented functions in order to allow an adapted response to events.
    Most studies use passive stimulation to understand the emergence of emotions: they typically present volunteers with photos, videos or images evoking fear, anger, joy or sadness while recording the cerebral response using electroencephalography or imaging. The goal is to pinpoint the specific neural networks for each emotion. “The problem is, these regions overlap for different emotions, so they’re not specific,” begins Joana Leitão, a post-doctoral fellow in the Department of Fundamental Neurosciences (NEUFO) in UNIGE’s Faculty of Medicine and at the Swiss Centre for Affective Sciences (CISA). “What’s more, it’s likely that, although these images represent emotions well, they don’t evoke them.”
    A question of perspective
    Several neuroscientific theories have attempted to model the emergence of an emotion, although none has so far been proven experimentally. The UNIGE research team subscribe to the postulate that emotions are “subjective”: two individuals faced with the same situation may experience a different emotion. “A given event is not assessed in the same way by each person because the perspectives are different,” continues Dr Leitão.
In a theoretical model known as the component process model (CPM) — devised by Professor Klaus Scherer, the retired founding director of CISA — an event will generate multiple responses in the organism. These relate to components of cognitive assessment (novelty or concordance with a goal or norms), motivation, physiological processes (sweating or heart rate), and expression (smiling or shouting). In a situation that sets off an emotional response, these different components influence each other dynamically. It is their transitory synchronisation that might correspond to an emotional state.
    Emotional about Pacman
The Geneva neuroscientists devised a video game to evaluate the applicability of this model. “The aim is to evoke emotions that correspond to different forms of evaluation,” explains Dr Leitão. “Rather than viewing simple images, participants play a video game that puts them in situations they’ll have to evaluate so they can advance and win rewards.” The game is an arcade game similar to the famous Pac-Man: players have to grab coins, touch the “nice monsters,” ignore the “neutral monsters” and avoid the “bad guys” to win points and advance to the next level.
    The scenario involves situations that trigger the four components of the CPM model differently. At the same time, the researchers were able to measure brain activity via imaging; facial expression by analysing the zygomatic muscles; feelings via questions; and physiology by skin and cardiorespiratory measurements. “All of these components involve different circuits distributed throughout the brain,” says the Geneva-based researcher. “By cross-referencing the imagery data with computational modelling, we were able to determine how these components interact over time and at what point they synchronise to generate an emotion.”
    A made-to-measure emotional response
    The results also indicate that a region deep in the brain called the basal ganglia is involved in this synchronisation. This structure is known as a convergence point between multiple cortical regions, each of which is equipped with specialised affective, cognitive or sensorimotor processes. The other regions involve the sensorimotor network, the posterior insula and the prefrontal cortex. “The involvement of the somatosensory and motor zones accords with the postulate of theories that consider emotion as a preparatory mechanism for action that enables the body to promote an adaptive response to events,” concludes Patrik Vuilleumier, full professor at NEUFO and senior author of the study.

    Story Source:
Materials provided by Université de Genève. Note: Content may be edited for style and length.

  • Tech makes it possible to digitally communicate through human touch

    Instead of inserting a card or scanning a smartphone to make a payment, what if you could simply touch the machine with your finger?
    A prototype developed by Purdue University engineers would essentially let your body act as the link between your card or smartphone and the reader or scanner, making it possible for you to transmit information just by touching a surface.
    The prototype doesn’t transfer money yet, but it’s the first technology that can send any information through the direct touch of a fingertip. While wearing the prototype as a watch, a user’s body can be used to send information such as a photo or password when touching a sensor on a laptop, the researchers show in a new study.
    “We’re used to unlocking devices using our fingerprints, but this technology wouldn’t rely on biometrics — it would rely on digital signals. Imagine logging into an app on someone else’s phone just by touch,” said Shreyas Sen, a Purdue associate professor of electrical and computer engineering.
    “Whatever you touch would become more powerful because digital information is going through it.”
    The study is published in Transactions on Computer-Human Interaction, a journal by the Association for Computing Machinery. Shovan Maity, a Purdue alum, led the study as a Ph.D. student in Sen’s lab. The researchers also will present their findings at the Association for Computing Machinery’s Computer Human Interaction (ACM CHI) conference in May.

    The technology works by establishing an “internet” within the body that smartphones, smartwatches, pacemakers, insulin pumps and other wearable or implantable devices can use to send information. These devices typically communicate using Bluetooth signals that tend to radiate out from the body. A hacker could intercept those signals from 30 feet away, Sen said.
    Sen’s technology instead keeps signals confined within the body by coupling them in a so-called “Electro-Quasistatic range” that is much lower on the electromagnetic spectrum than typical Bluetooth communication. This mechanism is what enables information transfer by only touching a surface.
    Even if your finger hovered just one centimeter above a surface, information wouldn’t transfer through this technology without a direct touch. This would prevent a hacker from stealing private information such as credit card credentials by intercepting the signals.
    The researchers demonstrated this capability in the lab by having a person interact with two adjacent surfaces. Each surface was equipped with an electrode to touch, a receiver to get data from the finger and a light to indicate that data had transferred. If the finger directly touched an electrode, only the light of that surface turned on. The fact that the light of the other surface stayed off indicated that the data didn’t leak out.
    Similarly, if a finger hovered as close as possible over a laptop sensor, a photo wouldn’t transfer. But a direct touch could transfer a photo.

    Credit card machines and apps such as Apple Pay use a more secure alternative to Bluetooth signals — called near-field communication — to receive a payment from tapping a card or scanning a phone. Sen’s technology would add the convenience of making a secure payment in a single gesture.
    “You wouldn’t have to bring a device out of your pocket. You could leave it in your pocket or on your body and just touch,” Sen said.
    The technology could also replace key fobs or cards that currently use Bluetooth communication to grant access into a building. Instead, a person might just touch a door handle to enter.
As with machines today that scan coupons, gift cards and other information from a phone, using this technology in real life would require surfaces everywhere to have the right hardware for recognizing your finger.
    The software on the device that a person is wearing would also need to be configured to send signals through the body to the fingertip — and have a way to turn off so that information, such as a payment, wouldn’t be transferred to every surface equipped to receive it.
    The researchers believe that the applications of this technology would go beyond how we interact with devices today.
    “Anytime you are enabling a new hardware channel, it gives you more possibilities. Think of big touch screens that we have today — the only information that the computer receives is the location of your touch. But the ability to transfer information through your touch would change the applications of that big touch screen,” Sen said.
    A video about the research is available on YouTube at https://youtu.be/-2oscW5i5DQ.

    Story Source:
Materials provided by Purdue University. Original written by Kayla Wiles. Note: Content may be edited for style and length.

  • Mapping quantum structures with light to unlock their capabilities

    A new tool that uses light to map out the electronic structures of crystals could reveal the capabilities of emerging quantum materials and pave the way for advanced energy technologies and quantum computers, according to researchers at the University of Michigan, University of Regensburg and University of Marburg.
    A paper on the work is published in Science.
    Applications include LED lights, solar cells and artificial photosynthesis.
    “Quantum materials could have an impact way beyond quantum computing,” said Mackillo Kira, professor of electrical engineering and computer science at the University of Michigan, who led the theory side of the new study. “If you optimize quantum properties right, you can get 100% efficiency for light absorption.”
    Silicon-based solar cells are already becoming the cheapest form of electricity, although their sunlight-to-electricity conversion efficiency is rather low, about 30%. Emerging “2D” semiconductors, which consist of a single layer of crystal, could do that much better — potentially using up to 100% of the sunlight. They could also elevate quantum computing to room temperature from the near-absolute-zero machines demonstrated so far.
    “New quantum materials are now being discovered at a faster pace than ever,” said Rupert Huber, professor of physics at the University of Regensburg in Germany, who led the experimental work. “By simply stacking such layers one on top of the other under variable twist angles, and with a wide selection of materials, scientists can now create artificial solids with truly unprecedented properties.”
    The ability to map these properties down to the atoms could help streamline the process of designing materials with the right quantum structures. But these ultrathin materials are much smaller and messier than earlier crystals, and the old analysis methods don’t work. Now, 2D materials can be measured with the new laser-based method at room temperature and pressure.

    The measurable operations include processes that are key to solar cells, lasers and optically driven quantum computing. Essentially, electrons pop between a “ground state,” in which they cannot travel, and states in the semiconductor’s “conduction band,” in which they are free to move through space. They do this by absorbing and emitting light.
    The quantum mapping method uses a 100 femtosecond (100 quadrillionths of a second) pulse of red laser light to pop electrons out of the ground state and into the conduction band. Next the electrons are hit with a second pulse of infrared light. This pushes them so that they oscillate up and down an energy “valley” in the conduction band, a little like skateboarders in a halfpipe.
The team uses the dual wave/particle nature of electrons to create a standing wave pattern that looks like a comb. They discovered that when the peak of this electron comb overlaps with the material’s band structure — its quantum structure — electrons emit light intensely. That powerful light emission, along with the narrow width of the comb lines, helped create a picture so sharp that researchers call it super-resolution.
By combining that precise location information with the frequency of the light, the team was able to map out the band structure of the 2D semiconductor tungsten diselenide. Not only that, but they could also get a read on each electron’s orbital angular momentum through the way the front of the light wave twisted in space. Manipulating an electron’s orbital angular momentum, also known as a pseudospin, is a promising avenue for storing and processing quantum information.
    In tungsten diselenide, the orbital angular momentum identifies which of two different “valleys” an electron occupies. The messages that the electrons send out can show researchers not only which valley the electron was in but also what the landscape of that valley looks like and how far apart the valleys are, which are the key elements needed to design new semiconductor-based quantum devices.
    For instance, when the team used the laser to push electrons up the side of one valley until they fell into the other, the electrons emitted light at that drop point, too. That light gives clues about the depths of the valleys and the height of the ridge between them. With this kind of information, researchers can figure out how the material would fare for a variety of purposes.
The paper is titled, “Super-resolution lightwave tomography of electronic bands in quantum materials.” This research was funded by the Army Research Office, German Research Foundation and U-M College of Engineering Blue Sky Research Program.

  • Ancient humans may have deliberately voyaged to Japan’s Ryukyu Islands

    Long ago, ancient mariners successfully navigated a perilous ocean journey to arrive at Japan’s Ryukyu Islands, a new study suggests.
    Archaeological sites on six of these isles — part of a 1,200-kilometer-long chain — indicate that migrations to the islands occurred 35,000 to 30,000 years ago, both from the south via Taiwan and from the north via the Japanese island of Kyushu.
    But whether ancient humans navigated there on purpose or drifted there by accident on the Kuroshio ocean current, one of the world’s largest and strongest currents, is unclear. The answer to that question could shed light on the proficiency of these Stone Age humans as mariners and their mental capabilities overall.
    Now, satellite-tracked buoys that simulated wayward rafts suggest that there’s little chance that the seafarers reached the isles by accident.

    Researchers analyzed 138 buoys that were released near or passed by Taiwan and the Philippine island Luzon from 1989 to 2017, deployed as part of the Global Drifter Program to map surface ocean currents worldwide. In findings published online December 3 in Scientific Reports, the team found that only four of the buoys came within 20 kilometers of any of the Ryukyu Islands, and these did so only as a result of typhoons and other adverse weather.
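The core of such an analysis is a proximity test along each buoy track. A minimal sketch (with hypothetical coordinates, not the study’s actual drifter data) might look like this:

```python
import math

# Proximity test for a drifting buoy track: did any position fix come
# within 20 km of an island? Coordinates below are hypothetical, chosen
# only to illustrate the calculation.
def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def passed_within(track, island, threshold_km=20.0):
    """True if any (lat, lon) fix on the track comes within the threshold."""
    return any(haversine_km(lat, lon, *island) <= threshold_km for lat, lon in track)

# A track hugging Taiwan's east coast stays well outside a 20 km band
# around Yonaguni (roughly 24.45 N, 123.0 E).
track = [(23.0, 121.8), (23.5, 122.0), (24.0, 122.2)]
print(passed_within(track, (24.45, 123.0)))  # False
```

Counting how many of the 138 tracks ever return True is, in essence, the statistic the study reports.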
    It is unlikely that ancient mariners would have set out on an ocean voyage with a major storm on the horizon, say paleoanthropologist Yousuke Kaifu of the University of Tokyo and colleagues. As a result, the new findings indicate that the Kuroshio current would have forced drifters away from rather than toward the Ryukyu Islands, suggesting that anyone who made the crossing did so intentionally instead of accidentally, Kaifu says.
    Geologic records suggest that currents in the region have remained stable for at least the past 100,000 years. So it’s reasonable to conclude that these buoys mimic how well ancient watercraft set adrift in the same area might have fared, the researchers say.
    “From a navigation perspective, crossing to the Ryukyus was so challenging that accidental-drift models are unlikely to provide an effective explanation,” agrees archaeologist Thomas Leppard of Florida State University in Tallahassee, who was not involved in the research. This new work “is, of course, not conclusive, but it is suggestive.”
    Stone tools and butchered remains of a rhinoceros suggest archaic human lineages such as Homo erectus may have similarly crossed seas at least 709,000 years ago. And artifacts found in Australia suggest modern humans may have begun voyaging across the ocean at least 65,000 years ago (SN: 7/19/17). But it remains hotly debated whether humans’ ocean journeys during the Paleolithic, which lasted from roughly 2.6 million years ago to about 11,700 years ago, were generally made accidentally or intentionally.
Other data do suggest that ancient humans could have deliberately made the voyage to the Ryukyu Islands. In 2019, a team of adventurers succeeded in paddling more than 200 kilometers from Taiwan to Yonaguni in the archipelago using a dugout canoe that Kaifu and his colleagues made using stone axes modeled on Japanese Paleolithic artifacts.
Although the people of the Paleolithic are often perceived as primitive and conservative in their goals, “I feel something very different from the evidence of human presence on these remote islands,” Kaifu says.