More stories

  •

    Protein storytelling to address the pandemic

    In the last five decades, we’ve learned a lot about the secret lives of proteins — how they work, what they interact with, the machinery that makes them function — and the pace of discovery is accelerating.
    The first three-dimensional protein structures were solved in the late 1950s and 1960s. Today, the Protein Data Bank, a worldwide repository of information about the 3D structures of large biological molecules, holds well over 100,000 entries. Just this week, the company DeepMind shocked the protein structure world with its accurate, AI-driven predictions.
    But the 3D structure is often not enough to truly understand what a protein is up to, explains Ken Dill, director of the Laufer Center for Physical and Quantitative Biology at Stony Brook University and a member of the National Academy of Sciences. “It’s like somebody asking how an automobile works, and a mechanic opening the hood of a car and saying, ‘see, there’s the engine, that’s how it works.'”
    In the intervening decades, computer simulations have built upon and added to the understanding of protein behavior by setting these 3D molecular machines in motion. Analyzing their energy landscapes, interactions, and dynamics has taught us even more about these prime movers of life.
    “We’re really trying to ask the question: how does it work? Not just, how does it look?” Dill said. “That’s the essence of why you want to know protein structures in the first place, and one of the biggest applications of this is for drug discovery.”
    Writing in Science magazine in November 2020, Dill and his Stony Brook colleagues Carlos Simmerling and Emiliano Brini shared their perspectives on the evolution of the field.


    “Computational Molecular Physics is an increasingly powerful tool for telling the stories of protein molecule actions,” they wrote. “Systematic improvements in forcefields, enhanced sampling methods, and accelerators have enabled [computational molecular physics] to reach timescales of important biological actions…. At this rate, in the next quarter century, we’ll be telling stories of protein molecules over the whole lifespan, tens of minutes, of a bacterial cell.”
    Speeding Simulations
    Decades after the first dynamic models of proteins, however, computational biophysicists still face major challenges. To be useful, simulations need to be accurate; and to be accurate, they need to progress atom by atom and femtosecond (10^-15 seconds) by femtosecond. To match the timescales that matter, simulations must extend over microseconds or milliseconds — that is, billions to trillions of time steps.
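    The arithmetic behind that gap is stark. A minimal back-of-envelope sketch (my numbers, not the article's):

```python
# Back-of-envelope: how many femtosecond-sized integration steps a molecular
# dynamics simulation needs to reach biologically relevant timescales.
FEMTOSECOND = 1e-15  # seconds; a typical MD integration step

def steps_needed(duration_seconds):
    """Number of femtosecond time steps required to cover a simulated duration."""
    return duration_seconds / FEMTOSECOND

print(f"1 microsecond ~ {steps_needed(1e-6):.0e} steps")
print(f"1 millisecond ~ {steps_needed(1e-3):.0e} steps")
```

    At roughly a billion steps per simulated microsecond, even powerful machines need days for a single trajectory, which is why acceleration methods matter.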
    “Computational molecular physics has developed at a fast clip relatively speaking, but not enough to get us into the time and size and motion range we need to see,” he said.
    One of the main methods researchers use to understand proteins in this way is called molecular dynamics. Since 2015, with support from the National Institutes of Health and the National Science Foundation, Dill and his team have been working to speed up molecular dynamics simulations. Their method, called MELD, accelerates the process by providing vague but important information about the system being studied.


    Dill likens the method to a treasure hunt. Instead of asking someone to find a treasure that could be anywhere, you give them a map with clues, saying: ‘it’s either near Chicago or near Idaho.’ In the case of actual proteins, that might mean telling the simulation that one part of a chain of amino acids is near another part of the chain. This narrowing of the search field can speed up simulations significantly — sometimes by a factor of 1,000 or more — enabling novel studies and providing new insights.
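    MELD's real restraints are integrated into replica-exchange molecular dynamics; purely as a toy illustration of the "Chicago or Idaho" idea, a hypothetical flat-bottom penalty like the one below adds no energy while a hinted residue-residue distance stays inside its allowed range, and penalizes conformations outside it, steering the search toward the hint:

```python
def restraint_penalty(distance, lower=0.0, upper=8.0, k=10.0):
    """Toy flat-bottom distance restraint (hypothetical, not MELD's actual form):
    zero penalty while the distance between two residues lies in [lower, upper]
    angstroms, quadratic penalty outside, so hinted regions stay cheap to visit."""
    if distance < lower:
        return k * (lower - distance) ** 2
    if distance > upper:
        return k * (distance - upper) ** 2
    return 0.0

print(restraint_penalty(5.0))   # inside the hinted range: no penalty
print(restraint_penalty(10.0))  # outside: a penalty pushes the search back
```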
    Protein Structure Predictions for COVID-19
    One of the most important uses of biophysical modeling in our daily lives is drug discovery and development. 3D models of viruses or bacteria help identify weak spots in their defenses, and molecular dynamics simulations determine which small molecules may bind to those attackers and gum up their works, without having to test every possibility in the lab.
    Dill’s Laufer Center team is involved in a number of efforts to find drugs and treatments for COVID-19, with support from the White House-organized COVID-19 HPC Consortium, an effort among Federal government, industry, and academic leaders to provide access to the world’s most powerful high-performance computing resources in support of COVID-19 research.
    “Everyone dropped other things to work on COVID-19,” Dill recalled.
    The first step the team took was to use MELD to determine the 3D shape of the coronavirus’ unknown proteins. Only three of the virus’ 29 proteins have been definitively resolved so far. “Most structures are not known, which is not a good beginning for drug discovery,” he said. “Can we predict structures that are not known? That’s the primary thing that we used Frontera for.”
    The Frontera supercomputer at the Texas Advanced Computing Center (TACC) — the fastest at any university in the world — allowed Dill and his team to make structure predictions for 19 additional proteins. Each of these could serve as an avenue for new drug developments. They have made their structure predictions publicly available and are working with teams to experimentally test their accuracy.
    While it seems like the vaccine race is already close to declaring a winner, the first round of vaccines, drugs, and treatments are only the starting point for a recovery. As with HIV, it is likely that the first drugs developed will not work on all people, or will be surpassed by more effective ones with fewer side-effects in the future.
    Dill and his Laufer Center team are playing the long game, hoping to find targets and mechanisms that are more promising than those already being developed.
    Repurposing Drugs and Exploring New Approaches
    A second project by the Laufer Center group uses Frontera to scan millions of commercially available small molecules for efficacy against COVID-19, in collaboration with Dima Kozakov’s group at Stony Brook University.
    “By focusing on the repurposing of commercially available molecules it’s possible, in principle, to shorten the time it takes to find a new drug,” he said. “Kozakov’s group has the ability to quickly screen thousands of molecules to identify the best hundred ones. We use our physics modeling to filter this pool of candidates even further, narrowing the options experimentalists need to test.”
    A third project is studying an interesting class of molecules known as PROTACs, which direct the “trash collector proteins” of human cells to pick up specific target proteins that they would not usually remove.
    “Our cells have smart ways to identify proteins that need to be destroyed. The cell gets next to one, puts a sticker on it, and the proteins that collect trash take it away,” he explained. “Initially, PROTAC molecules were used to target cancer-related proteins. Now there is a push to transfer this concept to target SARS-CoV-2 proteins.”
    Collaborating with Stony Brook chemist Peter Tonge, they are working to simulate the interaction of novel PROTACs with the COVID-19 virus. “These are some of our most ambitious simulations, both in terms of the size of the systems we are tackling and in terms of the chemical complexity,” he said. “Frontera is a crucial resource to give us sufficient turnaround times. For one simulation we need 30 GPUs and four to five days of continuous calculations.”
    The team is developing and testing their protocols on a non-COVID test system to benchmark their predictions. Once they settle on a protocol, they will apply this design procedure to COVID systems.
    Every protein has a story to tell and Dill, Brini and their collaborators are building and applying the tools that help elucidate these stories. “There are some problems in protein science where we believe the real challenge is getting the physics and math right,” Dill concluded. “We’re testing that hypothesis on COVID-19.”

  •

    Unlocking the secrets of chemical bonding with machine learning

    A new machine learning approach offers important insights into catalysis, a fundamental process that makes it possible to reduce the emission of toxic exhaust gases or produce essential materials like fabric.
    In a report published in Nature Communications, Hongliang Xin, associate professor of chemical engineering at Virginia Tech, and his team of researchers developed a Bayesian learning model of chemisorption, or Bayeschem for short, aiming to use artificial intelligence to unlock the nature of chemical bonding at catalyst surfaces.
    “It all comes down to how catalysts bind with molecules,” said Xin. “The interaction has to be strong enough to break some chemical bonds at reasonably low temperatures, but not so strong that catalysts would be poisoned by reaction intermediates. This rule is known as the Sabatier principle in catalysis.”
    Understanding how catalysts interact with different intermediates and determining how to control their bond strengths so that they are within that “goldilocks zone” is the key to designing efficient catalytic processes, Xin said. The research provides a tool for that purpose.
    Bayeschem works using Bayesian learning, a specific machine learning algorithm for inferring models from data. “Suppose you have a domain model based on well-established physical laws, and you want to use it to make predictions or learn something new about the world,” explained Siwen Wang, a former chemical engineering doctoral student. “The Bayesian approach is to learn the distribution of model parameters given our prior knowledge and the observed, often scarce, data, while providing uncertainty quantification of model predictions.”
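    The paper's actual model couples this inference to chemisorption theory; purely as a generic sketch of the Bayesian workflow Wang describes (assumed toy model: a single slope parameter with a Gaussian prior and Gaussian noise), the conjugate update below returns both a parameter estimate and its uncertainty:

```python
def bayes_update_slope(xs, ys, prior_mean=0.0, prior_var=100.0, noise_var=1.0):
    """Conjugate Bayesian update for theta in y = theta * x + Gaussian noise.
    Returns the posterior mean and variance of theta, so every prediction
    carries an uncertainty estimate.  (Generic textbook sketch, not Bayeschem.)"""
    precision = 1.0 / prior_var + sum(x * x for x in xs) / noise_var
    mean = (prior_mean / prior_var
            + sum(x * y for x, y in zip(xs, ys)) / noise_var) / precision
    return mean, 1.0 / precision

# Even scarce data narrows the prior: three points lying near y = 2x
mean, var = bayes_update_slope([1.0, 2.0, 3.0], [2.1, 3.9, 6.1])
print(mean, var)  # posterior concentrates near 2 with small variance
```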
    The d-band theory of chemisorption used in Bayeschem describes chemical bonding at solid surfaces involving d-electrons, whose orbitals are usually shaped like a four-leaf clover. The model explains how the d-orbitals of catalyst atoms overlap with and are attracted to adsorbate valence orbitals, which have a spherical or dumbbell-like shape. It has been considered the standard model in heterogeneous catalysis since its development by Hammer and Nørskov in the 1990s, and though it has been successful in explaining bonding trends of many systems, Xin said the model fails at times due to the intrinsic complexity of electronic interactions.
    According to Xin, Bayeschem brings the d-band theory to a new level for quantifying those interaction strengths and possibly tailoring some knobs, such as structure and composition, to design better materials. The approach advances the d-band theory of chemisorption by extending its ability to predict and interpret adsorption properties, both of which are crucial in catalyst discovery. However, compared with black-box machine learning models trained on large amounts of data, the prediction accuracy of Bayeschem still has room for improvement, said Hemanth Pillai, a chemical engineering doctoral student in Xin’s group who contributed equally to the study.
    “The opportunity to come up with highly accurate and interpretable models that build on deep learning algorithms and the theory of chemisorption is highly rewarding for achieving the goals of artificial intelligence in catalysis,” said Xin.

    Story Source:
    Materials provided by Virginia Tech. Original written by Tina Russell. Note: Content may be edited for style and length.

  •

    Using a video game to understand the origin of emotions

    Emotions are complex phenomena that influence our minds, bodies and behaviour. A number of studies have sought to connect given emotions, such as fear or pleasure, to specific areas of the brain, but without success. Some theoretical models suggest that emotions emerge through the coordination of multiple mental processes triggered by an event. These models involve the brain orchestrating adapted emotional responses via the synchronisation of motivational, expressive and visceral mechanisms.

    To investigate this hypothesis, a research team from the University of Geneva (UNIGE) studied brain activity using functional MRI. They analysed the feelings, expressions and physiological responses of volunteers while they were playing a video game that had been specially developed to arouse different emotions depending on the progress of the game.

    The results, published in the journal PLOS Biology, show that different emotional components recruit several neural networks in parallel distributed throughout the brain, and that their transient synchronisation generates an emotional state. The somatosensory and motor pathways are two of the areas involved in this synchronisation, thereby validating the idea that emotion is grounded in action-oriented functions in order to allow an adapted response to events.
    Most studies use passive stimulation to understand the emergence of emotions: they typically present volunteers with photos, videos or images evoking fear, anger, joy or sadness while recording the cerebral response using electroencephalography or imaging. The goal is to pinpoint the specific neural networks for each emotion. “The problem is, these regions overlap for different emotions, so they’re not specific,” begins Joana Leitão, a post-doctoral fellow in the Department of Fundamental Neurosciences (NEUFO) in UNIGE’s Faculty of Medicine and at the Swiss Centre for Affective Sciences (CISA). “What’s more, it’s likely that, although these images represent emotions well, they don’t evoke them.”
    A question of perspective
    Several neuroscientific theories have attempted to model the emergence of an emotion, although none has so far been proven experimentally. The UNIGE research team subscribe to the postulate that emotions are “subjective”: two individuals faced with the same situation may experience a different emotion. “A given event is not assessed in the same way by each person because the perspectives are different,” continues Dr Leitão.
    In a theoretical model known as the component process model (CPM), devised by Professor Klaus Scherer, the retired founding director of CISA, an event will generate multiple responses in the organism. These relate to components of cognitive assessment (novelty or concordance with a goal or norms), motivation, physiological processes (sweating or heart rate), and expression (smiling or shouting). In a situation that sets off an emotional response, these different components influence each other dynamically. It is their transitory synchronisation that might correspond to an emotional state.
    Emotional about Pacman
    The Geneva neuroscientists devised a video game to evaluate the applicability of this model. “The aim is to evoke emotions that correspond to different forms of evaluation,” explains Dr Leitão. “Rather than viewing simple images, participants play a video game that puts them in situations they’ll have to evaluate so they can advance and win rewards.” The arcade-style game is similar to the famous Pac-Man: players have to grab coins, touch the “nice monsters,” ignore the “neutral monsters” and avoid the “bad guys” to win points and pass to the next level.
    The scenario involves situations that trigger the four components of the CPM model differently. At the same time, the researchers were able to measure brain activity via imaging; facial expression by analysing the zygomatic muscles; feelings via questions; and physiology by skin and cardiorespiratory measurements. “All of these components involve different circuits distributed throughout the brain,” says the Geneva-based researcher. “By cross-referencing the imagery data with computational modelling, we were able to determine how these components interact over time and at what point they synchronise to generate an emotion.”
    A made-to-measure emotional response
    The results also indicate that a region deep in the brain called the basal ganglia is involved in this synchronisation. This structure is known as a convergence point between multiple cortical regions, each of which is equipped with specialised affective, cognitive or sensorimotor processes. The other regions involve the sensorimotor network, the posterior insula and the prefrontal cortex. “The involvement of the somatosensory and motor zones accords with the postulate of theories that consider emotion as a preparatory mechanism for action that enables the body to promote an adaptive response to events,” concludes Patrik Vuilleumier, full professor at NEUFO and senior author of the study.

    Story Source:
    Materials provided by Université de Genève. Note: Content may be edited for style and length.

  •

    Tech makes it possible to digitally communicate through human touch

    Instead of inserting a card or scanning a smartphone to make a payment, what if you could simply touch the machine with your finger?
    A prototype developed by Purdue University engineers would essentially let your body act as the link between your card or smartphone and the reader or scanner, making it possible for you to transmit information just by touching a surface.
    The prototype doesn’t transfer money yet, but it’s the first technology that can send any information through the direct touch of a fingertip. While wearing the prototype as a watch, a user’s body can be used to send information such as a photo or password when touching a sensor on a laptop, the researchers show in a new study.
    “We’re used to unlocking devices using our fingerprints, but this technology wouldn’t rely on biometrics — it would rely on digital signals. Imagine logging into an app on someone else’s phone just by touch,” said Shreyas Sen, a Purdue associate professor of electrical and computer engineering.
    “Whatever you touch would become more powerful because digital information is going through it.”
    The study is published in Transactions on Computer-Human Interaction, a journal by the Association for Computing Machinery. Shovan Maity, a Purdue alum, led the study as a Ph.D. student in Sen’s lab. The researchers also will present their findings at the Association for Computing Machinery’s Computer Human Interaction (ACM CHI) conference in May.


    The technology works by establishing an “internet” within the body that smartphones, smartwatches, pacemakers, insulin pumps and other wearable or implantable devices can use to send information. These devices typically communicate using Bluetooth signals that tend to radiate out from the body. A hacker could intercept those signals from 30 feet away, Sen said.
    Sen’s technology instead keeps signals confined within the body by coupling them in a so-called “Electro-Quasistatic range” that is much lower on the electromagnetic spectrum than typical Bluetooth communication. This mechanism is what enables information transfer by only touching a surface.
    Even if a finger hovered just one centimeter above a surface, information wouldn’t transfer: the technology requires a direct touch. This would prevent a hacker from stealing private information, such as credit card credentials, by intercepting the signals.
    The researchers demonstrated this capability in the lab by having a person interact with two adjacent surfaces. Each surface was equipped with an electrode to touch, a receiver to get data from the finger and a light to indicate that data had transferred. If the finger directly touched an electrode, only the light of that surface turned on. The fact that the light of the other surface stayed off indicated that the data didn’t leak out.
    Similarly, if a finger hovered as close as possible over a laptop sensor, a photo wouldn’t transfer. But a direct touch could transfer a photo.


    Credit card machines and apps such as Apple Pay use a more secure alternative to Bluetooth signals — called near-field communication — to receive a payment from tapping a card or scanning a phone. Sen’s technology would add the convenience of making a secure payment in a single gesture.
    “You wouldn’t have to bring a device out of your pocket. You could leave it in your pocket or on your body and just touch,” Sen said.
    The technology could also replace key fobs or cards that currently use Bluetooth communication to grant access into a building. Instead, a person might just touch a door handle to enter.
    Just as machines today scan coupons, gift cards and other information from a phone, using this technology in real life would require surfaces everywhere to have the right hardware for recognizing your finger.
    The software on the device that a person is wearing would also need to be configured to send signals through the body to the fingertip — and have a way to turn off so that information, such as a payment, wouldn’t be transferred to every surface equipped to receive it.
    The researchers believe that the applications of this technology would go beyond how we interact with devices today.
    “Anytime you are enabling a new hardware channel, it gives you more possibilities. Think of big touch screens that we have today — the only information that the computer receives is the location of your touch. But the ability to transfer information through your touch would change the applications of that big touch screen,” Sen said.
    A video about the research is available on YouTube at https://youtu.be/-2oscW5i5DQ.

    Story Source:
    Materials provided by Purdue University. Original written by Kayla Wiles. Note: Content may be edited for style and length.

  •

    Mapping quantum structures with light to unlock their capabilities

    A new tool that uses light to map out the electronic structures of crystals could reveal the capabilities of emerging quantum materials and pave the way for advanced energy technologies and quantum computers, according to researchers at the University of Michigan, University of Regensburg and University of Marburg.
    A paper on the work is published in Science.
    Applications include LED lights, solar cells and artificial photosynthesis.
    “Quantum materials could have an impact way beyond quantum computing,” said Mackillo Kira, professor of electrical engineering and computer science at the University of Michigan, who led the theory side of the new study. “If you optimize quantum properties right, you can get 100% efficiency for light absorption.”
    Silicon-based solar cells are already becoming the cheapest form of electricity, although their sunlight-to-electricity conversion efficiency is rather low, topping out below 30%. Emerging “2D” semiconductors, which consist of a single layer of crystal, could do much better — potentially using up to 100% of the sunlight. They could also elevate quantum computing to room temperature from the near-absolute-zero machines demonstrated so far.
    “New quantum materials are now being discovered at a faster pace than ever,” said Rupert Huber, professor of physics at the University of Regensburg in Germany, who led the experimental work. “By simply stacking such layers one on top of the other under variable twist angles, and with a wide selection of materials, scientists can now create artificial solids with truly unprecedented properties.”
    The ability to map these properties down to the atoms could help streamline the process of designing materials with the right quantum structures. But these ultrathin materials are much smaller and messier than earlier crystals, and the old analysis methods don’t work. Now, 2D materials can be measured with the new laser-based method at room temperature and pressure.


    The measurable operations include processes that are key to solar cells, lasers and optically driven quantum computing. Essentially, electrons pop between a “ground state,” in which they cannot travel, and states in the semiconductor’s “conduction band,” in which they are free to move through space. They do this by absorbing and emitting light.
    The quantum mapping method uses a 100 femtosecond (100 quadrillionths of a second) pulse of red laser light to pop electrons out of the ground state and into the conduction band. Next the electrons are hit with a second pulse of infrared light. This pushes them so that they oscillate up and down an energy “valley” in the conduction band, a little like skateboarders in a halfpipe.
    The team uses the dual wave/particle nature of electrons to create a standing wave pattern that looks like a comb. They discovered that when the peak of this electron comb overlaps with the material’s band structure — its quantum structure — electrons emit light intensely. That powerful light emission, along with the narrow width of the comb lines, helped create a picture so sharp that researchers call it super-resolution.
    By combining that precise location information with the frequency of the light, the team was able to map out the band structure of the 2D semiconductor tungsten diselenide. Not only that, but they could also get a read on each electron’s orbital angular momentum through the way the front of the light wave twisted in space. Manipulating an electron’s orbital angular momentum, known also as a pseudospin, is a promising avenue for storing and processing quantum information.
    In tungsten diselenide, the orbital angular momentum identifies which of two different “valleys” an electron occupies. The messages that the electrons send out can show researchers not only which valley the electron was in but also what the landscape of that valley looks like and how far apart the valleys are, which are the key elements needed to design new semiconductor-based quantum devices.
    For instance, when the team used the laser to push electrons up the side of one valley until they fell into the other, the electrons emitted light at that drop point, too. That light gives clues about the depths of the valleys and the height of the ridge between them. With this kind of information, researchers can figure out how the material would fare for a variety of purposes.
    The paper is titled, “Super-resolution lightwave tomography of electronic bands in quantum materials.” This research was funded by the Army Research Office, German Research Foundation and U-M College of Engineering Blue Sky Research Program.

  •

    Chaotic early solar system collisions resembled 'Asteroids' arcade game

    One Friday evening in 1992, a meteorite ended a more than 150 million-mile journey by smashing into the trunk of a red Chevrolet Malibu in Peekskill, New York. The car’s owner reported that the 30-pound remnant of the earliest days of our solar system was still warm and smelled of sulfur.
    Nearly 30 years later, a new analysis of that same Peekskill meteorite and 17 others by researchers at The University of Texas at Austin and the University of Tennessee, Knoxville, has led to a new hypothesis about how asteroids formed during the early years of the solar system.
    The meteorites studied in the research originated from asteroids and serve as natural samples of the space rocks. They indicate that the asteroids formed through violent bombardment and subsequent reassembly, a finding that runs counter to the prevailing idea that the young solar system was a peaceful place.
    The study was published in print Dec. 1 in the journal Geochimica et Cosmochimica Acta.
    The research began when co-author Nick Dygert was a postdoctoral fellow at UT’s Jackson School of Geosciences studying terrestrial rocks using a method that could measure the cooling rates of rocks from very high temperatures, up to 1,400 degrees Celsius.
    Dygert, now an assistant professor at the University of Tennessee, realized that this method — called a rare earth element (REE)-in-two-pyroxene thermometer — could work for space rocks, too.


    “This is a really powerful new technique for using geochemistry to understand geophysical processes, and no one had used it to measure meteorites yet,” Dygert said.
    Since the 1970s, scientists have been measuring minerals in meteorites to figure out how they formed. The work suggested that meteorites cooled very slowly from the outside inward in layers. This “onion shell model” is consistent with a relatively peaceful young solar system where chunks of rock orbited unhindered. But those studies were only capable of measuring cooling rates from temperatures near about 500 degrees Celsius.
    When Dygert and Michael Lucas, a postdoctoral scholar at the University of Tennessee who led the work, applied the REE-in-two-pyroxene method, with its much higher sensitivity to peak temperature, they found unexpected results. From around 900 degrees Celsius down to 500 degrees Celsius, cooling rates were 1,000 to 1 million times faster than at lower temperatures.
    How could these two very different cooling rates be reconciled?
    The scientists proposed that asteroids formed in stages. If the early solar system was, much like the old Atari game “Asteroids,” rife with bombardment, large rocks would have been smashed to bits. Those smaller pieces would have cooled quickly. Afterward, when the small pieces reassembled into larger asteroids we see today, cooling rates would have slowed.


    To test this rubble pile hypothesis, Jackson School Professor Marc Hesse and first-year doctoral student Jialong Ren built a computational model of a two-stage thermal history of rubble pile asteroids for the first time.
    Because of the vast number of pieces in a rubble pile — 10^15, or a thousand trillion — and the vast array of their sizes, Ren had to develop new techniques to account for changes in mass and temperature before and after bombardment.
    “This was an intellectually significant contribution,” Hesse said.
    The resulting model supports the rubble pile hypothesis and provides other insights as well. One implication is that cooling slowed so much after reassembly not because the rock gave off heat in layers, but because the rubble pile contained pores.
    “The porosity reduces how fast you can conduct heat,” Hesse said. “You actually cool slower than you would have if you hadn’t fragmented because all of the rubble makes kind of a nice blanket. And that’s sort of unintuitive.”
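    The intuition scales simply. A toy conductive-cooling estimate (my illustrative numbers, not the team's published thermal model) uses the characteristic timescale t ~ R² ρ c / k; cutting the effective conductivity k tenfold for a porous rubble pile stretches the cooling time tenfold:

```python
def cooling_timescale(radius_m, conductivity, density=3300.0, heat_capacity=800.0):
    """Characteristic conductive cooling time, t ~ R^2 * rho * c / k, in seconds.
    Order-of-magnitude physics only; all parameter values here are assumed."""
    return radius_m ** 2 * density * heat_capacity / conductivity

R = 50_000.0                                       # assumed 50 km parent body
t_solid = cooling_timescale(R, conductivity=3.0)   # typical solid rock, W/m/K
t_rubble = cooling_timescale(R, conductivity=0.3)  # porous rubble, assumed 10x lower
print(t_rubble / t_solid)  # the porous "blanket" cools ~10x more slowly
```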
    Tim Swindle of the Lunar and Planetary Laboratory at the University of Arizona, who studies meteorites but was not involved in the research, said that this work is a major step forward.
    “This seems like a more complete model, and they’ve added data to part of the question that people haven’t been talking about, but should have been. The jury is still out, but this is a strong argument.”
    The biggest implication of the new rubble pile hypothesis, Dygert said, is that these collisions characterized the early days of the solar system.
    “They were violent, and they started early on,” he said.
    The research was supported by NASA. The Smithsonian National Museum of Natural History supplied samples of meteorites for the study.

  •

    New machine learning tool tracks urban traffic congestion

    A new machine learning algorithm is poised to help urban transportation analysts relieve bottlenecks and chokepoints that routinely snarl city traffic.
    The tool, called TranSEC, was developed at the U.S. Department of Energy’s Pacific Northwest National Laboratory to help urban traffic engineers get access to actionable information about traffic patterns in their cities.
    Currently, publicly available traffic information at the street level is sparse and incomplete. Traffic engineers generally have relied on isolated traffic counts, collision statistics and speed data to determine roadway conditions. The new tool uses traffic datasets collected from Uber drivers and other publicly available traffic sensor data to map street-level traffic flow over time. It creates a big picture of city traffic using machine learning tools and the computing resources available at a national laboratory.
    “What’s novel here is the street level estimation over a large metropolitan area,” said Arif Khan, a PNNL computer scientist who helped develop TranSEC. “And unlike other models that only work in one specific metro area, our tool is portable and can be applied to any urban area where aggregated traffic data is available.”
    UBER-fast traffic analysis
    TranSEC (which stands for transportation state estimation capability) differentiates itself from other traffic monitoring methods by its ability to analyze sparse and incomplete information. It uses machine learning to connect road segments with missing data, which allows it to make near-real-time, street-level estimates.
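    The article does not spell out TranSEC's estimator, but the core idea of connecting segments with missing data can be sketched with a toy graph method: unobserved segments repeatedly borrow the average estimate of their neighbors until the values settle. The road graph and observed speeds below are invented.

```python
# Toy gap-filling on a road graph. TranSEC's actual estimator
# (graph model + sampling + optimization) is far more sophisticated.
graph = {                          # adjacency of road segments
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"],
}
speeds = {"A": 30.0, "D": 50.0}    # observed; B and C are missing

def impute(graph, speeds, iters=50):
    est = dict(speeds)
    # initialize unknowns to the global mean of the observations
    mean = sum(speeds.values()) / len(speeds)
    for seg in graph:
        est.setdefault(seg, mean)
    for _ in range(iters):
        for seg in graph:
            if seg not in speeds:  # only update unobserved segments
                nbrs = graph[seg]
                est[seg] = sum(est[n] for n in nbrs) / len(nbrs)
    return est

print(impute(graph, speeds))
```

    The interior estimates converge to values interpolated between the observed endpoints, which is the intuition behind borrowing information across a connected network.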


    In contrast, the map features on our smartphones can help us optimize a journey through the city, pointing out chokepoints and suggesting alternate routes. But smartphone tools work only for an individual driver trying to get from point A to point B. City traffic engineers are concerned with helping all vehicles get to their destinations efficiently. Sometimes a route that seems efficient for an individual driver sends too many vehicles onto a road that wasn’t designed to handle that volume of traffic.
    Using public data from the entire 1,500-square-mile Los Angeles metropolitan area, the team reduced the time needed to create a traffic congestion model by an order of magnitude, from hours to minutes. The speed-up, accomplished with high-performance computing resources at PNNL, makes near-real-time traffic analysis feasible. The research team recently presented that analysis at the August 2020 virtual Urban Computing Workshop as part of the Knowledge Discovery and Data Mining (SIGKDD) conference, and in September 2020 they sought the input of traffic engineers at a virtual meeting on TranSEC.
    “TranSEC has the potential to initiate a paradigm shift in how traffic professionals monitor and predict system mobility performance,” said Mark Franz, a meeting attendee and a research engineer at the Center for Advanced Transportation Technology, University of Maryland, College Park. “TranSEC overcomes the inherent data gaps in legacy data collection methods and has tremendous potential.”
    Machine learning improves accuracy over time
    Because TranSEC learns from data, its estimates become more refined and useful as more data is acquired and processed. This kind of analysis is used to understand how disturbances spread across networks. Given enough data, the machine learning element will be able to predict impacts so that traffic engineers can create corrective strategies.


    “We use a graph-based model together with novel sampling methods and optimization engines to learn both the travel times and the routes,” said Arun Sathanur, a PNNL computer scientist and a lead researcher on the team. “The method has significant potential to be expanded to other modes of transportation, such as transit and freight traffic. As an analytic tool, it is capable of investigating how a traffic condition spreads.”
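    Once per-segment travel times have been learned, route-level questions reduce to shortest-path queries on the road graph. The sketch below uses textbook Dijkstra on an invented four-segment network; it stands in for, and is much simpler than, the optimization engines Sathanur describes.

```python
import heapq

# Invented per-segment travel times, in minutes, as a weighted graph.
travel_time = {
    "A": {"B": 4.0, "C": 2.0},
    "B": {"D": 5.0},
    "C": {"B": 1.0, "D": 8.0},
    "D": {},
}

def fastest_route(graph, src, dst):
    """Dijkstra: minimum-travel-time path from src to dst."""
    pq, best = [(0.0, src, [src])], {}
    while pq:
        t, node, path = heapq.heappop(pq)
        if node in best:
            continue               # already settled with a faster time
        best[node] = (t, path)
        for nbr, w in graph[node].items():
            if nbr not in best:
                heapq.heappush(pq, (t + w, nbr, path + [nbr]))
    return best[dst]

print(fastest_route(travel_time, "A", "D"))  # (8.0, ['A', 'C', 'B', 'D'])
```

    Note that the fastest system path (A→C→B→D, 8 minutes) is not the one with the fewest segments, the kind of distinction that matters when routing many vehicles at once.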
    With PNNL’s data-driven approach, users can upload real-time data and update TranSEC on a regular basis in a transportation control center. Engineers can use short-term forecasts for decision support to manage traffic issues. PNNL’s approach is also extensible to include weather or other data that affect conditions on the road.
    Computing power for transportation planners nationwide
    Just as situational awareness of conditions informs an individual driver’s decisions, TranSEC’s approach provides situational awareness on a system-wide basis to help reduce urban traffic congestion.
    “Traffic engineers nationwide have not had a tool to give them anywhere near real-time estimation of transportation network states,” said Robert Rallo, PNNL computer scientist and principal investigator on the TranSEC project. “Being able to predict conditions an hour or more ahead would be very valuable, to know where the blockages are going to be.”
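    As a stand-in for the kind of short-term forecasting Rallo describes, and not TranSEC's actual method, a per-segment speed series can be extrapolated with Holt's linear-trend smoothing. The recent speeds and the smoothing parameters below are invented.

```python
def holt_forecast(series, steps, alpha=0.5, beta=0.3):
    """Holt's linear trend: update level and trend, then extrapolate."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (i + 1) * trend for i in range(steps)]

# Hypothetical segment speeds (mph) over the last six 10-minute windows.
recent = [42.0, 40.0, 37.0, 35.0, 31.0, 28.0]
print(holt_forecast(recent, steps=6))  # next hour: continuing slowdown
```

    A forecast that keeps falling an hour out is exactly the kind of early warning that would let an engineer intervene before a blockage forms.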
    While running a full-scale city model still requires high-performance computing resources, TranSEC is scalable. For example, a road network with only the major highways and arterials could be modeled on a powerful desktop computer.
    “We are working toward making TranSEC available to municipalities nationwide,” said Katherine Wolf, project manager for TranSEC.
    Eventually, after further development, TranSEC could be used to help program autonomous vehicle routes, according to the research team.
    Video: https://www.youtube.com/watch?v=8S4bLv9CtOo
    The project was supported by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy’s Vehicle Technologies Office, Energy Efficient Mobility Systems Program. More