More stories

    Software of autonomous driving systems

    The future has already arrived: (partially) autonomous cars with automated systems such as emergency braking or lane departure warning are already on our roads today. As a central vehicle component, the software of these systems must continuously and reliably meet high quality criteria. Franz Wotawa from the Institute of Software Technology at TU Graz and his team, in close collaboration with AVL's cyber-physical system testing team, are tackling two great challenges of this future technology: guaranteeing safety through the automatic generation of extensive test scenarios for simulations, and compensating for internal system errors with an adaptive control method.
    Ontologies instead of test kilometers
    Test drives alone do not provide sufficient evidence for the accident safety of autonomous driving systems, explains Franz Wotawa: “Autonomous vehicles would have to be driven around 200 million kilometers to prove their reliability, especially for accident scenarios. That is 10,000 times more test kilometers than are required for conventional cars.” However, critical test scenarios involving danger to life and limb cannot be reproduced in real test drives. Autonomous driving systems must therefore be tested for safety in simulations. “Although the tests so far cover many scenarios, the question always remains whether this is sufficient and whether all possible accident scenarios have been considered,” says Wotawa. Mihai Nica of AVL underlines this statement: “In order to test highly autonomous systems, the automotive industry must rethink how it validates and certifies Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) systems. AVL is therefore working with TU Graz to develop a unique and highly efficient method and workflow based on simulation and test case generation to prove fulfillment of the Safety Of The Intended Functionality (SOTIF), quality and system integrity requirements of autonomous systems.”
    Together the project team is working on innovative methods with which far more test scenarios can be simulated than before. The researchers’ approach is as follows: instead of driving millions of kilometers, they use ontologies to describe the environment of autonomous vehicles. Ontologies are knowledge bases for the exchange of relevant information within a machine system; they define the interfaces, behavior and relationships of individual system units so that these can communicate with each other. In the case of autonomous driving systems, such units would be “decision making,” “traffic description” or “autopilot.” The Graz researchers started from detailed information about the environments in driving scenarios and fed the knowledge bases with details about the construction of roads, intersections and the like, which AVL provided. From this, driving scenarios that test the behavior of the automated driving systems in simulations can be derived using AVL’s world-leading test case generation algorithm.
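    The ontology-to-scenario step can be sketched as a combinatorial expansion over environment concepts. The following is a minimal illustrative sketch, not AVL's actual test case generation algorithm; the concepts, values and function names are invented:

```python
from itertools import product

# Hypothetical, highly simplified environment "ontology": each concept
# maps to the values it may take in a concrete driving scenario.
ontology = {
    "road": ["straight", "curve", "intersection"],
    "weather": ["dry", "rain", "fog"],
    "pedestrian": ["none", "crossing_left", "crossing_right"],
    "ego_speed_kmh": [30, 50, 80],
}

def generate_scenarios(model):
    """Exhaustively combine parameter values into concrete test scenarios."""
    names = list(model)
    for values in product(*(model[n] for n in names)):
        yield dict(zip(names, values))

scenarios = list(generate_scenarios(ontology))
print(len(scenarios))  # 81 candidate scenarios: 4 concepts with 3 values each
```

    In practice, combinatorial testing tools prune this Cartesian product (for example to pairwise coverage) so the number of scenarios that must be simulated stays manageable.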
    Additional weaknesses uncovered
    As part of the EU AutoDrive project, the researchers used two algorithms to convert these ontologies into input models for combinatorial testing, which can subsequently be executed in simulation environments. “In initial experimental tests we discovered serious weaknesses in automated driving functions. Without these automatically generated test scenarios, the vulnerabilities would not have been detected so quickly: nine out of 319 test cases investigated led to accidents.” In one test scenario, for example, a brake assistance system failed to detect two people coming from different directions at the same time, and one of them was badly hit by the initiated braking maneuver. “This means that with our method, you can find test scenarios that are difficult to test in reality and that you might not even think to focus on,” says Wotawa.
    This work by Franz Wotawa et al. was also presented in the journal “Information and Software Technology” at the beginning of 2020 and overlaps with the “Christian Doppler Laboratory for Methods for Quality Assurance of Cyber-Physical Systems.” The CD lab is led by Franz Wotawa, and AVL is a corporate partner.
    Adaptive compensation of internal errors
    Autonomous systems and in particular autonomous driving systems must be able to correct themselves in the event of malfunctions or changed environmental conditions and reliably reach given target states at all times. “When we look at semi-automated systems already in use today, such as cruise control, it quickly becomes clear that in the case of errors, the driver can and will always intervene. With fully autonomous vehicles, this is no longer an option, so the system itself must be able to act accordingly,” explains Franz Wotawa.
    In a new publication in the Software Quality Journal, Franz Wotawa and his PhD student Martin Zimmermann present a control method that adaptively compensates for internal errors in the software system. The method selects alternative actions in such a way that predetermined target states can be reached while providing a certain degree of redundancy. Action selection is based on weighting models that are adjusted over time and measure the success rate of specific actions already performed. In addition to the method, the researchers also present a Java implementation and its validation in two case studies motivated by the requirements of the autonomous driving domain.

    Story Source:
    Materials provided by Graz University of Technology. Original written by Susanne Eigner. Note: Content may be edited for style and length.

    Tracking misinformation campaigns in real-time is possible, study shows

    A research team led by Princeton University has developed a technique for tracking online foreign misinformation campaigns in real time, which could help mitigate outside interference in the 2020 American election.
    The researchers developed a method for using machine learning to identify malicious internet accounts, or trolls, based on their past behavior. Featured in Science Advances, the study investigated past misinformation campaigns from China, Russia, and Venezuela that were waged against the United States before and after the 2016 election.
    The team identified the patterns these campaigns followed by analyzing posts to Twitter and Reddit and the hyperlinks or URLs they included. After running a series of tests, they found their model was effective in identifying posts and accounts that were part of a foreign influence campaign, including those by accounts that had never been used before.
    They hope that software engineers will be able to build on their work to create a real-time monitoring system for exposing foreign influence in American politics.
    “What our research means is that you could estimate in real time how much of it is out there, and what they’re talking about,” said Jacob N. Shapiro, professor of politics and international affairs at the Princeton School of Public and International Affairs. “It’s not perfect, but it would force these actors to get more creative and possibly stop their efforts. You can only imagine how much better this could be if someone puts in the engineering efforts to optimize it.”
    Shapiro and associate research scholar Meysam Alizadeh conducted the study with Joshua Tucker, professor of politics at New York University, and Cody Buntain, assistant professor in informatics at New Jersey Institute of Technology.

    The team began with a simple question: Using only content-based features and examples of known influence campaign activity, could you look at other content and tell whether a given post was part of an influence campaign?
    They chose to investigate a unit known as a “post-URL pair,” which is simply a post with a hyperlink. To have real influence, coordinated operations require intense human and bot-driven information sharing. The team theorized that similar posts may appear frequently across platforms over time.
    They combined data on troll campaigns from Twitter and Reddit with a rich dataset on posts by politically engaged users and average users collected over many years by NYU’s Center for Social Media and Politics (CSMaP). The troll data included publicly available Twitter and Reddit data from Chinese, Russian, and Venezuelan trolls totaling 8,000 accounts and 7.2 million posts from late 2015 through 2019.
    “We couldn’t have conducted the analysis without that baseline comparison dataset of regular, ordinary tweets,” said Tucker, co-director of CSMaP. “We used it to train the model to distinguish between tweets from coordinated influence campaigns and those from ordinary users.”
    The team considered the characteristics of the post itself, like the timing, word count, or whether the mentioned URL domain is a news website. They also looked at what they called “metacontent,” or how the messaging in a post related to other information shared at that time (for example, whether a URL was in the top 25 political domains shared by trolls).
    “Meysam’s insight on metacontent was key,” Shapiro said. “He saw that we could use the machine to replicate the human intuition that ‘something about this post just looks out of place.’ Both trolls and normal people often include local news URLs in their posts, but the trolls tended to mention different users in such posts, probably because they are trying to draw their audience’s attention in a new direction. Metacontent lets the algorithm find such anomalies.”
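    The content and metacontent features described above can be sketched as a simple extractor. This is an invented illustration, not the study's actual feature set; the domain lists and feature names are hypothetical:

```python
from urllib.parse import urlparse

# Illustrative domain lists; the real study used, e.g., the top 25
# political domains shared by trolls. These names are made up.
NEWS_DOMAINS = {"nytimes.com", "bbc.co.uk", "reuters.com"}
TOP_TROLL_DOMAINS = {"examplepolitics.com", "fakelocalnews.net"}

def extract_features(text, url, hour):
    """Content features for one post-URL pair."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    words = text.split()
    return {
        "word_count": len(words),
        "hour": hour,                                         # posting time
        "is_news_domain": domain in NEWS_DOMAINS,             # content feature
        "in_troll_top_domains": domain in TOP_TROLL_DOMAINS,  # metacontent
        "mention_count": sum(1 for w in words if w.startswith("@")),
    }

features = extract_features(
    "Breaking! @user1 @user2 you must see this",
    "https://www.fakelocalnews.net/story", hour=3)
print(features["in_troll_top_domains"])  # True
```

    A classifier trained on labeled troll and ordinary posts would then weigh such feature vectors to score new posts.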

    The team tested their method extensively, examining performance month to month on five different prediction tasks across four influence campaigns. Across almost all of the 463 different tests, it was clear which posts were and were not part of an influence operation, meaning that content-based features can indeed help find coordinated influence campaigns on social media.
    In some countries, the patterns were easier to spot than others. Venezuelan trolls only retweeted certain people and topics, making them easy to detect. Russian and Chinese trolls were better at making their content look organic, but they, too, could be found. In early 2016, for example, Russian trolls quite often linked to far-right URLs, which was unusual given the other aspects of their posts, and, in early 2017, they linked to political websites in odd ways.
    Overall, Russian troll activity became harder to find as time went on. It is possible that investigative groups or others caught on to the false information, flagging the posts and forcing trolls to change their tactics or approach, though Russians also appear to have produced less in 2018 than in previous years.
    While the research shows there is no stable set of characteristics that will find influence efforts, it also shows that troll content will almost always be different in detectable ways. In one set of tests, the authors show the method can find never-before-used accounts that are part of an ongoing campaign. And while social media platforms regularly delete accounts associated with foreign disinformation campaigns, the team’s findings could lead to a more effective solution.
    “When the platforms ban these accounts, it not only makes it hard to collect data to find similar accounts in the future, but it signals to the disinformation actor that they should avoid the behavior that led to deletion,” said Buntain. “This mechanism allows [the platform] to identify these accounts, silo them away from the rest of Twitter, and make it appear to these actors as though they are continuing to share their disinformation material.”
    The work highlights the importance of interdisciplinary research between social and computational science, as well as the criticality of funding research data archives.
    “The American people deserve to understand how much is being done by foreign countries to influence our politics,” said Shapiro. “These results suggest that providing that knowledge is technically feasible. What we currently lack is the political will and funding, and that is a travesty.”
    The method is no panacea, the researchers cautioned. It requires that someone has already identified recent influence campaign activity to learn from. And how the different features combine to indicate questionable content changes over time and between campaigns.
    The paper, “Content-Based Features Predict Social Media Influence Operations,” will appear in Science Advances.

    Is it a bird, a plane? Not Superman, but a flapping wing drone

    A drone prototype that mimics the aerobatic manoeuvres of one of the world’s fastest birds, the swift, is being developed by an international team of engineers in the latest example of biologically inspired flight.
    A research team from Singapore, Australia, China and Taiwan has designed a 26 gram ornithopter (flapping wing aircraft) which can hover, dart, glide, brake and dive just like a swift, making it more versatile, safer and quieter than existing quadcopter drones.
    Weighing the equivalent of two tablespoons of flour, the flapping wing drone has been optimised to fly in cluttered environments near humans, with the ability to glide, hover at very low power, and stop quickly from fast speeds, avoiding collisions — all things that quadcopters can’t do.
    National University of Singapore research scientist, Dr Yao-Wei Chin, who has led the project published today in Science Robotics, says the team has designed a flapping wing drone similar in size to a swift, or large moth, that can perform some aggressive bird flight manoeuvres.
    “Unlike common quadcopters that are quite intrusive and not very agile, biologically-inspired drones could be used very successfully in a range of environments,” Dr Chin says.
    The surveillance applications are clear, but novel applications include pollination of indoor vertical farms without damaging dense vegetation, unlike the rotary-propelled quadcopters whose blades risk shredding crops.

    Because of their stability in strong winds, the ornithopter drone could also be used to chase birds away from airports, reducing the risk of them getting sucked into jet engines.
    University of South Australia (UniSA) aerospace engineer, Professor Javaan Chahl, says copying the design of birds, like swifts, is just one strategy to improve the flight performance of flapping wing drones.
    “There are existing ornithopters that can fly forward and backward as well as circling and gliding, but until now, they haven’t been able to hover or climb. We have overcome these issues with our prototype, achieving the same thrust generated by a propeller,” Professor Chahl says.
    “The triple roles of flapping wings for propulsion, lift and drag enable us to replicate the flight patterns of aggressive birds by simple tail control. Essentially, the ornithopter drone is a combination of a paraglider, aeroplane and helicopter.”
    There are currently no commercialised ornithopters being used for surveillance, but this could change with the latest breakthrough, researchers claim.

    By improving the design so ornithopters can now produce enough thrust to hover and to carry a camera and accompanying electronics, the flapping wing drone could be used for crowd and traffic monitoring, information gathering and surveying forests and wildlife.
    “The light weight and the slow beating wings of the ornithopter pose less danger to the public than quadcopter drones in the event of a crash, and given sufficient thrust and power banks it could be modified to carry different payloads depending on what is required,” Dr Chin says.
    One area that requires more research is how birds will react to a mechanical flying object resembling them in size and shape. Small, domesticated birds are easily scared by drones but large flocks and much bigger birds have been known to attack ornithopters.
    And while the bio-inspired breakthrough is impressive, we are a long way from replicating biological flight, Dr Chin says.
    “Although ornithopters are the closest to biological flight with their flapping wing propulsion, birds and insects have multiple sets of muscles which enable them to fly incredibly fast, fold their wings, twist, open feather slots and save energy.
    “Their wing agility allows them to turn their body in mid-air while still flapping at different speeds and angles.
    “Common swifts can cruise at a maximum speed of 31 metres a second, equivalent to 112 kilometres per hour or 90 miles per hour.
    “At most, I would say we are replicating 10 per cent of biological flight,” he says.

    Brain builds and uses maps of social networks, physical space, in the same way

    Even in these social-distanced days, we keep in our heads a map of our relationships with other people: family, friends, coworkers and how they relate to each other. New research from the Center for Mind and Brain at the University of California, Davis shows that we put together this social map in much the same way that we assemble a map of physical places and things.
    “When we’re learning to navigate the real world, we don’t start off by seeing a whole map,” said Erie Boorman, assistant professor at the Center for Mind and Brain and UC Davis Department of Psychology. “We sample the world and reconstruct it.” The work is published July 22 in the journal Neuron.
    Research has shown that animals navigate using a representation of the outside world in their brain. Whether rats in a maze or people in a new city, they build this internal map in pieces and then stitch them together. That work earned the Nobel Prize in Physiology or Medicine for John O’Keefe, May-Britt Moser and Edvard Moser in 2014.
    Boorman and UC Davis colleagues Seongmin Park, Douglas Miller and Charan Ranganath, with Hamed Nili at the University of Oxford, wondered if our brains represent abstract relationships, such as social networks, in the same way.
    To find out, they gave volunteers pieces of information about two groups of people ranked by perceived relative competence and popularity. The volunteers were only told about relations on one dimension between a pair of people who differed by one rank level at a time: for example, that Alice is more popular than Bob, but Bob is seen as more competent than Charles.
    The true social hierarchy could be mapped as a two-dimensional grid defined by the dimensions of competence and popularity, but this was not shown to the volunteers. They could only infer it by integrating the piecemeal relationships they had learned between pairs of individuals, one dimension at a time.

    The volunteers also learned the relative ranks of a few people across the two groups.
    Assembling a map
    They were later asked about relationships between new pairs of people in the grid while the researchers used functional magnetic resonance imaging to measure brain activity. Without being prompted, based only on pairwise comparisons, the volunteers organized the information into a two-dimensional grid in their brains. This two-dimensional map was present across three brain regions called the hippocampus, entorhinal cortex and ventromedial prefrontal cortex/medial orbitofrontal cortex.
    Based on limited comparisons between the two groups, they were also able to generalize to the rest of the group. For example, if Cynthia from group 1 was more popular than David from group 2, that affected the rank of other members of group 2 compared to group 1.
    The volunteers weren’t told to think about the data in that way, Boorman said. Given only pairwise comparisons, they inferred the remaining hierarchical arrangement of the whole set.
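    The transitive-inference task the volunteers solved can be sketched in a few lines of code. The names and comparisons below are illustrative, and the rank-propagation loop is a simple relaxation sketch, not the study's model:

```python
def infer_ranks(comparisons):
    """comparisons: (higher, lower) pairs on one dimension, e.g. popularity."""
    people = {p for pair in comparisons for p in pair}
    ranks = {p: 0 for p in people}
    # Repeatedly enforce that whoever is 'higher' outranks 'lower';
    # len(people) passes suffice for a consistent (acyclic) hierarchy.
    for _ in range(len(people)):
        for hi, lo in comparisons:
            if ranks[hi] <= ranks[lo]:
                ranks[hi] = ranks[lo] + 1
    return ranks

# Only adjacent pairs are given, as in the experiment's training phase.
popularity = infer_ranks([("Alice", "Bob"), ("Bob", "Charles")])
print(popularity["Alice"] > popularity["Charles"])  # True, inferred transitively
```

    Running this on one comparison set per dimension yields a coordinate for each person, and the two coordinates together recover the two-dimensional grid.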

    “If you know how two social networks are related to each other, you can make a good inference about the relationship between two individuals in different social networks before any direct experience,” Park said.
    The study points to a general principle behind how we make decisions based on past experience. Whether we are remembering a route in the physical world, or learning about a set of friends and acquaintances, we start with a template, such as a 2-D topology, and a few landmarks, and fit new data around them.
    “Our results show that our brain organizes knowledge learned from separate experiences in a structural form like a map, which allows us to use past experiences to make a novel decision,” Park said.
    That allows us to quickly adapt to a new situation based on past experience. This may help to explain humans’ remarkable flexibility in generalizing experiences from one task to another, a key challenge in artificial intelligence research.
    “We know a lot about the neural codes for representing physical space,” Boorman said. “It looks like the human brain uses the same codes to organize abstract, non-spatial information as well.”

    Twitter data reveals global communication network

    Twitter mentions show distinct community structure patterns resulting from communication preferences of individuals affected by physical distance between users and commonalities, such as shared language and history.
    While previous investigations have identified patterns using other data, such as mobile phone usage and Facebook friend connections, research from the New England Complex Systems Institute looks at the collective effect of message transfer in the global community. The group’s results are reported in an article in the journal Chaos, by AIP Publishing.
    The scientists used the mentions mechanism in Twitter data to map the flow of information around the world. A mention in Twitter occurs when a user explicitly includes another @username in their tweet. This is a way to directly communicate with another user but is also a way to retransmit or retweet content.
    The investigators examined Twitter data from December 2013 and divided the world into 8,000 cells, each approximately 100 kilometers wide. A network was built on this lattice, where each node is a location and each link, or edge, is weighted by the number of Twitter users in one location who are mentioned by users in another location.
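    Building such a weighted mention network amounts to counting mention events between grid cells. A toy sketch with invented cell names:

```python
from collections import Counter

# Each record: (cell of the tweeting user, cell of the mentioned user).
mentions = [("cell_12", "cell_12"), ("cell_12", "cell_80"),
            ("cell_80", "cell_12"), ("cell_12", "cell_80")]

# Edge weight = number of mentions flowing from one cell to another.
edge_weights = Counter(mentions)
print(edge_weights[("cell_12", "cell_80")])  # 2
```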
    Twitter is banned in several countries and is known to be more prevalent in countries with higher gross domestic product, so this affects the data. Their results show large regions, such as the U.S. and Europe, are strongly connected inside each region, but they are also weakly connected to other areas.
    “While strong ties keep groups cohesive, weak ties integrate groups at the large scale and are responsible for the spread of information systemwide,” said co-author Leila Hedayatifar.
    The researchers used a computational technique to determine modularity, a value that quantifies distance between communities on a network compared to a random arrangement. They also investigated a quantity known as betweenness centrality, which measures the number of shortest paths through each node. This measure highlights the locations that serve as connectors between many places.
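    Modularity itself has a compact definition: for each community, compare the fraction of edges that fall inside it with the fraction expected if edges were rewired at random while preserving node degrees. A minimal sketch on a toy undirected graph (not the paper's optimization code):

```python
def modularity(edges, community):
    """Newman modularity of an undirected graph under a given partition."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in set(community.values()):
        inside = sum(1 for u, v in edges
                     if community[u] == c and community[v] == c)
        total_deg = sum(d for n, d in degree.items() if community[n] == c)
        q += inside / m - (total_deg / (2 * m)) ** 2
    return q

# Two triangles joined by one bridge edge: a clear two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, community), 3))  # 0.357
```

    Community-detection algorithms search over possible partitions to maximize this score.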
    By optimizing the modularity, the investigators found 16 significant global communities. Three large communities exist in the Americas: an English-speaking region, Central and South American countries, and Brazil in its own group. Multiple communities exist in Europe, Asia and Africa.
    The data can also be analyzed on a finer scale, revealing subcommunities. Strong regional associations exist within countries or even cities. Istanbul, for example, has Twitter conversations that are largely restricted to certain zones within the city.
    The investigators also looked at the effect of common languages, borders and shared history.
    “We found, perhaps surprisingly, that countries that had a common colonizer have a decreased preference for interaction,” Hedayatifar said.
    She suggests hierarchical interactions with the colonizing country might inhibit interactions between former colonies.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

    Silver-plated gold nanostars detect early cancer biomarkers

    Biomedical engineers at Duke University have engineered a method for simultaneously detecting the presence of multiple specific microRNAs in RNA extracted from tissue samples without the need for labeling or target amplification. The technique could be used to identify early biomarkers of cancer and other diseases without the need for the elaborate, time-consuming, expensive processes and special laboratory equipment required by current technologies.
    The results appeared online on May 4 in the journal Analyst.
    “The general research focus in my lab has been on the early detection of diseases in people before they even know they’re sick,” said Tuan Vo-Dinh, director of the Fitzpatrick Institute for Photonics and the R. Eugene and Susie E. Goodson Distinguished Professor of Biomedical Engineering at Duke. “And to do that, you need to be able to go upstream, at the genomic level, to look at biomarkers like microRNA.”
    MicroRNAs are short RNA molecules that bind to messenger RNA and stop it from delivering instructions to the body’s protein-producing machinery. This can effectively silence certain sections of DNA or regulate gene expression, altering the behavior of certain biological functions. More than 2,000 microRNAs that affect development, differentiation, growth and metabolism have been discovered in humans.
    As researchers have discovered and learned more about these tiny genetic packages, many microRNAs have been linked to the misregulation of biological functions, resulting in diseases ranging from brain tumors to Alzheimer’s. These discoveries have led to an increasing interest in using microRNAs as disease biomarkers and therapeutic targets. Due to the very small amounts of miRNAs present in bodily samples, traditional methods of studying them require genetic-amplification processes such as quantitative reverse transcription PCR (qRT-PCR) and RNA sequencing.
    While these technologies perform admirably in well-equipped laboratories and research studies that can take months or years, they aren’t as well-suited for fast diagnostic results at the clinic or out in the field. To try to bridge this gap in applicability, Vo-Dinh and his colleagues are turning to silver-plated gold nanostars.

    “Gold nanostars have multiple spikes that can act as lightning rods for enhancing electromagnetic waves, which is a unique feature of the particle’s shape,” said Vo-Dinh, who also holds a faculty appointment in Duke chemistry. “Our tiny nanosensors, called ‘inverse molecular sentinels,’ take advantage of this ability to create clear signals of the presence of multiple microRNAs.”
    While the name is a mouthful, the basic idea of the nanosensor design is to get a label molecule to move very close to the star’s spikes when a specific stretch of target RNA is recognized and captured. When a laser is then shined on the triggered sensor, the lightning rod effect of the nanostar tips causes the label molecule to shine extremely brightly, signaling the capture of the target RNA.
    The researchers set this trigger by tethering a label molecule to one of the nanostar’s points with a stretch of DNA. Although it’s built to curl in on itself in a loop, the DNA is held open by an RNA “spacer” that is tailored to bind with the target microRNA being tested for. When that microRNA comes by, it sticks to and removes the spacer, allowing the DNA to curl in on itself in a loop and bring the label molecule in close contact with the nanostar.
    Under laser excitation, that label emits a light called a Raman signal, which is generally very weak. But the shape of the nanostars — and a coupling effect of separate reactions caused by the gold nanostars and silver coating — amplifies Raman signals several million-fold, making them easier to detect.
    “The Raman signals of label molecules exhibit sharp peaks with very specific colors like spectral fingerprints that make them easily distinguished from one another when detected,” said Vo-Dinh. “Thus we can actually design different sensors for different microRNAs on nanostars, each with label molecules exhibiting their own specific spectral fingerprints. And because the signal is so strong, we can detect each one of these fingerprints independently of each other.”
    In this clinical study, Vo-Dinh and his team collaborated with Katherine Garman, associate professor of medicine, and colleagues at the Duke Cancer Institute to use the new nanosensor platform to demonstrate that they can detect miR-21, a specific microRNA associated with very early stages of esophageal cancer, just as well as other more elaborate state-of-the-art methods. In this case, the use of miR-21 alone is enough to distinguish healthy tissue samples from cancerous samples. For other diseases, however, it might be necessary to detect several other microRNAs to get a reliable diagnosis, which is exactly why the researchers are so excited by the general applicability of their inverse molecular sentinel nanobiosensors.
    “Usually three or four genetic biomarkers might be sufficient to get a good diagnosis, and these types of biomarkers can unmistakably identify each disease,” said Vo-Dinh. “That’s why we’re encouraged by just how strong of a signal our nanostars create without the need of time-consuming target amplification. Our method could provide a diagnostic alternative to histopathology and PCR, thus simplifying the testing process for cancer diagnostics.”
    For more than three years, Vo-Dinh has worked with his colleagues and Duke’s Office of Licensing and Ventures to patent his nanostar-based biosensors. With that patent recently awarded, the researchers are excited to begin testing the limits of their technology’s abilities and exploring technology transfer possibilities with the private sector.
    “Following these encouraging results, we are now very excited to apply this technology to detect colon cancer directly from blood samples in a new NIH-funded project,” said Vo-Dinh. “It’s very challenging to detect early biomarkers of cancer directly in the blood before a tumor even forms, but we have high hopes.”

    Can social unrest, riot dynamics be modeled?

    Episodes of social unrest rippled throughout Chile in 2019 and disrupted the daily routines of many citizens. Researchers specializing in economics, mathematics and physics in Chile and the U.K. banded together to explore the surprising social dynamics people were experiencing.
    To do this, they combined well-known epidemic models with tools from the physics of chaos and interpreted their findings through the lens of social sciences such as economics.
    In the journal Chaos, from AIP Publishing, the team reports that social media is changing the rules of the game, and previously applied epidemic-like models, on their own, may no longer be enough to explain current rioting dynamics. Using epidemiological mathematical models to understand the spread of infectious diseases dates back more than 100 years.
    “In the 1970s, this type of methodology was used to understand the dynamics of riots that occurred in U.S. cities in the 1960s,” said Jocelyn Olivari Narea, co-author and an assistant professor at Adolfo Ibáñez University in Chile. “More recently, it was used to model French rioting events in 2005.”
    From a mathematical point of view, the team’s work is based on the SIR epidemiological model, known for modeling infectious disease spread. This technique separates the population into susceptible, infectious and recovered individuals.
    “Within a rioting context, someone ‘susceptible’ is a potential rioter, an ‘infected individual’ is an active rioter, and a ‘recovered person’ is one that stopped rioting,” explained co-author Katia Vogt-Geisse. “Rioting spreads when effective contact between an active rioter and a potential rioter occurs.”
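    The SIR dynamics described above can be sketched in a few lines. This is a minimal illustration, not the study’s actual model: the parameter values, step size, and variable names are chosen for clarity only.

    ```python
    def sir_step(S, I, R, beta, gamma, dt):
        """One Euler step of the SIR model, read in rioting terms:
        S = potential rioters, I = active rioters, R = former rioters."""
        new_rioters = beta * S * I * dt  # effective contacts that recruit rioters
        stopped = gamma * I * dt         # active rioters who stop rioting
        return S - new_rioters, I + new_rioters - stopped, R + stopped

    # Illustrative run: a small spark among a mostly susceptible population
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(2000):
        S, I, R = sir_step(S, I, R, beta=0.5, gamma=0.2, dt=0.1)

    print(round(S + I + R, 6))  # the model conserves the total population: 1.0
    ```

    With these illustrative parameters the outbreak runs its course: most of the population passes through the “active rioter” compartment and ends up in the recovered one.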
    They also showed that the SIR model can be recast in the language of Hamiltonian mechanics, the same mathematical framework that underlies Newton’s laws in physics.

    “This allowed us to apply well-known tools of the physics of chaos to show that within the presence of an external force, the dynamics become very rich,” said co-author Sergio Rica Mery. “The external force that we included in the model represents the occasional trigger that increases rioting activity.”
    When including such triggers, the team found the way a sequence of events occurs varies greatly based on the initial number of potential rioters and active rioters.
    “Even the sequence of rioting events can be chaotic,” Rica Mery said. “Rich dynamics reveal the complexity involved in making predictions of rioting activity.”
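    The effect of an occasional external trigger can be sketched by briefly boosting the contact rate, standing in for an event that spurs rioting. Again, the forcing scheme and all values here are illustrative assumptions, not taken from the study.

    ```python
    def forced_sir(S0, I0, beta, gamma, dt, steps, trigger_every=0, boost=2.0):
        """SIR dynamics with an occasional external trigger: for a short
        window after each trigger, the contact rate is amplified."""
        S, I, R = S0, I0, 1.0 - S0 - I0
        active = []  # fraction of active rioters over time
        for t in range(steps):
            b = beta
            if trigger_every and (t % trigger_every) < 20:
                b = beta * (1.0 + boost)  # trigger window: contagion amplified
            dS = -b * S * I * dt
            dR = gamma * I * dt
            S, I, R = S + dS, I - dS - dR, R + dR
            active.append(I)
        return active

    quiet = forced_sir(0.99, 0.01, beta=0.3, gamma=0.2, dt=0.1, steps=3000)
    spiked = forced_sir(0.99, 0.01, beta=0.3, gamma=0.2, dt=0.1, steps=3000,
                        trigger_every=300)
    print(max(spiked) > max(quiet))  # True: triggers amplify the rioting peak
    ```

    Varying the initial numbers of potential and active rioters in such a forced model changes which triggers catch fire and which fizzle, which is the sensitivity the researchers describe.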
    The team’s work comes at a timely moment as social unrest is becoming more common — even within the context of the current pandemic.
    “We just saw episodes of rioting in Minnesota due to racial unrest and how it ended up spreading to various locations within the U.S. and even abroad,” Olivari Narea said.
    The team noted it was surprising how well the idea of disease spread could be applied to the spread of rioting activity, yielding a good fit to riot activity data.
    “While you might think that the study of disease transmission and problems of a social nature vary greatly, our work shows epidemiological models of the most simple SIR type, enriched by triggers and tools of the physics of chaos, can describe rioting activities well,” Vogt-Geisse said.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • in

    Photon-based processing units enable more complex machine learning

    Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications.
    A paper in the journal Applied Physics Reviews, by AIP Publishing, proposes a new approach to perform computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs multiplications of matrices in parallel, improving speed and efficiency of current deep learning paradigms.
    In machine learning, neural networks are trained to make decisions and classify data they have not seen before. Once a neural network is trained, it can produce an inference to recognize and classify objects and patterns and to find signatures within the data.
    The photonic TPU stores and processes data in parallel, featuring an electro-optical interconnect, which allows the optical memory to be efficiently read and written and the photonic TPU to interface with other architectures.
    “We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput and, when opportunely trained, can be used for performing inference at the speed of light,” said Mario Miscuglio, one of the authors.
    Most neural networks comprise multiple layers of interconnected neurons that aim to mimic the human brain. An efficient way to represent these networks is as a composite function that multiplies matrices and vectors together. This representation lets parallel operations be carried out on architectures specialized for vectorized operations such as matrix multiplication.
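    The layer-as-matrix-multiplication view can be sketched in a few lines. The shapes, weights, and NumPy setting are illustrative assumptions; the point is that the paper’s photonic hardware accelerates exactly this kind of operation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A two-layer network as a composite of matrix-vector products:
    # y = W2 @ relu(W1 @ x + b1) + b2
    W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)  # layer 1: 8 -> 16
    W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)   # layer 2: 16 -> 4

    def forward(x):
        h = np.maximum(W1 @ x + b1, 0.0)  # matrix multiply + nonlinearity
        return W2 @ h + b2

    # Batching many inputs turns vector products into one matrix-matrix
    # product, the workload tensor processing units are built to accelerate.
    X = rng.standard_normal((8, 32))  # 32 inputs, one per column
    Y = W2 @ np.maximum(W1 @ X + b1[:, None], 0.0) + b2[:, None]

    print(Y.shape)  # (4, 32)
    ```

    Each column of `Y` equals `forward` applied to the corresponding column of `X`; performing that one large multiplication in parallel is what the photonic tensor core does optically.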
    However, the more intelligent the task and the higher accuracy of the prediction desired, the more complex the network becomes. Such networks demand larger amounts of data for computation and more power to process that data.
    Current digital processors suitable for deep learning, such as graphics processing units or tensor processing units, are limited in performing more complex operations with greater accuracy by the power required to do so and by the slow transmission of electronic data between the processor and the memory.
    The researchers showed that the performance of their TPU could be two to three orders of magnitude higher than that of an electrical TPU. Photons may also be an ideal match for distributed networks of computing nodes and for engines performing intelligent tasks with high throughput at the edge of a network, such as 5G. At network edges, data signals may already exist in the form of photons from surveillance cameras, optical sensors and other sources.
    “Photonic specialized processors can save a tremendous amount of energy, improve response time and reduce data center traffic,” said Miscuglio.
    For the end user, that means data is processed much faster: because a large portion of the data is preprocessed at the edge, only a fraction needs to be sent to the cloud or data center.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.