More stories

  • More realistic computer graphics

    Researchers at Dartmouth, in collaboration with industry partners, have developed software techniques that make lighting in computer-generated images look more realistic. The research will be presented at the upcoming ACM SIGGRAPH conference, the premier venue for research in computer graphics.
    The new techniques focus on “real-time” graphics, which must maintain the illusion of interactivity as scenes change in response to user input. These graphics can be used in applications such as video games, extended reality, and scientific visualization tools.
    Both papers demonstrate how developers can create sophisticated lighting effects by adapting a popular rendering technique known as ray tracing.
    “Over the last decade, ray tracing has dramatically increased the realism and visual richness of computer-generated images in movies where producing just a single frame can take hours,” said Wojciech Jarosz, an associate professor of computer science at Dartmouth who served as the senior researcher for both projects. “Our papers describe two very different approaches for bringing realistic ray-traced lighting to the constraints of real time graphics.”
    The first project, developed with NVIDIA, envisions the possibilities for future games once developers incorporate NVIDIA’s hardware-accelerated RTX ray tracing platform. Recent games have started to use RTX for physically correct shadows and reflections, but the quality and complexity of lighting are currently limited by the small number of rays that can be traced per frame.
    The new technique, called reservoir-based spatiotemporal importance resampling (ReSTIR), creates realistic lighting and shadows from millions of artificial light sources. The ReSTIR approach dramatically increases the quality of rendering on a computer’s graphics card by reusing rays that were traced in neighboring pixels and in prior frames.
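    The heart of the approach is weighted reservoir sampling: each pixel streams through a handful of candidate lights, keeps one with probability proportional to its weight, and can then merge its reservoir with a neighbor’s or with last frame’s. The Python sketch below illustrates only that core sampling idea under simplified assumptions; the class names, the intensity-based weights and the 32-candidate count are illustrative choices, not the paper’s actual implementation, which weights candidates by their estimated contribution at the shaded point and runs on the GPU.
        import random

        class Reservoir:
            """Keeps a single weighted sample from a stream of candidates."""
            def __init__(self):
                self.sample = None   # currently selected light
                self.w_sum = 0.0     # running sum of candidate weights
                self.count = 0       # number of candidates seen

            def update(self, candidate, weight):
                self.w_sum += weight
                self.count += 1
                # Keep the new candidate with probability weight / w_sum.
                if self.w_sum > 0 and random.random() < weight / self.w_sum:
                    self.sample = candidate

        def merge(a, b):
            """Combine two reservoirs, e.g. this pixel's and a neighbor's."""
            merged = Reservoir()
            if a.sample is not None:
                merged.update(a.sample, a.w_sum)
            if b.sample is not None:
                merged.update(b.sample, b.w_sum)
            merged.count = a.count + b.count
            return merged

        # Toy usage: pick 1 light out of many, weighted by a stand-in "intensity".
        lights = [{"id": i, "intensity": random.uniform(0.1, 10.0)} for i in range(100_000)]
        pixel, neighbor = Reservoir(), Reservoir()
        for light in random.sample(lights, 32):      # this pixel's candidates
            pixel.update(light, light["intensity"])
        for light in random.sample(lights, 32):      # a neighboring pixel's candidates
            neighbor.update(light, light["intensity"])
        reused = merge(pixel, neighbor)              # spatial reuse of the neighbor's work
        print("chosen light", reused.sample["id"], "after", reused.count, "candidates")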

    The new technique can be integrated into the design of future games and works up to 65 times faster than previous rendering techniques.
    “This technology is not just exciting for what it can bring to real-time applications like games, but also its impact in the movie industry and beyond,” said Benedikt Bitterli, a PhD student at Dartmouth who served as the first author of a research paper on the technique.
    The second project, conducted in collaboration with Activision, describes how the video game publisher has incorporated increasingly realistic lighting effects into its games.
    Traditionally, video games create lighting sequences in real time using what are called “baked” solutions: the complex ray-traced illumination is computed only once through a time-consuming process. The lighting created this way can be displayed easily during gameplay, but it assumes a fixed configuration of the scene. As a result, the lighting cannot easily react to the movement of characters and cameras.
    The research paper describes how Activision gradually evolved its “UberBake” system from the static approach to one which can depict subtle lighting changes in response to player interactions, such as turning lights on and off, or opening and closing doors.
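    The general pattern behind such a system can be sketched as follows (this is a loose illustration with assumed data structures, not Activision’s actual UberBake code): bake a base lighting solution plus a small set of precomputed deltas, one per interactive element, and blend them at runtime according to the current state of each light or door.
        # Hypothetical sketch: blend precomputed lighting contributions at runtime.
        # "base" is the lightmap baked with every dynamic element off or closed;
        # each delta was baked offline as (element fully on) minus (element off).
        def blend_lighting(base, deltas, states):
            """base: texel values; deltas: {element: texel deltas}; states: {element: 0..1}."""
            out = list(base)
            for element, delta in deltas.items():
                s = states.get(element, 0.0)          # e.g. a half-open door gives 0.5
                for i, d in enumerate(delta):
                    out[i] += s * d
            return out

        base = [0.2, 0.2, 0.3, 0.1]                   # toy 4-texel lightmap
        deltas = {"ceiling_lamp": [0.5, 0.4, 0.0, 0.0],
                  "side_door":    [0.0, 0.1, 0.3, 0.2]}
        print(blend_lighting(base, deltas, {"ceiling_lamp": 1.0, "side_door": 0.5}))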

    Since UberBake was developed over many years to work on current games, it needed to work on a variety of existing hardware, ranging from high-end PCs to previous-generation gaming consoles.
    “Video games are used by millions of people around the world,” said Dario Seyb, a PhD student at Dartmouth who served as the research paper’s co-first author. “With so many people interacting with video games, this technology can have a huge impact.”
    Dartmouth researchers on both projects are affiliated with the Dartmouth Visual Computing Lab.
    “These industry collaborations have been fantastic. They allow our students to work on foundational academic research informed by practical problems in industry, allowing the work to have a more immediate, real-world impact,” said Jarosz.
    The research papers will be published in ACM Transactions on Graphics and presented at SIGGRAPH 2020 taking place online during the summer.

    Story Source:
    Materials provided by Dartmouth College. Note: Content may be edited for style and length.

  • If relaxed too soon, physical distancing measures might have been all for naught

    If physical distancing measures in the United States are relaxed while there is still no COVID-19 vaccine or treatment and while personal protective equipment remains in short supply, the number of resulting infections could be about the same as if distancing had never been implemented to begin with, according to a UCLA-led team of mathematicians and scientists.
    The researchers compared the results of three related mathematical models of disease transmission that they used to analyze data emerging from local and national governments, including one that measures the dynamic reproduction number — the average number of susceptible people infected by one previously infected person. The models all highlight the dangers of relaxing public health measures too soon.
    “Distancing efforts that appear to have succeeded in the short term may have little impact on the total number of infections expected over the course of the pandemic,” said lead author Andrea Bertozzi, a distinguished professor of mathematics who holds UCLA’s Betsy Wood Knapp Chair for Innovation and Creativity. “Our mathematical models demonstrate that relaxing these measures in the absence of pharmaceutical interventions may allow the pandemic to reemerge. It’s about reducing contact with other people, and this can be done with PPE as well as distancing.”
    The study is published in the journal Proceedings of the National Academy of Sciences and is applicable to both future spikes of COVID-19 and future pandemics, the researchers say.
    If distancing and shelter-in-place measures had not been taken in March and April, it is very likely the number of people infected in California, New York and elsewhere would have been dramatically higher, posing a severe burden on hospitals, Bertozzi said. But the total number of infections predicted if these precautions end too soon is similar to the number that would be expected over the course of the pandemic without such measures, she said. In other words, short-term distancing can slow the spread of the disease but may not result in fewer people becoming infected.
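    The qualitative argument can be reproduced with even the simplest textbook compartment model. The sketch below is a minimal SIR simulation with made-up parameters, not the authors’ models (which also estimate a dynamic reproduction number from real data); it shows that lowering the transmission rate for a few months and then restoring it delays the epidemic but leaves the cumulative number of infections roughly unchanged.
        # Minimal SIR sketch with illustrative parameters only.
        def run_sir(beta_schedule, gamma=0.1, days=500, n=1_000_000, i0=100):
            s, i, r = n - i0, i0, 0
            for day in range(days):
                beta = beta_schedule(day)
                new_inf = beta * s * i / n       # new infections this day
                new_rec = gamma * i              # new recoveries this day
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            return r                             # cumulative infections

        no_distancing    = lambda day: 0.3                            # R0 = beta/gamma = 3
        relaxed_too_soon = lambda day: 0.12 if 30 <= day < 120 else 0.3
        print("total infected, no distancing:    %.0f" % run_sir(no_distancing))
        print("total infected, relaxed too soon: %.0f" % run_sir(relaxed_too_soon))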
    Mathematically modeling and forecasting the spread of COVID-19 are critical for effective public health policy, but wide differences in precautionary approaches across the country have made it a challenge, said Bertozzi, who is also a distinguished professor of mechanical and aerospace engineering. Social distancing and wearing face masks reduce the spread of COVID-19, but people in many states are not following distancing guidelines and are not wearing masks — and the number of infections continues to rise.

    What are the implications of these findings for policymakers who want to relax social distancing in an effort to revive their economies?
    “Policymakers need to be careful,” Bertozzi said. “Our study predicts a surge in cases in California after distancing measures are relaxed. Alternative strategies exist that would allow the economy to ramp up without substantial new infections. Those strategies all involve significant use of PPE and increased testing.”
    During the 1918 influenza pandemic, social distancing was first enforced and then relaxed in some areas. Bertozzi points to a study published in Proceedings of the National Academy of Sciences in 2007 that looked at several American cities during that pandemic where a second wave of infections occurred after public health measures were removed too early.
    That study found that the timing of public health interventions had a profound influence on the pattern of the second wave of the 1918 pandemic in different cities. Cities that had introduced measures early in the pandemic achieved significant reductions in overall mortality. Larger reductions in peak mortality were achieved by those cities that extended the public health measures for longer. San Francisco, St. Louis, Milwaukee and Kansas City, for instance, had the most effective interventions, reducing transmission rates by 30% to 50%.
    “Researchers Martin Bootsma and Neil Ferguson were able to analyze the effectiveness of distancing measures by comparing the data against an estimate for what might have happened had distancing measures not been introduced,” Bertozzi said of the 2007 study. “They considered data from the full pandemic, while we addressed the question of fitting models to early-time data for this pandemic. During the 1918 influenza pandemic, the early relaxation of social distancing measures led to a swift uptick in deaths in some U.S. cities. Our mathematical models help to explain why this effect might occur today.”
    The COVID-19 data in the new study are from April 1, 2020, and are publicly available. The study is aimed at scientists who are not experts in epidemiology.
    “Epidemiologists are in high demand during a pandemic, and public health officials from local jurisdictions may have a need for help interpreting data,” Bertozzi said. “Scientists with relevant background can be tapped to assist these people.”
    Study co-authors are Elisa Franco, a UCLA associate professor of mechanical and aerospace engineering and bioengineering; George Mohler, an associate professor of computer and information science at Indiana University-Purdue University Indianapolis; Martin Short, an associate professor of mathematics at Georgia Tech; and Daniel Sledge, an associate professor of political science at the University of Texas at Arlington.

  • Researchers develop a method for predicting unprecedented events

    A black swan event is a highly unlikely but massively consequential incident, such as the 2008 global recession and the loss of one-third of the world’s saiga antelope in a matter of days in 2015. Challenging the quintessentially unpredictable nature of black swan events, bioengineers at Stanford University are suggesting a method for forecasting these supposedly unforeseeable fluctuations.
    “By analyzing long-term data from three ecosystems, we were able to show that fluctuations that happen in different biological species are statistically the same across different ecosystems,” said Samuel Bray, a research assistant in the lab of Bo Wang, assistant professor of bioengineering at Stanford. “That suggests there are certain underlying universal processes that we can take advantage of in order to forecast this kind of extreme behavior.”
    The forecasting method the researchers have developed, which was detailed recently in PLOS Computational Biology, is based on natural systems and could find use in health care and environmental research. It also has potential applications in disciplines outside ecology that have their own black swan events, such as economics and politics.
    “This work is exciting because it’s a chance to take the knowledge and the computational tools that we’re building in the lab and use those to better understand — even predict or forecast — what happens in the world surrounding us,” said Wang, who is senior author of the paper. “It connects us to the bigger world.”
    From microbes to avalanches
    Over years of studying microbial communities, Bray noticed several instances where one species would undergo an unanticipated population boom, overtaking its neighbors. Discussing these events with Wang, they wondered whether this phenomenon occurred outside the lab as well and, if so, whether it could be predicted.

    In order to address this question, the researchers had to find other biological systems that experience black swan events. The researchers needed details, not only about the black swan events themselves but also the context in which they occurred. So, they specifically sought ecosystems that scientists have been closely monitoring for many years.
    “These data have to capture long periods of time and that’s hard to collect,” said Bray, who is lead author of the paper. “It’s much more than a PhD-worth of information. But that’s the only way you can see the spectra of these fluctuations at large scales.”
    Bray settled on three eclectic datasets: an eight-year study of plankton from the Baltic Sea with species levels measured twice weekly; net carbon measurements from a deciduous broadleaf forest at Harvard University, gathered every 30 minutes since 1991; and measurements of barnacles, algae and mussels on the coast of New Zealand, taken monthly for over 20 years.
    The researchers then analyzed these three datasets using theory about avalanches — physical fluctuations that, like black swan events, exhibit short-term, sudden, extreme behavior. At its core, this theory attempts to explain the physics of systems like avalanches, earthquakes, fire embers, or even crumpling candy wrappers, which all respond to external forces with discrete events of various magnitudes or sizes — a phenomenon scientists call “crackling noise.”
    Building on this analysis, the researchers developed a method for predicting black swan events, one designed to be flexible across species and timespans and able to work with data that are far less detailed and more complex than those used to develop it.
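    One way to make that idea concrete, shown in the hedged sketch below, is to fit a heavy-tailed (power-law) distribution to the modest fluctuations that were observed and then extrapolate the probability of a far larger one. The estimator and the synthetic data here are standard textbook choices for illustration, not the estimator or datasets used in the paper.
        import math, random

        def fit_powerlaw_exponent(sizes, x_min):
            """Maximum-likelihood (Hill-type) exponent for a power-law tail above x_min."""
            tail = [x for x in sizes if x >= x_min]
            alpha = 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)
            return alpha, len(tail)

        def prob_exceeding(x, x_min, alpha, tail_fraction):
            # Survival function of the fitted tail: P(X > x) = tail_fraction * (x/x_min)**(1-alpha)
            return tail_fraction * (x / x_min) ** (1.0 - alpha)

        random.seed(0)
        sizes = [random.paretovariate(1.8) for _ in range(5000)]   # synthetic "fluctuations"
        x_min = 1.0
        alpha, n_tail = fit_powerlaw_exponent(sizes, x_min)
        tail_fraction = n_tail / len(sizes)
        print("fitted exponent: %.2f" % alpha)
        print("estimated P(fluctuation > 100x the fitting threshold): %.2e"
              % prob_exceeding(100.0, x_min, alpha, tail_fraction))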

    “Existing methods rely on what we have seen to predict what might happen in the future, and that’s why they tend to miss black swan events,” said Wang. “But Sam’s method is different in that it assumes we are only seeing part of the world. It extrapolates a little about what we’re missing, and it turns out that helps tremendously in terms of prediction.”
    Forecasting in the real world
    The researchers tested their method using the three ecosystem datasets on which it was built. Using only fragments of each dataset — specifically fragments which contained the smallest fluctuations in the variable of interest — they were able to accurately predict extreme events that occurred in those systems.
    They would like to expand the application of their method to other systems in which black swan events are also present, such as in economics, epidemiology, politics and physics. At present, the researchers are hoping to collaborate with field scientists and ecologists to apply their method to real-world situations where they could make a positive difference in the lives of other people and the planet.
    This research was funded by the Volkswagen Foundation and the Arnold and Mabel Beckman Foundation. Wang is also a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute.

  • A new MXene material shows extraordinary electromagnetic interference shielding ability

    As we welcome wireless technology into more areas of life, the additional electronic bustle is making for an electromagnetically noisy neighborhood. In hopes of limiting the extra traffic, researchers at Drexel University have been testing two-dimensional materials known for their interference-blocking abilities. Their latest discovery, reported in the journal Science, is the exceptional shielding ability of a new two-dimensional material that can absorb electromagnetic interference rather than just deflecting it back into the fray.
    The material, called titanium carbonitride, is part of a family of two-dimensional materials, called MXenes, that were first produced at Drexel in 2011. Researchers have revealed that these materials have a number of exceptional properties, including impressive strength, high electrical conductivity and molecular filtration abilities. Titanium carbonitride’s exceptional trait is that it can block and absorb electromagnetic interference more effectively than any known material, including the metal foils currently used in most electronic devices.
    “This discovery breaks all the barriers that existed in the electromagnetic shielding field. It not only reveals a shielding material that works better than copper, but it also shows an exciting, new physics emerging, as we see discrete two-dimensional materials interact with electromagnetic radiation in a different way than bulk metals,” said Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, who led the research group that made this MXene discovery, which also included scientists from the Korea Institute of Science and Technology, and students from Drexel’s co-op partnership with the Institute.
    While electromagnetic interference — “EMI” to engineers and technologists — is noticed only infrequently by the users of technology, likely as a buzzing noise from a microphone or speaker, it is a constant concern for the engineers who design it. The things that EMI is interfering with are other electrical components, such as antennas and circuitry. It diminishes electrical performance, can slow data exchange rates and can even interrupt the function of devices.
    Electronics designers and engineers tend to use shielding materials to contain and deflect EMI in devices, either by covering the entire circuit board with a copper cage or, more recently, by wrapping individual components in foil shielding. But both of these strategies add bulk and weight to the devices.
    Gogotsi’s group discovered that its MXene materials, which are much thinner and lighter than copper, can be quite effective at EMI shielding. Their findings, reported in Science four years ago, indicated that a MXene called titanium carbide showed the potential to be as effective as the industry-standard materials at the time, and it could be easily applied as a coating. This research quickly became one of the most impactful discoveries in the field and inspired other researchers to look at other materials for EMI shielding.

    But as the Drexel and KIST teams continued to inspect other members of the family for this application, they uncovered the unique qualities of titanium carbonitride that make it an even more promising candidate for EMI shielding applications.
    “Titanium carbonitride has a very similar structure by comparison to titanium carbide — they’re actually identical aside from one replacing half of its carbon atoms with nitrogen atoms — but titanium carbonitride is about an order of magnitude less conductive,” said Kanit Hantanasirisakul, a doctoral candidate in Drexel’s Department of Materials Science and Engineering. “So we wanted to gain a fundamental understanding of the effects of conductivity and elemental composition on EMI shielding application.”
    Through a series of tests, the group made a startling discovery: a film of the titanium carbonitride material, many times thinner than a strand of human hair, could block EMI about three to five times more effectively than a similar thickness of copper foil, which is typically used in electronic devices.
    “It’s important to note that we didn’t initially expect the titanium carbonitride MXene to be better compared to the most conductive of all MXenes known: titanium carbide,” Hantanasirisakul said. “We first thought there might be something wrong with the measurements or the calculations. So, we repeated experiments over and over again to make sure we did everything correctly and the values were reproducible.”
    Perhaps more significant than the team’s discovery of the material’s shielding prowess is their new understanding of the way it works. Most EMI shielding materials simply prevent the penetration of electromagnetic waves by reflecting them away. While this is effective for protecting components, it doesn’t alleviate the overall problem of EMI propagation in the environment. Gogotsi’s group found that titanium carbonitride actually blocks EMI by absorbing the electromagnetic waves.
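    A standard way to quantify that distinction is to split the total shielding effectiveness (in decibels) into reflection and absorption terms computed from the measured reflected and transmitted power fractions. The formulas below are the conventional decomposition; the input numbers are illustrative, not measurements from the study.
        import math

        def shielding_effectiveness(reflected_fraction, transmitted_fraction):
            """Split total EMI shielding (dB) into reflection and absorption parts.

            Standard decomposition: SE_total = SE_R + SE_A with
            SE_total = -10*log10(T), SE_R = -10*log10(1 - R), SE_A = SE_total - SE_R.
            """
            se_total = -10.0 * math.log10(transmitted_fraction)
            se_reflection = -10.0 * math.log10(1.0 - reflected_fraction)
            return se_total, se_reflection, se_total - se_reflection

        # Illustrative numbers only: an absorption-dominant shield reflects little
        # of the incident power yet still transmits almost nothing.
        for label, r, t in [("reflection-dominant", 0.90, 1e-5),
                            ("absorption-dominant", 0.30, 1e-5)]:
            total, refl, absorb = shielding_effectiveness(r, t)
            print(f"{label}: SE_total={total:.1f} dB, SE_R={refl:.1f} dB, SE_A={absorb:.1f} dB")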

    “This is a much more sustainable way to handle electromagnetic pollution than simply reflecting waves that can still damage other devices that are not shielded,” Hantanasirisakul said. “We found that most of the waves are absorbed by the layered carbonitride MXene films. It’s like the difference between kicking litter out of your way or picking it up — this is ultimately a much better solution.”
    This also means that titanium carbonitride could be used to individually coat components inside a device to contain their EMI even when they are placed close together. Companies like Apple have been trying this containment strategy for several years, but with success limited by the thickness of the copper foil. As device designers strive to make their products ubiquitous by making them smaller, less noticeable and more integrated, this strategy is likely to become the new norm.
    The researchers suspect that titanium carbonitride’s uniqueness is due to its layered, porous structure, which allows EMI to partially penetrate the material, and its chemical composition, which traps and dissipates the EMI. This combination of characteristics emerges within the material when it is heated in a final step of formation, called annealing.
    “It was a counterintuitive finding. EMI shielding effectiveness typically increases with electrical conductivity. We knew that heat treatment can increase conductivity, so we tried that with the titanium carbonitride to see if it would improve its shielding ability. What we discovered is that it only marginally improved its conductivity, but vastly boosted its shielding effectiveness,” Gogotsi said. “This work motivates us, and should motivate others in the field, to look into properties and applications of other MXenes, as they may show even better performance despite being less electrically conductive.”
    The Drexel team has been expanding its scope and has already examined EMI shielding capabilities of 16 different MXene materials — about half of all MXenes produced in its lab. It plans to continue its investigation of titanium carbonitride to better understand its unique electromagnetic behavior, in hope of predicting hidden abilities in other materials.
    In addition to Gogotsi and Hantanasirisakul, Aamir Iqbal, Faisal Shahzad, Myung-Ki Kim, Hisung Kwon, Junpyo Hong, Hyerim Kim, Daesin Kim and Chong Min Koo, researchers from the Korea Institute of Science and Technology (KIST), contributed to this research.

  • How a few negative online reviews early on can hurt a restaurant

    Just a few negative online restaurant reviews can determine early on how many reviews a restaurant receives long-term, a new study has found.
    The study, published online earlier this month in the journal Papers in Applied Geography, also found that a neighborhood’s median household income affected whether restaurants were rated at all.
    “These online platforms advertise themselves as being unbiased, but we found that that is not the case,” said Yasuyuki Motoyama, lead author of the paper and an assistant professor of city and regional planning at The Ohio State University.
    “The way these platforms work, popular restaurants get even more popular, and restaurants with some initial low ratings can stagnate.”
    The study evaluated Yelp and Tripadvisor reviews of about 3,000 restaurants per website in Franklin County, Ohio. Franklin County, home to Columbus and Ohio State, is also home to the headquarters of more than 20 restaurant chains. Previous research has found that the food industry considers consumer preferences in the area to be a litmus test for the broader U.S. market.
    The researchers collected reviews for restaurants published in May 2019, then analyzed those reviews by rating and geographic location. They also considered the demographics of each neighborhood and its socioeconomic status, based on household income.

    The study found that restaurants with a smaller number of reviews on Yelp and Tripadvisor had a higher likelihood of a low rating.
    “The more reviews a restaurant received, the higher the average rating of the restaurant,” said Kareem Usher, co-author of the paper and an assistant professor of city and regional planning at Ohio State. “But this has implications: If one of the first reviews a restaurant receives comes from a dissatisfied customer, and people check that later and think ‘I don’t want to go there’ based on that one review, then there will be fewer reviews of that restaurant.”
    The opposite is true for restaurants that receive positive reviews or a large number of reviews: More people are likely to review those restaurants, improving the likelihood that a restaurant’s average rating will be higher.
    The study found that 17.6 percent of restaurants with only one to four reviews received a low rating on Yelp, but that share decreased to 9.3 percent for those with between five and 10 reviews. On Tripadvisor, restaurants with one to four reviews had a 5.6 percent probability of a low rating, which dropped to 0.6 percent for those with five to 10 reviews.
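    The feedback loop the researchers describe is easy to illustrate with a toy Monte Carlo simulation (this is not the study’s analysis, and every number in it is invented): if the rate at which new reviews arrive depends on a restaurant’s current average rating, an unlucky first review suppresses later reviews even when the underlying quality is identical.
        import random

        def simulate_reviews(first_rating, steps=100, rng=random):
            ratings = [first_rating]
            for _ in range(steps):
                avg = sum(ratings) / len(ratings)
                p_new_review = 0.05 + 0.10 * (avg - 1.0)       # better average -> more reviews
                if rng.random() < p_new_review:
                    ratings.append(rng.choice([3, 4, 4, 5]))   # same "true" quality afterwards
            return ratings

        random.seed(1)
        after_bad_start  = [len(simulate_reviews(1)) for _ in range(2000)]
        after_good_start = [len(simulate_reviews(5)) for _ in range(2000)]
        print("mean review count after a 1-star first review:", sum(after_bad_start) / 2000)
        print("mean review count after a 5-star first review:", sum(after_good_start) / 2000)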
    Researchers also found that restaurants in several of the poorest neighborhoods in Franklin County tend not to be rated on the sites. However, the researchers did not find a direct link between a neighborhood’s socioeconomics or racial makeup and the average rating of the restaurants there.
    Motoyama cautioned that the study had some limits: It was conducted in one county, and future work could expand to other areas around the country. The high-level multivariate analysis could only use the Yelp data, as much of the key information was missing from Tripadvisor. The researchers also did not analyze the content of the reviews, which could offer additional clues about bias.
    But, he said, the study does indicate that online review sites can have significant effects on a restaurant’s success or failure — and suggests that the sites could perhaps set up fairer policies.
    “Maybe these online platforms can withhold reviews until a restaurant gets a certain number of reviews — say, 10 or more,” he said. “That way if there are two or three customers who are very dissatisfied with a particular experience, they are not directing the restaurant’s success or failure.”

    Story Source:
    Materials provided by Ohio State University. Original written by Laura Arenschield. Note: Content may be edited for style and length.

  • Software of autonomous driving systems

    The future has already arrived. (Partially) autonomous cars, with automated systems such as automatic braking or lane departure warnings, are already on our roads today. As a central vehicle component, the software of these systems must continuously and reliably meet high quality criteria. Franz Wotawa of the Institute of Software Technology at TU Graz and his team, in close collaboration with AVL’s cyber-physical systems testing team, are tackling the major challenges of this future technology: guaranteeing safety by automatically generating extensive test scenarios for simulations, and compensating for internal system errors with an adaptive control method.
    Ontologies instead of test kilometers
    Test drives alone do not provide sufficient evidence for the accident safety of autonomous driving systems, explains Franz Wotawa: “Autonomous vehicles would have to be driven around 200 million kilometers to prove their reliability — especially for accident scenarios. That is 10,000 times more test kilometers than are required for conventional cars.” However, critical test scenarios involving danger to life and limb cannot be reproduced in real test drives. Autonomous driving systems must therefore be tested for their safety in simulations. “Although the tests so far cover many scenarios, the question always remains whether this is sufficient and whether all possible accident scenarios have been considered,” says Wotawa. Mihai Nica from AVL underlines this statement: “In order to test highly autonomous systems, we have to rethink how the automotive industry validates and certifies Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) systems. AVL is therefore working with TU Graz to develop a unique and highly efficient method and workflow based on simulation and test-case generation to prove that autonomous systems fulfill the Safety Of The Intended Functionality (SOTIF), quality and system integrity requirements.”
    Together, the project team is working on innovative methods that allow far more test scenarios to be simulated than before. The researchers’ approach is as follows: instead of driving millions of kilometers, they use ontologies to describe the environment of autonomous vehicles. Ontologies are knowledge bases for exchanging relevant information within a machine system; they describe, for example, the interfaces, behavior and relationships of individual system units so that these can communicate with each other. In the case of autonomous driving systems, such units would be “decision making,” “traffic description” or “autopilot.” The Graz researchers fed these knowledge bases with detailed basic information about driving-scenario environments, covering the construction of roads, intersections and the like, which AVL provided. From this, driving scenarios that test the behavior of the automated driving systems in simulation can be derived using AVL’s world-leading test-case generation algorithm.
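    As a rough illustration of the pipeline (the toy concepts and the exhaustive expansion below are stand-ins, not the project’s ontologies or AVL’s proprietary generation algorithm), the environment can be described as a small set of scenario parameters whose combinations become candidate test scenarios for the simulator.
        from itertools import product

        # Toy "ontology": scenario concepts and their admissible values.
        scenario_ontology = {
            "road":      ["straight", "curve", "intersection"],
            "weather":   ["dry", "rain", "fog"],
            "ego_speed": [30, 50, 80],                      # km/h
            "actor":     ["none", "pedestrian crossing", "oncoming vehicle"],
            "light":     ["day", "night"],
        }

        def generate_scenarios(ontology):
            keys = list(ontology)
            for values in product(*(ontology[k] for k in keys)):
                yield dict(zip(keys, values))

        scenarios = list(generate_scenarios(scenario_ontology))
        print(len(scenarios), "candidate scenarios, e.g.:", scenarios[0])
        # Each scenario would then be run in simulation and the driving function's
        # behavior (braking distance, collisions, ...) checked against safety limits.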
    Additional weaknesses uncovered
    As part of the EU AutoDrive project, researchers have used two algorithms to convert these ontologies into input models for combinatorial testing, which can subsequently be executed in simulation environments. “In initial experimental tests we have discovered serious weaknesses in automated driving functions. Without these automatically generated test scenarios, the vulnerabilities would not have been detected so quickly: nine out of the 319 test cases investigated led to accidents.” For example, in one test scenario, a brake assistance system failed to detect two people coming from different directions at the same time, and one of them was badly hit due to the initiated braking maneuver. “This means that with our method, you can find test scenarios that are difficult to test in reality and that you might not even think of focusing on,” says Wotawa.
    This work by Franz Wotawa et al. was also published in the journal Information and Software Technology at the beginning of 2020 and overlaps with the Christian Doppler Laboratory for Methods for Quality Assurance of Cyber-Physical Systems. The CD lab is led by Franz Wotawa, and AVL is a corporate partner.
    Adaptive compensation of internal errors
    Autonomous systems and in particular autonomous driving systems must be able to correct themselves in the event of malfunctions or changed environmental conditions and reliably reach given target states at all times. “When we look at semi-automated systems already in use today, such as cruise control, it quickly becomes clear that in the case of errors, the driver can and will always intervene. With fully autonomous vehicles, this is no longer an option, so the system itself must be able to act accordingly,” explains Franz Wotawa.
    In a new publication in the Software Quality Journal, Franz Wotawa and his PhD student Martin Zimmermann present a control method that can adaptively compensate for internal errors in the software system. The method selects alternative actions in such a way that predetermined target states can be reached while providing a certain degree of redundancy. Action selection is based on weighting models that are adjusted over time and measure the success rate of specific actions already performed. In addition to the method, the researchers also present a Java implementation and its validation in two case studies motivated by requirements from the field of autonomous driving.
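    The reference implementation described in the paper is in Java; the short Python sketch below only illustrates the general idea of weighting-based selection among redundant actions, with invented action names and a simplified update rule: actions that reach the target state have their weights increased, while failing ones are gradually abandoned.
        import random

        class AdaptiveSelector:
            """Picks among redundant actions in proportion to their past success."""
            def __init__(self, actions, learning_rate=0.2):
                self.weights = {a: 1.0 for a in actions}
                self.lr = learning_rate

            def choose(self):
                total = sum(self.weights.values())
                r, acc = random.uniform(0, total), 0.0
                for action, w in self.weights.items():
                    acc += w
                    if r <= acc:
                        return action
                return action                      # floating-point fallback

            def feedback(self, action, reached_target):
                # Reward success, penalize failure.
                self.weights[action] *= (1 + self.lr) if reached_target else (1 - self.lr)

        # Toy usage: "set_speed" fails often (a simulated internal error), so the
        # selector shifts to the redundant "engine_torque" action over time.
        selector = AdaptiveSelector(["set_speed", "engine_torque"])
        for _ in range(200):
            action = selector.choose()
            success = random.random() < (0.3 if action == "set_speed" else 0.9)
            selector.feedback(action, success)
        print({a: round(w, 2) for a, w in selector.weights.items()})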

    Story Source:
    Materials provided by Graz University of Technology. Original written by Susanne Eigner. Note: Content may be edited for style and length.

  • Tracking misinformation campaigns in real time is possible, study shows

    A research team led by Princeton University has developed a technique for tracking online foreign misinformation campaigns in real time, which could help mitigate outside interference in the 2020 American election.
    The researchers developed a method for using machine learning to identify malicious internet accounts, or trolls, based on their past behavior. The study, featured in Science Advances, examined past misinformation campaigns from China, Russia, and Venezuela that were waged against the United States before and after the 2016 election.
    The team identified the patterns these campaigns followed by analyzing posts to Twitter and Reddit and the hyperlinks or URLs they included. After running a series of tests, they found their model was effective in identifying posts and accounts that were part of a foreign influence campaign, including those by accounts that had never been used before.
    They hope that software engineers will be able to build on their work to create a real-time monitoring system for exposing foreign influence in American politics.
    “What our research means is that you could estimate in real time how much of it is out there, and what they’re talking about,” said Jacob N. Shapiro, professor of politics and international affairs at the Princeton School of Public and International Affairs. “It’s not perfect, but it would force these actors to get more creative and possibly stop their efforts. You can only imagine how much better this could be if someone puts in the engineering efforts to optimize it.”
    Shapiro and associate research scholar Meysam Alizadeh conducted the study with Joshua Tucker, professor of politics at New York University, and Cody Buntain, assistant professor in informatics at New Jersey Institute of Technology.

    The team began with a simple question: Using only content-based features and examples of known influence campaign activity, could you look at other content and tell whether a given post was part of an influence campaign?
    They chose to investigate a unit known as a “postURL pair,” which is simply a post with a hyperlink. To have real influence, coordinated operations require intense human and bot-driven information sharing. The team theorized that similar posts may appear frequently across platforms over time.
    They combined data on troll campaigns from Twitter and Reddit with a rich dataset on posts by politically engaged users and average users collected over many years by NYU’s Center for Social Media and Politics (CSMaP). The troll data included publicly available Twitter and Reddit data from Chinese, Russian, and Venezuelan trolls totaling 8,000 accounts and 7.2 million posts from late 2015 through 2019.
    “We couldn’t have conducted the analysis without that baseline comparison dataset of regular, ordinary tweets,” said Tucker, co-director of CSMaP. “We used it to train the model to distinguish between tweets from coordinated influence campaigns and those from ordinary users.”
    The team considered the characteristics of the post itself, like the timing, word count, or if the mentioned URL domain is a news website. They also looked at what they called “metacontent,” or how the messaging in a post related to other information shared at that time (for example, whether a URL was in the top 25 political domains shared by trolls).
    “Meysam’s insight on metacontent was key,” Shapiro said. “He saw that we could use the machine to replicate the human intuition that ‘something about this post just looks out of place.’ Both trolls and normal people often include local news URLs in their posts, but the trolls tended to mention different users in such posts, probably because they are trying to draw their audience’s attention in a new direction. Metacontent lets the algorithm find such anomalies.”
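    The overall recipe can be sketched in a few lines (the features, domain list and tiny training set below are invented for illustration and are far simpler than the study’s, which used millions of labeled posts): turn each post-URL pair into content and metacontent features, then train an ordinary classifier to separate known campaign posts from ordinary ones.
        from sklearn.linear_model import LogisticRegression

        TOP_TROLL_DOMAINS = {"example-partisan.com", "example-localnews.com"}  # hypothetical

        def featurize(post):
            words = post["text"].split()
            return [
                len(words),                                # word count
                post["hour"],                              # time of day posted
                int(post["domain"] in TOP_TROLL_DOMAINS),  # metacontent: domain popular with trolls
                int(post["is_news_domain"]),
                post["mentions"],                          # number of @mentions
            ]

        posts = [  # tiny made-up training set
            {"text": "breaking news you must see", "hour": 3, "domain": "example-partisan.com",
             "is_news_domain": True, "mentions": 4, "label": 1},
            {"text": "great game last night with friends", "hour": 21, "domain": "example.org",
             "is_news_domain": False, "mentions": 1, "label": 0},
            {"text": "local paper covers the election", "hour": 14, "domain": "example-localnews.com",
             "is_news_domain": True, "mentions": 5, "label": 1},
            {"text": "coffee recommendations in town?", "hour": 9, "domain": "example.org",
             "is_news_domain": False, "mentions": 0, "label": 0},
        ]
        X = [featurize(p) for p in posts]
        y = [p["label"] for p in posts]
        model = LogisticRegression().fit(X, y)
        print(model.predict([featurize(posts[0])]))        # flags the campaign-like post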

    The team tested their method extensively, examining performance month to month on five different prediction tasks across four influence campaigns. Across almost all of the 463 different tests, it was clear which posts were and were not part of an influence operation, meaning that content-based features can indeed help find coordinated influence campaigns on social media.
    In some countries, the patterns were easier to spot than others. Venezuelan trolls only retweeted certain people and topics, making them easy to detect. Russian and Chinese trolls were better at making their content look organic, but they, too, could be found. In early 2016, for example, Russian trolls quite often linked to far-right URLs, which was unusual given the other aspects of their posts, and, in early 2017, they linked to political websites in odd ways.
    Overall, Russian troll activity became harder to find as time went on. It is possible that investigative groups or others caught on to the false information, flagging the posts and forcing trolls to change their tactics or approach, though Russians also appear to have produced less in 2018 than in previous years.
    While the research shows there is no stable set of characteristics that will find influence efforts, it also shows that troll content will almost always be different in detectable ways. In one set of tests, the authors show the method can find never-before-used accounts that are part of an ongoing campaign. And while social media platforms regularly delete accounts associated with foreign disinformation campaigns, the team’s findings could lead to a more effective solution.
    “When the platforms ban these accounts, it not only makes it hard to collect data to find similar accounts in the future, but it signals to the disinformation actor that they should avoid the behavior that led to deletion,” said Buntain. “This mechanism allows [the platform] to identify these accounts, silo them away from the rest of Twitter, and make it appear to these actors as though they are continuing to share their disinformation material.”
    The work highlights the importance of interdisciplinary research between social and computational science, as well as the value of funding research data archives.
    “The American people deserve to understand how much is being done by foreign countries to influence our politics,” said Shapiro. “These results suggest that providing that knowledge is technically feasible. What we currently lack is the political will and funding, and that is a travesty.”
    The method is no panacea, the researchers cautioned. It requires that someone has already identified recent influence campaign activity to learn from. And how the different features combine to indicate questionable content changes over time and between campaigns.
    The paper, “Content-Based Features Predict Social Media Influence Operations,” will appear in Science Advances.

  • Is it a bird, a plane? Not Superman, but a flapping wing drone

    A drone prototype that mimics the aerobatic manoeuvres of one of the world’s fastest birds, the swift, is being developed by an international team of engineers in the latest example of biologically inspired flight.
    A research team from Singapore, Australia, China and Taiwan has designed a 26-gram ornithopter (flapping-wing aircraft) that can hover, dart, glide, brake and dive just like a swift, making it more versatile, safer and quieter than existing quadcopter drones.
    Weighing the equivalent of two tablespoons of flour, the flapping wing drone has been optimised to fly in cluttered environments near humans, with the ability to glide, hover at very low power, and stop quickly from fast speeds, avoiding collisions — all things that quadcopters can’t do.
    National University of Singapore research scientist, Dr Yao-Wei Chin, who has led the project published today in Science Robotics, says the team has designed a flapping wing drone similar in size to a swift, or large moth, that can perform some aggressive bird flight manoeuvres.
    “Unlike common quadcopters that are quite intrusive and not very agile, biologically-inspired drones could be used very successfully in a range of environments,” Dr Chin says.
    The surveillance applications are clear, but novel applications include pollination of indoor vertical farms without damaging dense vegetation, unlike the rotary-propelled quadcopters whose blades risk shredding crops.

    Because of its stability in strong winds, the ornithopter drone could also be used to chase birds away from airports, reducing the risk of them getting sucked into jet engines.
    University of South Australia (UniSA) aerospace engineer, Professor Javaan Chahl, says copying the design of birds, like swifts, is just one strategy to improve the flight performance of flapping wing drones.
    “There are existing ornithopters that can fly forward and backward as well as circling and gliding, but until now, they haven’t been able to hover or climb. We have overcome these issues with our prototype, achieving the same thrust generated by a propeller,” Professor Chahl says.
    “The triple roles of flapping wings for propulsion, lift and drag enable us to replicate the flight patterns of aggressive birds by simple tail control. Essentially, the ornithopter drone is a combination of a paraglider, aeroplane and helicopter.”
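    To put the hover requirement in perspective, here is a back-of-the-envelope estimate using classical actuator-disk (momentum) theory, treating the flapping wings as a propeller disk. The 26-gram mass comes from the article; the swept area and air density are assumed values, not figures from the paper.
        import math

        mass_kg   = 0.026      # the 26 g ornithopter
        g         = 9.81       # m/s^2
        rho_air   = 1.225      # kg/m^3, sea-level air
        disk_area = 0.015      # m^2, assumed effective area swept by the wings

        thrust = mass_kg * g                                         # N needed to hover
        induced_velocity = math.sqrt(thrust / (2 * rho_air * disk_area))
        ideal_power = thrust ** 1.5 / math.sqrt(2 * rho_air * disk_area)

        print(f"hover thrust: {thrust * 1000:.0f} mN")
        print(f"ideal induced velocity: {induced_velocity:.2f} m/s")
        print(f"ideal hover power (no losses): {ideal_power:.2f} W")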
    There are currently no commercialised ornithopters being used for surveillance, but this could change with the latest breakthrough, researchers claim.

    Now that the improved design allows ornithopters to produce enough thrust to hover and to carry a camera and accompanying electronics, the flapping wing drone could be used for crowd and traffic monitoring, information gathering and surveying forests and wildlife.
    “The light weight and the slow-beating wings of the ornithopter pose less danger to the public than quadcopter drones in the event of a crash, and given sufficient thrust and power banks it could be modified to carry different payloads depending on what is required,” Dr Chin says.
    One area that requires more research is how birds will react to a mechanical flying object resembling them in size and shape. Small, domesticated birds are easily scared by drones but large flocks and much bigger birds have been known to attack ornithopters.
    And while the bio-inspired breakthrough is impressive, we are a long way from replicating biological flight, Dr Chin says.
    “Although ornithopters are the closest to biological flight with their flapping wing propulsion, birds and insects have multiple sets of muscles which enable them to fly incredibly fast, fold their wings, twist, open feather slots and save energy.
    “Their wing agility allows them to turn their body in mid-air while still flapping at different speeds and angles.
    “Common swifts can cruise at a maximum speed of 31 metres a second, equivalent to 112 kilometres per hour or about 70 miles per hour.
    “At most, I would say we are replicating 10 per cent of biological flight,” he says.