More stories

  • A novel strategy for quickly identifying Twitter trolls

    Two algorithms that account for distinctive use of repeated words and word pairs require as few as 50 tweets to accurately distinguish deceptive “troll” messages from those posted by public figures. Sergei Monakhov of Friedrich Schiller University in Jena, Germany, presents these findings in the open-access journal PLOS ONE on August 12, 2020.
    Troll internet messages aim to achieve a specific purpose, while also masking that purpose. For instance, in 2018, 13 Russian nationals were accused of using false personas to interfere with the 2016 U.S. presidential election via social media posts. While previous research has investigated distinguishing characteristics of troll tweets — such as timing, hashtags, and geographical location — few studies have examined linguistic features of the tweets themselves.
    Monakhov took a sociolinguistic approach, focusing on the idea that trolls have a limited number of messages to convey, but must do so multiple times and with enough diversity of wording and topics to fool readers. Using a library of Russian troll tweets and genuine tweets from U.S. congresspeople, Monakhov showed that these troll-specific restrictions result in distinctive patterns of repeated words and word pairs that are different from patterns seen in genuine, non-troll tweets.
    Then, Monakhov tested an algorithm that uses these distinctive patterns to distinguish between genuine tweets and troll tweets. He found that the algorithm required as few as 50 tweets for accurate identification of trolls versus congresspeople. He also found that the algorithm correctly distinguished troll tweets from tweets by Donald Trump, which, although provocative and “potentially misleading” according to Twitter, are not crafted to hide their purpose.
    This new strategy for quickly identifying troll tweets could help inform efforts to combat hybrid warfare while preserving freedom of speech. Further research will be needed to determine whether it can accurately distinguish troll tweets from other types of messages that are not posted by public figures.
    Monakhov adds: “Though troll writing is usually thought of as being permeated with recurrent messages, its most characteristic trait is an anomalous distribution of repeated words and word pairs. Using the ratio of their proportions as a quantitative measure, one needs as few as 50 tweets for identifying internet troll accounts.”
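    The paper’s own code is not included in this summary, so the snippet below is only a rough sketch of the idea: measure how large a share of an account’s words and adjacent word pairs are repeats, and compare the two shares as a ratio. The tokenization, the pair definition and the example threshold are illustrative assumptions, not Monakhov’s actual algorithm.
    ```python
    from collections import Counter

    def repetition_ratio(tweets):
        """Share of repeated words divided by share of repeated word pairs
        across a batch of tweets (sketch only; details are assumptions)."""
        words, pairs = [], []
        for tweet in tweets:
            tokens = tweet.lower().split()
            words.extend(tokens)
            pairs.extend(zip(tokens, tokens[1:]))  # adjacent word pairs

        def repeated_share(items):
            counts = Counter(items)
            repeated = sum(c for c in counts.values() if c > 1)
            return repeated / max(len(items), 1)

        return repeated_share(words) / max(repeated_share(pairs), 1e-9)

    # Usage: feed in roughly 50 tweets from one account; an anomalously skewed
    # ratio (any cut-off here would be an assumption) flags troll-like writing.
    sample = ["vote for change today", "the future needs change, vote today"] * 25
    print(round(repetition_ratio(sample), 2))
    ```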

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Coffee stains inspire optimal printing technique for electronics

    Have you ever spilled your coffee on your desk? You may then have observed one of the most puzzling phenomena of fluid mechanics — the coffee ring effect. This effect has hindered the industrial deployment of functional inks with graphene, 2D materials, and nanoparticles because it makes printed electronic devices behave irregularly.
    Now, after studying this process for years, a team of researchers have created a new family of inks that overcomes this problem, enabling the fabrication of new electronics such as sensors, light detectors, batteries and solar cells.
    Coffee rings form because the liquid evaporates quicker at the edges, causing an accumulation of solid particles that results in the characteristic dark ring. Inks behave like coffee — particles in the ink accumulate around the edges creating irregular shapes and uneven surfaces, especially when printing on hard surfaces like silicon wafers or plastics.
    Researchers, led by Tawfique Hasan from the Cambridge Graphene Centre of the University of Cambridge, with Colin Bain from the Department of Chemistry of Durham University and Meng Zhang from the School of Electronic and Information Engineering of Beihang University, studied the physics of ink droplets by combining particle tracking in high-speed micro-photography with fluid mechanics and different combinations of solvents.
    Their solution: alcohol, specifically a mixture of isopropyl alcohol and 2-butanol. With these solvents, ink particles tend to distribute evenly across the droplet, generating shapes with uniform thickness and properties. Their results are reported in the journal Science Advances.
    “The natural form of ink droplets is spherical — however, because of their composition, our ink droplets adopt pancake shapes,” said Hasan.
    While drying, the new ink droplets deform smoothly across the surface, spreading particles consistently. Using this universal formulation, manufacturers could adopt inkjet printing as a cheap, easy-to-access strategy for the fabrication of electronic devices and sensors. The new inks also avoid the use of polymers or surfactants — commercial additives used to tackle the coffee ring effect that at the same time degrade the electronic properties of graphene and other 2D materials.
    Most importantly, the new methodology enables reproducibility and scalability — researchers managed to print 4500 nearly identical devices on a silicon wafer and plastic substrate. In particular, they printed gas sensors and photodetectors, both displaying very little variation in performance. Previously, printing a few hundred such devices was considered a success, even if they showed uneven behaviour.
    “Understanding this fundamental behaviour of ink droplets has allowed us to find this ideal solution for inkjet printing all kinds of two-dimensional crystals,” said first author Guohua Hu. “Our formulation can be easily scaled up to print new electronic devices on silicon wafers, or plastics, and even in spray painting and wearables, already matching or exceeding the manufacturability requirements for printed devices.”
    Beyond graphene, the team has optimised over a dozen ink formulations containing different materials. Some of them are graphene two-dimensional ‘cousins’ such as black phosphorus and boron nitride; others are more complex structures like heterostructures — ‘sandwiches’ of different 2D materials — and nanostructured materials. Researchers say their ink formulations can also print pure nanoparticles and organic molecules. This variety of materials could boost the manufacturing of electronic and photonic devices, as well as more efficient catalysts, solar cells, batteries and functional coatings.
    The team expects to see industrial applications of this technology very soon. Their first proofs of concept — printed sensors and photodetectors — have shown promising results in terms of sensitivity and consistency, exceeding the usual industry requirements. This should attract investors interested in printed and flexible electronics.
    “Our technology could speed up the adoption of inexpensive, low-power, ultra-connected sensors for the internet of things,” said Hasan. “The dream of smart cities will come true.”

  • Quantum researchers create an error-correcting cat

    Yale physicists have developed an error-correcting cat — a new device that combines the Schrödinger’s cat concept of superposition (a physical system existing in two states at once) with the ability to fix some of the trickiest errors in a quantum computation.
    It is Yale’s latest breakthrough in the effort to master and manipulate the physics necessary for a useful quantum computer: correcting the stream of errors that crop up among fragile bits of quantum information, called qubits, while performing a task.
    A new study reporting on the discovery appears in the journal Nature. The senior author is Michel Devoret, Yale’s F.W. Beinecke Professor of Applied Physics and Physics. The study’s co-first authors are Alexander Grimm, a former postdoctoral associate in Devoret’s lab who is now a tenure-track scientist at the Paul Scherrer Institute in Switzerland, and Nicholas Frattini, a graduate student in Devoret’s lab.
    Quantum computers have the potential to transform an array of industries, from pharmaceuticals to financial services, by enabling calculations that are orders of magnitude faster than today’s supercomputers.
    Yale — led by Devoret, Robert Schoelkopf, and Steven Girvin — continues to build upon two decades of groundbreaking quantum research. Yale’s approach to building a quantum computer is called “circuit QED” and employs particles of microwave light (photons) in a superconducting microwave resonator.
    In a traditional computer, information is encoded as either 0 or 1. The only errors that crop up during calculations are “bit-flips,” when a bit of information accidentally flips from 0 to 1 or vice versa. The way to correct them is by building in redundancy: using three “physical” bits of information to ensure one “effective” — or accurate — bit.
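    As a concrete illustration of that classical redundancy (a textbook example, not code from the Yale study), the sketch below encodes one effective bit as three physical bits and recovers it by majority vote, which corrects any single accidental bit-flip.
    ```python
    import random

    def encode(bit):
        # One "effective" bit becomes three identical "physical" bits.
        return [bit, bit, bit]

    def noisy_channel(bits, flip_prob=0.1):
        # Each physical bit flips independently with probability flip_prob.
        return [b ^ 1 if random.random() < flip_prob else b for b in bits]

    def decode(bits):
        # Majority vote: correct as long as at most one of the three bits flipped.
        return int(sum(bits) >= 2)

    sent = 1
    received = noisy_channel(encode(sent))
    print(sent, received, decode(received))
    ```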

    In contrast, quantum information bits — qubits — are subject to both bit-flips and “phase-flips,” in which a qubit randomly flips between quantum superpositions (when two opposite states exist simultaneously).
    Until now, quantum researchers have tried to fix errors by adding greater redundancy, requiring an abundance of physical qubits for each effective qubit.
    Enter the cat qubit — named for Schrödinger’s cat, the famous paradox used to illustrate the concept of superposition.
    The idea is that a cat is placed in a sealed box with a radioactive source and a poison that will be triggered if an atom of the radioactive substance decays. The superposition theory of quantum physics suggests that until someone opens the box, the cat is both alive and dead, a superposition of states. Opening the box to observe the cat causes it to abruptly change its quantum state randomly, forcing it to be either alive or dead.
    “Our work flows from a new idea. Why not use a clever way to encode information in a single physical system so that one type of error is directly suppressed?” Devoret asked.

    Unlike the multiple physical qubits needed to maintain one effective qubit, a single cat qubit can prevent phase flips all by itself. The cat qubit encodes an effective qubit into superpositions of two states within a single electronic circuit — in this case a superconducting microwave resonator whose oscillations correspond to the two states of the cat qubit.
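    As general background (the notation is standard for cat qubits, not quoted from the Nature paper), the two basis states of such a qubit are usually written as superpositions of two coherent states of the resonator, |α⟩ and |−α⟩, up to normalization:
    ```latex
    % Even/odd "cat" basis states built from the resonator's coherent states
    % (textbook definition; normalization factors omitted)
    |\mathcal{C}^{\pm}_{\alpha}\rangle \;\propto\; |\alpha\rangle \pm |{-\alpha}\rangle
    ```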
    “We achieve all of this by applying microwave frequency signals to a device that is not significantly more complicated than a traditional superconducting qubit,” Grimm said.
    The researchers said they are able to change their cat qubit from any one of its superposition states to any other superposition state, on command. In addition, the researchers developed a new way of reading out — or identifying — the information encoded into the qubit.
    “This makes the system we have developed a versatile new element that will hopefully find its use in many aspects of quantum computation with superconducting circuits,” Devoret said.
    Co-authors of the study are Girvin, Shruti Puri, Shantanu Mundhada, and Steven Touzard, all of Yale; Mazyar Mirrahimi of Inria Paris; and Shyam Shankar of the University of Texas-Austin.
    The United States Department of Defense, the United States Army Research Office, and the National Science Foundation funded the research.

    Story Source:
    Materials provided by Yale University. Original written by Jim Shelton. Note: Content may be edited for style and length.

  • Engaging undergrads remotely with an escape room game

    To prevent the spread of COVID-19, many universities canceled classes or held them online this spring — a change likely to continue for many this fall. As a result, hands-on chemistry labs are no longer accessible to undergraduate students. In a new study in the Journal of Chemical Education, researchers describe an alternative way to engage students: a virtual game, modeled on an escape room, in which teams solve chemistry problems to progress and “escape.”
    While some lab-related activities, such as calculations and data analysis, can be done remotely, these can feel like extra work. Faced with the cancellation of their own in-person laboratory classes during the COVID-19 pandemic, Matthew J. Vergne and colleagues thought outside the box. They sought to develop an online game for their students that would mimic the cooperative learning that normally accompanies a lab experience.
    To do so, they designed a virtual escape game with an abandoned chocolate factory theme. Using a survey-creation app, they set up a series of “rooms,” each containing a problem that required students to, for example, calculate the weight of theobromine, a component of chocolate. They tested the escape room game on a class of eight third- and fourth-year undergraduate chemistry and biochemistry students. The researchers randomly paired the students, who worked together over a video conferencing app. In a video call afterward, the students reported collaborating effectively and gave the game good reviews, say the researchers, who also note that it was not possible to ensure students didn’t use outside resources to solve the problems.
    Future versions of the game could potentially incorporate online simulations or remote access to computer-controlled lab instrumentation on campus, they say.
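    The paper’s survey-app implementation is not described here, so the sketch below only illustrates the format: a sequence of “rooms,” each gated by a chemistry answer. The room texts, the tolerance and the second puzzle are invented; the theobromine value is simply the molar mass of C7H8N4O2, about 180.16 g/mol.
    ```python
    # Minimal escape-room-style quiz (illustrative stand-in for the survey app).
    ROOMS = [
        {"prompt": "Molar mass of theobromine, C7H8N4O2, in g/mol?", "answer": 180.16},
        {"prompt": "Mass in grams of 0.50 mol of theobromine?", "answer": 90.08},
    ]

    def play(rooms, tolerance=0.5):
        for number, room in enumerate(rooms, start=1):
            while True:
                guess = float(input(f"Room {number}: {room['prompt']} "))
                if abs(guess - room["answer"]) <= tolerance:
                    print("Correct -- the next door unlocks.")
                    break
                print("Not quite, try again.")
        print("You escaped the abandoned chocolate factory!")

    if __name__ == "__main__":
        play(ROOMS)
    ```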

    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Soldiers could teach future robots how to outperform humans

    In the future, a Soldier and a game controller may be all that’s needed to teach robots how to outdrive humans.
    At the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive. The team tested its approach — called adaptive planner parameter learning from demonstration, or APPLD — on one of the Army’s experimental autonomous ground vehicles.
    “Using approaches like APPLD, current Soldiers in existing training facilities will be able to contribute to improvements in autonomous systems simply by operating their vehicles as normal,” said Army researcher Dr. Garrett Warnell. “Techniques like these will be an important contribution to the Army’s plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments.”
    The researchers fused machine learning from demonstration algorithms and more classical autonomous navigation systems. Rather than replacing a classical system altogether, APPLD learns how to tune the existing system to behave more like the human demonstration. This paradigm allows for the deployed system to retain all the benefits of classical navigation systems — such as optimality, explainability and safety — while also allowing the system to be flexible and adaptable to new environments, Warnell said.
    “A single demonstration of human driving, provided using an everyday Xbox wireless controller, allowed APPLD to learn how to tune the vehicle’s existing autonomous navigation system differently depending on the particular local environment,” Warnell said. “For example, when in a tight corridor, the human driver slowed down and drove carefully. After observing this behavior, the autonomous system learned to also reduce its maximum speed and increase its computation budget in similar environments. This ultimately allowed the vehicle to successfully navigate autonomously in other tight corridors where it had previously failed.”
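    The release does not give APPLD’s internals, so the sketch below only conveys the general idea it describes: keep the existing planner and search for the parameter values that make its output match a human demonstration, separately for each kind of local environment. The toy “planner,” the demonstration numbers and the grid search are stand-ins, not the actual algorithm.
    ```python
    import numpy as np

    # Hypothetical demonstration data: speeds (m/s) observed while a human drove
    # the vehicle in two different kinds of local environment.
    demo = {"open_area": [1.8, 2.0, 1.9, 2.1], "tight_corridor": [0.5, 0.4, 0.6, 0.5]}

    def planner_output(max_speed):
        """Stand-in for the existing navigation system: its one tunable
        parameter here is simply the speed it commands."""
        return max_speed

    def tune(demonstrated_speeds, candidates=np.linspace(0.1, 3.0, 30)):
        # Choose the parameter whose planner output best matches the demonstration.
        target = np.mean(demonstrated_speeds)
        errors = [abs(planner_output(c) - target) for c in candidates]
        return float(candidates[int(np.argmin(errors))])

    # One parameter setting per environment type, mirroring the context-dependent
    # tuning described above (e.g. a lower speed limit in the tight corridor).
    print({context: round(tune(speeds), 2) for context, speeds in demo.items()})
    ```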
    This research is part of the Army’s Open Campus initiative, through which Army scientists in Texas collaborate with academic partners at UT Austin.

    “APPLD is yet another example of a growing stream of research results that has been facilitated by the unique collaboration arrangement between UT Austin and the Army Research Lab,” said Dr. Peter Stone, professor and chair of the Robotics Consortium at UT Austin. “By having Dr. Warnell embedded at UT Austin full-time, we are able to quickly identify and tackle research problems that are both cutting-edge scientific advances and also immediately relevant to the Army.”
    The team’s experiments showed that, after training, the APPLD system was able to navigate the test environments more quickly and with fewer failures than with the classical system. Additionally, the trained APPLD system often navigated the environment faster than the human who trained it. The work, “APPLD: Adaptive Planner Parameter Learning From Demonstration,” was published in the peer-reviewed journal IEEE Robotics and Automation Letters.
    “From a machine learning perspective, APPLD contrasts with so-called end-to-end learning systems that attempt to learn the entire navigation system from scratch,” Stone said. “These approaches tend to require a lot of data and may lead to behaviors that are neither safe nor robust. APPLD leverages the parts of the control system that have been carefully engineered, while focusing its machine learning effort on the parameter tuning process, which is often done based on a single person’s intuition.”
    APPLD represents a new paradigm in which people without expert-level knowledge in robotics can help train and improve autonomous vehicle navigation in a variety of environments. Rather than small teams of engineers trying to manually tune navigation systems in a small number of test environments, a virtually unlimited number of users would be able to provide the system the data it needs to tune itself to an unlimited number of environments.
    “Current autonomous navigation systems typically must be re-tuned by hand for each new deployment environment,” said Army researcher Dr. Jonathan Fink. “This process is extremely difficult — it must be done by someone with extensive training in robotics, and it requires a lot of trial and error until the right systems settings can be found. In contrast, APPLD tunes the system automatically by watching a human drive the system — something that anyone can do if they have experience with a video game controller. During deployment, APPLD also allows the system to re-tune itself in real-time as the environment changes.”
    The Army’s focus on modernizing the Next Generation Combat Vehicle includes designing both optionally manned fighting vehicles and robotic combat vehicles that can navigate autonomously in off-road deployment environments. While Soldiers can navigate these environments driving current combat vehicles, the environments remain too challenging for state-of-the-art autonomous navigation systems. APPLD and similar approaches provide a new potential way for the Army to improve existing autonomous navigation capabilities.
    “In addition to the immediate relevance to the Army, APPLD also creates the opportunity to bridge the gap between traditional engineering approaches and emerging machine learning techniques, to create robust, adaptive, and versatile mobile robots in the real world,” said Dr. Xuesu Xiao, a postdoctoral researcher at UT Austin and lead author of the paper.
    To continue this research, the team will test the APPLD system in a variety of outdoor environments, employ Soldier drivers, and experiment with a wider variety of existing autonomous navigation approaches. Additionally, the researchers will investigate whether including additional sensor information, such as camera images, can lead to learning more complex behaviors, such as tuning the navigation system to operate under varying conditions, for example on different terrain or with other objects present.

  • Quantum materials quest could benefit from graphene that buckles

    Graphene, an extremely thin two-dimensional layer of the graphite used in pencils, buckles when cooled while attached to a flat surface, resulting in beautiful pucker patterns that could benefit the search for novel quantum materials and superconductors, according to Rutgers-led research in the journal Nature.
    Quantum materials host strongly interacting electrons with special properties, such as entangled trajectories, that could provide building blocks for super-fast quantum computers. They also can become superconductors that could slash energy consumption by making power transmission and electronic devices more efficient.
    “The buckling we discovered in graphene mimics the effect of colossally large magnetic fields that are unattainable with today’s magnet technologies, leading to dramatic changes in the material’s electronic properties,” said lead author Eva Y. Andrei, Board of Governors professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. “Buckling of stiff thin films like graphene laminated on flexible materials is gaining ground as a platform for stretchable electronics with many important applications, including eye-like digital cameras, energy harvesting, skin sensors, health monitoring devices like tiny robots and intelligent surgical gloves. Our discovery opens the way to the development of devices for controlling nano-robots that may one day play a role in biological diagnostics and tissue repair.”
    The scientists studied buckled graphene crystals whose properties change radically when they’re cooled, creating essentially new materials with electrons that slow down, become aware of each other and interact strongly, enabling the emergence of fascinating phenomena such as superconductivity and magnetism, according to Andrei.
    Using high-tech imaging and computer simulations, the scientists showed that graphene placed on a flat surface made of niobium diselenide buckles when cooled to 4 degrees above absolute zero. To the electrons in graphene, the mountain and valley landscape created by the buckling appears as gigantic magnetic fields. These pseudo-magnetic fields are an electronic illusion, but they act as real magnetic fields, according to Andrei.
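    For background (a textbook relation, not a formula quoted from the Nature paper), non-uniform strain enters graphene’s low-energy electronic description as a pseudo-vector potential built from the strain tensor u_ij, and its curl acts on the electrons like a magnetic field; up to convention-dependent prefactors:
    ```latex
    % Strain-induced pseudo-gauge field in graphene (prefactors omitted)
    \mathbf{A}_{\mathrm{ps}} \propto \big( u_{xx} - u_{yy},\; -2\,u_{xy} \big),
    \qquad
    B_{\mathrm{ps}} = \big( \nabla \times \mathbf{A}_{\mathrm{ps}} \big)_z
    ```
    The sharper the buckling-induced height variations, the larger the strain gradients, which is why the mountain-and-valley landscape can mimic enormous magnetic fields.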
    “Our research demonstrates that buckling in 2D materials can dramatically alter their electronic properties,” she said.
    The next steps include developing ways to engineer buckled 2D materials with novel electronic and mechanical properties that could be beneficial in nano-robotics and quantum computing, according to Andrei.
    The first author is Jinhai Mao, formerly a research associate in the Department of Physics and Astronomy and now a researcher at the University of Chinese Academy of Sciences. Rutgers co-authors include doctoral student Xinyuan Lai and a former post-doctoral associate, Yuhang Jiang, who is now a researcher at the University of Chinese Academy of Sciences. Slaviša Milovanović, who led the theory effort, is a graduate student working with professors Lucian Covaci and Francois Peeters at the Universiteit Antwerpen. Scientists at the University of Manchester and the Institute of Material Science in Tsukuba, Japan, contributed to the study.

    Story Source:
    Materials provided by Rutgers University. Note: Content may be edited for style and length.

  • Scientists identify hundreds of drug candidates to treat COVID-19

    Scientists at the University of California, Riverside, have used machine learning to identify hundreds of new potential drugs that could help treat COVID-19, the disease caused by the novel coronavirus, or SARS-CoV-2.
    “There is an urgent need to identify effective drugs that treat or prevent COVID-19,” said Anandasankar Ray, a professor of molecular, cell, and systems biology who led the research. “We have developed a drug discovery pipeline that identified several candidates.”
    The drug discovery pipeline is a type of computational strategy linked to artificial intelligence — a computer algorithm that learns to predict activity through trial and error, improving over time.
    With no clear end in sight, the COVID-19 pandemic has disrupted lives, strained health care systems, and weakened economies. Efforts to repurpose drugs, such as Remdesivir, have achieved some success. A vaccine for the SARS-CoV-2 virus could be months away, though it is not guaranteed.
    “As a result, drug candidate pipelines, such as the one we developed, are extremely important to pursue as a first step toward systematic discovery of new drugs for treating COVID-19,” Ray said. “Existing FDA-approved drugs that target one or more human proteins important for viral entry and replication are currently high priority for repurposing as new COVID-19 drugs. The demand is high for additional drugs or small molecules that can interfere with both entry and replication of SARS-CoV-2 in the body. Our drug discovery pipeline can help.”
    Joel Kowalewski, a graduate student in Ray’s lab, used small numbers of previously known ligands for 65 human proteins that are known to interact with SARS-CoV-2 proteins. He generated machine learning models for each of the human proteins.

    “These models are trained to identify new small molecule inhibitors and activators — the ligands — simply from their 3-D structures,” Kowalewski said.
    Kowalewski and Ray were thus able to create a database of chemicals whose structures were predicted as interactors of the 65 protein targets. They also evaluated the chemicals for safety.
    “The 65 protein targets are quite diverse and are implicated in many additional diseases as well, including cancers,” Kowalewski said. “Apart from drug-repurposing efforts ongoing against these targets, we were also interested in identifying novel chemicals that are currently not well studied.”
    Ray and Kowalewski used their machine learning models to screen more than 10 million commercially available small molecules from a database of 200 million chemicals, and identified the best-in-class hits for the 65 human proteins that interact with SARS-CoV-2 proteins.
    Taking it a step further, they identified compounds among the hits that are already FDA approved, such as drugs and compounds used in food. They also used the machine learning models to compute toxicity, which helped them reject potentially toxic candidates. This helped them prioritize the chemicals that were predicted to interact with SARS-CoV-2 targets. Their method allowed them to not only identify the highest scoring candidates with significant activity against a single human protein target, but also find a few chemicals that were predicted to inhibit two or more human protein targets.
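    The release does not specify which molecular descriptors or learning algorithms the pipeline uses, so the snippet below is only a generic sketch of ligand-based virtual screening for one of the 65 targets: known ligands train a classifier on chemical fingerprints, which then ranks a purchasable library. The SMILES strings, labels and library are placeholders, and the toxicity filtering step is omitted.
    ```python
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def fingerprint(smiles, n_bits=2048):
        # Morgan (circular) fingerprint as a plain numpy vector.
        mol = Chem.MolFromSmiles(smiles)
        bits = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        arr = np.zeros((n_bits,), dtype=np.int8)
        DataStructs.ConvertToNumpyArray(bits, arr)
        return arr

    # Placeholder training set for ONE human protein target: a few known ligands
    # (label 1) and presumed non-binders (label 0).
    training = [("c1ccccc1O", 1), ("CC(=O)Oc1ccccc1C(=O)O", 1), ("CCO", 0), ("CCCCCC", 0)]
    X = np.array([fingerprint(s) for s, _ in training])
    y = np.array([label for _, label in training])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Score a tiny, made-up "commercial library" and rank by predicted activity.
    library = ["CCN(CC)CC", "c1ccc2ccccc2c1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
    scores = model.predict_proba(np.array([fingerprint(s) for s in library]))[:, 1]
    for smiles, score in sorted(zip(library, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {smiles}")
    ```
    In the actual study, one such model per protein target was used to screen more than 10 million commercially available molecules, with additional filters for FDA-approved status and predicted toxicity.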

    “Compounds I am most excited to pursue are those predicted to be volatile, setting up the unusual possibility of inhaled therapeutics,” Ray said.
    “Historically, disease treatments become increasingly more complex as we develop a better understanding of the disease and how individual genetic variability contributes to the progression and severity of symptoms,” Kowalewski said. “Machine learning approaches like ours can play a role in anticipating the evolving treatment landscape by providing researchers with additional possibilities for further study. While the approach crucially depends on experimental data, virtual screening may help researchers ask new questions or find new insight.”
    Ray and Kowalewski argue that their computational strategy for the initial screening of vast numbers of chemicals has an advantage over traditional cell-culture-dependent assays, which are expensive and can take years to complete.
    “Our database can serve as a resource for rapidly identifying and testing novel, safe treatment strategies for COVID-19 and other diseases where the same 65 target proteins are relevant,” he said. “While the COVID-19 pandemic was what motivated us, we expect our predictions from more than 10 million chemicals will accelerate drug discovery in the fight against not only COVID-19 but also a number of other diseases.”
    Ray is looking for funding and collaborators to move toward testing in cell lines, animal models, and eventually clinical trials.
    The research paper, “Predicting Novel Drugs for SARS-CoV-2 using Machine Learning from a >10 Million Chemical Space,” appears in the journal Heliyon, an interdisciplinary journal from Cell Press.
    The technology has been disclosed to the UCR Office of Technology Partnerships, assigned UC case number 2020-249, and is patent pending under the title “Therapeutic compounds and methods thereof.”

  • Security gap allows eavesdropping on mobile phone calls

    Calls via the LTE mobile network, also known as 4G, are encrypted and should therefore be tap-proof. However, researchers from the Horst Görtz Institute for IT Security (HGI) at Ruhr-Universität Bochum have shown that this is not always the case. They were able to decrypt the contents of telephone calls if they were in the same radio cell as their target, whose mobile phone they then called immediately following the call they wanted to intercept. They exploited a flaw that some manufacturers had made when implementing the base stations.
    The results were published by the HGI team David Rupprecht, Dr. Katharina Kohls, and Professor Thorsten Holz from the Chair of Systems Security together with Professor Christina Pöpper from the New York University Abu Dhabi at the 29th Usenix Security Symposium, which takes place as an online conference from 12 to 14 August 2020. The relevant providers and manufacturers were contacted prior to the publication; by now the vulnerability should be fixed.
    Reusing keys results in security gap
    The vulnerability affects Voice over LTE, the telephone standard used for almost all mobile phone calls if they are not made via special messenger services. When two people call each other, a key is generated to encrypt the conversation. “The problem was that the same key was also reused for other calls,” says David Rupprecht. Accordingly, if an attacker called one of the two people shortly after their conversation and recorded the encrypted traffic from the same cell, he or she would get the same key that secured the previous conversation.
    “The attacker has to engage the victim in a conversation,” explains David Rupprecht. “The longer the attacker talked to the victim, the more content of the previous conversation he or she was able to decrypt.” For example, if attacker and victim spoke for five minutes, the attacker could later decode five minutes of the previous conversation.
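    The root problem, the same key and hence the same keystream being reused across calls, can be illustrated without any LTE details. In the sketch below, one keystream encrypts two calls; an attacker who records both and knows the content of the second call (their own conversation with the victim) recovers the first by XOR. The keystream and messages are invented for illustration and this is not the actual Voice over LTE procedure.
    ```python
    import os

    def xor(data, keystream):
        # Stream-cipher style operation: ciphertext = plaintext XOR keystream.
        return bytes(d ^ k for d, k in zip(data, keystream))

    keystream = os.urandom(64)  # keystream wrongly reused by the flawed base station
    target_call = b"secret conversation with the victim".ljust(64)
    attacker_call = b"attacker chatting with the victim..".ljust(64)  # known plaintext

    ct_target = xor(target_call, keystream)      # recorded, encrypted target call
    ct_attacker = xor(attacker_call, keystream)  # recorded follow-up call, same keystream

    # XORing the two ciphertexts cancels the keystream; XORing in the known
    # plaintext of the follow-up call then reveals the target call.
    recovered = xor(xor(ct_target, ct_attacker), attacker_call)
    print(recovered.rstrip().decode())
    ```
    This is also why the length of the follow-up call matters: each known byte of the attacker’s own conversation unmasks the corresponding byte of the earlier one.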
    Identifying relevant base stations via app
    In order to determine how widespread the security gap was, the IT experts tested a number of randomly selected radio cells across Germany. The security gap affected 80 per cent of the analysed radio cells. By now, the manufacturers and mobile phone providers have updated the software of the base stations to fix the problem. David Rupprecht gives the all-clear: “We then tested several random radio cells all over Germany and haven’t detected any problems since then,” he says. Still, it can’t be ruled out that there are radio cells somewhere in the world where the vulnerability occurs.
    In order to track them down, the Bochum-based group has developed an app for Android devices. Tech-savvy volunteers can use it to help search worldwide for radio cells that still contain the security gap and report them to the HGI team. The researchers forward the information to the worldwide association of all mobile network operators, GSMA, which ensures that the base stations are updated.
    “Voice over LTE has been in use for six years,” says David Rupprecht. “We’re unable to verify whether attackers have exploited the security gap in the past.” He is campaigning for the new mobile phone standard to be modified so that the same problem can’t occur again when 5G base stations are set up.

    Story Source:
    Materials provided by Ruhr-University Bochum. Original written by Julia Weiler. Note: Content may be edited for style and length.