More stories

  • Engaging undergrads remotely with an escape room game

    To prevent the spread of COVID-19, many universities canceled classes or held them online this spring — a change likely to continue for many this fall. As a result, hands-on chemistry labs are no longer accessible to undergraduate students. In a new study in the Journal of Chemical Education, researchers describe an alternative way to engage students: a virtual game, modeled on an escape room, in which teams solve chemistry problems to progress and “escape.”
    While some lab-related activities, such as calculations and data analysis, can be done remotely, these can feel like extra work. Faced with the cancellation of their own in-person laboratory classes during the COVID-19 pandemic, Matthew J. Vergne and colleagues thought outside the box: they set out to develop an online game for their students that would mimic the cooperative learning that normally accompanies a lab experience.
    To do so, they designed a virtual escape game with an abandoned chocolate factory theme. Using a survey-creation app, they set up a series of “rooms,” each containing a problem that required students to, for example, calculate the weight of theobromine, a component of chocolate. They tested the escape room game on a class of eight third- and fourth-year undergraduate chemistry and biochemistry students. The researchers randomly paired the students, who worked together over a video conferencing app. In a video call afterward, the students reported collaborating effectively and gave the game good reviews, say the researchers, who also note that it was not possible to ensure students didn’t use outside resources to solve the problems.
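    The game itself isn't reproduced in the article; as a rough illustration of the room-by-room mechanic it describes (each "room" gates progress on a chemistry answer), here is a minimal Python sketch, with a theobromine molar-mass question standing in for the study's actual problems. All names and the answer-checking logic are hypothetical.

```python
# Minimal sketch of an escape-room-style chemistry quiz (hypothetical;
# the study used a survey-creation app rather than custom code).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an element -> count mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

# Theobromine, C7H8N4O2, is the chocolate component mentioned in the article.
THEOBROMINE = {"C": 7, "H": 8, "N": 4, "O": 2}

ROOMS = [
    {
        "prompt": "Room 1: What is the molar mass of theobromine (C7H8N4O2) in g/mol?",
        "answer": molar_mass(THEOBROMINE),   # about 180.17 g/mol
        "tolerance": 0.5,                    # accept small rounding differences
    },
]

def play() -> None:
    for room in ROOMS:
        while True:
            reply = input(room["prompt"] + " ")
            try:
                if abs(float(reply) - room["answer"]) <= room["tolerance"]:
                    print("Correct -- the door unlocks!")
                    break
            except ValueError:
                pass
            print("Not quite; try again as a team.")
    print("You escaped the chocolate factory!")

if __name__ == "__main__":
    play()
```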
    Future versions of the game could potentially incorporate online simulations or remote access to computer-controlled lab instrumentation on campus, they say.

    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Soldiers could teach future robots how to outperform humans

    In the future, a Soldier and a game controller may be all that’s needed to teach robots how to outdrive humans.
    At the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive. The team tested its approach — called adaptive planner parameter learning from demonstration, or APPLD — on one of the Army’s experimental autonomous ground vehicles.
    “Using approaches like APPLD, current Soldiers in existing training facilities will be able to contribute to improvements in autonomous systems simply by operating their vehicles as normal,” said Army researcher Dr. Garrett Warnell. “Techniques like these will be an important contribution to the Army’s plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments.”
    The researchers fused machine learning from demonstration algorithms and more classical autonomous navigation systems. Rather than replacing a classical system altogether, APPLD learns how to tune the existing system to behave more like the human demonstration. This paradigm allows for the deployed system to retain all the benefits of classical navigation systems — such as optimality, explainability and safety — while also allowing the system to be flexible and adaptable to new environments, Warnell said.
    “A single demonstration of human driving, provided using an everyday Xbox wireless controller, allowed APPLD to learn how to tune the vehicle’s existing autonomous navigation system differently depending on the particular local environment,” Warnell said. “For example, when in a tight corridor, the human driver slowed down and drove carefully. After observing this behavior, the autonomous system learned to also reduce its maximum speed and increase its computation budget in similar environments. This ultimately allowed the vehicle to successfully navigate autonomously in other tight corridors where it had previously failed.”
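    The article describes APPLD's behavior but not its internals. A hedged sketch of the idea as stated here (segment the demonstration by context, then fit the existing planner's parameters so its behavior matches the human's in each context) might look like the following, with random search standing in for the paper's actual optimizer and all function and parameter names hypothetical:

```python
import random

# Hypothetical sketch of demonstration-based planner tuning in the spirit
# of APPLD: for each context segmented from a human demo (e.g., "open
# field" vs. "tight corridor"), search for parameters that make the
# existing classical planner's behavior best match the human's.

def planner_rollout(params: dict, context: str) -> list:
    """Stand-in for running the classical navigation stack with `params`.

    A real system would simulate or replay the planner here; this toy
    just returns a speed profile. `plan_budget` is shown as a tunable
    parameter for illustration, as in the tight-corridor example above.
    """
    return [params["max_speed"] * 0.9] * 10

def behavior_loss(rollout: list, demo: list) -> float:
    """How far the planner's behavior is from the human demonstration."""
    return sum((a - b) ** 2 for a, b in zip(rollout, demo))

def tune_for_context(demo: list, context: str, iters: int = 200) -> dict:
    best_params, best_loss = None, float("inf")
    for _ in range(iters):
        candidate = {
            "max_speed": random.uniform(0.2, 2.0),     # m/s
            "plan_budget": random.uniform(0.05, 0.5),  # s of compute per cycle
        }
        loss = behavior_loss(planner_rollout(candidate, context), demo)
        if loss < best_loss:
            best_params, best_loss = candidate, loss
    return best_params

# One parameter set per context; at deployment the system switches sets
# as it recognizes the local environment.
demo_segments = {"corridor": [0.4] * 10, "open_field": [1.6] * 10}
tuned = {ctx: tune_for_context(d, ctx) for ctx, d in demo_segments.items()}
print(tuned)
```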
    This research is part of the Army’s Open Campus initiative, through which Army scientists in Texas collaborate with academic partners at UT Austin.

    “APPLD is yet another example of a growing stream of research results that has been facilitated by the unique collaboration arrangement between UT Austin and the Army Research Lab,” said Dr. Peter Stone, professor and chair of the Robotics Consortium at UT Austin. “By having Dr. Warnell embedded at UT Austin full-time, we are able to quickly identify and tackle research problems that are both cutting-edge scientific advances and also immediately relevant to the Army.”
    The team’s experiments showed that, after training, the APPLD system was able to navigate the test environments more quickly and with fewer failures than the classical system. Additionally, the trained APPLD system often navigated the environment faster than the human who trained it. The team’s work, “APPLD: Adaptive Planner Parameter Learning From Demonstration,” was published in the peer-reviewed journal IEEE Robotics and Automation Letters.
    “From a machine learning perspective, APPLD contrasts with so-called end-to-end learning systems that attempt to learn the entire navigation system from scratch,” Stone said. “These approaches tend to require a lot of data and may lead to behaviors that are neither safe nor robust. APPLD leverages the parts of the control system that have been carefully engineered, while focusing its machine learning effort on the parameter tuning process, which is often done based on a single person’s intuition.”
    APPLD represents a new paradigm in which people without expert-level knowledge in robotics can help train and improve autonomous vehicle navigation in a variety of environments. Rather than small teams of engineers trying to manually tune navigation systems in a small number of test environments, a virtually unlimited number of users would be able to provide the system the data it needs to tune itself to an unlimited number of environments.
    “Current autonomous navigation systems typically must be re-tuned by hand for each new deployment environment,” said Army researcher Dr. Jonathan Fink. “This process is extremely difficult — it must be done by someone with extensive training in robotics, and it requires a lot of trial and error until the right systems settings can be found. In contrast, APPLD tunes the system automatically by watching a human drive the system — something that anyone can do if they have experience with a video game controller. During deployment, APPLD also allows the system to re-tune itself in real-time as the environment changes.”
    The Army’s focus on modernizing the Next Generation Combat Vehicle includes designing both optionally manned fighting vehicles and robotic combat vehicles that can navigate autonomously in off-road deployment environments. While Soldiers can navigate these environments driving current combat vehicles, the environments remain too challenging for state-of-the-art autonomous navigation systems. APPLD and similar approaches provide a new potential way for the Army to improve existing autonomous navigation capabilities.
    “In addition to the immediate relevance to the Army, APPLD also creates the opportunity to bridge the gap between traditional engineering approaches and emerging machine learning techniques, to create robust, adaptive, and versatile mobile robots in the real-world,” said Dr. Xuesu Xiao, a postdoctoral researcher at UT Austin and lead author of the paper.
    To continue this research, the team will test the APPLD system in a variety of outdoor environments, employ Soldier drivers, and experiment with a wider variety of existing autonomous navigation approaches. Additionally, the researchers will investigate whether including additional sensor information, such as camera images, can lead to learning more complex behaviors, such as tuning the navigation system to operate under varying conditions like different terrain or the presence of other objects.

  • Quantum materials quest could benefit from graphene that buckles

    Graphene, an extremely thin two-dimensional layer of the graphite used in pencils, buckles when cooled while attached to a flat surface, resulting in beautiful pucker patterns that could benefit the search for novel quantum materials and superconductors, according to Rutgers-led research in the journal Nature.
    Quantum materials host strongly interacting electrons with special properties, such as entangled trajectories, that could provide building blocks for super-fast quantum computers. They also can become superconductors that could slash energy consumption by making power transmission and electronic devices more efficient.
    “The buckling we discovered in graphene mimics the effect of colossally large magnetic fields that are unattainable with today’s magnet technologies, leading to dramatic changes in the material’s electronic properties,” said lead author Eva Y. Andrei, Board of Governors professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. “Buckling of stiff thin films like graphene laminated on flexible materials is gaining ground as a platform for stretchable electronics with many important applications, including eye-like digital cameras, energy harvesting, skin sensors, health monitoring devices like tiny robots and intelligent surgical gloves. Our discovery opens the way to the development of devices for controlling nano-robots that may one day play a role in biological diagnostics and tissue repair.”
    The scientists studied buckled graphene crystals whose properties change radically when they’re cooled, creating essentially new materials with electrons that slow down, become aware of each other and interact strongly, enabling the emergence of fascinating phenomena such as superconductivity and magnetism, according to Andrei.
    Using high-tech imaging and computer simulations, the scientists showed that graphene placed on a flat surface made of niobium diselenide buckles when cooled to 4 degrees above absolute zero. To the electrons in graphene, the mountain-and-valley landscape created by the buckling appears as gigantic magnetic fields. These pseudo-magnetic fields are an electronic illusion, but they act as real magnetic fields, according to Andrei.
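    The article doesn't give the underlying relation, but the "gigantic magnetic fields" claim matches a standard result from the graphene strain-engineering literature (this formula is not quoted from the paper): a slowly varying strain field acts on graphene's electrons like a vector potential, whose curl is the pseudo-magnetic field. Up to convention-dependent prefactors,

$$
\mathbf{A} \propto \frac{\beta}{a}\begin{pmatrix} u_{xx} - u_{yy} \\ -2u_{xy} \end{pmatrix},
\qquad
B_{\mathrm{ps}} = \partial_x A_y - \partial_y A_x ,
$$

    where $u_{ij}$ is the strain tensor, $a$ is the lattice constant, and $\beta \approx 2\text{--}3$ is a dimensionless coupling. The steep slopes of the buckling pattern create large strain gradients, hence the enormous effective fields.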
    “Our research demonstrates that buckling in 2D materials can dramatically alter their electronic properties,” she said.
    The next steps include developing ways to engineer buckled 2D materials with novel electronic and mechanical properties that could be beneficial in nano-robotics and quantum computing, according to Andrei.
    The first author is Jinhai Mao, formerly a research associate in the Department of Physics and Astronomy and now a researcher at the University of Chinese Academy of Sciences. Rutgers co-authors include doctoral student Xinyuan Lai and former postdoctoral associate Yuhang Jiang, who is now a researcher at the University of Chinese Academy of Sciences. Slaviša Milovanović, who led the theory effort, is a graduate student working with professors Lucian Covaci and Francois Peeters at the Universiteit Antwerpen. Scientists at the University of Manchester and the Institute of Material Science in Tsukuba, Japan, also contributed to the study.

    Story Source:
    Materials provided by Rutgers University. Note: Content may be edited for style and length.

  • Scientists identify hundreds of drug candidates to treat COVID-19

    Scientists at the University of California, Riverside, have used machine learning to identify hundreds of new potential drugs that could help treat COVID-19, the disease caused by the novel coronavirus, or SARS-CoV-2.
    “There is an urgent need to identify effective drugs that treat or prevent COVID-19,” said Anandasankar Ray, a professor of molecular, cell, and systems biology who led the research. “We have developed a drug discovery pipeline that identified several candidates.”
    The drug discovery pipeline is a type of computational strategy linked to artificial intelligence — a computer algorithm that learns to predict activity through trial and error, improving over time.
    With no clear end in sight, the COVID-19 pandemic has disrupted lives, strained health care systems, and weakened economies. Efforts to repurpose drugs, such as remdesivir, have achieved some success. A vaccine for the SARS-CoV-2 virus could be months away, though it is not guaranteed.
    “As a result, drug candidate pipelines, such as the one we developed, are extremely important to pursue as a first step toward systematic discovery of new drugs for treating COVID-19,” Ray said. “Existing FDA-approved drugs that target one or more human proteins important for viral entry and replication are currently high priority for repurposing as new COVID-19 drugs. The demand is high for additional drugs or small molecules that can interfere with both entry and replication of SARS-CoV-2 in the body. Our drug discovery pipeline can help.”
    Joel Kowalewski, a graduate student in Ray’s lab, used the small numbers of previously known ligands for each of 65 human proteins known to interact with SARS-CoV-2 proteins, generating a machine learning model for each of the human proteins.

    “These models are trained to identify new small molecule inhibitors and activators — the ligands — simply from their 3-D structures,” Kowalewski said.
    Kowalewski and Ray were thus able to create a database of chemicals whose structures were predicted to interact with the 65 protein targets. They also evaluated the chemicals for safety.
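    The article doesn't include the models themselves. A minimal sketch of the general per-target pattern it describes (train one classifier per protein on its known ligands, then screen a purchasable library) could look like this, using RDKit Morgan fingerprints and a scikit-learn random forest as stand-ins for the paper's 3-D-structure-based models; all compounds and labels are made up.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
import numpy as np

def featurize(smiles: str) -> np.ndarray:
    """2048-bit Morgan fingerprint (a stand-in for the paper's 3-D features)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.array(fp)

# Hypothetical training data: known ligands (1) and inactives (0) for one
# of the 65 human proteins that interact with SARS-CoV-2 proteins.
known = {"CCO": 0, "c1ccccc1O": 1, "CC(=O)Oc1ccccc1C(=O)O": 1, "CCCC": 0}
X = np.stack([featurize(s) for s in known])
y = np.array(list(known.values()))

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Screen a tiny, hypothetical commercial library and rank by predicted
# probability of being a ligand for this target.
library = ["CCN(CC)CC", "c1ccc2ccccc2c1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
scores = model.predict_proba(np.stack([featurize(s) for s in library]))[:, 1]
for smiles, p in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{p:.2f}  {smiles}")
```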
    “The 65 protein targets are quite diverse and are implicated in many additional diseases as well, including cancers,” Kowalewski said. “Apart from drug-repurposing efforts ongoing against these targets, we were also interested in identifying novel chemicals that are currently not well studied.”
    Ray and Kowalewski used their machine learning models to screen more than 10 million commercially available small molecules from a database of 200 million chemicals, and identified the best-in-class hits for the 65 human proteins that interact with SARS-CoV-2 proteins.
    Taking it a step further, they identified compounds among the hits that are already FDA approved, such as drugs and compounds used in food. They also used the machine learning models to compute toxicity, which helped them reject potentially toxic candidates. This helped them prioritize the chemicals that were predicted to interact with SARS-CoV-2 targets. Their method allowed them to not only identify the highest scoring candidates with significant activity against a single human protein target, but also find a few chemicals that were predicted to inhibit two or more human protein targets.
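    Continuing the hypothetical sketch above, the prioritization described (drop predicted-toxic compounds, flag FDA-approved or food-use ones, and favor chemicals predicted to hit two or more of the 65 targets) reduces to a simple filter-and-rank over the per-target hit lists:

```python
from collections import Counter

# Hypothetical per-target hit lists produced by the per-protein models
# above; keys are protein targets, values are predicted-active compounds.
hits = {
    "target_protein_1": {"cmpd_A", "cmpd_B"},
    "target_protein_2": {"cmpd_B", "cmpd_C"},
    "target_protein_3": {"cmpd_B", "cmpd_D"},
}
predicted_toxic = {"cmpd_D"}                    # flagged by the toxicity models
fda_approved_or_food = {"cmpd_B", "cmpd_C"}

# Count how many targets each non-toxic compound is predicted to hit.
counts = Counter(c for compounds in hits.values()
                   for c in compounds if c not in predicted_toxic)

# Rank multi-target compounds first; flag approved/food-use compounds.
for compound, n_targets in counts.most_common():
    tag = "approved/food" if compound in fda_approved_or_food else "novel"
    print(f"{compound}: {n_targets} target(s), {tag}")
```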

    “Compounds I am most excited to pursue are those predicted to be volatile, setting up the unusual possibility of inhaled therapeutics,” Ray said.
    “Historically, disease treatments become increasingly more complex as we develop a better understanding of the disease and how individual genetic variability contributes to the progression and severity of symptoms,” Kowalewski said. “Machine learning approaches like ours can play a role in anticipating the evolving treatment landscape by providing researchers with additional possibilities for further study. While the approach crucially depends on experimental data, virtual screening may help researchers ask new questions or find new insight.”
    Ray and Kowalewski argue that their computational strategy for the initial screening of vast numbers of chemicals has an advantage over traditional cell-culture-dependent assays that are expensive and can take years to test.
    “Our database can serve as a resource for rapidly identifying and testing novel, safe treatment strategies for COVID-19 and other diseases where the same 65 target proteins are relevant,” he said. “While the COVID-19 pandemic was what motivated us, we expect our predictions from more than 10 million chemicals will accelerate drug discovery in the fight against not only COVID-19 but also a number of other diseases.”
    Ray is looking for funding and collaborators to move toward testing cell lines, animal models, and eventually clinical trials.
    The research paper, “Predicting Novel Drugs for SARS-CoV-2 using Machine Learning from a >10 Million Chemical Space,” appears in the journal Heliyon, an interdisciplinary journal from Cell Press.
    The technology has been disclosed to the UCR Office of Technology Partnerships, assigned UC case number 2020-249, and is patent pending under the title “Therapeutic compounds and methods thereof.”

  • Security gap allows eavesdropping on mobile phone calls

    Calls via the LTE mobile network, also known as 4G, are encrypted and should therefore be tap-proof. However, researchers from the Horst Görtz Institute for IT Security (HGI) at Ruhr-Universität Bochum have shown that this is not always the case. They were able to decrypt the contents of telephone calls if they were in the same radio cell as their target and called the target immediately after the conversation they wanted to intercept. They exploited a flaw that some manufacturers had introduced when implementing the base stations.
    The results were published by the HGI team David Rupprecht, Dr. Katharina Kohls, and Professor Thorsten Holz from the Chair of Systems Security together with Professor Christina Pöpper from the New York University Abu Dhabi at the 29th Usenix Security Symposium, which takes place as an online conference from 12 to 14 August 2020. The relevant providers and manufacturers were contacted prior to the publication; by now the vulnerability should be fixed.
    Reusing keys results in security gap
    The vulnerability affects Voice over LTE, the telephone standard used for almost all mobile phone calls if they are not made via special messenger services. When two people call each other, a key is generated to encrypt the conversation. “The problem was that the same key was also reused for other calls,” says David Rupprecht. Accordingly, if an attacker called one of the two people shortly after their conversation and recorded the encrypted traffic from the same cell, he or she would get the same key that secured the previous conversation.
    “The attacker has to engage the victim in a conversation,” explains David Rupprecht. “The longer the attacker talked to the victim, the more content of the previous conversation he or she was able to decrypt.” For example, if attacker and victim spoke for five minutes, the attacker could later decode five minutes of the previous conversation.
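    The article doesn't spell out the cryptographic mechanics, but the flaw it describes is classic keystream reuse in a stream cipher: if the same keystream encrypts two calls, XOR-ing the two ciphertexts cancels the key, and an attacker who knows the plaintext of their own call recovers the victim's. A toy illustration (plain XOR standing in for the actual LTE cipher; all data invented):

```python
# Toy demonstration of why reusing a keystream is fatal.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)                        # reused key material (the flaw)

victim_plain   = b"secret conversation contents"    # the call to be intercepted
attacker_plain = b"attacker's own follow-up call"   # known to the attacker

victim_cipher   = xor(victim_plain, keystream)    # recorded over the air
attacker_cipher = xor(attacker_plain, keystream)  # attacker records this too

# c1 XOR c2 = p1 XOR p2, so knowing p2 reveals p1 (up to the shorter call's
# length -- which is why a longer attacker call decrypts more of the victim's).
recovered = xor(xor(victim_cipher, attacker_cipher), attacker_plain)
print(recovered)   # b"secret conversation contents"
```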
    Identifying relevant base stations via app
    In order to determine how widespread the security gap was, the IT experts tested a number of randomly selected radio cells across Germany. The security gap affected 80 per cent of the analysed radio cells. By now, the manufacturers and mobile phone providers have updated the software of the base stations to fix the problem. David Rupprecht gives the all-clear: “We then tested several random radio cells all over Germany and haven’t detected any problems since then,” he says. Still, it can’t be ruled out that there are radio cells somewhere in the world where the vulnerability occurs.
    In order to track them down, the Bochum-based group has developed an app for Android devices. Tech-savvy volunteers can use it to help search worldwide for radio cells that still contain the security gap and report them to the HGI team. The researchers forward the information to the worldwide association of all mobile network operators, GSMA, which ensures that the base stations are updated.
    “Voice over LTE has been in use for six years,” says David Rupprecht. “We’re unable to verify whether attackers have exploited the security gap in the past.” He is campaigning for the new mobile phone standard to be modified so that the same problem can’t occur again when 5G base stations are set up.

    Story Source:
    Materials provided by Ruhr-University Bochum. Original written by Julia Weiler. Note: Content may be edited for style and length.

  • What violin synchronization can teach us about better networking in complex times

    Human networking involves every field and ranges from small groups of people to large, coordinated systems working together toward a goal, be it traffic management in an urban area, economic systems or epidemic control. A new study published in Nature Communications uses a model of synchronization in a network of violin players to suggest that there are ways to drown out distractions and miscommunications, ways that could serve as a model for human networks in society.
    Titled “The Synchronization of Complex Human Networks,” the study was conceived by Elad Shniderman, a graduate student in the Department of Music in the College of Arts and Sciences at Stony Brook University, and scientist Moti Fridman, PhD, at the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University. Shniderman co-authored the paper with Daniel Weymouth, PhD, Associate Professor of Composition and Theory in the Department of Music, and scientists at Bar-Ilan University and the Weizmann Institute of Science in Israel. The collaboration was initiated at the Fetter Museum of Nanoscience and Art.
    The research team devised an experiment involving 16 violinists with electric violins connected to a computer system. Each of the violinists had sound-canceling headphones, hearing only the sound received from the computer. All violinists played a simple repeating musical phrase and tried to synchronize with other violinists according to what they heard in their headphones.
    According to Shniderman, Weymouth and their fellow authors: “Research on network links, or coupling, has focused predominantly on all-to-all coupling, whereas current social networks and human interactions are often based on complex coupling configurations. This study of synchronization between violin players in complex networks, with full control over network connectivity, coupling strength and delay, revealed that players can tune their playing period and delete connections by ignoring frustrating signals in order to find a stable solution. These controlled and new degrees of freedom enable new strategies and yield better solutions that are potentially applicable to other human networking models.”
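    The study's experimental platform isn't reproduced here; as a loose illustration of the kind of dynamics involved, a delay-coupled Kuramoto-style simulation, with "deleting a connection" expressed as pruning one link, might look like the following (all parameters invented):

```python
import numpy as np

# Hypothetical delay-coupled Kuramoto-style model of N players, each
# nudging its phase toward delayed signals from neighbors it has not
# chosen to ignore.
N, steps, dt = 16, 4000, 0.01
delay_steps = 30                     # coupling delay, in integration steps
K = 0.8                              # coupling strength
rng = np.random.default_rng(0)

omega = rng.normal(1.0, 0.05, N)     # natural tempos of the players
adj = np.ones((N, N)) - np.eye(N)    # who listens to whom (all-to-all here)
phase_hist = np.zeros((steps, N))
phase_hist[0] = rng.uniform(0, 2 * np.pi, N)

for t in range(1, steps):
    delayed = phase_hist[max(t - delay_steps, 0)]
    coupling = (adj * np.sin(delayed[None, :] - phase_hist[t - 1][:, None])).sum(1)
    phase_hist[t] = phase_hist[t - 1] + dt * (omega + (K / N) * coupling)

    # "Deleting a connection": midway through, player 0 stops listening
    # to a persistently conflicting (frustrating) input. Crude heuristic
    # for illustration only.
    if t == steps // 2:
        adj[0, 1] = 0.0

# Order parameter r in [0, 1]: 1 means perfect synchrony.
r = abs(np.exp(1j * phase_hist[-1]).mean())
print(f"synchrony r = {r:.3f}")
```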
    “Society in its complexity is recognizing how human networks affect a broad range of crucial issues, including economic inequality, stock market crashes, political polarization and the spread of disease,” says Weymouth. “We believe there are a lot of important, real-world applications to the results of this experiment and ongoing work.”

    Story Source:
    Materials provided by Stony Brook University. Note: Content may be edited for style and length.

  • AI-enhanced precision medicine identifies novel autism subtype

    A novel precision medicine approach enhanced by artificial intelligence (AI) has laid the groundwork for what could be the first biomedical screening and intervention tool for a subtype of autism, reports a new study from Northwestern University, Ben-Gurion University, Harvard University and the Massachusetts Institute of Technology.
    The approach is believed to be the first of its kind in precision medicine.
    “Previously, autism subtypes have been defined based on symptoms only — autistic disorder, Asperger syndrome, etc. — and they can be hard to differentiate as it is really a spectrum of symptoms,” said study co-first author Dr. Yuan Luo, associate professor of preventive medicine (health and biomedical informatics) at the Northwestern University Feinberg School of Medicine. “The autism subtype characterized by abnormal lipid levels identified in this study is the first multidimensional, evidence-based subtype that has distinct molecular features and an underlying cause.”
    Luo is also chief AI officer at the Northwestern University Clinical and Translational Sciences Institute and the Institute of Augmented Intelligence in Medicine. He also is a member of the McCormick School of Engineering.
    The findings were published August 10 in Nature Medicine.
    Autism affects an estimated 1 in 54 children in the United States, according to the Centers for Disease Control and Prevention. Boys are four times more likely than girls to be diagnosed. Most children are diagnosed after age 4, although autism can be reliably diagnosed based on symptoms as early as age 2.

    The subtype of the disorder studied by Luo and colleagues is known as dyslipidemia-associated autism, which represents 6.55% of all diagnosed autism spectrum disorders in the U.S.
    “Our study is the first precision medicine approach to overlay an array of research and health care data — including genetic mutation data, sexually different gene expression patterns, animal model data, electronic health record data and health insurance claims data — and then use an AI-enhanced precision medicine approach to attempt to define one of the world’s most complex inheritable disorders,” said Luo.
    The idea is similar to that of today’s digital maps. In order to get a true representation of the real world, the team overlaid different layers of information on top of one another.
    “This discovery was like finding a needle in a haystack, as there are thousands of variants in hundreds of genes thought to underlie autism, each of which is mutated in less than 1% of families with the disorder. We built a complex map, and then needed to develop a magnifier to zoom in,” said Luo.
    To build that magnifier, the research team identified clusters of gene exons that function together during brain development by applying a state-of-the-art AI graph clustering technique to gene expression data. Exons are the parts of genes that contain information coding for a protein. Proteins do most of the work in our cells and organs, or in this case, the brain.
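    The paper's algorithm isn't given in the article. As a generic sketch of this "magnifier" step (cluster a co-expression graph so exons whose expression rises and falls together during development group into modules), one could threshold a correlation matrix and run community detection, here with networkx's modularity communities standing in for the study's method and synthetic data throughout:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical expression matrix: rows are exons, columns are brain
# developmental time points (stand-in for the study's real data).
rng = np.random.default_rng(1)
n_exons, n_timepoints = 40, 12
expr = rng.normal(scale=0.5, size=(n_exons, n_timepoints))
expr[:20] += np.linspace(0, 3, n_timepoints)   # one co-varying module
expr[20:] -= np.linspace(0, 3, n_timepoints)   # another

# Build a co-expression graph: connect exon pairs whose expression
# profiles are strongly correlated across development.
corr = np.corrcoef(expr)
G = nx.Graph()
G.add_nodes_from(range(n_exons))
threshold = 0.6
for i in range(n_exons):
    for j in range(i + 1, n_exons):
        if corr[i, j] > threshold:
            G.add_edge(i, j, weight=corr[i, j])

# Community detection groups exons that "function together."
modules = greedy_modularity_communities(G, weight="weight")
for k, module in enumerate(modules):
    print(f"module {k}: {sorted(module)[:8]}... ({len(module)} exons)")
```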
    “The map and magnifier approach showcases a generalizable way of using multiple data modalities for subtyping autism and it holds the potential for many other genetically complex diseases to inform targeted clinical trials,” said Luo.
    Using the tool, the research team also identified a strong association of parental dyslipidemia with autism spectrum disorder in their children. They further saw altered blood lipid profiles in infants later diagnosed with autism spectrum disorder. These findings have led the team to pursue subsequent studies, including clinical trials that aim to promote early screening and early intervention of autism.
    “Today, autism is diagnosed based only on symptoms, and the reality is when a physician identifies it, it’s often when early and critical brain developmental windows have passed without appropriate intervention,” said Luo. “This discovery could shift that paradigm.”

    Story Source:
    Materials provided by Northwestern University. Original written by Roger Anderson. Note: Content may be edited for style and length.

  • Machine learning can predict market behavior

    Machine learning can assess the effectiveness of mathematical tools used to predict the movements of financial markets, according to new Cornell research based on the largest dataset ever used in this area.
    The researchers’ model could also predict future market movements, an extraordinarily difficult task because of markets’ massive amounts of information and high volatility.
    “What we were trying to do is bring the power of machine learning techniques to not only evaluate how well our current methods and models work, but also to help us extend these in a way that we never could do without machine learning,” said Maureen O’Hara, the Robert W. Purcell Professor of Management at the SC Johnson College of Business.
    O’Hara is co-author of “Microstructure in the Machine Age,” published July 7 in The Review of Financial Studies.
    “Trying to estimate these sorts of things using standard techniques gets very tricky, because the databases are so big. The beauty of machine learning is that it’s a different way to analyze the data,” O’Hara said. “The key thing we show in this paper is that in some cases, these microstructure features that attach to one contract are so powerful, they can predict the movements of other contracts. So we can pick up the patterns of how markets affect other markets, which is very difficult to do using standard tools.”
    Markets generate vast amounts of data, and billions of dollars are at stake in mining that data for patterns to shed light on future market behavior. Companies on Wall Street and elsewhere employ various algorithms, examining different variables and factors, to find such patterns and predict the future.

    In the study, the researchers used what’s known as a random forest machine learning algorithm to better understand the effectiveness of some of these models. They assessed the tools using a dataset of 87 futures contracts — agreements to buy or sell assets in the future at predetermined prices.
    “Our sample is basically all active futures contracts around the world for five years, and we use every single trade — tens of millions of them — in our analysis,” O’Hara said. “What we did is use machine learning to try to understand how well microstructure tools developed for less complex market settings work to predict the future price process both within a contract and then collectively across contracts. We find that some of the variables work very, very well — and some of them not so great.”
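    The study's data and exact features aren't public; the general shape of the exercise (a random forest predicting one contract's next move from another contract's microstructure features) can be sketched with scikit-learn, using synthetic data and hypothetical feature names:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for microstructure features (e.g., order imbalance,
# effective spread, trade intensity) computed per time bar on contract A.
rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 3))                  # [imbalance, spread, intensity]
# Hypothetical label: direction of contract B's next price move, loosely
# driven by contract A's order imbalance plus noise (cross-contract signal).
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("out-of-sample accuracy:", model.score(X_te, y_te))
# Feature importances indicate which microstructure variables carry the
# predictive signal -- the kind of 'which tools work' question the study asks.
print("importances:", model.feature_importances_)
```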
    Machine learning has long been used in finance, but typically as a so-called “black box” — in which an artificial intelligence algorithm uses reams of data to predict future patterns but without revealing how it makes its determinations. This method can be effective in the short term, O’Hara said, but sheds little light on what actually causes market patterns.
    “Our use for machine learning is: I have a theory about what moves markets, so how can I test it?” she said. “How can I really understand whether my theories are any good? And how can I use what I learned from this machine learning approach to help me build better models and understand things that I can’t model because it’s too complex?”
    Huge amounts of historical market data are available — every trade has been recorded since the 1980s — and vast volumes of information are generated every day. Increased computing power and greater availability of data have made it possible to perform more fine-grained and comprehensive analyses, but these datasets, and the computing power needed to analyze them, can be prohibitively expensive for scholars.
    In this research, finance industry practitioners partnered with the academic researchers to provide the data and the computers for the study as well as expertise in machine learning algorithms used in practice.
    “This partnership brings benefits to both,” said O’Hara, adding that the paper is one in a line of research that she and her co-authors, David Easley and Marcos Lopez de Prado, have completed over the last decade. “It allows us to do research in ways generally unavailable to academic researchers.”

    Story Source:
    Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.