More stories

  • Quantum materials quest could benefit from graphene that buckles

    Graphene, an extremely thin two-dimensional layer of the graphite used in pencils, buckles when cooled while attached to a flat surface, resulting in beautiful pucker patterns that could benefit the search for novel quantum materials and superconductors, according to Rutgers-led research in the journal Nature.
    Quantum materials host strongly interacting electrons with special properties, such as entangled trajectories, that could provide building blocks for super-fast quantum computers. They also can become superconductors that could slash energy consumption by making power transmission and electronic devices more efficient.
    “The buckling we discovered in graphene mimics the effect of colossally large magnetic fields that are unattainable with today’s magnet technologies, leading to dramatic changes in the material’s electronic properties,” said lead author Eva Y. Andrei, Board of Governors professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. “Buckling of stiff thin films like graphene laminated on flexible materials is gaining ground as a platform for stretchable electronics with many important applications, including eye-like digital cameras, energy harvesting, skin sensors, health monitoring devices like tiny robots and intelligent surgical gloves. Our discovery opens the way to the development of devices for controlling nano-robots that may one day play a role in biological diagnostics and tissue repair.”
    The scientists studied buckled graphene crystals whose properties change radically when they’re cooled, creating essentially new materials with electrons that slow down, become aware of each other and interact strongly, enabling the emergence of fascinating phenomena such as superconductivity and magnetism, according to Andrei.
    Using high-tech imaging and computer simulations, the scientists showed that graphene placed on a flat surface made of niobium diselenide buckles when cooled to 4 degrees above absolute zero. To the electrons in graphene, the mountain and valley landscape created by the buckling appears as gigantic magnetic fields. These pseudo-magnetic fields are an electronic illusion, but they act as real magnetic fields, according to Andrei.
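    For readers curious how buckling masquerades as a magnetic field: in the strain-engineering literature, a non-uniform strain field enters graphene's Dirac Hamiltonian as a pseudo-vector potential. The standard form below is drawn from that wider literature, not from the Nature paper itself; here \(\beta\) is the electron-phonon coupling constant, \(a\) the lattice constant, and \(\varepsilon_{ij}\) the strain tensor produced by the buckling pattern.

```latex
% Strain-induced pseudo-gauge field in graphene (standard literature form,
% not quoted from the paper): strain acts on Dirac electrons like a vector
% potential, whose curl is the pseudo-magnetic field the electrons "feel".
\begin{aligned}
  \mathbf{A} &= \frac{\hbar\beta}{2ea}
      \begin{pmatrix} \varepsilon_{xx} - \varepsilon_{yy} \\ -2\,\varepsilon_{xy} \end{pmatrix}, \\
  B_{\mathrm{ps}} &= \partial_x A_y - \partial_y A_x .
\end{aligned}
```

    Sharply varying strain, such as the pucker patterns described here, therefore produces large effective fields without any external magnet.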
    “Our research demonstrates that buckling in 2D materials can dramatically alter their electronic properties,” she said.
    The next steps include developing ways to engineer buckled 2D materials with novel electronic and mechanical properties that could be beneficial in nano-robotics and quantum computing, according to Andrei.
    The first author is Jinhai Mao, formerly a research associate in the Department of Physics and Astronomy and now a researcher at the University of Chinese Academy of Sciences. Rutgers co-authors include doctoral student Xinyuan Lai and a former post-doctoral associate, Yuhang Jiang, who is now a researcher at the University of Chinese Academy of Sciences. Slaviša Milovanović, who led the theory effort, is a graduate student working with professors Lucian Covaci and Francois Peeters at the Universiteit Antwerpen. Scientists at the University of Manchester and the Institute of Materials Science in Tsukuba, Japan, contributed to the study.

    Story Source:
    Materials provided by Rutgers University. Note: Content may be edited for style and length.

  • Scientists identify hundreds of drug candidates to treat COVID-19

    Scientists at the University of California, Riverside, have used machine learning to identify hundreds of new potential drugs that could help treat COVID-19, the disease caused by the novel coronavirus, or SARS-CoV-2.
    “There is an urgent need to identify effective drugs that treat or prevent COVID-19,” said Anandasankar Ray, a professor of molecular, cell, and systems biology who led the research. “We have developed a drug discovery pipeline that identified several candidates.”
    The drug discovery pipeline is a type of computational strategy linked to artificial intelligence — a computer algorithm that learns to predict activity through trial and error, improving over time.
    With no clear end in sight, the COVID-19 pandemic has disrupted lives, strained health care systems, and weakened economies. Efforts to repurpose drugs, such as Remdesivir, have achieved some success. A vaccine for the SARS-CoV-2 virus could be months away, though it is not guaranteed.
    “As a result, drug candidate pipelines, such as the one we developed, are extremely important to pursue as a first step toward systematic discovery of new drugs for treating COVID-19,” Ray said. “Existing FDA-approved drugs that target one or more human proteins important for viral entry and replication are currently high priority for repurposing as new COVID-19 drugs. The demand is high for additional drugs or small molecules that can interfere with both entry and replication of SARS-CoV-2 in the body. Our drug discovery pipeline can help.”
    Joel Kowalewski, a graduate student in Ray’s lab, used small numbers of previously known ligands for 65 human proteins that are known to interact with SARS-CoV-2 proteins. He generated machine learning models for each of the human proteins.

    “These models are trained to identify new small molecule inhibitors and activators — the ligands — simply from their 3-D structures,” Kowalewski said.
    Kowalewski and Ray were thus able to create a database of chemicals whose structures were predicted to interact with the 65 protein targets. They also evaluated the chemicals for safety.
    “The 65 protein targets are quite diverse and are implicated in many additional diseases as well, including cancers,” Kowalewski said. “Apart from drug-repurposing efforts ongoing against these targets, we were also interested in identifying novel chemicals that are currently not well studied.”
    Ray and Kowalewski used their machine learning models to screen more than 10 million commercially available small molecules from a database of 200 million chemicals, and identified the best-in-class hits for the 65 human proteins that interact with SARS-CoV-2 proteins.
    Taking it a step further, they identified compounds among the hits that are already FDA approved, such as drugs and compounds used in food. They also used the machine learning models to compute toxicity, which helped them reject potentially toxic candidates. This helped them prioritize the chemicals that were predicted to interact with SARS-CoV-2 targets. Their method allowed them to not only identify the highest scoring candidates with significant activity against a single human protein target, but also find a few chemicals that were predicted to inhibit two or more human protein targets.
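    The train-then-screen loop described above can be sketched in a few lines. Everything below is a toy stand-in, not the UCR pipeline: the descriptor vectors, probability threshold, and target names are invented, and random numbers replace real ligand structures.

```python
# Hypothetical sketch of the per-target screening idea; NOT the UCR pipeline.
# Descriptors, thresholds, and target names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_FEATURES = 32  # stand-in for a 3-D-structure-derived descriptor vector

def train_target_model(n_ligands=50, n_decoys=50):
    """Fit a toy activity model from known ligands (label 1) and decoys (label 0)."""
    X = rng.normal(size=(n_ligands + n_decoys, N_FEATURES))
    X[:n_ligands] += 1.0  # ligands occupy a shifted region of descriptor space
    y = np.array([1] * n_ligands + [0] * n_decoys)
    return RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# one model per human protein target (the study trained 65; three for brevity)
models = {f"target_{i}": train_target_model() for i in range(3)}

# score a chemical library and keep molecules predicted to hit 2+ targets
library = rng.normal(size=(1000, N_FEATURES))
hits_per_molecule = sum((m.predict_proba(library)[:, 1] > 0.8).astype(int)
                        for m in models.values())
multi_target_hits = np.where(hits_per_molecule >= 2)[0]
print(len(multi_target_hits))
```

    A real pipeline would replace the random vectors with chemically meaningful descriptors and add the toxicity models the article mentions as a second filter.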

    “Compounds I am most excited to pursue are those predicted to be volatile, setting up the unusual possibility of inhaled therapeutics,” Ray said.
    “Historically, disease treatments become increasingly more complex as we develop a better understanding of the disease and how individual genetic variability contributes to the progression and severity of symptoms,” Kowalewski said. “Machine learning approaches like ours can play a role in anticipating the evolving treatment landscape by providing researchers with additional possibilities for further study. While the approach crucially depends on experimental data, virtual screening may help researchers ask new questions or find new insight.”
    Ray and Kowalewski argue that their computational strategy for the initial screening of vast numbers of chemicals has an advantage over traditional cell-culture-dependent assays, which are expensive and can take years to complete.
    “Our database can serve as a resource for rapidly identifying and testing novel, safe treatment strategies for COVID-19 and other diseases where the same 65 target proteins are relevant,” he said. “While the COVID-19 pandemic was what motivated us, we expect our predictions from more than 10 million chemicals will accelerate drug discovery in the fight against not only COVID-19 but also a number of other diseases.”
    Ray is looking for funding and collaborators to move toward testing in cell lines, animal models, and eventually clinical trials.
    The research paper, “Predicting Novel Drugs for SARS-CoV-2 using Machine Learning from a >10 Million Chemical Space,” appears in the journal Heliyon, an interdisciplinary journal from Cell Press.
    The technology has been disclosed to the UCR Office of Technology Partnerships, assigned UC case number 2020-249, and is patent pending under the title “Therapeutic compounds and methods thereof.”

  • Security gap allows eavesdropping on mobile phone calls

    Calls via the LTE mobile network, also known as 4G, are encrypted and should therefore be tap-proof. However, researchers from the Horst Görtz Institute for IT Security (HGI) at Ruhr-Universität Bochum have shown that this is not always the case. They were able to decrypt the contents of telephone calls if they were in the same radio cell as their target, whose mobile phone they then called immediately following the call they wanted to intercept. They exploited a flaw that some manufacturers had introduced when implementing the base stations.
    The results were published by the HGI team David Rupprecht, Dr. Katharina Kohls, and Professor Thorsten Holz from the Chair of Systems Security together with Professor Christina Pöpper from the New York University Abu Dhabi at the 29th Usenix Security Symposium, which takes place as an online conference from 12 to 14 August 2020. The relevant providers and manufacturers were contacted prior to the publication; by now the vulnerability should be fixed.
    Reusing keys results in security gap
    The vulnerability affects Voice over LTE, the telephone standard used for almost all mobile phone calls if they are not made via special messenger services. When two people call each other, a key is generated to encrypt the conversation. “The problem was that the same key was also reused for other calls,” says David Rupprecht. Accordingly, if an attacker called one of the two people shortly after their conversation and recorded the encrypted traffic from the same cell, he or she would get the same key that secured the previous conversation.
    “The attacker has to engage the victim in a conversation,” explains David Rupprecht. “The longer the attacker talked to the victim, the more content of the previous conversation he or she was able to decrypt.” For example, if attacker and victim spoke for five minutes, the attacker could later decode five minutes of the previous conversation.
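    The underlying cryptographic failure is classic keystream reuse. The minimal sketch below is ours, not the actual LTE stack: it only shows why two calls encrypted with the same keystream let an attacker who knows the content of the second call (their own) recover the first.

```python
# Why keystream reuse is fatal: XOR-ing the two ciphertexts cancels the
# keystream, and the attacker's own known plaintext then reveals the victim's.
# Illustrative only; real Voice over LTE framing is far more involved.
import os

def xor_encrypt(plaintext: bytes, keystream: bytes) -> bytes:
    """Stream-cipher style encryption: ciphertext = plaintext XOR keystream."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

keystream = os.urandom(32)                    # the key material reused by the flaw
victim_audio = b"secret conversation contents"   # target call (unknown to attacker)
attacker_audio = b"attacker's own known speech!"  # attacker's follow-up call

c1 = xor_encrypt(victim_audio, keystream)     # recorded encrypted target call
c2 = xor_encrypt(attacker_audio, keystream)   # recorded encrypted attacker call

# c1 XOR c2 == victim XOR attacker; XOR with the known attacker audio recovers it
recovered = xor_encrypt(xor_encrypt(c1, c2), attacker_audio)
print(recovered)
```

    This is also why the attack decrypts exactly as much of the previous call as the attacker manages to keep the victim talking: recovery works only where the two keystream-encrypted streams overlap.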
    Identifying relevant base stations via app
    In order to determine how widespread the security gap was, the IT experts tested a number of randomly selected radio cells across Germany. The security gap affected 80 per cent of the analysed radio cells. By now, the manufacturers and mobile phone providers have updated the software of the base stations to fix the problem. David Rupprecht gives the all-clear: “We then tested several random radio cells all over Germany and haven’t detected any problems since then,” he says. Still, it can’t be ruled out that there are radio cells somewhere in the world where the vulnerability occurs.
    In order to track them down, the Bochum-based group has developed an app for Android devices. Tech-savvy volunteers can use it to help search worldwide for radio cells that still contain the security gap and report them to the HGI team. The researchers forward the information to the worldwide association of all mobile network operators, GSMA, which ensures that the base stations are updated.
    “Voice over LTE has been in use for six years,” says David Rupprecht. “We’re unable to verify whether attackers have exploited the security gap in the past.” He is campaigning for the new mobile phone standard to be modified so that the same problem can’t occur again when 5G base stations are set up.

    Story Source:
    Materials provided by Ruhr-University Bochum. Original written by Julia Weiler. Note: Content may be edited for style and length.

  • What violin synchronization can teach us about better networking in complex times

    Human networking spans every field, from small groups of people to large, coordinated systems working together toward a goal, be it traffic management in an urban area, economic systems or epidemic control. A new study published in Nature Communications uses a model of violin synchronization in a network of players to identify ways of drowning out distractions and miscommunications, ways that could carry over to human networks in society.
    Titled “The Synchronization of Complex Human Networks,” the study was conceived by Elad Shniderman, a graduate student in the Department of Music in the College of Arts and Sciences at Stony Brook University, and scientist Moti Fridman, PhD, at the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University. He co-authored the paper with Daniel Weymouth, PhD, Associate Professor of Composition and Theory in the Department of Music and scientists at Bar-Ilan and the Weizmann Institute of Science in Israel. The collaboration was initiated at the Fetter Museum of Nanoscience and Art.
    The research team devised an experiment involving 16 violinists with electric violins connected to a computer system. Each of the violinists had sound-canceling headphones, hearing only the sound received from the computer. All violinists played a simple repeating musical phrase and tried to synchronize with other violinists according to what they heard in their headphones.
    According to Shniderman, Weymouth and their fellow authors: “Research on network links or coupling has focused predominantly on all-to-all coupling, whereas current social networks and human interactions are often based on complex coupling configurations.
    This study of synchronization between violin players in complex networks with full control over network connectivity, coupling strength and delay, revealed that players can tune their playing period and delete connections by ignoring frustrating signals to find a stable solution. These controlled and new degrees of freedom enable new strategies and yield better solutions potentially applicable for other human networking models.”
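    One common way to model such an ensemble is a network of coupled phase oscillators. The sketch below is our own toy construction, not the study's model: delayed Kuramoto-style oscillators in which a player can "delete" a connection by ignoring a persistently out-of-phase neighbour, as the passage above describes. All constants are invented.

```python
# Toy delayed Kuramoto network with link deletion (our illustration, not the
# paper's model): players adjust phase toward delayed neighbour signals and
# periodically drop links whose delayed signal is frustrating (near anti-phase).
import numpy as np

rng = np.random.default_rng(1)
N, K, DT, DELAY = 8, 1.5, 0.05, 4      # players, coupling, time step, delay steps

omega = rng.normal(1.0, 0.05, N)       # natural playing rates of the players
adj = np.ones((N, N)) - np.eye(N)      # start from all-to-all coupling
theta_hist = [rng.uniform(0, 2 * np.pi, N)]

for step in range(2000):
    theta = theta_hist[-1]
    delayed = theta_hist[max(0, len(theta_hist) - 1 - DELAY)]
    diff = delayed[None, :] - theta[:, None]        # delayed phase differences
    if step % 200 == 0 and step > 0:
        # "delete" frustrating links: delayed signal close to anti-phase
        wrapped = np.abs((diff + np.pi) % (2 * np.pi) - np.pi)
        adj[wrapped > 2.5] = 0.0
    coupling = (adj * np.sin(diff)).sum(axis=1) / N
    theta_hist.append(theta + DT * (omega + K * coupling))

# Kuramoto order parameter r in [0, 1]; r near 1 means a synchronized solution
r = float(np.abs(np.exp(1j * theta_hist[-1]).mean()))
print(round(r, 2))
```

    The two extra degrees of freedom the quote highlights, tuning one's playing period and ignoring frustrating signals, correspond here to adjusting `omega` and zeroing rows of `adj`.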
    “Society in its complexity is recognizing how human networks affect a broad range of crucial issues, including economic inequality, stock market crashes, political polarization and the spread of disease,” says Weymouth. “We believe there are a lot of important, real-world applications to the results of this experiment and ongoing work.”

    Story Source:
    Materials provided by Stony Brook University. Note: Content may be edited for style and length.

  • AI-enhanced precision medicine identifies novel autism subtype

    A novel precision medicine approach enhanced by artificial intelligence (AI) has laid the groundwork for what could be the first biomedical screening and intervention tool for a subtype of autism, reports a new study from Northwestern University, Ben-Gurion University, Harvard University and the Massachusetts Institute of Technology.
    The approach is believed to be the first of its kind in precision medicine.
    “Previously, autism subtypes have been defined based on symptoms only — autistic disorder, Asperger syndrome, etc. — and they can be hard to differentiate as it is really a spectrum of symptoms,” said study co-first author Dr. Yuan Luo, associate professor of preventive medicine: health and biomedical informatics at the Northwestern University Feinberg School of Medicine. “The autism subtype characterized by abnormal lipid levels identified in this study is the first multidimensional evidence-based subtype that has distinct molecular features and an underlying cause.”
    Luo is also chief AI officer at the Northwestern University Clinical and Translational Sciences Institute and the Institute of Augmented Intelligence in Medicine. He also is a member of the McCormick School of Engineering.
    The findings were published August 10 in Nature Medicine.
    Autism affects an estimated 1 in 54 children in the United States, according to the Centers for Disease Control and Prevention. Boys are four times more likely than girls to be diagnosed. Most children are diagnosed after age 4, although autism can be reliably diagnosed based on symptoms as early as age 2.

    The subtype of the disorder studied by Luo and colleagues is known as dyslipidemia-associated autism, which represents 6.55% of all diagnosed autism spectrum disorders in the U.S.
    “Our study is the first precision medicine approach to overlay an array of research and health care data — including genetic mutation data, sexually different gene expression patterns, animal model data, electronic health record data and health insurance claims data — and then use an AI-enhanced precision medicine approach to attempt to define one of the world’s most complex inheritable disorders,” said Luo.
    The idea is similar to that of today’s digital maps. In order to get a true representation of the real world, the team overlaid different layers of information on top of one another.
    “This discovery was like finding a needle in a haystack, as there are thousands of variants in hundreds of genes thought to underlie autism, each of which is mutated in less than 1% of families with the disorder. We built a complex map, and then needed to develop a magnifier to zoom in,” said Luo.
    To build that magnifier, the research team identified clusters of gene exons that function together during brain development. They then used a state-of-the-art AI algorithm graph clustering technique on gene expression data. Exons are the parts of genes that contain information coding for a protein. Proteins do most of the work in our cells and organs, or in this case, the brain.
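    As a rough illustration of that "magnifier" step, clustering a co-expression graph can be sketched with off-the-shelf tools. The synthetic data, the correlation threshold, and the choice of spectral clustering below are our assumptions for the sketch, not the paper's published algorithm.

```python
# Hedged stand-in for graph clustering of exon co-expression (illustrative
# data and method choices; not the paper's algorithm): exons whose expression
# profiles rise and fall together land in the same module.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)

# toy expression matrix: 30 exons x 20 developmental time points,
# built from two hidden co-expression modules plus noise
t = np.linspace(0.0, 1.0, 20)
module_a = np.sin(2 * np.pi * t)
module_b = np.cos(2 * np.pi * t)
expr = np.vstack([module_a + 0.1 * rng.normal(size=20) for _ in range(15)]
                 + [module_b + 0.1 * rng.normal(size=20) for _ in range(15)])

# similarity graph: thresholded absolute correlation between exon profiles
corr = np.abs(np.corrcoef(expr))
affinity = np.where(corr > 0.5, corr, 0.0)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)
```

    With real data the modules are far noisier, which is why the team needed both the multi-layer "map" and a state-of-the-art clustering algorithm to zoom in.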
    “The map and magnifier approach showcases a generalizable way of using multiple data modalities for subtyping autism and it holds the potential for many other genetically complex diseases to inform targeted clinical trials,” said Luo.
    Using the tool, the research team also identified a strong association of parental dyslipidemia with autism spectrum disorder in their children. They further saw altered blood lipid profiles in infants later diagnosed with autism spectrum disorder. These findings have led the team to pursue subsequent studies, including clinical trials that aim to promote early screening and early intervention of autism.
    “Today, autism is diagnosed based only on symptoms, and the reality is when a physician identifies it, it’s often when early and critical brain developmental windows have passed without appropriate intervention,” said Luo. “This discovery could shift that paradigm.”

    Story Source:
    Materials provided by Northwestern University. Original written by Roger Anderson. Note: Content may be edited for style and length.

  • Machine learning can predict market behavior

    Machine learning can assess the effectiveness of mathematical tools used to predict the movements of financial markets, according to new Cornell research based on the largest dataset ever used in this area.
    The researchers’ model could also predict future market movements, an extraordinarily difficult task because of markets’ massive amounts of information and high volatility.
    “What we were trying to do is bring the power of machine learning techniques to not only evaluate how well our current methods and models work, but also to help us extend these in a way that we never could do without machine learning,” said Maureen O’Hara, the Robert W. Purcell Professor of Management at the SC Johnson College of Business.
    O’Hara is co-author of “Microstructure in the Machine Age,” published July 7 in The Review of Financial Studies.
    “Trying to estimate these sorts of things using standard techniques gets very tricky, because the databases are so big. The beauty of machine learning is that it’s a different way to analyze the data,” O’Hara said. “The key thing we show in this paper is that in some cases, these microstructure features that attach to one contract are so powerful, they can predict the movements of other contracts. So we can pick up the patterns of how markets affect other markets, which is very difficult to do using standard tools.”
    Markets generate vast amounts of data, and billions of dollars are at stake in mining that data for patterns to shed light on future market behavior. Companies on Wall Street and elsewhere employ various algorithms, examining different variables and factors, to find such patterns and predict the future.

    In the study, the researchers used what’s known as a random forest machine learning algorithm to better understand the effectiveness of some of these models. They assessed the tools using a dataset of 87 futures contracts — agreements to buy or sell assets in the future at predetermined prices.
    “Our sample is basically all active futures contracts around the world for five years, and we use every single trade — tens of millions of them — in our analysis,” O’Hara said. “What we did is use machine learning to try to understand how well microstructure tools developed for less complex market settings work to predict the future price process both within a contract and then collectively across contracts. We find that some of the variables work very, very well — and some of them not so great.”
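    The cross-contract idea O'Hara describes can be illustrated on synthetic data. The latent common factor, the feature construction, and the forest settings below are all invented for the sketch; the paper's actual microstructure variables and 87-contract dataset are far richer.

```python
# Illustrative sketch (synthetic data, NOT the paper's dataset): a random
# forest trained on microstructure-style features of contract A predicts the
# next-interval direction of a correlated contract B via a shared factor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 4000

factor = rng.normal(size=n)                     # latent factor linking markets
features_a = np.column_stack(                   # noisy views of the factor,
    [factor + 0.3 * rng.normal(size=n)          # e.g. order-imbalance measures
     for _ in range(5)])
moves_b = (factor + 0.3 * rng.normal(size=n) > 0).astype(int)  # B up/down

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features_a[:3000], moves_b[:3000])      # train on the first 3000 bars

accuracy = clf.score(features_a[3000:], moves_b[3000:])
print(round(accuracy, 2))   # well above the 0.5 coin-flip baseline here
```

    The point of the sketch is the mechanism, not the numbers: when one contract's microstructure features load on a factor that also drives another contract, a model fit on the first can predict the second.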
    Machine learning has long been used in finance, but typically as a so-called “black box” — in which an artificial intelligence algorithm uses reams of data to predict future patterns but without revealing how it makes its determinations. This method can be effective in the short term, O’Hara said, but sheds little light on what actually causes market patterns.
    “Our use for machine learning is: I have a theory about what moves markets, so how can I test it?” she said. “How can I really understand whether my theories are any good? And how can I use what I learned from this machine learning approach to help me build better models and understand things that I can’t model because it’s too complex?”
    Huge amounts of historical market data are available — every trade has been recorded since the 1980s — and vast volumes of information are generated every day. Increased computing power and greater availability of data have made it possible to perform more fine-grained and comprehensive analyses, but these datasets, and the computing power needed to analyze them, can be prohibitively expensive for scholars.
    In this research, finance industry practitioners partnered with the academic researchers to provide the data and the computers for the study as well as expertise in machine learning algorithms used in practice.
    “This partnership brings benefits to both,” said O’Hara, adding that the paper is one in a line of research she and her co-authors, David Easley and Marcos Lopez de Prado, have completed over the last decade. “It allows us to do research in ways generally unavailable to academic researchers.”

    Story Source:
    Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.

  • Brain-NET, a deep learning methodology, accurately predicts surgeon certification scores based on neuroimaging data

    In order to earn certification in general surgery, residents in the United States need to demonstrate proficiency in the Fundamentals of Laparoscopic Surgery (FLS) program, a test that requires manipulation of laparoscopic tools within a physical training unit. Central to that assessment is a quantitative score, known as the FLS score, which is manually calculated using a formula that is time-consuming and labor-intensive.
    By combining brain optical imaging and a deep learning framework they call “Brain-NET,” a multidisciplinary team of engineers at Rensselaer Polytechnic Institute, in close collaboration with the Department of Surgery at the Jacobs School of Medicine & Biomedical Sciences at the University at Buffalo, has developed a new methodology that has the potential to transform training and the certification process for surgeons.
    In a new article in IEEE Transactions on Biomedical Engineering, the researchers demonstrated how Brain-NET can accurately predict a person’s level of expertise in terms of their surgical motor skills, based solely on neuroimaging data. These results support the future adoption of a new, more efficient method of surgeon certification that the team has developed.
    “This is an area of expertise that is really unique to RPI,” said Xavier Intes, a professor of biomedical engineering at Rensselaer, who led this research.
    According to Intes, Brain-NET not only performed more quickly than the traditional prediction model, but also more accurately, especially as it analyzed larger datasets.
    Brain-NET builds upon the research team’s earlier work in this area. Researchers led by Suvranu De, the head of the Rensselaer Department of Mechanical, Aerospace, and Nuclear Engineering, previously showed that they could accurately assess a doctor’s surgical motor skills by analyzing brain activation signals using optical imaging.
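    As a conceptual stand-in for the score-prediction step (this is not the published Brain-NET architecture), a small fully connected regressor mapping neuroimaging-style feature vectors to a skill score can be sketched as follows; the features and the score function are synthetic.

```python
# Conceptual sketch only, NOT the published Brain-NET: a small fully connected
# network regresses a surgical-skill score from optical-neuroimaging features.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# invented "fNIRS" feature vectors and skill scores with a mildly nonlinear link
X = rng.normal(size=(500, 16))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])                 # train on the first 400 "trials"

r2 = model.score(X[400:], y[400:])          # held-out R^2; 1.0 is perfect
print(round(r2, 2))
```

    Because inference with a trained network is nearly instantaneous, this is also the property that enables the real-time score feedback Intes describes.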
    In addition to its potential to streamline the surgeon certification process, the development of Brain-NET, combined with that optical imaging analysis, also enables real-time score feedback for surgeons who are training.
    “If you can get the measurement of the predicted score, you can give feedback right away,” Intes said. “What this opens the door to is to engage in remediation or training.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Torie Wells. Note: Content may be edited for style and length.

  • Ultraviolet communication for secure networks

    Of ever-increasing concern for operating a tactical communications network is the possibility that a sophisticated adversary may detect friendly transmissions. Army researchers developed an analysis framework that enables the rigorous study of the detectability of ultraviolet communication systems, providing the insights needed to inform the requirements of future, more secure Army networks.
    In particular, ultraviolet communication has unique propagation characteristics that not only allow for a novel non-line-of-sight optical link, but also imply that the transmissions may be harder for an adversary to detect.
    Building off of experimentally validated channel modeling, channel simulations, and detection and estimation theory, the developed framework enables the evaluation of tradeoffs associated with different design choices and the manner of operation of ultraviolet communication systems, said Dr. Robert Drost of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
    “While many techniques have been proposed to decrease the detectability of conventional radio-frequency, or RF, communications, the increased atmospheric absorption of deep-ultraviolet wavelengths implies that ultraviolet communication, or UVC, has a natural low-probability-of-detection, or LPD, characteristic,” Drost said.
    “In order to fully take advantage of this characteristic, a rigorous understanding of the LPD properties of UVC is needed.”
    In particular, Drost said, such understanding is essential for optimizing the design and operation of UVC systems and networks and for predicting the quality of the LPD property in a given scenario, such as using UVC to securely network a command post that has an estimate of the direction and distance to the adversary.

    Without such a predictive capability, he said, users would lack the guidance needed to know the extent and limit of their detectability, and this lack of awareness would substantially limit the usefulness of the LPD capability.
    The researchers, including Drs. Mike Weisman, Fikadu Dagefu, Terrence Moore and Drost from CCDC ARL and Dr. Hakan Arslan, Oak Ridge Associated Universities postdoctoral fellow at the lab, demonstrated this by applying their framework to produce a number of key insights regarding the LPD characteristics of UVC, including:
    • LPD capability is relatively insensitive to a number of system and channel properties, which is important for the robustness of the LPD property
    • Adversarial line-of-sight detection of a non-line-of-sight communication link is not as significant of a concern as one might fear
    • Perhaps counter to intuition, steering of a UVC transmitter does not appear to be an effective detection-mitigation strategy in many cases
    • A line-of-sight UVC link provides non-line-of-sight standoff distances that are commensurate with the communication range
    Prior modeling and experimental research has demonstrated that UVC signals attenuate dramatically at long distance, leading to the hypothesis that UVC has a fundamental LPD property, Drost said. However, there has been little effort on rigorously and precisely quantifying this property in terms of the detectability of a communication signal.
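    The attenuation argument can be made concrete with a back-of-the-envelope calculation. The extinction coefficient, transmit power, and detector sensitivity below are placeholder values chosen for illustration, not ARL's figures; the point is only the shape of the falloff.

```python
# Back-of-the-envelope LPD sketch with invented constants (not ARL's values):
# deep-UV irradiance falls as exp(-alpha*d)/d^2, so the received signal drops
# below an adversary's detection threshold at a fairly sharp standoff range.
import math

ALPHA = 0.8e-3        # assumed atmospheric extinction in the solar-blind band, 1/m
P_TX = 1.0            # assumed transmitted optical power, W
NOISE_FLOOR = 1e-12   # assumed adversary detector sensitivity, W/m^2

def received_irradiance(d_m: float) -> float:
    """Irradiance at range d: exponential absorption times spherical spreading."""
    return P_TX * math.exp(-ALPHA * d_m) / (4 * math.pi * d_m ** 2)

# first range (in metres) at which the signal dips below the detection threshold
detection_limit = next(d for d in range(1, 100_000)
                       if received_irradiance(d) < NOISE_FLOOR)
print(detection_limit)
```

    Because the exponential term dominates at long range, doubling detector sensitivity buys an adversary only a modest increase in detection range, which is the intuition behind UVC's natural LPD property.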
    “Our work provides a framework enabling the study of the fundamental limits of detectability for an ultraviolet communication system meeting desired communication performance requirements,” Drost said.

    Although this research is focused on longer-term applications, he said, it is addressing the Army Modernization Priority on Networks by developing the fundamental understanding of a novel communications capability, with a goal of providing the Soldier with network connectivity despite challenging environments that include adversarial activity.
    “The future communications and networking challenges that the Army faces are immense, and it is essential that we explore all possible means to overcoming those challenges,” Drost said. “Our research is ensuring that the community has the fundamental understanding of the potential for and limitations of using ultraviolet wavelengths for communications, and I am confident that this understanding will inform the development of future Army networking capabilities. Conducting fundamental research that impacts decision making and Army technologies is why we work for the Army, and it is very satisfying to know that our work will ultimately support the warfighter in his or her mission.”
    The researchers are currently continuing to develop refined understanding of how best to design and operate ultraviolet communications, and an important next step is the application of this framework to understand the detectability of a network of ultraviolet communications systems.
    Another key effort involves the experimental characterization, exploration and demonstration of this technology in a practical network using ARL’s Common Sensor Radio, a sophisticated mesh-networking radio designed to provide robust and energy-efficient networking.
    This research supports the laboratory’s FREEDOM (Foundational Research for Electronic Warfare in Multi-Domain Operations) Essential Research Program goal of studying the integration of low-signature communications technologies with advanced camouflage and decoy techniques.
    According to Drost, the work is also an on-ramp to studying how ultraviolet communications and other communications modalities, including conventional radio-frequency communications, can operate together in a seamless and autonomous extremely heterogeneous network, which the researchers believe is needed in order to fully realize the benefits of individual novel communication technologies.
    As they make continued progress on these fundamental research questions, the researchers will continue to work closely with their transition partner at the CCDC C5ISR (Command, Control, Computers, Communications, Cyber, Intelligence, Surveillance and Reconnaissance) Center to push ultraviolet communications toward nearer term transition to the warfighter.