More stories

  • Cybersecurity vulnerabilities of common seismological equipment

    Seismic monitoring devices linked to the internet are vulnerable to cyberattacks that could disrupt data collection and processing, say researchers who have probed the devices for weak points.
    Common security issues such as non-encrypted data, insecure protocols, and poor user authentication mechanisms are among the biggest culprits that leave seismological networks open to security breaches, Michael Samios of the National Observatory of Athens and colleagues write in a new study published in Seismological Research Letters.
    Modern seismic stations are now implemented as Internet-of-Things (IoT) stations, with physical devices that connect and exchange data with other devices and systems over the Internet. In their test attacks on different brands of seismographs, accelerographs and GNSS receivers, Samios and his colleagues identified threats to the equipment that information technology security professionals commonly find in IoT devices.
    “It seems that most seismologists and network operators are unaware of the vulnerabilities of their IoT devices, and the potential risk that their monitoring networks are exposed to,” said Samios. “Educating and supporting seismologists on information security is imperative, as in most cases unauthorized users will try to gain access through a legitimate user’s computer to abuse monitoring networks and IoT devices.”
    By exploiting these vulnerabilities, a malicious user could alter geophysical data, slow down data transmission and processing, or produce false alarms in earthquake early warning systems, the researchers noted, causing the public to lose trust in seismic monitoring and potentially affecting emergency and economic responses to a seismic event.
    Samios and colleagues launched a security assessment of seismic and GNSS devices attached to their own monitoring networks after a security incident at one of their seismic stations. There are several potential weak points in the security of these devices, they noted, including physical security in sometimes remote locations, difficulties and costs of updating security of hardware and software, usage of non-encrypted protocols, and default or easy login credentials.
    Using their cybersecurity skills, the researchers tested these weak points using a typical “ethical hacking” process to surveil, scan and gain access to geophysical devices with their default settings. The most notable security issues, they discovered, were a lack of data encryption, weak user authentication protocols and the absence of a secure default configuration.
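A minimal sketch of the kind of triage such a scan produces. The port-to-finding mapping below is hypothetical and illustrative, not taken from the study (port 18000 is merely the conventional SeedLink port):

```python
# Hypothetical mapping of open TCP ports to insecure-service findings.
# These are standard plaintext services often flagged in IoT audits.
INSECURE_SERVICES = {
    21: "FTP (plaintext credentials)",
    23: "Telnet (plaintext session)",
    80: "HTTP (unencrypted web interface)",
    18000: "SeedLink (no encryption or authentication)",
}

def audit_open_ports(open_ports):
    """Flag open ports that correspond to known-insecure services."""
    return [f"port {p}: {INSECURE_SERVICES[p]}"
            for p in sorted(open_ports) if p in INSECURE_SERVICES]

# Example: a device answering on SSH (22), Telnet (23) and SeedLink (18000).
findings = audit_open_ports({22, 23, 18000})
```

A real assessment would of course also test credentials, firmware versions and configuration, as the researchers did; this only captures the protocol-level triage step.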
    Samios and colleagues demonstrated a successful denial-of-service (DoS) attack against the devices, rendering them unavailable for the duration of the attack, and were also able to retrieve usernames and passwords for some of the devices.
    “Security weaknesses between different devices do not depend on the type of the device, but whether this device uses insecure protocols, outdated software and a potentially insecure default configuration,” Samios said. “It is interesting, though, that while these vulnerabilities normally appear on low-cost IoT devices priced at $50 or less, it was also confirmed that they are observed even in seismological and GNSS devices that cost many times more.”
    As part of their tests, the research team was also able to intercept seismological data transferred through the SeedLink protocol, a data transmission service used by many seismologists. SeedLink may lack some of the necessary encryption and authentication protocols to keep data safe, Samios said. He noted that in a follow-up lab experiment not included in the SRL paper the researchers were able to manipulate waveforms transferred by SeedLink.
    “This could potentially generate or conceal alarms on earthquake early warning and seismic monitoring systems, leading to disturbing situations,” he said.
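For context, a SeedLink data packet is a small plaintext frame: the two ASCII bytes "SL", a six-digit hexadecimal sequence number, and a 512-byte miniSEED record. Nothing in the frame is encrypted or signed, which is what makes interception and manipulation feasible. A minimal parser, written as a sketch from the commonly documented framing rather than taken from the study:

```python
def parse_seedlink_packet(packet: bytes):
    """Parse one SeedLink data packet: b'SL' + 6 ASCII hex digits
    (the sequence number) + a 512-byte miniSEED record.
    Nothing in the frame is encrypted or authenticated."""
    if len(packet) != 8 + 512 or packet[:2] != b"SL":
        raise ValueError("not a SeedLink data packet")
    sequence = int(packet[2:8], 16)        # hex sequence number
    miniseed_record = packet[8:]           # raw waveform record
    return sequence, miniseed_record

# A synthetic packet with sequence number 0x0000FF and a zeroed record:
seq, record = parse_seedlink_packet(b"SL" + b"0000FF" + bytes(512))
```

Because the payload is plain bytes on the wire, an interceptor who can read the stream can equally rewrite the record before forwarding it, which is the manipulation risk the researchers describe.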
    While device manufacturers and data transmission services should take steps to improve security functions such as data encryption, Samios said, seismic network operators can work with information security experts to help them develop safer user practices and enhance hardware and software systems.

  • Artificial emotional intelligence: a safer, smarter future with 5G and emotion recognition

    With the advent of 5G communication technology and its integration with AI, we are looking at the dawn of a new era in which people, machines, objects, and devices are connected like never before. This smart era will be characterized by smart facilities and services such as self-driving cars, smart UAVs, and intelligent healthcare, all products of a technological revolution.
    But the flip side of such a technological revolution is that AI itself can be used to attack or threaten the security of 5G-enabled systems, which, in turn, can greatly compromise their reliability. It is therefore imperative to investigate such potential security threats and explore countermeasures before a smart world is realized.
    In a recent study published in IEEE Network, a team of researchers led by Prof. Hyunbum Kim from Incheon National University, Korea, addresses such issues in relation to an AI-based, 5G-integrated virtual emotion recognition system called 5G-I-VEmoSYS, which detects human emotions using wireless signals and body movement. “Emotions are a critical characteristic of human beings and separate humans from machines, defining daily human activity. However, some emotions can also disrupt the normal functioning of a society and put people’s lives in danger, such as those of an unstable driver. Emotion detection technology thus has great potential for recognizing any disruptive emotion and in tandem with 5G and beyond-5G communication, warning others of potential dangers,” explains Prof. Kim. “For instance, in the case of the unstable driver, the AI-enabled driver system of the car can inform the nearest network towers, from where nearby pedestrians can be informed via their personal smart devices.”
    The virtual emotion system developed by Prof. Kim’s team, 5G-I-VEmoSYS, can recognize at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions. The system concerned with detection is called Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR, which relies on the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the system concerned with flow, called Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area. Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilizes a large amount of this virtual emotion data to create a virtual emotion map that can be utilized for threat detection and crime prevention.
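The three subsystems can be pictured as a toy pipeline. Everything below is invented for illustration: the subsystem names follow the paper, but the detection logic and data shapes are hypothetical stand-ins:

```python
from collections import Counter

EMOTIONS = ("joy", "pleasure", "neutral", "sadness", "anger")

def ai_vemobar(reflected_signal: float) -> str:
    """AI-VEmoBAR stand-in: classify an emotion from a wireless-reflection
    feature. Real detection is far richer; here we threshold one scalar."""
    return "anger" if reflected_signal > 0.8 else "neutral"

def ai_vemoflow(emotion: str, timestamp: int, area: str) -> dict:
    """AI-VEmoFLOW stand-in: route one (emotion, time, area) record."""
    return {"emotion": emotion, "time": timestamp, "area": area}

def ai_vemomap(records: list) -> dict:
    """AI-VEmoMAP stand-in: aggregate routed records into a per-area map
    that downstream threat detection could query."""
    emotion_map: dict = {}
    for r in records:
        emotion_map.setdefault(r["area"], Counter())[r["emotion"]] += 1
    return emotion_map

# Three detections in one area, two of them above the 'anger' threshold:
records = [ai_vemoflow(ai_vemobar(s), t, "plaza")
           for t, s in enumerate([0.9, 0.1, 0.95])]
vemap = ai_vemomap(records)
```

The security concerns raised later in the article map directly onto this pipeline: tampering at the detection stage, abuse of anonymity at the flow stage, and false alarms at the map stage.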
    A notable advantage of 5G-I-VEmoSYS is that it allows emotion detection without revealing the face or other private parts of the subjects, thereby protecting the privacy of citizens in public areas. Moreover, in private areas, it gives the user the choice to remain anonymous while providing information to the system. Furthermore, when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or relevant entities who can then take steps to prevent any potential crime or terrorism threats.
    However, the system suffers from serious security issues such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to authorities remains.
    While these concerns do put the system’s reliability at stake, Prof. Kim’s team are confident that they can be countered with further research. “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks,” explains Prof. Kim, “Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”

    Story Source:
    Materials provided by Incheon National University. Note: Content may be edited for style and length.

  • New study suggests better approach in search for COVID-19 drugs

    Research from the University of Kent, Goethe-University in Frankfurt am Main, and the Philipps-University in Marburg has provided crucial insights into the biological composition of SARS-CoV-2, the cause of COVID-19, revealing vital clues for the discovery of antiviral drugs.
    Researchers compared SARS-CoV-2 and the closely related virus SARS-CoV, the cause of the 2002/03 SARS outbreak. Despite being 80% biologically identical, the viruses differ in crucial properties. SARS-CoV-2 is more contagious and less deadly, with a fatality rate of 2% compared to SARS-CoV’s 10%. Moreover, SARS-CoV-2 can be spread by asymptomatic individuals, whereas SARS-CoV was only transmitted by those who were already ill.
    Most functions in cells are carried out by proteins, large molecules made up of amino acids. The amino acid sequence determines the function of a protein. Viruses encode proteins that reprogramme infected cells to produce more viruses. Despite the proteins of SARS-CoV-2 and SARS-CoV having largely the same amino acid sequences, the study identifies a small subset of amino acid sequence positions that differ between them and are responsible for the observed differences in the behaviour of the two viruses.
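The kind of comparison described here, locating the positions at which two aligned amino acid sequences disagree, can be sketched in a few lines. The sequences in the example are made up for illustration, not actual SARS-CoV proteins:

```python
def differing_positions(seq_a: str, seq_b: str) -> list:
    """Return the 1-based positions at which two aligned sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return [i + 1 for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

# Two made-up 7-residue sequences differing at positions 3 and 7:
diffs = differing_positions("MKTAYIA", "MKSAYIT")
```

Real comparative studies first align the sequences (insertions and deletions shift positions) and then ask which of the differing positions actually change protein behaviour, which is the hard part the researchers address.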
    Crucially, these dissimilarities between SARS-CoV-2 and SARS-CoV also result in different sensitivities to drugs for the treatment of COVID-19. This is vitally important, as many attempts to identify COVID-19 drugs are based on drug response data from other coronaviruses like SARS-CoV. However, the study findings show that the effectiveness of drugs against SARS-CoV or other coronaviruses does not indicate their effectiveness against SARS-CoV-2.
    Martin Michaelis, Professor of Molecular Medicine at Kent’s School of Biosciences, said: “We have now a much better idea how the small differences between SARS-CoV and SARS-CoV-2 can have such a massive impact on the behaviour of these viruses and the diseases that they cause. Our data also show that we must be more careful with the experimental systems that are used for the discovery of drugs for COVID-19. Only research using SARS-CoV-2 produces reliable results.”
    Professor Jindrich Cinatl, Goethe-University, said: “Since the COVID-19 pandemic started, I have been amazed that two so similar viruses can behave so differently. Now we start to understand this. This also includes a better idea of what we have to do to get better at finding drugs for COVID-19.”

    Story Source:
    Materials provided by University of Kent. Original written by Sam Wood. Note: Content may be edited for style and length.

  • Wafer-scale production of graphene-based photonic devices

    Our world needs reliable telecommunications more than ever before. However, classic devices have limitations in terms of size and cost and, especially, power consumption — which is directly related to greenhouse gas emissions. Graphene could change this and transform the future of broadband. Now, Graphene Flagship researchers have devised a wafer-scale fabrication technology that, thanks to predetermined graphene single-crystal templates, allows for integration into silicon wafers, enabling automation and paving the way to large-scale production.
    This work, published in the journal ACS Nano, is a great example of a collaboration fostered by the Graphene Flagship ecosystem. It counted on the participation of several Graphene Flagship partner institutions like CNIT and the Istituto Italiano di Tecnologia (IIT), in Italy, the Cambridge Graphene Centre at the University of Cambridge, UK, and Graphene Flagship Associated Member and spin-off CamGraphIC. Furthermore, Graphene Flagship-linked third party INPHOTEC and researchers at the Tecip Institute in Italy provided the graphene photonics integrated circuits fabrication. Through the Wafer-scale Integration Work Package and Spearhead Projects such as Metrograph, the Graphene Flagship fosters collaboration between academia and leading industries to develop high-technology readiness level prototypes and products, until they can reach market exploitation.
    The new fabrication technique is enabled by the adoption of single-crystal graphene arrays. “Traditionally, when aiming at wafer-scale integration, one grows a wafer-sized layer of graphene and then transfers it onto silicon,” explains Camilla Coletti, coordinator of IIT’s Graphene Labs, who co-led the study. “Transferring an atom-thick layer of graphene over wafers while maintaining its integrity and quality is challenging,” she adds. “The crystal seeding, growth and transfer technique adopted in this work ensures wafer-scale high-mobility graphene exactly where it is needed: a great advantage for the scalable fabrication of photonic devices like modulators,” continues Coletti.
    It is estimated that, by 2023, the world will see over 28 billion connected devices, most of which will require 5G. These challenging requirements will demand new technologies. “Silicon and germanium alone have limitations; however, graphene provides many advantages,” says Marco Romagnoli from Graphene Flagship partner CNIT, linked third party INPHOTEC, and associated member CamGraphIC, who co-led the study. “This methodology allows us to obtain over 12,000 graphene crystals in one wafer, matching the exact configuration and disposition we need for graphene-enabled photonic devices,” he adds. Furthermore, the process is compatible with existing automated fabrication systems, which will accelerate its industrial uptake and implementation.
    In another publication in Nature Communications, researchers from Graphene Flagship partners CNIT, Istituto Italiano di Tecnologia (IIT), in Italy, Nokia — including their teams in Italy and Germany, Graphene Flagship-linked third party INPHOTEC and researchers at Tecip, used this approach to demonstrate a practical implementation: “We used our technique to design high-speed graphene photodetectors,” says Coletti. “Together, these advances will accelerate the commercial implementation of graphene-based photonic devices,” she adds.
    Graphene-enabled photonic devices offer several advantages. They absorb light from ultraviolet to the far-infrared — this allows for ultra-broadband communications. Graphene devices can have ultra-high mobility of carriers — electrons and holes — enabling data transmission that exceeds the best performing ethernet networks, breaking the barrier of 100 gigabits per second.
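To give a rough sense of what "ultraviolet to far-infrared" spans, photon energy follows E = hc/λ. The constants below are standard SI values, and the wavelengths are generic band edges chosen for illustration rather than figures from the article:

```python
PLANCK = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8          # speed of light in vacuum, m/s
EV = 1.602176634e-19      # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in electronvolts: E = h*c / lambda."""
    return PLANCK * C / wavelength_m / EV

# Generic band edges: ~0.3 um (ultraviolet) down to ~10 um (far-infrared).
uv, far_ir = photon_energy_ev(0.3e-6), photon_energy_ev(10e-6)
```

Absorbing across a range from a few electronvolts down to roughly a tenth of an electronvolt is what makes a single graphene detector usable across many communication bands at once.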
    Reducing the energy demands of telecom and datacom is fundamental to providing more sustainable solutions. At present, information and communication technologies are already responsible for almost 4% of all greenhouse gas emissions, comparable to the carbon footprint of the airline industry, and this share is projected to increase to around 14% by 2040. “In graphene, almost all the energy of light can be converted into electric signals, which massively reduces power consumption and maximises efficiency,” adds Romagnoli.
    Frank Koppens, Graphene Flagship Leader for Photonics and Optoelectronics, says: “This is the first time that high-quality graphene has been integrated on the wafer-scale. The work shows direct relevance by revealing high-yield and high-speed absorption modulators. These impressive achievements bring commercialisation of graphene devices into 5G communications very close.”
    Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship and Chair of its Management Panel, added: “This work is a major milestone for the Graphene Flagship. A close collaboration between academic and industrial partners has finally developed a wafer-scale process for graphene integration. The Graphene Foundry is no longer a distant goal: it starts today.”

    Story Source:
    Materials provided by Graphene Flagship. Original written by Fernando Gomollón-Bel. Note: Content may be edited for style and length.

  • Smartphone app to change your personality

    Personality traits such as conscientiousness or sociability are patterns of experience and behavior that can change throughout our lives. Individual changes usually take place slowly as people gradually adapt to the demands of society and their environment. However, it is unclear whether certain personality traits can also be psychologically influenced in a short-term and targeted manner.
    Researchers from the University of Zurich, the University of St. Gallen, Brandeis University, the University of Illinois, and ETH Zurich have now investigated this question using a digital intervention. In their study, around 1,500 participants were provided with a specially developed smartphone app for three months, and the researchers then assessed whether and how their personalities had changed. The five major personality traits of openness, conscientiousness, sociability (extraversion), considerateness (agreeableness), and emotional vulnerability (neuroticism) were examined. The app included elements of knowledge transfer, behavioral and resource activation, self-reflection, and feedback on progress. All communication with the digital coach and companion (a chatbot) took place virtually. The chatbot supported the participants on a daily basis to help them make the desired changes.
    Changes after three months
    The majority of participants said that they wanted to reduce their emotional vulnerability, increase their conscientiousness, or increase their extraversion. Those who participated in the intervention for more than three months reported greater success in achieving their change goals than the control group who took part for only two months. Close friends and family members also observed changes in those participants who wanted to increase expression of a certain personality trait. However, for those who wanted to reduce expression of a trait, the people close to them noticed little change. This group mainly comprised those participants who wanted to become less emotionally vulnerable, an inner process that is less observable from the outside.
    “The participants and their friends alike reported that three months after the end of the intervention, the personality changes brought about by using the app had persisted,” says Mathias Allemand, professor of psychology at UZH. “These surprising results show that we are not just slaves to our personality, but that we can deliberately make changes to routine experience and behavior patterns.”
    Important for health promotion and prevention
    The findings also indicate that development of the personality structure can happen more quickly than was previously believed. “In addition, change processes accompanied by digital tools can be used in everyday life,” explains first author Mirjam Stieger of Brandeis University in the USA, who did her doctorate at UZH. However, more evidence of the effectiveness of digital interventions is needed. For example, it remains unclear whether the changes achieved are permanent or merely reflect temporary fluctuations.
    The present findings are not only interesting for research, but could also find application in a variety of areas of life. In health promotion and prevention, for example, such apps could boost the resources of individuals, as people’s attitude to their situation and personality traits such as conscientiousness have an influence on health and healthy aging.
    The Smartphone App PEACH (PErsonality coACH)
    The smartphone application PEACH was developed as part of a project funded by the Swiss National Science Foundation (SNSF) to study personality change through a digital intervention. The application provides scalable communication capabilities using a digital agent that mimics a conversation with a human. The PEACH app also includes digital journaling, reminders of individual goals, video clips, opportunities for self-reflection and feedback on progress. Weekly core topics and small interventions aim to address and activate the desired changes and thus the development of personality traits.
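The "weekly core topics" mechanic could be sketched as a simple rotation. The topic names below are invented placeholders for illustration, not the actual PEACH content:

```python
import datetime

# Invented placeholder topics; the real PEACH content differs.
WEEKLY_TOPICS = ["goal setting", "self-reflection", "behavior activation",
                 "resource activation", "feedback on progress"]

def topic_for(enrolled: datetime.date, today: datetime.date) -> str:
    """Rotate through the weekly core topics, one per week since enrollment."""
    week = (today - enrolled).days // 7
    return WEEKLY_TOPICS[week % len(WEEKLY_TOPICS)]

# Week 0 gets the first topic; two weeks in, the third:
topic = topic_for(datetime.date(2021, 1, 4), datetime.date(2021, 1, 18))
```

In the real app, each weekly topic is paired with daily chatbot prompts and micro-interventions rather than being a bare label like this.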
    The app was developed as a research tool. In the future, however, it is thought that research apps such as PEACH will be made widely available.

    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.

  • Silicon chip provides low-cost solution to help machines see the world clearly

    Researchers in Southampton and San Francisco have developed the first compact 3D LiDAR imaging system that can match and exceed the performance and accuracy of the most advanced mechanical systems currently in use.
    3D LiDAR can provide accurate imaging and mapping for many applications; it is the “eyes” for autonomous cars and is used in facial recognition software and by autonomous robots and drones. Accurate imaging is essential for machines to map and interact with the physical world, but the size and cost of the technology currently needed have limited LiDAR’s use in commercial applications.
    Now a team of researchers from Pointcloud Inc in San Francisco and the University of Southampton’s Optoelectronics Research Centre (ORC) have developed a new, integrated system, which uses silicon photonic components and CMOS electronic circuits in the same microchip. The prototype they have developed would be a low-cost solution and could pave the way to large-volume production of low-cost, compact and high-performance 3D imaging cameras for use in robotics, autonomous navigation systems, mapping of building sites to increase safety, and in healthcare.
    Graham Reed, Professor of Silicon Photonics within the ORC said, “LIDAR has been promising a lot but has not always delivered on its potential in recent years because, although experts have recognised that integrated versions can scale down costs, the necessary performance has not been there. Until now.
    “The silicon photonics system we have developed provides much higher accuracy at distance compared to other chip-based LIDAR systems to date, and most mechanical versions, showing that the much sought-after integrated system for LIDAR is viable.”
    Remus Nicolaescu, CEO of Pointcloud Inc added, “The combination of high performance and low cost manufacturing, will accelerate existing applications in autonomy and augmented reality, as well as open new directions, such as industrial and consumer digital twin applications requiring high depth accuracy, or preventive healthcare through remote behavioural and vital signs monitoring requiring high velocity accuracy.
    “The collaboration with the world class team at the ORC has been instrumental, and greatly accelerated the technology development.”
    The latest tests of the prototype, published in the journal Nature, show that it has an accuracy of 3.1 millimetres at a distance of 75 metres.
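To put that figure in perspective: in a simple time-of-flight picture, depth follows d = c·t/2, so 3.1 mm of depth accuracy corresponds to resolving round-trip timing to roughly 20 picoseconds. This is a back-of-envelope illustration of the measurement challenge, not the coherent-detection method the chip actually uses:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_time_precision(depth_accuracy_m: float) -> float:
    """Timing precision equivalent to a given depth accuracy:
    d = c*t/2  =>  dt = 2*dd / c."""
    return 2.0 * depth_accuracy_m / C

dt = round_trip_time_precision(3.1e-3)  # about 2.1e-11 s, i.e. ~21 ps
```

Timing electronics at that precision are hard and expensive, which is one reason coherent (interferometric) detection schemes like the one in this chip are attractive.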
    Among the problems faced by previous integrated systems is the difficulty of providing a dense array of pixels that can be easily addressed; this has restricted earlier designs to fewer than 20 pixels. The new system, by contrast, is the first large-scale 2D coherent detector array, consisting of 512 pixels. The research teams are now working to extend the pixel arrays and the beam-steering technology to make the system even better suited to real-world applications and further improve performance.

    Story Source:
    Materials provided by University of Southampton. Note: Content may be edited for style and length.

  • Computational medicine: Moving from uncertainty to precision

    Individual choices in medicine carry a certain amount of uncertainty.
    An innovative partnership at The University of Texas at Austin takes aim at that uncertainty by applying state-of-the-art computation to medical care at the individual level.
    “Medicine in its essence is decision-making under uncertainty, decisions about tests and treatments,” said Radek Bukowski, MD, PhD, professor and associate chair of Investigation and Discovery in the Department of Women’s Health at Dell Medical School at UT Austin.
    “The human body and the healthcare system are complex systems made of a vast number of intensely interacting elements,” he said. “In such complex systems, there are many different pathways along which an outcome can occur. Our bodies are robust, but this also makes us very individualized, and the practice of medicine challenging. Everyone is made of different combinations of risk factors and protective characteristics. This is why precision medicine is paramount going forward.”
    To that end, in the January 2021 edition of the American Journal of Obstetrics & Gynecology, experts at Dell Med, the Oden Institute for Computational Engineering and Sciences (Oden Institute), and the Texas Advanced Computing Center (TACC), along with stakeholders across healthcare, industry, and government, stated that the emergence of computational medicine will revolutionize the future of medicine and health care. Craig Cordola of Ascension and Christopher Zarins of HeartFlow co-authored this editorial review with Bukowski and others.
    According to Bukowski, this interdisciplinary group provides a unique combination of resources that are poised to make Texas a leader in providing computational solutions to today’s and tomorrow’s health care issues.
    “At UT Austin we’re fortunate to have found ourselves at a very opportune point in time for computational medical research,” Bukowski said. “The Oden Institute has world-class expertise in mathematical modeling, applied math, and computational medicine; TACC is home to the world’s largest supercomputer for open science and is also committed to improving medical care, including outcomes for women and babies.”
    Powered by such collaborations, the emerging discipline of computational medicine focuses on developing quantitative approaches to understanding the mechanisms, diagnosis, and treatment of human disease through methods more commonly found in mathematics, engineering, and computational science. These computational approaches are well-suited to modeling complex systems such as the human body.
    An On-Point Area of Study for Obstetrics
    While computation is pivotal to all domains in medicine, it is especially promising in obstetrics because it concerns at least two patients — mother and baby, who frequently have conflicting interests, making medical decision-making particularly difficult and the stakes exceptionally high.
    According to state Rep. Donna Howard, D-Austin, a co-author of the editorial review, Texas legislators should be concerned about the unacceptably high rate of maternal morbidity and mortality in the state.
    “When I became aware of the efforts to bring computational medical approaches to addressing maternal morbidity and mortality, I was immediately intrigued,” Howard said. “And when I learned of the interdisciplinary expertise that has found itself conveniently positioned to create this new frontier of medicine, I was sold.”
    Individualized medicine is becoming possible now because of advancements in computing power and mathematical modeling that can solve problems which were previously unsolvable.
    Case in point: in 2018 the National Science Foundation awarded UT Austin a $1.2 million grant to support research using computational medicine and smartphones to monitor the activity and behavior of 1,000 pregnant women in the Austin area.
    In particular, the growing array of data sources, including health records, administrative databases, randomized controlled trials, and internet-connected sensors, provides a wealth of information at multiple timescales with which to develop sophisticated data-driven models and inform theoretical formulations.
    “When combined with analysis platforms via high performance computing, we now have the capability to provide patients and medical providers analysis of outcomes and risk assessment on a per-individual basis to improve the shared decision making process,” Bukowski concluded.

  • New wearable device turns the body into a battery

    Researchers at the University of Colorado Boulder have developed a new, low-cost wearable device that transforms the human body into a biological battery.
    The device, described today in the journal Science Advances, is stretchy enough that you can wear it like a ring, a bracelet or any other accessory that touches your skin. It also taps into a person’s natural heat — employing thermoelectric generators to convert the body’s internal temperature into electricity.
    “In the future, we want to be able to power your wearable electronics without having to include a battery,” said Jianliang Xiao, senior author of the new paper and an associate professor in the Paul M. Rady Department of Mechanical Engineering at CU Boulder.
    The concept may sound like something out of The Matrix film series, in which a race of robots has enslaved humans to harvest their precious organic energy. Xiao and his colleagues aren’t that ambitious: their devices can generate about 1 volt for every square centimeter of skin space — less voltage per area than what most existing batteries provide but still enough to power electronics like watches or fitness trackers.
    Scientists have previously experimented with similar thermoelectric wearable devices, but Xiao’s is stretchy, can heal itself when damaged and is fully recyclable — making it a cleaner alternative to traditional electronics.
    “Whenever you use a battery, you’re depleting that battery and will, eventually, need to replace it,” Xiao said. “The nice thing about our thermoelectric device is that you can wear it, and it provides you with constant power.”
    High-tech bling
    The project isn’t Xiao’s first attempt to meld human with robot. He and his colleagues previously experimented with designing “electronic skin,” wearable devices that look and behave much like real human skin. That android epidermis, however, has to be connected to an external power source to work.
    Until now. The group’s latest innovation begins with a base made out of a stretchy material called polyimine. The scientists then stick a series of thin thermoelectric chips into that base, connecting them all with liquid metal wires. The final product looks like a cross between a plastic bracelet and a miniature computer motherboard, or maybe a techy diamond ring.
    “Our design makes the whole system stretchable without introducing much strain to the thermoelectric material, which can be really brittle,” Xiao said.
    Just pretend that you’re out for a jog. As you exercise, your body heats up, and that heat will radiate out to the cool air around you. Xiao’s device captures that flow of energy rather than letting it go to waste.
    “The thermoelectric generators are in close contact with the human body, and they can use the heat that would normally be dissipated into the environment,” he said.
    Lego blocks
    He added that you can easily boost that power by adding in more blocks of generators. In that sense, he compares his design to a popular children’s toy.
    “What I can do is combine these smaller units to get a bigger unit,” he said. “It’s like putting together a bunch of small Lego pieces to make a large structure. It gives you a lot of options for customization.”
    Xiao and his colleagues calculated, for example, that a person taking a brisk walk could use a device the size of a typical sports wristband to generate about 5 volts of electricity — which is more than what many watch batteries can muster.
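Taking the reported roughly 1 volt per square centimeter at face value, the "Lego" scaling described above is just linear in covered area. A back-of-envelope sketch under that assumption, not a model of the actual device physics:

```python
def stack_voltage(volts_per_cm2: float, area_cm2: float) -> float:
    """Back-of-envelope: total voltage if output scales linearly with
    the skin area covered by thermoelectric units."""
    return volts_per_cm2 * area_cm2

# A ~5 cm^2 wristband at ~1 V/cm^2 lines up with the quoted ~5 V figure:
wristband = stack_voltage(1.0, 5.0)
```

Real output would also depend on the temperature difference between skin and air, which is why the example above is only an area-scaling illustration.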
    Like Xiao’s electronic skin, the new devices are as resilient as biological tissue. If your device tears, for example, you can pinch together the broken ends, and they’ll seal back up in just a few minutes. And when you’re done with the device, you can dunk it into a special solution that will separate out the electronic components and dissolve the polyimine base — each and every one of those ingredients can then be reused.
    “We’re trying to make our devices as cheap and reliable as possible, while also having as close to zero impact on the environment as possible,” Xiao said.
    While there are still kinks to work out in the design, he thinks that his group’s devices could appear on the market in five to 10 years. Just don’t tell the robots. We don’t want them getting any ideas.
    Coauthors on the new paper include researchers from China’s Harbin Institute of Technology, Southeast University, Zhejiang University, Tongji University and Huazhong University of Science and Technology.
    Video: https://www.youtube.com/watch?v=hexScHvEFwQ&feature=emb_logo