More stories

  • Detecting single molecules and diagnosing diseases with a smartphone

    Biomarkers play a central role in the diagnosis of disease and assessment of its course. Among the markers now in use are genes, proteins, hormones, lipids and other classes of molecules. Biomarkers can be found in blood, cerebrospinal fluid, urine and various types of tissue, but most of them have one thing in common: They occur in extremely low concentrations, and are therefore technically challenging to detect and quantify.
    Many detection procedures use molecular probes, such as antibodies or short nucleic-acid sequences, which are designed to bind to specific biomarkers. When a probe recognizes and binds to its target, chemical or physical reactions give rise to fluorescence signals. Such methods work well, provided they are sensitive enough to recognize the relevant biomarker in a high percentage of all patients who carry it in their blood. In addition, before such fluorescence-based tests can be used in practice, the biomarkers themselves or their signals must be amplified. The ultimate goal is to enable medical screening to be carried out directly on patients, without having to send the samples to a distant laboratory for analysis.
    Molecular antennas amplify fluorescence signals
    Philip Tinnefeld, who holds a Chair in Physical Chemistry at LMU, has developed a strategy for determining levels of biomarkers present in low concentrations. He has succeeded in coupling DNA probes to tiny particles of gold or silver. Pairs of particles (‘dimers’) act as nano-antennas that amplify the fluorescence signals. The trick works as follows: Interactions between the nanoparticles and incoming light waves intensify the local electromagnetic fields, and this in turn leads to a massive increase in the amplitude of the fluorescence. In this way, bacteria that contain antibiotic resistance genes and even viruses can be specifically detected.
    “DNA-based nano-antennas have been studied for the last few years,” says Kateryna Trofymchuk, joint first author of the study. “But the fabrication of these nanostructures presents challenges.” Philip Tinnefeld’s research group has now succeeded in configuring the components of their nano-antennas more precisely, and in positioning the DNA molecules that serve as capture probes at the site of signal amplification. Together, these modifications enable the fluorescence signal to be more effectively amplified. Furthermore, in the minuscule volume involved, which is on the order of zeptoliters (a zeptoliter equals 10⁻²¹ of a liter), even more molecules can be captured.
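    To get a feel for that scale, a quick back-of-the-envelope calculation shows how few target molecules a zeptoliter holds on average at typical biomarker concentrations (the concentrations below are illustrative, not values from the study):

    ```python
    # Back-of-the-envelope: expected number of target molecules inside a
    # zeptoliter detection volume. Concentrations are illustrative only.

    AVOGADRO = 6.022e23      # molecules per mole
    VOLUME_L = 1e-21         # one zeptoliter, in liters

    for label, molar in [("1 micromolar", 1e-6),
                         ("1 nanomolar", 1e-9),
                         ("1 picomolar", 1e-12)]:
        expected = molar * VOLUME_L * AVOGADRO
        print(f"{label}: about {expected:.1e} molecules per zeptoliter")
    ```

    At such concentrations the detection volume is almost always empty, which is one reason precise positioning of the capture probes at the amplification hotspot is so valuable.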
    The high degree of positioning control is made possible by DNA nanotechnology, which exploits the structural properties of DNA to guide the assembly of all sorts of nanoscale objects — in extremely large numbers. “In one sample, we can simultaneously produce billions of these nano-antennas, using a procedure that basically consists of pipetting a few solutions together,” says Trofymchuk.
    Routine diagnostics on the smartphone
    “In the future,” says Viktorija Glembockyte, also joint first author of the publication, “our technology could be utilized for diagnostic tests even in areas in which access to electricity or laboratory equipment is restricted. We have shown that we can directly detect small fragments of DNA in blood serum, using a portable, smartphone-based microscope that runs on a conventional USB power pack to monitor the assay.” Newer smartphones are usually equipped with pretty good cameras. Apart from that, all that’s needed is a laser and a lens — two readily available and cheap components. The LMU researchers used this basic recipe to construct their prototypes.
    They went on to demonstrate that DNA fragments that are specific for antibiotic resistance genes in bacteria could be detected by this set-up. But the assay could be easily modified to detect a whole range of interesting target types, such as viruses. Tinnefeld is optimistic: “The past year has shown that there is always a need for new and innovative diagnostic methods, and perhaps our technology can one day contribute to the development of an inexpensive and reliable diagnostic test that can be carried out at home.”

    Story Source:
    Materials provided by Ludwig-Maximilians-Universität München.

  • New machine learning theory raises questions about nature of science

    A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities, which are designed to harvest on Earth the fusion energy that powers the sun and stars.
    The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”
    Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a “serving algorithm,” then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
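    As a purely illustrative sketch of that “data to data” idea (this is not Qin’s algorithm, which learns a discrete field theory), one can fit a discrete-time map directly to observed orbital states and roll it forward to predict later states. For a single circular orbit the true step-to-step map happens to be a rotation, so even a plain least-squares fit recovers it from a short run of observations:

    ```python
    import numpy as np

    # Illustrative only: learn a step map s_{t+1} = s_t @ A from "observations"
    # of a circular orbit, then predict future states with the fitted map.
    # No law of gravitation is used at prediction time.

    dt, omega = 0.1, 1.0
    t = np.arange(0, 20, dt)
    states = np.stack([np.cos(omega * t),                # x
                       np.sin(omega * t),                # y
                       -omega * np.sin(omega * t),       # vx
                       omega * np.cos(omega * t)]).T     # vy

    train = states[:50]                                  # "past observations"
    A, *_ = np.linalg.lstsq(train[:-1], train[1:], rcond=None)

    s, preds = train[-1], []
    for _ in range(100):                                 # predict 100 future steps
        s = s @ A
        preds.append(s)

    err = np.max(np.abs(np.array(preds) - states[50:150]))
    print(f"max error over 100 predicted steps: {err:.1e}")
    ```

    Real planetary data are not this tidy, and the step map for an arbitrary orbit is nonlinear, which is where the machine-learning machinery of the paper comes in; the sketch only illustrates that accurate prediction is possible without writing down Newton’s laws.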
    The program does not happen upon accurate predictions by accident. “Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system,” said Joshua Burby, a physicist at the DOE’s Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin’s mentorship. “The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really ‘learns’ the laws of physics.”
    Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.
    The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless “translate” a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

    Qin was inspired in part by Oxford philosopher Nick Bostrom’s thought experiment proposing that the universe might be a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.
    The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.
    Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.
    Fusion, the power that drives the sun and stars, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe — to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
    “In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear,” Qin said. “In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations.”
    This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

    “I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”
    Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” Palmerduca said.
    The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”
    Support for this research came from the DOE Office of Science (Fusion Energy Sciences).

  • Applying quantum computing to a particle process

    A team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) used a quantum computer to successfully simulate an aspect of particle collisions that is typically neglected in high-energy physics experiments, such as those that occur at CERN’s Large Hadron Collider.
    The quantum algorithm they developed accounts for the complexity of parton showers, which are complicated bursts of particles produced in the collisions through cascades of particle production and decay.
    Classical algorithms typically used to model parton showers, such as the popular Markov Chain Monte Carlo algorithms, overlook several quantum-based effects, the researchers note in a study published online Feb. 10 in the journal Physical Review Letters that details their quantum algorithm.
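    For readers unfamiliar with what a classical shower algorithm does, here is a deliberately oversimplified, purely classical branching sketch (the emission probability and step count are invented; this is not the model from the paper): at each step, every particle in the shower may emit an additional particle with some probability, and the simulation tracks the resulting multiplicity.

    ```python
    import random

    # Toy classical "shower": each step, every particle may emit a new one.
    # Numbers are illustrative; real parton-shower generators use
    # physics-derived splitting probabilities and kinematics.

    def toy_shower(rng, steps=4, p_emit=0.3):
        n = 1                                   # start from a single particle
        for _ in range(steps):
            n += sum(1 for _ in range(n) if rng.random() < p_emit)
        return n

    rng = random.Random(0)
    samples = [toy_shower(rng) for _ in range(10_000)]
    print("average multiplicity:", sum(samples) / len(samples))
    ```

    In a sketch like this, each emission is an independent classical coin flip; the quantum effects the Berkeley Lab algorithm captures arise because different emission histories can interfere, which is why the quantum computer “summed up all of the possible histories” rather than sampling them one at a time.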
    “We’ve essentially shown that you can put a parton shower on a quantum computer with efficient resources,” said Christian Bauer, who is Theory Group leader and serves as principal investigator for quantum computing efforts in Berkeley Lab’s Physics Division, “and we’ve shown there are certain quantum effects that are difficult to describe on a classical computer that you could describe on a quantum computer.” Bauer led the recent study.
    Their approach meshes quantum and classical computing: It uses the quantum solution only for the part of the particle collisions that cannot be addressed with classical computing, and uses classical computing to address all of the other aspects of the particle collisions.
    Researchers constructed a so-called “toy model,” a simplified theory that can be run on an actual quantum computer while still containing enough complexity to prevent it from being simulated using classical methods.

    “What a quantum algorithm does is compute all possible outcomes at the same time, then picks one,” Bauer said. “As the data gets more and more precise, our theoretical predictions need to get more and more precise. And at some point these quantum effects become big enough that they actually matter,” and need to be accounted for.
    In constructing their quantum algorithm, researchers factored in the different particle processes and outcomes that can occur in a parton shower, accounting for particle state, particle emission history, whether emissions occurred, and the number of particles produced in the shower, including separate counts for bosons and for two types of fermions.
    The quantum computer “computed these histories at the same time, and summed up all of the possible histories at each intermediate stage,” Bauer noted.
    The research team used the IBM Q Johannesburg chip, a quantum computer with 20 qubits. Each qubit, or quantum bit, is capable of representing a zero, one, and a state of so-called superposition in which it represents both a zero and a one simultaneously. This superposition is what makes qubits uniquely powerful compared to standard computing bits, which can represent a zero or one.
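    A minimal numerical illustration of that statement, using nothing but a two-component state vector (not any particular quantum-computing library): an equal superposition has amplitude 1/√2 on each basis state and yields 0 or 1 with equal probability when measured.

    ```python
    import numpy as np

    # A single qubit as a 2-component complex state vector.
    zero = np.array([1, 0], dtype=complex)     # |0>
    one = np.array([0, 1], dtype=complex)      # |1>
    plus = (zero + one) / np.sqrt(2)           # equal superposition of 0 and 1

    probs = np.abs(plus) ** 2                  # Born rule: |amplitude|^2
    print("P(0), P(1) =", probs)               # -> [0.5 0.5]

    rng = np.random.default_rng(0)
    shots = rng.choice([0, 1], size=1000, p=probs)
    print("fraction of 1s over 1000 measurements:", shots.mean())
    ```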
    The researchers constructed a four-step quantum circuit using five qubits; the algorithm requires 48 operations. They noted that noise in the quantum computer is likely to blame for differences between its results and those of an ideal quantum simulator.

    While the team’s pioneering efforts to apply quantum computing to a simplified portion of particle collider data are promising, Bauer said that he doesn’t expect quantum computers to have a large impact on the high-energy physics field for several years — at least until the hardware improves.
    Quantum computers will need more qubits and much lower noise to have a real breakthrough, Bauer said. “A lot depends on how quickly the machines get better.” But he noted that there is a huge and growing effort to make that happen, and it’s important to start thinking about these quantum algorithms now to be ready for the coming advances in hardware.
    Such quantum leaps in technology are a prime focus of an Energy Department-supported collaborative quantum R&D center that Berkeley Lab is a part of, called the Quantum Systems Accelerator.
    As hardware improves, it will be possible to account for more types of bosons and fermions in the quantum algorithm, which will improve its accuracy.
    Such algorithms should eventually have broad impact in the high-energy physics field, he said, and could also find application in heavy-ion-collider experiments.
    Also participating in the study were Benjamin Nachman and Davide Provasoli of the Berkeley Lab Physics Division, and Wibe de Jong of the Berkeley Lab Computational Research Division.
    This work was supported by the U.S. Department of Energy Office of Science. It used resources at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science user facility.

  • Spontaneous quantum error correction demonstrated

    To build a universal quantum computer from fragile quantum components, effective implementation of quantum error correction (QEC) is an essential requirement and a central challenge. In quantum computing, which has the potential to solve scientific problems beyond the scope of supercomputers, QEC is used to protect quantum information from errors caused by various sources of noise.
    Published in the journal Nature, research co-authored by University of Massachusetts Amherst physicist Chen Wang, graduate students Jeffrey Gertler and Shruti Shirol, and postdoctoral researcher Juliang Li takes a step toward building a fault-tolerant quantum computer. They have realized a novel type of QEC where the quantum errors are spontaneously corrected.
    Today’s computers are built with transistors representing classical bits (0’s or 1’s). Quantum computing is an exciting new paradigm of computation using quantum bits (qubits) where quantum superposition can be exploited for exponential gains in processing power. Fault-tolerant quantum computing may immensely advance new materials discovery, artificial intelligence, biochemical engineering and many other disciplines.
    Since qubits are intrinsically fragile, the most outstanding challenge of building such powerful quantum computers is efficient implementation of quantum error correction. Existing demonstrations of QEC are active, meaning that they require periodically checking for errors and immediately fixing them, which is very demanding in hardware resources and hence hinders the scaling of quantum computers.
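    To make “periodically checking for errors and immediately fixing them” concrete, here is a classical stand-in for one active error-correction cycle, the textbook three-bit repetition code (this is not the scheme demonstrated in the paper): the logical bit is stored in three copies, parity checks locate a single flipped copy, and the flip is undone.

    ```python
    import random

    # One active error-correction cycle on a 3-bit repetition code.
    # A classical stand-in for illustration; real QEC must protect quantum
    # states without directly reading out the encoded information.

    def encode(bit):
        return [bit, bit, bit]

    def noisy(bits, p_flip, rng):
        return [b ^ (rng.random() < p_flip) for b in bits]

    def correct(bits):
        s1, s2 = bits[0] ^ bits[1], bits[1] ^ bits[2]   # parity checks
        if (s1, s2) == (1, 0):
            bits[0] ^= 1
        elif (s1, s2) == (1, 1):
            bits[1] ^= 1
        elif (s1, s2) == (0, 1):
            bits[2] ^= 1
        return bits

    rng = random.Random(1)
    trials = 100_000
    failures = sum(correct(noisy(encode(0), 0.05, rng)) != [0, 0, 0]
                   for _ in range(trials))
    print("physical flip rate: 0.05, logical failure rate:", failures / trials)
    ```

    The point of the UMass Amherst result is that this kind of check-and-fix loop, which normally demands substantial hardware, can instead happen spontaneously through carefully engineered dissipation.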
    In contrast, the researchers’ experiment achieves passive QEC by tailoring the friction (or dissipation) experienced by the qubit. Because friction is commonly considered the nemesis of quantum coherence, this result may appear quite surprising. The trick is that the dissipation has to be designed specifically in a quantum manner. This general strategy has been known in theory for about two decades, but a practical way to obtain such dissipation and put it in use for QEC has been a challenge.
    “Although our experiment is still a rather rudimentary demonstration, we have finally fulfilled this counterintuitive theoretical possibility of dissipative QEC,” says Chen. “Looking forward, the implication is that there may be more avenues to protect our qubits from errors and do so less expensively. Therefore, this experiment raises the outlook of potentially building a useful fault-tolerant quantum computer in the mid to long run.”
    Chen describes in layman’s terms how strange the quantum world can be. “As in Austrian physicist Erwin Schrödinger’s famous (or infamous) example, a cat packed in a closed box can be dead or alive at the same time. Each logical qubit in our quantum processor is very much like a mini-Schrödinger’s cat. In fact, we quite literally call it a ‘cat qubit.’ Having lots of such cats can help us solve some of the world’s most difficult problems.
    “Unfortunately, it is very difficult to keep a cat staying that way since any gas, light, or anything else leaking into the box will destroy the magic: The cat will become either dead or just a regular live cat,” explains Chen. “The most straightforward strategy to protect a Schrödinger’s cat is to make the box as tight as possible, but that also makes it harder to use it for computation. What we just demonstrated was akin to painting the inside of the box in a special way, and that somehow helps the cat better survive the inevitable harm of the outside world.”

    Story Source:
    Materials provided by University of Massachusetts Amherst.

  • Mathematical modeling suggests kids half as susceptible to COVID-19 as adults

    A new computational analysis suggests that people under the age of 20 are about half as susceptible to COVID-19 infection as adults, and they are less likely to infect others. Itai Dattner of the University of Haifa, Israel, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Earlier studies have found differences in symptoms and the clinical course of COVID-19 in children compared to adults. Others have reported that a lower proportion of children are diagnosed compared to older age groups. However, only a few studies have compared transmission patterns between age groups, and their conclusions are not definitive.
    To better understand susceptibility and infectivity of children, Dattner and colleagues fitted mathematical and statistical models of transmission within households to a dataset of COVID-19 testing results from the dense city of Bnei Brak, Israel. The dataset covered 637 households whose members all underwent PCR testing for active infection in spring of 2020. Some individuals also received serology testing for SARS-CoV-2 antibodies.
    By adjusting model parameters to fit the data, the researchers found that people under 20 are 43 percent as susceptible as people over 20. With an infectivity estimated at 63 percent of that of adults, children are also less likely to spread COVID-19 to others. The researchers also found that children are more likely than adults to receive a negative PCR result despite actually being infected.
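    A common way such age-dependent factors enter household models is multiplicatively: the chance that an infected person infects a household contact is scaled by the infector’s relative infectivity and the contact’s relative susceptibility. As a rough illustration only (the 10 percent baseline below is hypothetical, and the actual study fits a full household transmission model rather than simple products), the reported 43 percent and 63 percent factors would combine like this:

    ```python
    # Illustrative multiplicative scaling of a per-contact infection probability.
    # The baseline is hypothetical; 0.43 and 0.63 are the relative susceptibility
    # and infectivity of under-20s reported in the study.

    baseline = 0.10                 # hypothetical adult-to-adult probability
    susc_child = 0.43               # relative susceptibility of under-20s
    inf_child = 0.63                # relative infectivity of under-20s

    routes = {
        "adult -> adult": baseline,
        "adult -> child": baseline * susc_child,
        "child -> adult": baseline * inf_child,
        "child -> child": baseline * inf_child * susc_child,
    }
    for route, p in routes.items():
        print(f"{route}: {p:.3f}")
    ```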
    These findings could explain worldwide reports that a lower proportion of children are diagnosed compared to adults. They could help inform mathematical modeling of COVID-19 dynamics, public health policy, and control measures. Future computational research could explore transmission dynamics in other settings, such as nursing homes and schools.
    “When we began this research, understanding children’s role in transmission was a top priority, in connection with the question of reopening schools,” Dattner says. “It was exciting to work in a large, multidisciplinary team, which was assembled by the Israeli Ministry of Health to address this topic rapidly.”

    Story Source:
    Materials provided by PLOS.

  • Nanowire could provide a stable, easy-to-make superconducting transistor

    Superconductors — materials that conduct electricity without resistance — are remarkable. They provide a macroscopic glimpse into quantum phenomena, which are usually observable only at the atomic level. Beyond their physical peculiarity, superconductors are also useful. They’re found in medical imaging, quantum computers, and cameras used with telescopes.
    But superconducting devices can be finicky. Often, they’re expensive to manufacture and prone to errors from environmental noise. That could change, thanks to research from Karl Berggren’s group in the Department of Electrical Engineering and Computer Science.
    The researchers are developing a superconducting nanowire, which could enable more efficient superconducting electronics. The nanowire’s potential benefits derive from its simplicity, says Berggren. “At the end of the day, it’s just a wire.”
    Berggren will present a summary of the research at this month’s IEEE Solid-State Circuits Conference.
    Resistance is futile
    Many metals lose their electrical resistance and become superconducting at extremely low temperatures, usually just a few degrees above absolute zero. Superconducting devices are used to sense magnetic fields, especially in highly sensitive situations like monitoring brain activity. They also have applications in both quantum and classical computing.

    Underlying many of these superconductors is a device invented in the 1960s called the Josephson junction — essentially two superconductors separated by a thin insulator. “That’s what led to conventional superconducting electronics, and then ultimately to the superconducting quantum computer,” says Berggren.
    However, the Josephson junction “is fundamentally quite a delicate object,” Berggren adds. That translates directly into cost and complexity of manufacturing, especially for the thin insulating layer. Josephson junction-based superconductors also may not play well with others: “If you try to interface it with conventional electronics, like the kinds in our phones or computers, the noise from those just swamps the Josephson junction. So, this lack of ability to control larger-scale objects is a real disadvantage when you’re trying to interact with the outside world.”
    To overcome these disadvantages, Berggren is developing a new technology — the superconducting nanowire — with roots older than the Josephson junction itself.
    Cryotron reboot
    In 1956, MIT electrical engineer Dudley Buck published a description of a superconducting computer switch called the cryotron. The device was little more than two superconducting wires: One was straight, and the other was coiled around it. The cryotron acts as a switch, because when current flows through the coiled wire, its magnetic field reduces the current flowing through the straight wire.

    At the time, the cryotron was much smaller than other types of computing switches, like vacuum tubes or transistors, and Buck thought the cryotron could become the building block of computers. But in 1959, Buck died suddenly at age 32, halting the development of the cryotron. (Since then, transistors have been scaled to microscopic sizes and today make up the core logic components of computers.)
    Now, Berggren is rekindling Buck’s ideas about superconducting computer switches. “The devices we’re making are very much like cryotrons in that they don’t require Josephson junctions,” he says. He dubbed his superconducting nanowire device the nano-cryotron in tribute to Buck — though it works a bit differently than the original cryotron.
    The nano-cryotron uses heat to trigger a switch, rather than a magnetic field. In Berggren’s device, current runs through a superconducting, supercooled wire called the “channel.” That channel is intersected by an even smaller wire called a “choke” — like a multilane highway intersected by a side road. When current is sent through the choke, its superconductivity breaks down and it heats up. Once that heat spreads from the choke to the main channel, it causes the main channel to also lose its superconducting state.
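    A toy behavioral model of that switching logic might look as follows (the threshold is an invented placeholder, not a measured device parameter):

    ```python
    # Toy behavioral model of the nano-cryotron switch described above.
    # The threshold is an illustrative placeholder, not a device measurement.

    CHOKE_SWITCHING_CURRENT = 50e-6   # amps; enough choke current heats the channel

    def channel_state(choke_current_amps):
        """Channel stays superconducting until heat from the choke breaks it."""
        if choke_current_amps > CHOKE_SWITCHING_CURRENT:
            return "resistive"        # heat spread from the choke into the channel
        return "superconducting"

    for i in (0.0, 20e-6, 80e-6):
        print(f"choke current {i * 1e6:5.1f} uA -> channel is {channel_state(i)}")
    ```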
    Berggren’s group has already demonstrated proof-of-concept for the nano-cryotron’s use as an electronic component. A former student of Berggren’s, Adam McCaughan, developed a device that uses nano-cryotrons to add binary digits. And Berggren has successfully used nano-cryotrons as an interface between superconducting devices and classical, transistor-based electronics.
    Berggren says his group’s superconducting nanowire could one day complement — or perhaps compete with — Josephson junction-based superconducting devices. “Wires are relatively easy to make, so it may have some advantages in terms of manufacturability,” he says.
    He thinks the nano-cryotron could one day find a home in superconducting quantum computers and supercooled electronics for telescopes. Wires have low power dissipation, so they may also be handy for energy-hungry applications, he says. “It’s probably not going to replace the transistors in your phone, but if it could replace the transistor in a server farm or data center? That would be a huge impact.”
    Beyond specific applications, Berggren takes a broad view of his work on superconducting nanowires. “We’re doing fundamental research, here. While we’re interested in applications, we’re just also interested in: What are some different kinds of ways to do computing? As a society, we’ve really focused on semiconductors and transistors. But we want to know what else might be out there.”
    Initial funding for nano-cryotron research in the Berggren lab was provided by the National Science Foundation.

  • Cybersecurity vulnerabilities of common seismological equipment

    Seismic monitoring devices linked to the internet are vulnerable to cyberattacks that could disrupt data collection and processing, say researchers who have probed the devices for weak points.
    Common security issues such as non-encrypted data, insecure protocols, and poor user authentication mechanisms are among the biggest culprits that leave seismological networks open to security breaches, Michael Samios of the National Observatory of Athens and colleagues write in a new study published in Seismological Research Letters.
    Modern seismic stations are now implemented as Internet-of-Things (IoT) stations: physical devices that connect and exchange data with other devices and systems over the Internet. In their test attacks on different brands of seismographs, accelerographs and GNSS receivers, Samios and his colleagues identified threats to the equipment that information technology security professionals commonly find in IoT devices.
    “It seems that most seismologists and network operators are unaware of the vulnerabilities of their IoT devices, and the potential risk that their monitoring networks are exposed to,” said Samios. “Educating and supporting seismologists on information security is imperative, as in most cases unauthorized users will try to gain access through a legitimate user’s computer to abuse monitoring networks and IoT devices.”
    By exploiting these vulnerabilities, a malicious user could alter geophysical data, slow down data transmission and processing, or produce false alarms in earthquake early warning systems, the researchers noted, causing the public to lose trust in seismic monitoring and potentially affecting emergency and economic responses to a seismic event.
    Samios and colleagues launched a security assessment of seismic and GNSS devices attached to their own monitoring networks after a security incident at one of their seismic stations. There are several potential weak points in the security of these devices, they noted, including physical security in sometimes remote locations, difficulties and costs of updating security of hardware and software, usage of non-encrypted protocols, and default or easy login credentials.
    Using their cybersecurity skills, the researchers tested these weak points using a typical “ethical hacking” process to surveil, scan and gain access to geophysical devices with their default settings. The most notable security issues, they discovered, were a lack of data encryption, weak user authentication protocols and the absence of a secure initial default configuration.
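    The “scan” step of such an assessment can be as simple as checking which well-known plaintext services a device exposes. A minimal sketch using only Python’s standard library (the address is a documentation-range placeholder and the port list reflects conventional defaults; only probe equipment you are authorized to test):

    ```python
    import socket

    # Check whether common plaintext services are reachable on a device.
    # HOST is a placeholder; only scan devices you are authorized to assess.

    HOST = "192.0.2.10"                       # documentation-range placeholder IP
    PLAINTEXT_SERVICES = {21: "FTP", 23: "Telnet", 80: "HTTP", 18000: "SeedLink"}

    for port, name in PLAINTEXT_SERVICES.items():
        try:
            with socket.create_connection((HOST, port), timeout=2):
                print(f"{name} (port {port}): open, traffic likely unencrypted")
        except OSError:
            print(f"{name} (port {port}): closed or filtered")
    ```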
    Samios and colleagues were able to launch a successful denial-of-service (DoS) attack against the devices, making them unavailable for the duration of the attack, and to retrieve usernames and passwords for some of the devices.
    “Security weaknesses between different devices do not depend on the type of the device, but whether this device uses insecure protocols, outdated software and a potentially insecure default configuration,” Samios said. “It is interesting, though, that while these vulnerabilities normally appear on low-cost IoT devices priced at $50 or less, it was also confirmed that they are observed even in seismological and GNSS devices that cost many times more.”
    As part of their tests, the research team was also able to intercept seismological data transferred through the SeedLink protocol, a data transmission service used by many seismologists. SeedLink may lack some of the necessary encryption and authentication protocols to keep data safe, Samios said. He noted that in a follow-up lab experiment not included in the SRL paper the researchers were able to manipulate waveforms transferred by SeedLink.
    “This could potentially generate or conceal alarms on earthquake early warning and seismic monitoring systems, leading to disturbing situations,” he said.
    While device manufacturers and data transmission services should take steps to improve security functions such as data encryption, Samios said, seismic network operators can work with information security experts to help them develop safer user practices and enhance hardware and software systems.

  • Artificial emotional intelligence: a safer, smarter future with 5G and emotion recognition

    With the advent of 5G communication technology and its integration with AI, we are looking at the dawn of a new era in which people, machines, objects, and devices are connected like never before. This smart era will be characterized by smart facilities and services such as self-driving cars, smart UAVs, and intelligent healthcare, all products of an ongoing technological revolution.
    But the flip side of such a technological revolution is that AI itself can be used to attack or threaten the security of 5G-enabled systems which, in turn, can greatly compromise their reliability. It is, therefore, imperative to investigate such potential security threats and explore countermeasures before a smart world is realized.
    In a recent study published in IEEE Network, a team of researchers led by Prof. Hyunbum Kim from Incheon National University, Korea, address such issues in relation to an AI-based, 5G-integrated virtual emotion recognition system called 5G-I-VEmoSYS, which detects human emotions using wireless signals and body movement. “Emotions are a critical characteristic of human beings and separate humans from machines, defining daily human activity. However, some emotions can also disrupt the normal functioning of a society and put people’s lives in danger, such as those of an unstable driver. Emotion detection technology thus has great potential for recognizing any disruptive emotion and, in tandem with 5G and beyond-5G communication, warning others of potential dangers,” explains Prof. Kim. “For instance, in the case of the unstable driver, the AI-enabled driver system of the car can inform the nearest network towers, from where nearby pedestrians can be informed via their personal smart devices.”
    The virtual emotion system developed by Prof. Kim’s team, 5G-I-VEmoSYS, can recognize at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions. The system concerned with detection is called Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR, which relies on the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the system concerned with flow, called Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area. Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilizes a large amount of this virtual emotion data to create a virtual emotion map that can be utilized for threat detection and crime prevention.
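    To keep the three subsystems straight, here is a purely illustrative data-flow sketch of the pipeline as described (all names, fields, and logic are invented for illustration; this is not code from 5G-I-VEmoSYS):

    ```python
    from dataclasses import dataclass
    from enum import Enum, auto

    # Illustrative sketch of the detection -> flow -> map pipeline described
    # above. Names, fields, and logic are invented; they only mirror the
    # article's description of 5G-I-VEmoSYS.

    class Emotion(Enum):
        JOY = auto()
        PLEASURE = auto()
        NEUTRAL = auto()
        SADNESS = auto()
        ANGER = auto()

    @dataclass
    class EmotionReading:
        emotion: Emotion          # one of the five recognized states
        area: str                 # where the (anonymous) reading was taken
        timestamp: float

    def ai_vemobar(signal_reflection) -> Emotion:
        """Detection (AI-VEmoBAR): classify an emotion from a reflected wireless
        signal. A real system would run a trained model here; this stub simply
        returns a neutral state."""
        return Emotion.NEUTRAL

    def ai_vemoflow(reading: EmotionReading, recipients: list) -> None:
        """Flow (AI-VEmoFLOW): forward a specific reading to a specific area,
        e.g. alert nearby devices when a disruptive emotion is detected."""
        if reading.emotion is Emotion.ANGER:
            for r in recipients:
                print(f"alerting {r}: {reading.emotion.name} detected in {reading.area}")

    def ai_vemomap(readings: list) -> dict:
        """Map (AI-VEmoMAP): aggregate many readings into a per-area emotion map."""
        return {r.area: r.emotion for r in readings}
    ```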
    A notable advantage of 5G-I-VEmoSYS is that it allows emotion detection without revealing the face or other private parts of the subjects, thereby protecting the privacy of citizens in public areas. Moreover, in private areas, it gives the user the choice to remain anonymous while providing information to the system. Furthermore, when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or relevant entities who can then take steps to prevent any potential crime or terrorism threats.
    However, the system suffers from serious security issues such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to authorities remains.
    While these concerns do put the system’s reliability at stake, Prof. Kim’s team are confident that they can be countered with further research. “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks,” explains Prof. Kim. “Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”

    Story Source:
    Materials provided by Incheon National University.