More stories


    Beauty is in the brain: AI reads brain data, generates personally attractive images

    Researchers have succeeded in making an AI understand our subjective notions of what makes faces attractive. The system demonstrated this knowledge by creating new portraits on its own, tailored to be personally attractive to individual viewers. The results can be utilised, for example, in modelling preferences and decision-making, as well as in potentially identifying unconscious attitudes.
    Researchers at the University of Helsinki and University of Copenhagen investigated whether a computer would be able to identify the facial features we consider attractive and, based on this, create new images matching our criteria. The researchers used artificial intelligence to interpret brain signals and combined the resulting brain-computer interface with a generative model of artificial faces. This enabled the computer to create facial images that appealed to individual preferences.
    “In our previous studies, we designed models that could identify and control simple portrait features, such as hair colour and emotion. However, people largely agree on who is blond and who smiles. Attractiveness is a more challenging subject of study, as it is associated with cultural and psychological factors that likely play unconscious roles in our individual preferences. Indeed, we often find it very hard to explain what it is exactly that makes something, or someone, beautiful: Beauty is in the eye of the beholder,” says Senior Researcher and Docent Michiel Spapé from the Department of Psychology and Logopedics, University of Helsinki.
    The study, which combines computer science and psychology, was published in February in the journal IEEE Transactions on Affective Computing.
    Preferences exposed by the brain
    Initially, the researchers gave a generative adversarial neural network (GAN) the task of creating hundreds of artificial portraits. The images were shown, one at a time, to 30 volunteers who were asked to pay attention to faces they found attractive while their brain responses were recorded via electroencephalography (EEG).
    “It worked a bit like the dating app Tinder: the participants ‘swiped right’ when coming across an attractive face. Here, however, they did not have to do anything but look at the images. We measured their immediate brain response to the images,” Spapé explains.
    The researchers analysed the EEG data with machine learning techniques, connecting individual EEG data through a brain-computer interface to a generative neural network.
    “A brain-computer interface such as this is able to interpret users’ opinions on the attractiveness of a range of images. By interpreting their views, the AI model interpreting brain responses and the generative neural network modelling the face images can together produce an entirely new face image by combining what a particular person finds attractive,” says Academy Research Fellow and Associate Professor Tuukka Ruotsalo, who heads the project.
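    The closed loop described above can be sketched in a few lines. Everything here is hypothetical; the data shapes, the logistic decoder, and the latent-averaging step are stand-ins for the study's actual EEG features, classifier, and GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 300 generated faces, each with a 512-dim
# GAN latent vector and a 64-dim EEG feature vector recorded while the
# participant viewed it.
latents = rng.normal(size=(300, 512))
eeg = rng.normal(size=(300, 64))

# Simulated per-person "attractive" responses (in the study these are
# decoded from real brain signals).
w_true = rng.normal(size=64)
labels = (eeg @ w_true > 0).astype(float)

# Step 1: learn to decode preference from EEG -- plain logistic
# regression trained by gradient ascent, to stay dependency-free.
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(eeg @ w)))
    w += 0.1 * eeg.T @ (labels - p) / len(labels)

# Step 2: steer the generator -- average the latent vectors of the
# faces the decoder scores as attractive, yielding one personalized
# latent that the GAN would render as a new face.
scores = 1.0 / (1.0 + np.exp(-(eeg @ w)))
personal_latent = latents[scores > 0.5].mean(axis=0)
print(personal_latent.shape)  # (512,)
```

    The actual decoding and generation are more sophisticated, but the general shape of the loop, classifying brain responses and then aggregating in the generator's latent space, is similar.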
    To test the validity of their modelling, the researchers generated new portraits for each participant, predicting they would find them personally attractive. Testing them in a double-blind procedure against matched controls, they found that the new images matched the preferences of the subjects with an accuracy of over 80%.
    “The study demonstrates that we are capable of generating images that match personal preference by connecting an artificial neural network to brain responses. Succeeding in assessing attractiveness is especially significant, as this is such a poignant, psychological property of the stimuli. Computer vision has thus far been very successful at categorising images based on objective patterns. By bringing in brain responses to the mix, we show it is possible to detect and generate images based on psychological properties, like personal taste,” Spapé explains.
    Potential for exposing unconscious attitudes
    Ultimately, the study may benefit society by advancing the capacity for computers to learn and increasingly understand subjective preferences, through interaction between AI solutions and brain-computer interfaces.
    “If this is possible in something that is as personal and subjective as attractiveness, we may also be able to look into other cognitive functions such as perception and decision-making. Potentially, we might gear the device towards identifying stereotypes or implicit bias and better understand individual differences,” says Spapé.

    Story Source:
    Materials provided by University of Helsinki. Original written by Aino Pekkarinen. Note: Content may be edited for style and length.


    New quantum theory heats up thermodynamic research

    Researchers have developed a new quantum version of a 150-year-old thermodynamical thought experiment that could pave the way for the development of quantum heat engines.
    Mathematicians from the University of Nottingham have applied new quantum theory to the Gibbs paradox and demonstrated a fundamental difference in the roles of information and control between classical and quantum thermodynamics. Their research has been published today in Nature Communications.
    The classical Gibbs paradox led to crucial insights for the development of early thermodynamics and emphasises the need to consider an experimenter’s degree of control over a system.
    The research team developed a theory based on mixing two quantum gases — for example, one red and one blue, otherwise identical — which start separated and then mix in a box. Overall, the system has become more uniform, which is quantified by an increase in entropy. If the observer then puts on purple-tinted glasses and repeats the process, the gases look the same, so it appears as if nothing changes. In this case, the entropy change is zero.
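    The two observers' classical accounts can be written down explicitly (standard textbook notation, not taken from the paper): for $N$ particles of each gas expanding from separate volumes $V$ into the shared volume $2V$,

```latex
\Delta S_{\text{colour-aware}} = 2 N k_B \ln 2,
\qquad
\Delta S_{\text{colour-blind}} = 0 .
```

    The quantum surprise, discussed below, is that work can nevertheless be extracted in the second case.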
    The lead authors on the paper, Benjamin Yadin and Benjamin Morris, explain: “Our findings seem odd because we expect physical quantities such as entropy to have meaning independent of who calculates them. In order to resolve the paradox, we must realise that thermodynamics tells us what useful things can be done by an experimenter who has devices with specific capabilities. For example, a heated expanding gas can be used to drive an engine. In order to extract work (useful energy) from the mixing process, you need a device that can ‘see’ the difference between red and blue gases.”
    Classically, an “ignorant” experimenter, who sees the gases as indistinguishable, cannot extract work from the mixing process. The research shows that in the quantum case, despite being unable to tell the difference between the gases, the ignorant experimenter can still extract work through mixing them.
    Considering the situation when the system becomes large, where quantum behaviour would normally disappear, the researchers found that the quantum ignorant observer can extract as much work as if they had been able to distinguish the gases. A large quantum device controlling these gases would behave entirely differently from a classical macroscopic heat engine. This phenomenon results from the existence of special superposition states that encode more information than is available classically.
    Professor Gerardo Adesso said: “Despite a century of research, there are so many aspects we don’t know or we don’t understand yet at the heart of quantum mechanics. Such a fundamental ignorance, however, doesn’t prevent us from putting quantum features to good use, as our work reveals. We hope our theoretical study can inspire exciting developments in the burgeoning field of quantum thermodynamics and catalyse further progress in the ongoing race for quantum-enhanced technologies.
    “Quantum heat engines are microscopic versions of our everyday heaters and refrigerators, which may be realised with just one or a few atoms (as already experimentally verified) and whose performance can be boosted by genuine quantum effects such as superposition and entanglement. Presently, to see our quantum Gibbs paradox played out in a laboratory would require exquisite control over the system parameters, something which may be possible in fine-tuned ‘optical lattice’ systems or Bose-Einstein condensates — we are currently at work to design such proposals in collaboration with experimental groups.”

    Story Source:
    Materials provided by University of Nottingham. Note: Content may be edited for style and length.


    Can't solve a riddle? The answer might lie in knowing what doesn't work

    You look for a pattern, or a rule, and you just can’t spot it. So you back up and start over.
    That’s your brain recognizing that your current strategy isn’t working, and that you need a new way to solve the problem, according to new research from the University of Washington. With the help of about 200 puzzle-takers, a computer model and functional MRI (fMRI) images, researchers have learned more about the processes of reasoning and decision-making, pinpointing the brain pathway that springs into action when problem-solving goes south.
    “There are two fundamental ways your brain can steer you through life — toward things that are good, or away from things that aren’t working out,” said Chantel Prat, associate professor of psychology and co-author of the new study, published Feb. 23 in the journal Cognitive Science. “Because these processes are happening beneath the hood, you’re not necessarily aware of how much driving one or the other is doing.”
    Using a decision-making task developed by Michael Frank at Brown University, the researchers measured exactly how much “steering” in each person’s brain involved learning to move toward rewarding things as opposed to away from less-rewarding things. Prat and her co-authors were focused on understanding what makes someone good at problem-solving.
    The research team first developed a computer model that specified the series of steps they believed were required for solving the Raven’s Advanced Progressive Matrices (Raven’s), a standard lab test made of pattern-completion puzzles. To succeed, the puzzle-taker must identify patterns and predict the next image in the sequence. The model essentially describes the four steps people take to solve a puzzle:
    Identify a key feature in a pattern;
    Figure out where that feature appears in the sequence;
    Come up with a rule for manipulating the feature;
    Check whether the rule holds true for the entire pattern.
    At each step, the model evaluated whether it was making progress. When the model was given real problems to solve, it performed best when it was able to steer away from the features and strategies that weren’t helping it make progress. According to the authors, this ability to know when your “train of thought is on the wrong track” was central to finding the correct answer.
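    Those four steps, plus the progress check, can be caricatured in a few lines of code. This is a toy illustration with made-up rules, not the published model: the solver proposes a rule for a feature, tests it against the whole pattern (step 4), and steers away from rules that fail.

```python
# Toy version of the four-step solving loop: the "pattern" is a numeric
# sequence, the candidate rules play the role of features, and the
# solver abandons any rule that fails the consistency check.
sequence = [2, 4, 8, 16]

candidate_rules = {
    "add 2": lambda x: x + 2,
    "times 3": lambda x: x * 3,
    "times 2": lambda x: x * 2,
}

def solve(seq):
    for name, rule in candidate_rules.items():
        # Step 4: check whether the rule holds across the whole pattern.
        if all(rule(a) == b for a, b in zip(seq, seq[1:])):
            return name, rule(seq[-1])
        # Otherwise the "train of thought is on the wrong track":
        # back up and try the next candidate.
    return None, None

print(solve(sequence))  # ('times 2', 32)
```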
    The next step was to see whether this was true in people. To do so, the team had three groups of participants solve puzzles in three different experiments. In the first, they solved the original set of Raven’s problems using a paper-and-pencil test, along with Frank’s test, which separately measured their ability to “choose” the best options and to “avoid” the worst options. Their results suggested that only the ability to “avoid” the worst options was related to problem-solving success. There was no relation between one’s ability to recognize the best choice in the decision-making test and one’s ability to solve the puzzles effectively.
    The second experiment replaced the paper-and-pencil version of the puzzles with a shorter, computerized version of the task that could also be implemented in an MRI brain-scanning environment. These results confirmed that those who were best at avoiding the worse options in the decision-making task were also the best problem solvers.
    The final group of participants completed the computerized puzzles while having their brain activity recorded using fMRI. Based on the model, the researchers gauged which parts of the brain would drive problem-solving success. They zeroed in on the basal ganglia — what Prat calls the “executive assistant” to the prefrontal cortex, or “CEO” of the brain. The basal ganglia assist the prefrontal cortex in deciding which action to take using parallel paths: one that turns the volume “up” on information it believes is relevant, and another that turns the volume “down” on signals it believes to be irrelevant. The “choose” and “avoid” behaviors associated with Frank’s decision-making test relate to the functioning of these two pathways. Results from this experiment suggest that the process of “turning down the volume” in the basal ganglia predicted how successful participants were at solving the puzzles.
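    A common modeling idiom for this kind of two-pathway learning, in the spirit of Frank's task but with illustrative numbers rather than the study's actual model, is reinforcement learning with separate learning rates for better-than-expected and worse-than-expected outcomes:

```python
import random

random.seed(1)

# Two-pathway learner: separate learning rates for positive prediction
# errors (the "volume up" / choose path) and negative prediction errors
# (the "volume down" / avoid path). Values are illustrative only.
ALPHA_CHOOSE, ALPHA_AVOID = 0.1, 0.3
reward_prob = {"A": 0.8, "B": 0.2}   # option A pays off 80% of the time
q = {"A": 0.5, "B": 0.5}             # learned values, initially neutral

for _ in range(2000):
    option = random.choice(["A", "B"])
    reward = 1.0 if random.random() < reward_prob[option] else 0.0
    error = reward - q[option]
    # Positive errors train the choose pathway, negative errors the
    # avoid pathway, each at its own rate.
    alpha = ALPHA_CHOOSE if error > 0 else ALPHA_AVOID
    q[option] += alpha * error

print(q["A"] > q["B"])  # the learner separates the two options
```

    Fitting the two rates to a person's choices gives a per-participant measure of how much each pathway drives their behavior, which is the kind of "steering" measure the study relates to puzzle-solving.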
    “Our brains have parallel learning systems for avoiding the least good thing and getting the best thing. A lot of research has focused on how we learn to find good things, but this pandemic is an excellent example of why we have both systems. Sometimes, when there are no good options, you have to pick the least bad one! What we found here was that this is even more critical to complex problem-solving than recognizing what’s working.”
    Co-authors of the study were Andrea Stocco, associate professor, and Lauren Graham, assistant teaching professor, in the UW Department of Psychology. The research was supported by the UW Royalty Research Fund, a UW startup fund award and the Bezos Family Foundation.

    Story Source:
    Materials provided by University of Washington. Original written by Kim Eckart. Note: Content may be edited for style and length.


    Extreme-scale computing and AI forecast a promising future for fusion power

    Efforts to duplicate on Earth the fusion reactions that power the sun and stars for unlimited energy must contend with extreme heat-load densities that can damage and shut down tokamaks, the doughnut-shaped facilities most widely used to house laboratory fusion reactions. These heat loads flow onto the divertor plates that extract waste heat from the tokamaks.
    Far larger forecast
    But using high-performance computers and artificial intelligence (AI), researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have predicted a far larger and less damaging heat-load width for the full-power operation of ITER, the international tokamak under construction in France, than previous estimates have found. The new formula produced a forecast that was over six times wider than those developed by a simple extrapolation from present tokamaks to the much larger ITER facility, whose goal is to demonstrate the feasibility of fusion power.
    “If the simple extrapolation to full-power ITER from today’s tokamaks were correct, no known material could withstand the extreme heat load without some difficult preventive measures,” said PPPL physicist C.S. Chang, leader of the team that developed the new, wider forecast and first author of a paper that Physics of Plasmas has published as an Editor’s Pick. “An accurate formula can enable scientists to operate ITER in a more comfortable and cost-effective way toward its goal of producing 10 times more fusion energy than the input energy,” Chang said.
    Fusion reactions combine light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that makes up 99 percent of the visible universe — to generate massive amounts of energy. Tokamaks, the most widely used fusion facilities, confine the plasma in magnetic fields and heat it to million-degree temperatures to produce fusion reactions. Scientists around the world are seeking to produce and control such reactions to create a safe, clean, and virtually inexhaustible supply of power to generate electricity.
    The Chang team’s surprisingly optimistic forecast harkens back to results the researchers produced on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory in 2017. The team used the PPPL-developed XGC high-fidelity plasma turbulence code to forecast a heat load that was over six times wider in full-power ITER operation than simple extrapolations from current tokamaks predicted.
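    For context, the "simple extrapolation" in this field is usually an empirical power-law fit of the heat-load width to the poloidal magnetic field, commonly the Eich scaling (this gloss is ours; the press release does not quote a formula):

```latex
\lambda_q \;\approx\; 0.63 \, B_{\mathrm{pol}}^{-1.19} \ \mathrm{mm}
```

    Because ITER's poloidal field is much stronger than that of present machines, a fit of this kind yields a width of only about a millimetre at full power, while the kinetic simulations predict a value roughly six times larger.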
    Surprise finding
    The surprising finding raised eyebrows by sharply contradicting the dangerously narrow heat-load forecasts. What accounted for the difference — might there be some hidden plasma parameter, or condition of plasma behavior, that the previous forecasts had failed to detect?
    Those forecasts arose from parameters in the simple extrapolations that regarded plasma as a fluid without considering the important kinetic, or particle-motion, effects. By contrast, the XGC code produces kinetic simulations using trillions of particles on extreme-scale computers, and its six-times-wider forecast suggested that there might indeed be hidden parameters that the fluid approach did not factor in.
    The team performed more refined simulations of the full-power ITER plasma on the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory to ensure that their 2017 findings on Titan were not in error.
    The team also performed new XGC simulations on current tokamaks to compare the results to the much wider Summit and Titan findings. One simulation was on one of the highest magnetic-field plasmas on the Joint European Torus (JET) in the United Kingdom, which reaches 73 percent of the full-power ITER magnetic field strength. Another simulation was on one of the highest magnetic-field plasmas on the now decommissioned C-Mod tokamak at the Massachusetts Institute of Technology (MIT), which reaches 100 percent of the full-power ITER magnetic field.
    The results in both cases agreed with the narrow heat-load width forecasts from simple extrapolations. These findings strengthened the suspicion that there are indeed hidden parameters.
    Supervised machine learning
    The team then turned to a type of AI method called supervised machine learning to discover what the unnoticed parameters might be. Using kinetic XGC simulation data from future ITER plasma, the AI code identified the hidden parameter as related to the orbiting of plasma particles around the tokamak’s magnetic field lines, an orbiting called gyromotion.
    The AI program suggested a new formula that forecasts a far wider and less dangerous heat-load width for full-power ITER than the previous formula, derived from experimental results in present tokamaks, had predicted. Furthermore, when applied to present-day tokamaks, the AI-produced formula recovers the narrow widths found by the formula built for those experiments.
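    The underlying idea, using supervised learning to expose which input actually drives an output, can be illustrated with a toy regression. All names and numbers here are invented; the real analysis used kinetic XGC simulation data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy search for a "hidden parameter": each row is one simulated run,
# each column a candidate plasma parameter, and only column 4 -- a
# stand-in for the gyromotion-related quantity -- drives the target.
n_runs, n_params = 200, 6
X = rng.normal(size=(n_runs, n_params))
y = 2.0 * X[:, 4] + 0.1 * rng.normal(size=n_runs)

# A least-squares fit exposes the hidden column through its weight.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
hidden = int(np.argmax(np.abs(w)))
print(hidden)  # column 4 dominates
```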
    “This exercise exemplifies the necessity for high-performance computing, by not only producing high-fidelity understanding and prediction but also improving the analytic formula to be more accurate and predictive,” Chang said. “It is found that the full-power ITER edge plasma is subject to a different type of turbulence than the edge in present tokamaks due to the large size of the ITER edge plasma compared to the gyromotion radius of particles.”
    Researchers then verified the AI-produced formula by performing three more simulations of future ITER plasmas on the supercomputers Summit at OLCF and Theta at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. “If this formula is validated experimentally,” Chang said, “this will be huge for the fusion community and for ensuring that ITER’s divertor can accommodate the heat exhaust from the plasma without too much complication.”
    The team would next like to see experiments on current tokamaks that could be designed to test the AI-produced extrapolation formula. If it is validated, Chang said, “the formula can be used for easier operation of ITER and for the design of more economical fusion reactors.”


    Role, impact of tools behind automated product picks explored

    As you scroll through Amazon looking for the perfect product, or flip through titles on Netflix searching for a movie to fit your mood, auto-generated recommendations can help you find exactly what you’re looking for among extensive offerings.
    These recommender systems are used in retail, entertainment, social networking and more. In a recently published study, two researchers from The University of Texas at Dallas investigated the informative role of these systems and the economic impacts on competing sellers and consumers.
    “Recommender systems have become ubiquitous in e-commerce platforms and are touted as sales-support tools that help consumers find their preferred or desired product among the vast variety of products,” said Dr. Jianqing Chen, professor of information systems in the Naveen Jindal School of Management. “So far, most of the research has been focused on the technical side of recommender systems, while the research on the economic implications for sellers is limited.”
    In the study, published in the December 2020 issue of MIS Quarterly, Chen and Dr. Srinivasan Raghunathan, the Ashbel Smith Professor of information systems, developed an analytical model in which sellers sell their products through a common electronic marketplace.
    The paper focuses on the informative role of the recommender system: how it affects consumers’ decisions by informing them about products about which they otherwise may be unaware. Recommender systems seem attractive to sellers because they do not have to pay the marketplace for receiving recommendations, while traditional advertising is costly.
    The researchers note that recommender systems have been reported to increase sales on these marketplaces: More than 35% of what consumers purchase on Amazon and more than 60% of what they watch on Netflix result from recommendations. The systems use information including purchase history, search behavior, demographics and product ratings to predict a user’s preferences and recommend the product the consumer is most likely to buy.
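    The prediction step inside such a system can be sketched with a tiny item-based collaborative filter. The ratings matrix and similarity measure here are illustrative, not the model from the study:

```python
import numpy as np

# Tiny user-by-item ratings matrix; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(a, b):
    # Cosine similarity between two item rating columns.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    # Score an unrated item as the similarity-weighted average of the
    # user's own ratings on the items they have rated.
    rated = [j for j in range(ratings.shape[1]) if ratings[user, j] > 0]
    sims = np.array([cosine(ratings[:, item], ratings[:, j]) for j in rated])
    return sims @ ratings[user, rated] / (sims.sum() + 1e-9)

# User 0 hasn't rated item 2; similar-tasting users rated it low, so
# the predicted rating comes out low as well.
print(predict(0, 2))
```

    Real marketplace recommenders add purchase history, search behavior, and demographics on top of this core idea, which is what the "precision" discussed below refers to.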
    While recommender systems introduce consumers to new products and increase the market size — which benefits sellers — the free exposure is not necessarily profitable, Chen said.
    The researchers found that the advertising effect causes sellers to advertise less on their own, and that the competition effect causes them to decrease their prices. Sellers are likely to benefit from the recommender system only when it has high precision.
    “This means that sellers are likely to benefit from the recommender system only when the recommendations are effective and the products recommended are indeed consumers’ preferred products,” Chen said.
    The researchers determined these results do not change whether sellers use targeted advertising or uniform advertising.
    Although the exposure is desirable for sellers, the negative effects on profitability could overshadow the positive effects. Sellers should carefully choose their advertising approach and adopt uniform advertising if they cannot accurately target customers, Chen said.
    “Free exposure turns out to not really be free,” he said. “To mitigate such a negative effect, sellers should strive to help the marketplace provide effective recommendations. For example, sellers should provide accurate product descriptions, which can help recommender systems provide better matching between products and consumers.”
    Consumers, on the other hand, benefit both directly and indirectly from recommender systems, Raghunathan said. For example, they might be introduced to a new product or benefit from price competition among sellers.
    Conversely, they also might end up paying more than the value of such recommendations in the form of increased prices, Raghunathan said.
    “Consumers should embrace recommender systems,” he said. “However, sharing additional information, such as their preference in the format of online reviews, with the platform is a double-edged sword. While it can help recommender systems more effectively find a product that a consumer might like, the additional information can be used to increase the recommendation precision, which in turn can reduce the competition pressure on sellers and can be bad for consumers.”
    The researchers said that although significant efforts are underway to develop more sophisticated recommender systems, the economic implications of these systems are poorly understood.
    “The business and societal value of recommender systems cannot be assessed properly unless economic issues surrounding them are examined,” Chen said. He and Raghunathan plan to conduct further research on this topic.
    Lusi Li PhD’17, now at California State University, Los Angeles, also contributed to the research. The project was part of Li’s doctoral dissertation at UT Dallas.


    'Egg carton' quantum dot array could lead to ultralow power devices

    A new path toward sending and receiving information with single photons of light has been discovered by an international team of researchers led by the University of Michigan.
    Their experiment demonstrated the possibility of using an effect known as nonlinearity to modify and detect extremely weak light signals, taking advantage of distinct changes to a quantum system to advance next generation computing.
    Today, as silicon-electronics-based information technology becomes increasingly throttled by heating and energy consumption, nonlinear optics is under intense investigation as a potential solution. The quantum egg carton captures and releases photons, supporting “excited” quantum states while it possesses the extra energy. As the energy in the system rises, it takes a bigger jump in energy to get to that next excited state — that’s the nonlinearity.
    “Researchers have wondered whether detectable nonlinear effects can be sustained at extremely low power levels — down to individual photons. This would bring us to the fundamental lower limit of power consumption in information processing,” said Hui Deng, professor of physics and senior author of the paper in Nature.
    “We demonstrated a new type of hybrid state to bring us to that regime, linking light and matter through an array of quantum dots,” she added.
    The physicists and engineers used a new kind of semiconductor to create quantum dots arranged like an egg carton. Quantum dots are essentially tiny structures that can isolate and confine individual quantum particles, such as electrons and other, stranger things. These dots are the pockets in the egg carton. In this case, they confine excitons, quasi-particles made up of an electron and a “hole.” A hole appears when an electron in a semiconductor is kicked into a higher energy band, leaving a positive charge behind in its usual spot. If the hole shadows the electron in its parallel energy band, the two are considered a single entity, an exciton.
    In conventional devices — with little to no nonlinearity — the excitons roam freely and scarcely meet with each other. These materials can contain many identical excitons at the same time without researchers noticing any change to the material properties.
    However, if the exciton is confined to a quantum dot, it becomes impossible to put in a second identical exciton in the same pocket. You’ll need an exciton with a higher energy if you want to get another one in there, which means you’ll need a higher energy photon to make it. This is known as quantum blockade, and it’s the cause of the nonlinearity.
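    In energy-ladder terms (standard textbook notation, not the paper's), blockade means the exciton states are anharmonically spaced: if the first exciton costs $E_1$ and each added pairwise interaction shifts the energy by $U$, then

```latex
E_n \approx n E_1 + \frac{n(n-1)}{2}\, U
\quad\Longrightarrow\quad
E_{n+1} - E_n = E_1 + n U ,
```

    so a photon tuned to the first step $E_1$ is off-resonant for every later step, and that growing mismatch is the nonlinearity.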
    But typical quantum dots are only a few atoms across — they aren’t on a usable scale. As a solution, Deng’s team created an array of quantum dots that contribute to the nonlinearity all at once.
    The team produced this egg carton energy landscape with two flakes of semiconductor, which are considered two-dimensional materials because they are made of a single molecular layer, just a few atoms thick. 2D semiconductors have quantum properties that are very different from larger chunks. One flake was tungsten disulfide and the other was molybdenum diselenide. Laid with an angle of about 56.5 degrees between their atomic lattices, the two intertwined electronic structures created a larger electronic lattice, with pockets about 10 atoms across.
    In order for the array of quantum dots inside the 2D semiconductor to be controlled as a group with light, the team built a resonator by making one mirror at the bottom, laying the semiconductor on top of it, and then depositing a second mirror on top of the semiconductor.
    “You need to control the thickness very tightly so that the semiconductor is at the maximum of the optical field,” said Zhang Long, a postdoctoral research fellow in the Deng lab and first author on the paper.
    With the quantum egg carton embedded in the mirrored “cavity” that enabled red laser light to resonate, the team observed the formation of another quantum state, called a polariton. Polaritons are a hybrid of the excitons and the light in the cavity. This confirmed all the quantum dots interact with light in concert. In this system, Deng’s team showed that putting a few excitons into the carton led to a measurable change of the polariton’s energy — demonstrating nonlinearity and showing that quantum blockade was occurring.
    “Engineers can use that nonlinearity to discern energy deposited into the system, potentially down to that of a single photon, which makes the system promising as an ultra-low energy switch,” Deng said.
    Switches are among the devices needed to achieve ultralow power computing, and they can be built into more complex gates.
    “Professor Deng’s research describes how polariton nonlinearities can be tailored to consume less energy,” said Michael Gerhold, program manager at the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “Control of polaritons is aimed at future integrated photonics used for ultra-low energy computing and information processing that could be used for neuromorphic processing for vision systems, natural language processing or autonomous robots.”
    The quantum blockade also means a similar system could possibly be used for qubits, the building blocks for quantum information processing. One forward path is figuring out how to address each quantum dot in the array as an individual qubit. Another way would be to achieve polariton blockade, similar to the exciton blockade seen here. In this version, the array of excitons, resonating in time with the light wave, would be the qubit.
    Used in these ways, the new 2D semiconductors have potential for bringing quantum devices up to room temperature, rather than the extreme cold of liquid nitrogen or liquid helium.
    “We are coming to the end of Moore’s Law,” said Steve Forrest, the Peter A. Franken Distinguished University Professor of Electrical Engineering and co-author of the paper, referring to the trend of the density of transistors on a chip doubling every two years. “Two dimensional materials have many exciting electronic and optical properties that may, in fact, lead us to that land beyond silicon.”


    The (robotic) doctor will see you now

    In the era of social distancing, using robots for some health care interactions is a promising way to reduce in-person contact between health care workers and sick patients. However, a key question that needs to be answered is how patients will react to a robot entering the exam room.
    Researchers from MIT and Brigham and Women’s Hospital recently set out to answer that question. In a study performed in the emergency department at Brigham and Women’s, the team found that a large majority of patients reported that interacting with a health care provider via a video screen mounted on a robot was similar to an in-person interaction with a health care worker.
    “We’re actively working on robots that can help provide care to maximize the safety of both the patient and the health care workforce. The results of this study give us some confidence that people are ready and willing to engage with us on those fronts,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
    In a larger online survey conducted nationwide, the researchers also found that a majority of respondents were open to having robots not only assist with patient triage but also perform minor procedures such as taking a nose swab.
    Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital and a research affiliate in Traverso’s lab, is the lead author of the study, which appears today in JAMA Network Open.
    Triage by robot
    After the Covid-19 pandemic began early last year, Traverso and his colleagues turned their attention toward new strategies to minimize interactions between potentially sick patients and health care workers. To that end, they worked with Boston Dynamics to create a mobile robot that could interact with patients as they waited in the emergency department. The robots were equipped with sensors that allow them to measure vital signs, including skin temperature, breathing rate, pulse rate, and blood oxygen saturation. The robots also carried an iPad that allowed for remote video communication with a health care provider.


    This kind of robot could reduce health care workers’ risk of exposure to Covid-19 and help to conserve the personal protective equipment that is needed for each interaction. However, the question still remained whether patients would be receptive to this type of interaction.
    “Often as engineers, we think about different solutions, but sometimes they may not be adopted because people are not fully accepting of them,” Traverso says. “So, in this study we were trying to tease that out and understand if the population is receptive to a solution like this one.”
    The researchers first conducted a nationwide survey of about 1,000 people, working with a market research company called YouGov. They asked questions regarding the acceptability of robots in health care, including whether people would be comfortable with robots performing not only triage but also other tasks such as taking nasal swabs, inserting a catheter, or turning a patient over in bed. On average, the respondents stated that they were open to these types of interactions.
    The researchers then tested one of their robots in the emergency department at Brigham and Women’s Hospital last spring, when Covid-19 cases were surging in Massachusetts. Fifty-one patients were approached in the waiting room or a triage tent and asked if they would be willing to participate in the study, and 41 agreed. These patients were interviewed about their symptoms via video connection, using an iPad carried by a quadruped, dog-like robot developed by Boston Dynamics. More than 90 percent of the participants reported that they were satisfied with the robotic system.
    “For the purposes of gathering quick triage information, the patients found the experience to be similar to what they would have experienced talking to a person,” Chai says.


    Robotic assistants
    The numbers from the study suggest that it could be worthwhile to develop robots that can perform procedures currently requiring a lot of human effort, such as turning a patient over in bed, the researchers say. Turning Covid-19 patients onto their stomachs, also known as “proning,” has been shown to boost their blood oxygen levels and make breathing easier. Currently, the process requires several people. Administering Covid-19 tests is another task that demands a lot of time and effort from health care workers, who could be deployed for other tasks if robots could help perform swabs.
    “Surprisingly, people were pretty accepting of the idea of having a robot do a nasal swab, which suggests that potential engineering efforts could go into thinking about building some of these systems,” Chai says.
    The MIT team is continuing to develop sensors that can obtain vital sign data from patients remotely, and they are working on integrating these systems into smaller robots that could operate in a variety of environments, such as field hospitals or ambulances.
    Other authors of the paper include Farah Dadabhoy, Hen-wei Huang, Jacqueline Chu, Annie Feng, Hien Le, Joy Collins, Marco da Silva, Marc Raibert, Chin Hur, and Edward Boyer. The research was funded by the National Institutes of Health, the Hans and Mavis Lopater Psychosocial Foundation, e-ink corporation, the Karl Van Tassel (1925) Career Development Professorship, MIT’s Department of Mechanical Engineering, and the Brigham and Women’s Hospital Division of Gastroenterology.


    Cutting off stealthy interlopers: a framework for secure cyber-physical systems

    In 2015, hackers infiltrated the corporate network of Ukraine’s power grid and injected malicious software, which caused a massive power outage. Such cyberattacks, along with the dangers to society that they represent, could become more common as the number of cyber-physical systems (CPS) increases.
    A CPS is any system controlled by a network involving physical elements that tangibly interact with the material world. CPSs are extremely common in industry, especially where robotics or similar automated machinery is integrated into the production line. However, as CPSs make their way into societal infrastructures such as public transport and energy management, it becomes even more important to be able to efficiently fend off various types of cyberattacks.
    In a recent study published in IEEE Transactions on Industrial Informatics, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, have developed a framework for CPSs that is resilient against a sophisticated kind of cyberattack: the pole-dynamics attack (PDA). In a PDA, the hacker connects to a node in the network of the CPS and injects false sensor data. Without proper readings from the sensors of the physical elements of the system, the control signals sent by the control algorithm to the physical actuators are incorrect, causing them to malfunction and behave in unexpected, potentially dangerous ways.
    To address PDAs, the researchers adopted a technique known as software-defined networking (SDN), whereby the network of the CPS is made more dynamic by distributing the relaying of signals through controllable SDN switches. In addition, the proposed approach relies on a novel attack-detection algorithm embedded in the SDN switches, which can raise an alarm to the centralized network manager if false sensor data are being injected.
    Once the network manager is notified, it not only cuts the cyberattacker off by pruning the compromised nodes but also establishes a new safe path for the sensor data. “Existing studies have only focused on attack detection, but they fail to consider the implications of detection and recovery in real time,” explains Professor Kyung-Joon Park, who led the study. “In our study, we simultaneously considered these factors to understand their effects on real-time performance and guarantee stable CPS operation.”
    The new framework was validated experimentally in a dedicated testbed, showing promising results. Excited about the outcomes of the study, Park remarks, “Considering CPSs are a key technology of smart cities and unmanned transport systems, we expect our research will be crucial to provide reliability and resiliency to CPSs in various application domains.” Having a system that is robust against cyberattacks means that economic losses and personal injuries can be minimized. Therefore, this study paves the way to a more secure future for both CPSs and ourselves.
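    The detect-prune-reroute cycle described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not DGIST’s actual framework: a simple plausibility check stands in for the switches’ attack-detection algorithm, a Python dictionary stands in for the SDN topology, and breadth-first search stands in for the network manager’s path recomputation.

    ```python
    # Hypothetical sketch of an SDN-style detect-prune-reroute loop.
    from collections import deque

    def detect_false_data(reading, expected_range=(0.0, 100.0)):
        """Flag a sensor reading that falls outside plausible bounds
        (a stand-in for the switches' attack-detection algorithm)."""
        lo, hi = expected_range
        return not (lo <= reading <= hi)

    def reroute(topology, src, dst, compromised):
        """BFS for a sensor-to-controller path that avoids pruned nodes
        (a stand-in for the network manager's path recomputation)."""
        queue = deque([[src]])
        seen = {src}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dst:
                return path
            for nxt in topology.get(node, []):
                if nxt not in seen and nxt not in compromised:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no safe path remains

    # Toy topology: a sensor can reach the controller via two switches.
    topology = {
        "sensor": ["sw1", "sw2"],
        "sw1": ["controller"],
        "sw2": ["controller"],
    }

    compromised = set()
    if detect_false_data(reading=250.0):   # injected value outside bounds
        compromised.add("sw1")             # prune the node that relayed it
    path = reroute(topology, "sensor", "controller", compromised)
    print(path)  # ['sensor', 'sw2', 'controller']
    ```

    In the real framework, the detection logic runs inside the SDN switches themselves and the alarm travels to a centralized network manager; the sketch only conveys the control flow of raising an alarm, pruning, and restoring a safe route.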

    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology). Note: Content may be edited for style and length.