More stories

  • 'The robot made me do it': Robots encourage risk-taking behavior in people

    New research has shown robots can encourage people to take greater risks in a simulated gambling scenario than they would if there was nothing to influence their behaviours. Increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this study set out to explore.
    Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, explained: “We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
    This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player’s “temporary money bank.” The balloons can explode at random, in which case the player loses any money they have won for that balloon; the player can instead “cash in” before this happens and move on to the next balloon.
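    As a minimal sketch of the task logic described above (assuming, for illustration, that each balloon's explosion point is drawn uniformly at random from a fixed maximum number of pumps; the study's exact settings are not stated here), the loop might look like this in Python:

```python
import random

def play_balloon(pumps_before_cashing_in, max_pumps=128, pence_per_pump=1):
    """Simulate one BART balloon: pump until cashing in or the balloon bursts."""
    # Assumption for illustration: the explosion point is uniform over 1..max_pumps.
    explosion_point = random.randint(1, max_pumps)
    banked = 0
    for pump in range(1, pumps_before_cashing_in + 1):
        if pump >= explosion_point:
            return 0                  # balloon bursts, the temporary bank is lost
        banked += pence_per_pump      # each pump adds 1 penny to the temporary bank
    return banked                     # player cashed in before the explosion

# Example: a cautious strategy (cash in after 20 pumps) vs a riskier one (60 pumps)
random.seed(0)
for strategy in (20, 60):
    winnings = sum(play_balloon(strategy) for _ in range(30))
    print(f"cash in after {strategy} pumps: {winnings} pence over 30 balloons")
```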
    One-third of the participants took the test in a room on their own (the control group), one-third took the test alongside a robot that provided them with the instructions but was silent the rest of the time, and the final third (the experimental group) took the test with a robot that provided the instructions and also spoke encouraging statements such as “why did you stop pumping?”
    The results showed that the group who were encouraged by the robot took more risks, blowing up their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference in the behaviours of the students accompanied by the silent robot and those with no robot.
    Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
    The researchers now believe that further studies are needed to see whether similar results would emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
    Dr Hanoch concluded, “With the wide spread of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
    “On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard to reach populations, such as addicts.”

    Story Source:
    Materials provided by University of Southampton. Note: Content may be edited for style and length.

  • Artificial intelligence helps scientists develop new general models in ecology

    In ecology, millions of species interact with each other and with their environment in billions of different ways. Ecosystems often seem chaotic, or at least overwhelming, to anyone trying to understand them and make predictions for the future.
    Artificial intelligence and machine learning are able to detect patterns and predict outcomes in ways that often resemble human reasoning. They pave the way to increasingly powerful cooperation between humans and computers.
    Within AI, evolutionary computation methods replicate in some sense the processes of evolution of species in the natural world. A particular method called symbolic regression allows the evolution of human-interpretable formulas that explain natural laws.
    “We used symbolic regression to demonstrate that computers are able to derive formulas that represent the way ecosystems or species behave in space and time. These formulas are also easy to understand. They pave the way for general rules in ecology, something that most methods in AI cannot do,” says Pedro Cardoso, curator at the Finnish Museum of Natural History, University of Helsinki.
    With the help of the symbolic regression method, an interdisciplinary team from Finland, Portugal, and France was able to explain why some species exist in some regions and not in others, and why some regions have more species than others.
    The researchers were able, for example, to find a new general model that explains why some islands have more species than others. Oceanic islands have a natural life cycle, emerging from volcanoes and eventually submerging through erosion after millions of years. With no human input, the algorithm found that the number of species on an island increases with island age and peaks at intermediate ages, when erosion is still low.
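    As a rough illustration of the approach (not the study's actual data, formulas, or software), the sketch below fits a symbolic regression model to synthetic island data in which species richness rises with island age and then declines as erosion sets in; gplearn's SymbolicRegressor is used here purely as an example tool:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # example symbolic-regression tool

rng = np.random.default_rng(42)

# Synthetic islands: age in millions of years, with a hump-shaped richness curve
age = rng.uniform(0.1, 10.0, size=(200, 1))
species = 50 * age[:, 0] * np.exp(-0.4 * age[:, 0]) + rng.normal(0, 2, 200)

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=('add', 'sub', 'mul', 'div', 'log'),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(age, species)

# The evolved, human-readable formula approximating the hump-shaped relationship
print(est._program)
```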
    “The explanation was known, a couple of formulas already existed, but we were able to find new ones that outperform the existing ones under certain circumstances,” says Vasco Branco, PhD student working on the automation of extinction risk assessments at the University of Helsinki.
    The research proposes that explainable artificial intelligence is a field worth exploring, one that promotes cooperation between humans and machines in ways that are only now beginning to be explored.
    “Evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool in the arsenal of a discipline as complex as ecology,” says Luis Correia, computer science professor at the University of Lisbon.

    Story Source:
    Materials provided by University of Helsinki. Note: Content may be edited for style and length.

  • Artificial intelligence improves control of powerful plasma accelerators

    Researchers have used AI to control beams for the next generation of smaller, cheaper accelerators for research, medical and industrial applications.
    Experiments led by Imperial College London researchers, using the Science and Technology Facilities Council’s Central Laser Facility (CLF), showed that an algorithm was able to tune the complex parameters involved in controlling the next generation of plasma-based particle accelerators.
    The algorithm was able to optimize the accelerator much more quickly than a human operator, and could even outperform experiments on similar laser systems.
    These accelerators focus the energy of the world’s most powerful lasers down to a spot the size of a skin cell, producing electrons and x-rays with equipment a fraction of the size of conventional accelerators.
    The electrons and x-rays can be used for scientific research, such as probing the atomic structure of materials; in industrial applications, such as for producing consumer electronics and vulcanised rubber for car tyres; and could also be used in medical applications, such as cancer treatments and medical imaging.
    Several facilities using these new accelerators are in various stages of planning and construction around the world, including the CLF’s Extreme Photonics Applications Centre (EPAC) in the UK, and the new discovery could help them work at their best in the future. The results are published today in Nature Communications.
    First author Dr Rob Shalloo, who completed the work at Imperial and is now at the accelerator centre DESY, said: “The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.

    “Plasma accelerator technology provides uniquely short bursts of electrons and x-rays, which are already finding uses in many areas of scientific study. With our developments, we hope to broaden accessibility to these compact accelerators, allowing scientists in other disciplines, and those wishing to use these machines for applications, to benefit from the technology without being experts in plasma accelerators.”
    The team worked with laser wakefield accelerators. These combine the world’s most powerful lasers with a source of plasma (ionised gas) to create concentrated beams of electrons and x-rays. Traditional accelerators need hundreds of metres to kilometres to accelerate electrons, but wakefield accelerators can manage the same acceleration within the space of millimetres, drastically reducing the size and cost of the equipment.
    However, because wakefield accelerators operate in the extreme conditions created when lasers are combined with plasma, they can be difficult to control and optimise to get the best performance. In wakefield acceleration, an ultrashort laser pulse is driven into plasma, creating a wave that is used to accelerate electrons. Both the laser and plasma have several parameters that can be tweaked to control the interaction, such as the shape and intensity of the laser pulse, or the density and length of the plasma.
    While a human operator can tweak these parameters, it is difficult to know how to optimise so many parameters at once. Instead, the team turned to artificial intelligence, creating a machine learning algorithm to optimise the performance of the accelerator.
    The algorithm adjusted up to six parameters controlling the laser and plasma, fired the laser, analysed the data, and re-set the parameters, repeating this loop many times in succession until the optimal parameter configuration was reached.
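    The article does not name the optimisation algorithm, so the sketch below shows the general measure-and-update loop with Bayesian optimisation from scikit-optimize standing in for it, and with a toy fire_and_measure() function in place of the real laser diagnostics:

```python
import numpy as np
from skopt import gp_minimize  # Bayesian optimisation, used here as a stand-in

def fire_and_measure(params):
    """Placeholder for one shot: set the parameters, fire the laser, analyse the data.

    In the real experiment this would drive the laser and plasma hardware and return
    a measured beam-quality figure of merit; here it is a toy function to minimise.
    """
    x = np.asarray(params)
    target = np.array([0.3, -0.1, 0.7, 0.0, 0.5, -0.4])  # hypothetical optimum
    return float(np.sum((x - target) ** 2))

# Up to six normalised laser/plasma parameters, each allowed to vary in [-1, 1]
bounds = [(-1.0, 1.0)] * 6

result = gp_minimize(fire_and_measure, bounds, n_calls=60, random_state=0)
print("best parameters:", np.round(result.x, 3))
print("best objective:", round(result.fun, 4))
```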
    Lead researcher Dr Matthew Streeter, who completed the work at Imperial and is now at Queen’s University Belfast, said: “Our work resulted in an autonomous plasma accelerator, the first of its kind. As well as allowing us to efficiently optimise the accelerator, it also simplifies their operation and allows us to spend more of our efforts on exploring the fundamental physics behind these extreme machines.”
    The team demonstrated their technique using the Gemini laser system at the CLF, and have already begun to use it in further experiments to probe the atomic structure of materials in extreme conditions and in studying antimatter and quantum physics.
    The data gathered during the optimisation process also provided new insight into the dynamics of the laser-plasma interaction inside the accelerator, potentially informing future designs to further improve accelerator performance.

  • Artificial visual system of record-low energy consumption for the next generation of AI

    A joint research led by City University of Hong Kong (CityU) has built an ultralow-power consumption artificial visual system to mimic the human brain, which successfully performed data-intensive cognitive tasks. Their experiment results could provide a promising device system for the next generation of artificial intelligence (AI) applications.
    The research team is led by Professor Johnny Chung-yin Ho, Associate Head and Professor of the Department of Materials Science and Engineering (MSE) at CityU. Their findings have been published in the scientific journal Science Advances, titled “Artificial visual system enabled by quasi-two-dimensional electron gases in oxide superlattice nanowires.”
    As advances in the semiconductor technologies used in digital computing show signs of stagnation, neuromorphic (brain-like) computing systems have been regarded as one of the alternatives for the future. Scientists have been trying to develop the next generation of advanced AI computers, which would be as lightweight, energy-efficient and adaptable as the human brain.
    “Unfortunately, effectively emulating the brain’s neuroplasticity — the ability to change its neural network connections or re-wire itself — in existing artificial synapses through an ultralow-power manner is still challenging,” said Professor Ho.
    Enhancing energy efficiency of artificial synapses
    An artificial synapse is an artificial version of a synapse — the gap across which two neurons pass electrical signals to communicate with each other in the brain. It is a device that mimics the brain’s efficient neural signal transmission and memory formation process.

    To enhance the energy efficiency of artificial synapses, Professor Ho’s research team introduced quasi-two-dimensional electron gases (quasi-2DEGs) into artificial neuromorphic systems for the first time. By utilising oxide superlattice nanowires — a kind of semiconductor with intriguing electrical properties — that they developed, the team designed quasi-2DEG photonic synaptic devices with a record-low energy consumption of less than a femtojoule (0.7 fJ) per synaptic event. This represents a 93% reduction in energy consumption compared with synapses in the human brain.
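    A quick back-of-the-envelope check of the figures quoted above (the roughly 10 fJ value assumed for a biological synaptic event is not stated in the article; it is simply the number consistent with the reported 93% reduction):

```python
artificial_fj = 0.7    # reported energy per synaptic event, in femtojoules
biological_fj = 10.0   # assumed typical energy of a biological synaptic event
reduction = 1 - artificial_fj / biological_fj
print(f"reduction vs. a biological synapse: {reduction:.0%}")  # prints 93%
```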
    “Our experiments have demonstrated that the artificial visual system based on our photonic synapses could simultaneously perform light detection, brain-like processing and memory functions in an ultralow-power manner. We believe our findings can provide a promising strategy to build artificial neuromorphic systems for applications in bionic devices, electronic eyes, and multifunctional robotics in future,” said Professor Ho.
    Resembling conductance change in synapses
    He explained that a two-dimensional electron gas occurs when electrons are confined to a two-dimensional interface between two different materials. Because electron-electron and electron-ion interactions are strongly suppressed there, the electrons move freely within the interface.
    Upon exposure to a light pulse, a series of reactions is induced between oxygen molecules adsorbed onto the nanowire surface from the environment and the free electrons of the two-dimensional electron gases inside the oxide superlattice nanowires, changing the conductance of the photonic synapses. Given the superlattice nanowires’ outstanding charge-carrier mobility and sensitivity to light stimuli, this change in conductance resembles that in a biological synapse. Hence the quasi-2DEG photonic synapses can mimic how neurons in the human brain transmit and memorise signals.

    A combo of photo-detection and memory functions
    “The special properties of the superlattice nanowire materials enable our synapses to have both the photo-detecting and memory functions simultaneously. Simply put, the nanowire superlattice cores detect the light stimulus with high sensitivity, and the nanowire shells promote the memory functions. So there is no need to construct additional memory modules for charge storage in an image-sensing chip. As a result, our device can save energy,” explained Professor Ho.
    With this quasi-2DEG photonic synapse, they have built an artificial visual system which could accurately and efficiently detect a patterned light stimulus and “memorise” its shape for an hour. “It is just like how our brain remembers what we saw for some time,” described Professor Ho.
    He added that the way the team synthesised the photonic synapses and the artificial visual system did not require complex equipment, and that the devices could be made on flexible plastics in a scalable, low-cost manner.
    Professor Ho is the corresponding author of the paper. The co-first authors are Meng You and Li Fangzhou, PhD students from MSE at CityU. Other team members include Dr Bu Xiuming, Dr Yip Sen-po, Kang Xiaolin, Wei Renjie, Li Dapan and Wang Fei, who are all from CityU. Other collaborating researchers come from University of Electronic Science and Technology of China, Kyushu University, and University of Tokyo.
    The study received funding support from CityU, the Research Grants Council of Hong Kong SAR, the National Natural Science Foundation of China and the Science, Technology and Innovation Commission of Shenzhen Municipality.

  • Artificial Chemist 2.0: quantum dot R&D in less than an hour

    A new technology, called Artificial Chemist 2.0, allows users to go from requesting a custom quantum dot to completing the relevant R&D and beginning manufacturing in less than an hour. The tech is completely autonomous, and uses artificial intelligence (AI) and automated robotic systems to perform multi-step chemical synthesis and analysis.
    Quantum dots are colloidal semiconductor nanocrystals, which are used in applications such as LED displays and solar cells.
    “When we rolled out the first version of Artificial Chemist, it was a proof of concept,” says Milad Abolhasani, corresponding author of a paper on the work and an assistant professor of chemical and biomolecular engineering at North Carolina State University. “Artificial Chemist 2.0 is industrially relevant for both R&D and manufacturing.”
    From a user standpoint, the whole process essentially consists of three steps. First, a user tells Artificial Chemist 2.0 the parameters for the desired quantum dots. For example, what color light do you want to produce? The second step is effectively the R&D stage, where Artificial Chemist 2.0 autonomously conducts a series of rapid experiments, allowing it to identify the optimum material and the most efficient means of producing that material. Third, the system switches over to manufacturing the desired amount of the material.
    “Quantum dots can be divided up into different classes,” Abolhasani says. “For example, well-studied II-VI, IV-VI, and III-V materials, or the recently emerging metal halide perovskites, and so on. Basically, each class consists of a range of materials that have similar chemistries.
    “And the first time you set up Artificial Chemist 2.0 to produce quantum dots in any given class, the robot autonomously runs a set of active learning experiments. This is how the brain of the robotic system learns the materials chemistry,” Abolhasani says. “Depending on the class of material, this learning stage can take between one and 10 hours. After that one-time active learning period, Artificial Chemist 2.0 can identify the best possible formulation for producing the desired quantum dots from 20 million possible combinations with multiple manufacturing steps in 40 minutes or less.”
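    The press release does not describe the learning algorithm itself; the sketch below illustrates the general active-learning idea in the quoted passage, using a Gaussian-process surrogate from scikit-learn, an upper-confidence-bound selection rule, and a hypothetical measure_quantum_dot() stand-in for a robotic synthesis run:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def measure_quantum_dot(recipe):
    """Hypothetical stand-in for one robotic synthesis and characterisation run."""
    # Toy objective: how close the resulting emission is to the requested target.
    return -float(np.sum((recipe - np.array([0.2, 0.8, 0.5])) ** 2))

# Discrete library of candidate formulations (precursor ratios, temperature, ...)
candidates = rng.uniform(0, 1, size=(5000, 3))
tried_idx, tried_y = [], []

# One-time active-learning stage: pick the most promising untried recipe each round
for round_ in range(25):
    if round_ < 5:  # a few random seed experiments to start the surrogate model
        idx = int(rng.integers(len(candidates)))
    else:
        gp = GaussianProcessRegressor(alpha=1e-6)
        gp.fit(candidates[tried_idx], np.array(tried_y))
        mean, std = gp.predict(candidates, return_std=True)
        score = mean + std            # simple upper-confidence-bound selection rule
        score[tried_idx] = -np.inf    # never repeat an experiment
        idx = int(np.argmax(score))
    tried_idx.append(idx)
    tried_y.append(measure_quantum_dot(candidates[idx]))

best = candidates[tried_idx[int(np.argmax(tried_y))]]
print("best formulation found:", np.round(best, 3))
```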
    The researchers note that the R&D process will almost certainly become faster every time people use it, since the AI algorithm that runs the system will learn more — and become more efficient — with every material that it is asked to identify.
    Artificial Chemist 2.0 incorporates two chemical reactors that operate in series. The system is designed to be entirely autonomous, and allows users to switch from one material to another without having to shut down the system. Video of how the system works can be found at https://youtu.be/e_DyV-hohLw.
    “In order to do this successfully, we had to engineer a system that leaves no chemical residues in the reactors and allows the AI-guided robotic system to add the right ingredients, at the right time, at any point in the multi-step material production process,” Abolhasani says. “So that’s what we did.
    “We’re excited about what this means for the specialty chemicals industry. It really accelerates R&D to warp speed, but it is also capable of making kilograms per day of high-value, precisely engineered quantum dots. Those are industrially relevant volumes of material.”

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • Atom-thin transistor uses half the voltage of common semiconductors, boosts current density

    University at Buffalo researchers are reporting a new, two-dimensional transistor made of graphene and the compound molybdenum disulfide that could help usher in a new era of computing.
    As described in a paper accepted at the 2020 IEEE International Electron Devices Meeting, which is taking place virtually next week, the transistor requires half the voltage of current semiconductors. It also has a current density greater than similar transistors under development.
    This ability to operate with less voltage and handle more current is key to meeting the demand for new power-hungry nanoelectronic devices, including quantum computers.
    “New technologies are needed to extend the performance of electronic systems in terms of power, speed, and density. This next-generation transistor can rapidly switch while consuming low amounts of energy,” says the paper’s lead author, Huamin Li, Ph.D., assistant professor of electrical engineering in the UB School of Engineering and Applied Sciences (SEAS).
    The transistor is composed of a single layer of graphene and a single layer of molybdenum disulfide, or MoS2, which is part of a group of compounds known as transition metal dichalcogenides. The graphene and MoS2 are stacked together, and the overall thickness of the device is roughly 1 nanometer — for comparison, a sheet of paper is about 100,000 nanometers thick.
    While most transistors require at least 60 millivolts for each tenfold (decade) change in current, the conventional room-temperature limit, this new device operates at 29 millivolts per decade.
    It’s able to do this because the unique physical properties of graphene keep electrons “cold” as they are injected from the graphene into the MoS2 channel. This process is called Dirac-source injection. The electrons are considered “cold” because they require much less voltage input and, thus, reduced power consumption to operate the transistor.
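    To put the 60-millivolt and 29-millivolt figures in context (both are millivolts per decade, i.e. per tenfold change in current), a quick comparison of how many decades of current each device can switch across the same example gate-voltage window:

```python
gate_swing_mV = 300  # example gate-voltage window of 0.3 V
for name, mV_per_decade in (("conventional 60 mV/decade device", 60),
                            ("Dirac-source device at 29 mV/decade", 29)):
    decades = gate_swing_mV / mV_per_decade
    print(f"{name}: {decades:.1f} decades of current change "
          f"(on/off ratio of about 10^{decades:.1f})")
```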
    An even more important characteristic of the transistor, Li says, is its ability to handle a greater current density compared to conventional transistor technologies based on 2D or 3D channel materials. As described in the study, the transistor can handle 4 microamps per micrometer.
    “The transistor illustrates the enormous potential of 2D semiconductors and their ability to usher in energy-efficient nanoelectronic devices. This could ultimately lead to advancements in quantum research and development, and help extend Moore’s Law,” says co-lead author Fei Yao, PhD, assistant professor in the Department of Materials Design and Innovation, a joint program of SEAS and UB’s College of Arts and Sciences.
    The work was supported by the U.S. National Science Foundation, the New York State Energy Research and Development Authority, the New York State Center of Excellence in Materials Informatics at UB, and the Vice President for Research and Economic Development at UB.

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon. Note: Content may be edited for style and length.

  • Faster and more efficient information transfer

    Be it in smartphones, laptops, or mainframes: the transmission, processing, and storage of information is currently based on a single class of material — as it was in the early days of computer science about 60 years ago. A new class of magnetic materials, however, could raise information technology to a new level. Antiferromagnetic insulators enable computing speeds that are a thousand times faster than conventional electronics, with significantly less heating. Components could be packed closer together, and logic modules could thus become smaller, something that has so far been limited by the heat generated by current components.
    Information transfer at room temperature
    So far, the problem has been that information transfer in antiferromagnetic insulators only worked at low temperatures. But who wants to put their smartphone in the freezer just to be able to use it? Physicists at Johannes Gutenberg University Mainz (JGU) have now been able to eliminate this shortcoming, together with experimentalists from the CNRS/Thales lab, the CEA Grenoble, and the National High Field Laboratory in France, as well as theorists from the Center for Quantum Spintronics (QuSpin) at the Norwegian University of Science and Technology. “We were able to transmit and process information in a standard antiferromagnetic insulator at room temperature — and to do so over long enough distances to enable information processing to occur,” said JGU scientist Andrew Ross. The researchers used iron oxide (α-Fe2O3), the main component of rust, as an antiferromagnetic insulator, because iron oxide is widespread and easy to manufacture.
    The transfer of information in magnetic insulators is made possible by excitations of magnetic order known as magnons. These move as waves through magnetic materials, similar to how waves move across the water surface of a pond after a stone has been thrown into it. Previously, it was believed that these waves must have circular polarization in order to efficiently transmit information. In iron oxide, such circular polarization occurs only at low temperatures. However, the international research team was able to transmit magnons over exceptionally long distances even at room temperature. But how did that work? “We realized that in antiferromagnets with a single plane, two magnons with linear polarization can overlap and migrate together. They complement each other to form an approximately circular polarization,” explained Dr. Romain Lebrun, researcher at the joint CNRS/Thales laboratory in Paris who previously worked in Mainz. “The possibility of using iron oxide at room temperature makes it an ideal playground for the development of ultra-fast spintronic devices based on antiferromagnetic insulators.”
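    A minimal numerical illustration of the superposition described above (generic oscillations, not a model of iron oxide): two linearly polarised components of equal amplitude, 90 degrees out of phase, combine into an approximately circular polarisation whose amplitude stays constant:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 1000)
mx = np.cos(t)  # first linearly polarised component
my = np.sin(t)  # second component, 90 degrees out of phase
amplitude = np.sqrt(mx**2 + my**2)

# For a perfectly circular polarisation the combined amplitude is constant
print("combined amplitude min/max:", amplitude.min(), amplitude.max())  # both ~1.0
```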
    Extremely low attenuation allows for energy-efficient transmission
    An important question in the process of information transfer is how quickly the information is lost when moving through magnetic materials. This can be quantified by the value of the magnetic damping. “The iron oxide examined has one of the lowest magnetic attenuations that has ever been reported in magnetic materials,” explained Professor Mathias Kläui from the JGU Institute of Physics. “We anticipate that high magnetic field techniques will show that other antiferromagnetic materials have similarly low attenuation, which is crucial for the development of a new generation of spintronic devices. We are pursuing such low-power magnetic technologies in a long-term collaboration with our colleagues at QuSpin in Norway, and I am happy to see that another piece of exciting work has come out of this collaboration.”

    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  • A better kind of cybersecurity strategy

    During the opening ceremonies of the 2018 Winter Olympics, held in PyeongChang, South Korea, Russian hackers launched a cyberattack that disrupted television and internet systems at the games. The incident was resolved quickly, but because Russia used North Korean IP addresses for the attack, the source of the disruption was unclear in the event’s immediate aftermath.
    There is a lesson in that attack, and others like it, at a time when hostilities between countries increasingly occur online. In contrast to conventional national security thinking, such skirmishes call for a new strategic outlook, according to a new paper co-authored by an MIT professor.
    The core of the matter involves deterrence and retaliation. In conventional warfare, deterrence usually consists of potential retaliatory military strikes against enemies. But in cybersecurity, this is more complicated. If identifying cyberattackers is difficult, then retaliating too quickly or too often, on the basis of limited information such as the location of certain IP addresses, can be counterproductive. Indeed, it can embolden other countries to launch their own attacks, by leading them to think they will not be blamed.
    “If one country becomes more aggressive, then the equilibrium response is that all countries are going to end up becoming more aggressive,” says Alexander Wolitzky, an MIT economist who specializes in game theory. “If after every cyberattack my first instinct is to retaliate against Russia and China, this gives North Korea and Iran impunity to engage in cyberattacks.”
    But Wolitzky and his colleagues do think there is a viable new approach, involving a more judicious and well-informed use of selective retaliation.
    “Imperfect attribution makes deterrence multilateral,” Wolitzky says. “You have to think about everybody’s incentives together. Focusing your attention on the most likely culprits could be a big mistake.”
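    A toy expected-payoff calculation in the spirit of this argument (not the model from the paper): each country attacks when its benefit exceeds its expected punishment, which depends on how blame is assigned after an attack. Concentrating all blame on the most likely culprit leaves the others undeterred:

```python
def attacks(benefit, prob_blamed, punishment=12.0):
    """A country attacks if its benefit exceeds its expected punishment."""
    return benefit > prob_blamed * punishment

countries = ["A", "B", "C"]
benefits = {"A": 4.0, "B": 3.0, "C": 3.0}   # hypothetical gains from attacking

# Doctrine 1: blame is spread across the plausible culprits according to the evidence
spread_blame = {"A": 0.5, "B": 0.25, "C": 0.25}
# Doctrine 2: always retaliate against the single most likely culprit
concentrated_blame = {"A": 1.0, "B": 0.0, "C": 0.0}

for label, blame in (("spread blame", spread_blame),
                     ("blame the most likely culprit", concentrated_blame)):
    attackers = [c for c in countries if attacks(benefits[c], blame[c])]
    print(f"{label}: attackers = {attackers}")
# spread blame deters everyone; concentrating blame on A leaves B and C undeterred
```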
    The paper, “Deterrence with Imperfect Attribution,” appears in the latest issue of the American Political Science Review. In addition to Wolitzky, the authors are Sandeep Baliga, the John L. and Helen Kellogg Professor of Managerial Economics and Decision Sciences at Northwestern University’s Kellogg School of Management; and Ethan Bueno de Mesquita, the Sydney Stein Professor and deputy dean of the Harris School of Public Policy at the University of Chicago.

    The study is a joint project, in which Baliga added to the research team by contacting Wolitzky, whose own work applies game theory to a wide variety of situations, including war, international affairs, network behavior, labor relations, and even technology adoption.
    “In some sense this is a canonical kind of question for game theorists to think about,” Wolitzky says, noting that the development of game theory as an intellectual field stems from the study of nuclear deterrence during the Cold War. “We were interested in what’s different about cyberdeterrence, in contrast to conventional or nuclear deterrence. And of course there are a lot of differences, but one thing that we settled on pretty early is this attribution problem.” In their paper, the authors note that, as former U.S. Deputy Secretary of Defense William Lynn once put it, “Whereas a missile comes with a return address, a computer virus generally does not.”
    In some cases, countries are not even aware of major cyberattacks against them; Iran only belatedly realized it had been attacked by the Stuxnet worm, which over a period of years damaged centrifuges being used in the country’s nuclear weapons program.
    In the paper, the scholars largely examined scenarios where countries are aware of cyberattacks against them but have imperfect information about the attacks and attackers. After modeling these events extensively, the researchers determined that the multilateral nature of cybersecurity today makes it markedly different than conventional security. There is a much higher chance in multilateral conditions that retaliation can backfire, generating additional attacks from multiple sources.
    “You don’t necessarily want to commit to be more aggressive after every signal,” Wolitzky says.

    What does work, however, is simultaneously improving the detection of attacks and gathering more information about the identity of the attackers, so that a country can pinpoint the nations it could meaningfully retaliate against.
    But even gathering more information to inform strategic decisions is a tricky process, as the scholars show. Detecting more attacks while being unable to identify the attackers does not clarify specific decisions, for instance. And gathering more information but having “too much certainty in attribution” can lead a country straight back into the problem of lashing out against some states, even as others are continuing to plan and commit attacks.
    “The optimal doctrine in this case in some sense will commit you to retaliate more after the clearest signals, the most unambiguous signals,” Wolitzky says. “If you blindly commit yourself more to retaliate after every attack, you increase the risk you’re going to be retaliating after false alarms.”
    Wolitzky points out that the paper’s model can apply to issues beyond cybersecurity. The problem of stopping pollution can have the same dynamics. If, for instance, numerous firms are polluting a river, singling just one out for punishment can embolden the others to continue.
    Still, the authors do hope the paper will generate discussion in the foreign-policy community, with cyberattacks continuing to be a significant source of national security concern.
    “People thought the possibility of failing to detect or attribute a cyberattack mattered, but there hadn’t [necessarily] been a recognition of the multilateral implications of this,” Wolitzky says. “I do think there is interest in thinking about the applications of that.”