More stories

  • Computer scientists: We wouldn't be able to control super intelligent machines

    We are fascinated by machines that can control cars, compose symphonies, or defeat people at chess, Go, or Jeopardy! While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a superintelligent AI. The study was published in the Journal of Artificial Intelligence Research.
    Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?
    Computer scientists and philosophers have asked themselves whether we would even be able to control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI.
    “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.
    Scientists have explored two different ideas for how a superintelligent AI could be controlled. On one hand, the capabilities of superintelligent AI could be specifically limited, for example, by walling it off from the Internet and all other technical devices so it could have no contact with the outside world — yet this would render the superintelligent AI significantly less powerful, less able to answer humanity's quests. On the other hand, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling super-intelligent AI have their limits.
    In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by first simulating the behavior of the AI and halting it if considered harmful. But careful analysis shows that in our current paradigm of computing, such an algorithm cannot be built.
    “If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Iyad Rahwan, Director of the Center for Humans and Machines.
    Based on these calculations, the containment problem is incomputable: no single algorithm can determine whether an AI would produce harm to the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans is in the same realm as the containment problem.
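    The incomputability claim rests on the same kind of self-reference as Turing's halting problem. The following is a minimal sketch of that diagonalization argument, assuming a hypothetical containment test; the names `would_cause_harm` and `paradoxical_agent` are illustrative and not the authors' actual formalism:

    ```python
    import inspect

    def would_cause_harm(program_source: str) -> bool:
        """Hypothetical perfect containment test: returns True exactly when
        running `program_source` would lead to harm. The construction below
        shows no such total, always-correct function can exist."""
        raise NotImplementedError  # assumed to exist only for the sake of contradiction

    def cause_harm() -> None:
        """Stand-in for whatever behaviour the checker is supposed to rule out."""
        pass

    def paradoxical_agent() -> None:
        # The agent inspects its own source code and does the opposite of
        # whatever the containment test predicts about it.
        my_source = inspect.getsource(paradoxical_agent)
        if would_cause_harm(my_source):
            return        # predicted harmful -> behave harmlessly
        cause_harm()      # predicted safe    -> behave harmfully
    ```

    Whatever answer `would_cause_harm` gives about `paradoxical_agent`, the agent behaves in the opposite way, so a checker that is both total and correct cannot exist.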

    Story Source:
    Materials provided by Max Planck Institute for Human Development. Note: Content may be edited for style and length.

  • Electrically switchable qubit can tune between storage and fast calculation modes

    To perform calculations, quantum computers need qubits to act as elementary building blocks that process and store information. Now, physicists have produced a new type of qubit that can be switched from a stable idle mode to a fast calculation mode. The concept would also allow a large number of qubits to be combined into a powerful quantum computer, as researchers from the University of Basel and TU Eindhoven have reported in the journal Nature Nanotechnology.
    Compared with conventional bits, quantum bits (qubits) are much more fragile and can lose their information content very quickly. The challenge for quantum computing is therefore to keep the sensitive qubits stable over a prolonged period of time, while at the same time finding ways to perform rapid quantum operations. Now, physicists from the University of Basel and TU Eindhoven have developed a switchable qubit that should allow quantum computers to do both.
    The new type of qubit has a stable but slow state that is suitable for storing quantum information. However, the researchers were also able to switch the qubit into a much faster but less stable manipulation mode by applying an electrical voltage. In this state, the qubits can be used to process information quickly.
    Selective coupling of individual spins
    In their experiment, the researchers created the qubits in the form of “hole spins.” These are formed when an electron is deliberately removed from a semiconductor, and the resulting hole has a spin that can adopt two states, up and down — analogous to the values 0 and 1 in classical bits. In the new type of qubit, these spins can be selectively coupled — via a photon, for example — to other spins by tuning their resonant frequencies.
    This capability is vital, since the construction of a powerful quantum computer requires the ability to selectively control and interconnect many individual qubits. Scalability is particularly necessary to reduce the error rate in quantum calculations.
    Ultrafast spin manipulation
    The researchers were also able to use the electrical switch to manipulate the spin qubits at record speed. “The spin can be coherently flipped from up to down in as little as a nanosecond,” says project leader Professor Dominik Zumbühl from the Department of Physics at the University of Basel. “That would allow up to a billion switches per second. Spin qubit technology is therefore already approaching the clock speeds of today’s conventional computers.”
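    A quick back-of-the-envelope check of those figures, assuming a resonant Rabi drive (the drive strength below is inferred from the quoted flip time, not taken from the paper):

    ```python
    # A pi-rotation (up -> down) takes half a Rabi period: t_pi = 1 / (2 * f_rabi).
    t_pi = 1e-9                    # quoted spin-flip time: about one nanosecond
    f_rabi = 1 / (2 * t_pi)        # implied Rabi frequency: ~500 MHz
    flips_per_second = 1 / t_pi    # back-to-back flips: ~1e9, i.e. a billion per second

    print(f"Rabi frequency ~ {f_rabi / 1e6:.0f} MHz, flips per second ~ {flips_per_second:.0e}")
    ```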
    For their experiments, the researchers used a semiconductor nanowire made of silicon and germanium. Produced at TU Eindhoven, the wire has a tiny diameter of about 20 nanometers. As the qubit is therefore also extremely small, it should in principle be possible to incorporate millions or even billions of these qubits onto a chip.

    Story Source:
    Materials provided by University of Basel. Note: Content may be edited for style and length.

  • Engineers create hybrid chips with processors and memory to run AI on battery-powered devices

    Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit a wall — the so-called “memory wall” that separates the data-processing and memory chips, which must work together to meet the massive and continually growing computational demands imposed by AI.
    “Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life,” said computer scientist Subhasish Mitra, senior author of a new study published in Nature Electronics.
    Now, a team that includes Stanford computer scientist Mary Wootters and electrical engineer H.-S. Philip Wong has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.
    This paper builds on the team’s prior development of a new memory technology, called RRAM, that stores data even when power is switched off — like flash memory — only faster and more energy efficiently. Their RRAM advance enabled the Stanford researchers to develop an earlier generation of hybrid chips that worked alone. Their latest design incorporates a critical new element: algorithms that meld the eight separate hybrid chips into one energy-efficient AI-processing engine.
    “If we could have built one massive, conventional chip with all the processing and memory needed, we’d have done so, but the amount of data it takes to solve AI problems makes that a dream,” Mitra said. “Instead, we trick the hybrids into thinking they’re one chip, which is why we call this the Illusion System.”
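    A toy sketch of that idea: a layer's weights are partitioned so that each chip multiplies only the slice held in its own local memory, and the partial results are combined as if one large chip had done the work. This illustrates compute-near-memory partitioning in spirit only; the names and the mapping below are assumptions, not the team's actual Illusion algorithms.

    ```python
    import numpy as np

    N_CHIPS = 8
    activations = np.random.rand(1024).astype(np.float32)    # layer input
    weights = np.random.rand(1024, 256).astype(np.float32)   # full layer weights

    # "Place" a contiguous slice of the weights in each chip's local (RRAM-like) memory.
    chip_weights = np.array_split(weights, N_CHIPS, axis=0)
    chip_inputs = np.array_split(activations, N_CHIPS)

    # Each chip computes a partial product against only its local slice...
    partials = [x @ w for x, w in zip(chip_inputs, chip_weights)]

    # ...and the partials are summed, reproducing the single-chip result.
    assert np.allclose(sum(partials), activations @ weights, atol=1e-3)
    ```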
    The researchers developed Illusion as part of the Electronics Resurgence Initiative (ERI), a $1.5 billion program sponsored by the Defense Advanced Research Projects Agency. DARPA, which helped spawn the internet more than 50 years ago, is supporting research investigating workarounds to Moore’s Law, which has driven electronic advances by shrinking transistors. But transistors can’t keep shrinking forever.
    “To surpass the limits of conventional electronics, we’ll need new hardware technologies and new ideas about how to use them,” Wootters said.
    The Stanford-led team built and tested its prototype with help from collaborators at the French research institute CEA-Leti and at Nanyang Technological University in Singapore. The team’s eight-chip system is just the beginning. In simulations, the researchers showed how systems with 64 hybrid chips could run AI applications seven times faster than current processors, using one-seventh as much energy.
    Such capabilities could one day enable Illusion Systems to become the brains of augmented and virtual reality glasses that would use deep neural networks to learn by spotting objects and people in the environment, and provide wearers with contextual information — imagine an AR/VR system to help birdwatchers identify unknown specimens.
    Stanford graduate student Robert Radway, who is first author of the Nature Electronics study, said the team also developed new algorithms to recompile existing AI programs, written for today’s processors, to run on the new multi-chip systems. Collaborators from Facebook helped the team test AI programs that validated their efforts. Next steps include increasing the processing and memory capabilities of individual hybrid chips and demonstrating how to mass produce them cheaply.
    “The fact that our fabricated prototype is working as we expected suggests we’re on the right track,” said Wong, who believes Illusion Systems could be ready for marketability within three to five years.
    This research was supported by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, the Semiconductor Research Corporation, the Stanford SystemX Alliance and Intel Corporation.

    Story Source:
    Materials provided by Stanford School of Engineering. Original written by Tom Abate. Note: Content may be edited for style and length.

  • Robot displays a glimmer of empathy to a partner robot

    Like a longtime couple who can predict each other’s every move, a Columbia Engineering robot has learned to predict its partner robot’s future actions and goals based on just a few initial video frames.
    Primates, humans included, who are cooped up together for a long time quickly learn to predict the near-term actions of their roommates, co-workers or family members. This ability to anticipate the actions of others makes it easier for us to successfully live and work together. In contrast, even the most intelligent and advanced robots have remained notoriously inept at this sort of social communication. This may be about to change.
    The study, conducted at Columbia Engineering’s Creative Machines Lab led by Mechanical Engineering Professor Hod Lipson, is part of a broader effort to endow robots with the ability to understand and anticipate the goals of other robots, purely from visual observations.
    The researchers first built a robot and placed it in a playpen roughly 3×2 feet in size. They programmed the robot to seek and move towards any green circle it could see. But there was a catch: Sometimes the robot could see a green circle in its camera and move directly towards it. But other times, the green circle would be occluded by a tall red cardboard box, in which case the robot would move towards a different green circle, or not move at all.
    After observing its partner puttering around for two hours, the observing robot began to anticipate its partner’s goal and path. The observing robot was eventually able to predict its partner’s goal and path 98 out of 100 times, across varying situations — without being told explicitly about the partner’s visibility handicap.
    “Our initial results are very exciting,” says Boyuan Chen, lead author of the study, which was conducted in collaboration with Carl Vondrick, assistant professor of computer science, and published today in Scientific Reports. “Our findings begin to demonstrate how robots can see the world from another robot’s perspective. The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy.”
    When they designed the experiment, the researchers expected that the Observer Robot would learn to make predictions about the Subject Robot’s near-term actions. What the researchers didn’t expect, however, was how accurately the Observer Robot could foresee its colleague’s future “moves” with only a few seconds of video as a cue.
    The researchers acknowledge that the behaviors exhibited by the robot in this study are far simpler than the behaviors and goals of humans. They believe, however, that this may be the beginning of endowing robots with what cognitive scientists call “Theory of Mind” (ToM). At about age three, children begin to understand that others may have different goals, needs and perspectives than they do. This can lead to playful activities such as hide and seek, as well as more sophisticated manipulations like lying. More broadly, ToM is recognized as a key distinguishing hallmark of human and primate cognition, and a factor that is essential for complex and adaptive social interactions such as cooperation, competition, empathy, and deception.
    In addition, humans are still better than robots at describing their predictions using verbal language. The researchers had the observing robot make its predictions in the form of images, rather than words, in order to avoid becoming entangled in the thorny challenges of human language. Yet, Lipson speculates, the ability of a robot to predict future actions visually is not unique: “We humans also think visually sometimes. We frequently imagine the future in our mind’s eye, not in words.”
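    A minimal sketch of what such an image-in, image-out predictor could look like. This is an assumed toy architecture for illustration only, not the network the Columbia team actually trained:

    ```python
    import torch
    import torch.nn as nn

    class ObserverNet(nn.Module):
        """Toy predictor: a short stack of initial frames in, one predicted
        future frame out. Purely illustrative of the image-based setup."""
        def __init__(self, n_frames: int = 4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(n_frames, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            # frames: (batch, n_frames, H, W) grayscale stack of the first seconds
            return self.decoder(self.encoder(frames))

    # Example: predict a 64x64 "future view" from four initial 64x64 frames.
    future = ObserverNet()(torch.rand(1, 4, 64, 64))   # -> shape (1, 1, 64, 64)
    ```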
    Lipson acknowledges that there are many ethical questions. The technology will make robots more resilient and useful, but when robots can anticipate how humans think, they may also learn to manipulate those thoughts.
    “We recognize that robots aren’t going to remain passive instruction-following machines for long,” Lipson says. “Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit.”

  • New statistical method exponentially increases ability to discover genetic insights

    Pleiotropy analysis, which provides insight on how individual genes result in multiple characteristics, has become increasingly valuable as medicine continues to lean into mining genetics to inform disease treatments. Privacy stipulations, though, make it difficult to perform comprehensive pleiotropy analysis because individual patient data often can’t be easily and regularly shared between sites. However, a statistical method called Sum-Share, developed at Penn Medicine, can pull summary information from many different sites to generate significant insights. In a test of the method, published in Nature Communications, Sum-Share’s developers were able to detect more than 1,700 DNA-level variations that could be associated with five different cardiovascular conditions. If patient-specific information from just one site had been used, as is the norm now, only one variation would have been determined.
    “Full research of pleiotropy has been difficult to accomplish because of restrictions on merging patient data from electronic health records at different sites, but we were able to figure out a method that turns summary-level data into results that are exponentially greater than what we could accomplish with individual-level data currently available,” said one of the study’s senior authors, Jason Moore, PhD, director of the Institute for Biomedical Informatics and a professor of Biostatistics, Epidemiology and Informatics. “With Sum-Share, we greatly increase our abilities to unveil the genetic factors behind health conditions that range from those dealing with heart health, as was the case in this study, to mental health, with many different applications in between.”
    Sum-Share is powered by bio-banks that pool de-identified patient data, including genetic information, from electronic health records (EHRs) for research purposes. For their study, Moore, co-senior author Yong Chen, PhD, an associate professor of Biostatistics, lead author Ruowang Li, PhD, a postdoctoral fellow at Penn, and their colleagues used the eMERGE network to pull seven different sets of EHRs to run through Sum-Share in an attempt to detect shared genetic effects among five cardiovascular-related conditions: obesity, hypothyroidism, type 2 diabetes, hypercholesterolemia, and hyperlipidemia.
    With Sum-Share, the researchers found 1,734 different single-nucleotide polymorphisms (SNPs, which are differences in the building blocks of DNA) that could be tied to the five conditions. Then, using results from just one site’s EHR, only one SNP was identified that could be tied to the conditions.
    Additionally, they determined that their findings were identical whether they used summary-level data or individual-level data in Sum-Share, making it a “lossless” system.
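    A minimal illustration of why summary-level statistics can be lossless, using ordinary linear regression as a stand-in (a generic sufficient-statistics example, not the Sum-Share estimator itself): each site shares only its X'X and X'y matrices, yet the pooled estimate matches the one computed from all individual-level records combined.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    beta_true = np.array([0.5, -1.0, 0.0, 2.0, 0.3])

    # Seven sites, each holding its own individual-level records.
    site_X = [rng.normal(size=(200, 5)) for _ in range(7)]
    site_y = [X @ beta_true + rng.normal(scale=0.1, size=200) for X in site_X]

    # Each site shares only its summary matrices, never patient-level rows.
    xtx = sum(X.T @ X for X in site_X)
    xty = sum(X.T @ y for X, y in zip(site_X, site_y))
    beta_from_summaries = np.linalg.solve(xtx, xty)

    # Pooling the raw individual-level data gives the identical estimate.
    X_all, y_all = np.vstack(site_X), np.concatenate(site_y)
    beta_from_individuals, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
    assert np.allclose(beta_from_summaries, beta_from_individuals)
    ```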
    To determine the effectiveness of Sum-Share, the team then compared their method’s results with those of the previous leading method, PheWAS, which operates best when it pulls whatever individual-level data has been made available from different EHRs. When the two were put on a level playing field, with both allowed to use individual-level data, Sum-Share proved statistically more powerful than PheWAS. And since Sum-Share’s summary-level findings have been shown to be as insightful as its individual-level ones, it appears to be the best method for determining genetic characteristics.
    “This was notable because Sum-Share enables loss-less data integration, while PheWAS loses some information when integrating information from multiple sites,” Li explained. “Sum-Share can also reduce the multiple hypothesis testing penalties by jointly modeling different characteristics at once.”
    Currently, Sum-Share is mainly designed to be used as a research tool, but there are possibilities for using its insights to improve clinical operations. And, moving forward, there is a chance to use it for some of the most pressing needs facing health care today.
    “Sum-Share could be used for COVID-19 with research consortia, such as the Consortium for Clinical Characterization of COVID-19 by EHR (4CE),” Chen said. “These efforts use a federated approach where the data stay local to preserve privacy.”
    This study was supported by the National Institutes of Health (grant number NIH LM010098).
    Co-authors on the study include Rui Duan, Xinyuan Zhang, Thomas Lumley, Sarah Pendergrass, Christopher Bauer, Hakon Hakonarson, David S. Carrell, Jordan W. Smoller, Wei-Qi Wei, Robert Carroll, Digna R. Velez Edwards, Georgia Wiesner, Patrick Sleiman, Josh C. Denny, Jonathan D. Mosley, and Marylyn D. Ritchie.

  • Entangling electrons with heat

    A joint group of scientists from Finland, Russia, China and the USA has demonstrated that a temperature difference can be used to entangle pairs of electrons in superconducting structures. The experimental discovery, published in Nature Communications, promises powerful applications in quantum devices, bringing us one step closer to the second quantum revolution.
    The team, led by Professor Pertti Hakonen from Aalto University, has shown that the thermoelectric effect provides a new method for producing entangled electrons in a new device. “Quantum entanglement is the cornerstone of the novel quantum technologies. This concept, however, has puzzled many physicists over the years, including Albert Einstein who worried a lot about the spooky interaction at a distance that it causes,” says Prof. Hakonen.
    In quantum computing, entanglement is used to fuse individual quantum systems into one, which exponentially increases their total computational capacity. “Entanglement can also be used in quantum cryptography, enabling the secure exchange of information over long distances,” explains Prof. Gordey Lesovik, from the Moscow Institute of Physics and Technology, who has acted several times as a visiting professor at Aalto University School of Science. Given the significance of entanglement to quantum technology, the ability to create entanglement easily and controllably is an important goal for researchers.
    The researchers designed a device where a superconductor was layered with graphene and metal electrodes. “Superconductivity is caused by entangled pairs of electrons called Cooper pairs. Using a temperature difference, we cause them to split, with each electron then moving to a different normal metal electrode,” explains doctoral candidate Nikita Kirsanov, from Aalto University. “The resulting electrons remain entangled despite being separated over quite long distances.”
    Along with the practical implications, the work has significant fundamental importance. The experiment has shown that the process of Cooper pair splitting works as a mechanism for turning temperature difference into correlated electrical signals in superconducting structures. The developed experimental scheme may also become a platform for original quantum thermodynamical experiments.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • For the right employees, even standard information technology can spur creativity

    In a money-saving revelation for organizations inclined to invest in specialized information technology to support the process of idea generation, new research suggests that even non-specialized, everyday organizational IT can encourage employees’ creativity.
    Recently published in the journal Information and Organization, these findings from Dorit Nevo, an associate professor in the Lally School of Management at Rensselaer Polytechnic Institute, show standard IT can be used for innovation. Furthermore, this is much more likely to happen when the technology is in the hands of employees who are motivated to master technology, who understand their role in the organization, and who are recognized for their efforts and encouraged to develop their skills.
    “What this study reveals is that innovation is found not just by using technology specifically created to support idea-generation,” Nevo said. “Creativity comes from both the tool and the person who uses it.”
    Most businesses and organizations use common computer technologies, such as business analytics programs, knowledge management systems, and point-of-sale systems, to enable employees to complete basic job responsibilities. Nevo wanted to know if this standard IT could also be used by employees to create new ideas in the front end of the innovation process, where ideas are generated, developed, and then championed.
    By developing a theoretically grounded model to examine IT-enabled innovation in an empirical study, Nevo found that employees who are motivated to master IT can use even standard technology as a creativity tool, increasing the return on investment on the technologies companies already have in-house.
    “An organization can get a lot more value out of the IT it already has if it lets the right people use it and then supports them,” Nevo said. “This added value will, in turn, save organizations money because they don’t always have to invest in specialized technology in order for their employees to generate solutions to work-related issues or ideas for improvement in the workplace. You just have to trust your employees to be able to innovate with the technologies you have.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Jeanne Hedden Gallagher. Note: Content may be edited for style and length.

  • Patterns in primordial germ cell migration

    Whenever an organism develops and forms organs, a tumour creates metastases or the immune system becomes active in inflammation, cells migrate within the body. As they do, they interact with surrounding tissues which influence their function. The migrating cells react to biochemical signals, as well as to biophysical properties of their environment, for example whether a tissue is soft or stiff. Gaining detailed knowledge about such processes provides scientists with a basis for understanding medical conditions and developing treatment approaches.
    A team of biologists and mathematicians at the Universities of Münster and Erlangen-Nürnberg has now developed a new method for analysing cell migration processes in living organisms. The researchers investigated how primordial germ cells whose mode of locomotion is similar to other migrating cell types, including cancer cells, behave in zebrafish embryos when deprived of their biochemical guidance cue. The team developed new software that makes it possible to merge three-dimensional microscopic images of multiple embryos in order to recognise patterns in the distribution of cells and thus highlight tissues that influence cell migration. With the help of the software, researchers determined domains that the cells either avoided, to which they responded by clustering, or in which they maintained their normal distribution. In this way, they identified a physical barrier at the border of the organism’s future backbone where the cells changed their path. “We expect that our experimental approach and the newly developed tools will be of great benefit in research on developmental biology, cell biology and biomedicine,” explains Prof Dr Erez Raz, a cell biologist and project director at the Center for Molecular Biology of Inflammation at Münster University. The study has been published in the journal Science Advances.
    Details on methods and results
    For their investigations, the researchers made use of primordial germ cells in zebrafish embryos. Primordial germ cells are the precursors of sperm and egg cells and, during the development of many organisms, they migrate to the place where the reproductive organs form. Normally, these cells are guided by chemokines — i.e. attractants produced by surrounding cells that initiate signalling pathways by binding to receptors on the primordial germ cells. By genetically modifying the cells, the scientists deactivated the chemokine receptor Cxcr4b so that the cells remained motile but no longer migrated in a directional manner. “Our idea was that the distribution of the cells within the organism — when not being controlled by guidance cues — can provide clues as to which tissues influence cell migration, and then we can analyse the properties of these tissues,” explains Łukasz Truszkowski, one of the three lead authors of the study.
    “To obtain statistically significant data on the spatial distribution of the migrating cells, we needed to study several hundred zebrafish embryos, because at the developmental stage at which the cells are actively migrating, a single embryo has only around 20 primordial germ cells,” says Sargon Groß-Thebing, also a first author and, like his colleague, a PhD student in the graduate programme of the Cells in Motion Interfaculty Centre at the University of Münster. In order to digitally merge the three-dimensional data of multiple embryos, the biology researchers joined forces with a team led by the mathematician Prof Dr Martin Burger, who was also conducting research at the University of Münster at that time and is now continuing the collaboration from the University of Erlangen-Nürnberg. The team developed a new software tool that pools the data automatically and recognises patterns in the distribution of primordial germ cells. The challenge was to account for the varying sizes and shapes of the individual zebrafish embryos and their precise three-dimensional orientation in the microscope images.
    The software named “Landscape” aligns the images captured from all the embryos with each other. “Based on a segmentation of the cell nuclei, we can estimate the shape of the embryos and correct for their size. Afterwards, we adjust the orientation of the organisms,” says mathematician Dr Daniel Tenbrinck, the third lead author of the study. In doing so, a tissue in the midline of the embryos serves as a reference structure which is marked by a tissue-specific expression of the so-called green fluorescent protein (GFP). In technical jargon the whole process is called image registration. The scientists verified the reliability of their algorithms by capturing several images of the same embryo, manipulating them with respect to size and image orientation, and testing the ability of the software to correct for the manipulations. To evaluate the ability of the software to recognise cell-accumulation patterns, they used microscopic images of normally developing embryos, in which the migrating cells accumulate at a known specific location in the embryo. The researchers also demonstrated that the software can be applied to embryos of another experimental model, embryos of the fruit fly Drosophila, which have a shape that is different from that of zebrafish embryos.
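    A rough sketch of those normalisation steps (segmented nuclei, size correction, orientation alignment) using a simple centroid-and-principal-axes alignment. The actual Landscape pipeline is more involved, and the function and variable names here are illustrative only:

    ```python
    import numpy as np

    def register_embryo(nuclei_xyz: np.ndarray) -> np.ndarray:
        """Map one embryo's segmented nucleus positions (N x 3) into a common
        reference frame: centre, rescale, then rotate onto the principal axes."""
        centred = nuclei_xyz - nuclei_xyz.mean(axis=0)        # remove position offset
        scale = np.sqrt((centred ** 2).sum(axis=1).mean())    # crude embryo-size estimate
        normalised = centred / scale                          # correct for size differences
        _, _, vt = np.linalg.svd(normalised, full_matrices=False)
        return normalised @ vt.T                              # consistent orientation

    # Cells from many embryos can then be pooled in one shared coordinate system.
    pooled = np.vstack([register_embryo(np.random.rand(500, 3)) for _ in range(10)])
    ```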
    Using the new method, the researchers analysed the distribution of 21,000 primordial germ cells in 900 zebrafish embryos. As expected, the cells lacking a chemokine receptor were distributed in a pattern that differs from that observed in normal embryos. However, the cells still settled into a distinct pattern, one that could not be recognised by monitoring single embryos. For example, the cells were absent from the midline of the embryo. The researchers investigated that region more closely and found it to function as a physical barrier for the cells. When the cells came in contact with this border, they changed the distribution of actin protein within them, which in turn led to a change of cell migration direction and movement away from the barrier. A deeper understanding of how cells respond to physical barriers may also be relevant to metastatic cancer cells, which invade neighbouring tissues and in which this response may be disrupted.

    Story Source:
    Materials provided by University of Münster. Note: Content may be edited for style and length.