More stories

  • in

    Humans in the loop help robots find their way

    Just like us, robots can’t see through walls. Sometimes they need a little help to get where they’re going.
    Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks.
    The strategy called Bayesian Learning IN the Dark — BLIND, for short — is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.
    The peer-reviewed study led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in late May.
    The algorithm developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion,” according to the study.
    To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have “high degrees of freedom” — that is, a lot of moving parts. More
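    The full formulation is in the paper; purely as a rough, hypothetical sketch of the general idea, the Python below keeps a Bayesian belief over which candidate waypoints are safe, queries a (simulated) human about the waypoint it is least sure of, updates the belief, and then plans only through waypoints it has come to trust. Every name, number and threshold here is invented for illustration and is not taken from BLIND.

```python
import random

# Hypothetical illustration of a human-in-the-loop planner (not the BLIND code):
# the robot maintains a Bayesian belief that each candidate waypoint is safe,
# asks a human when uncertain, and plans only through waypoints it trusts.

random.seed(0)

waypoints = [f"wp{i}" for i in range(6)]
belief = {wp: 0.5 for wp in waypoints}          # prior P(safe) for each waypoint
truly_safe = {"wp0", "wp2", "wp3", "wp5"}       # hidden ground truth (simulated world)

def human_feedback(wp):
    """Simulated human operator: reports whether a waypoint is safe, occasionally mistaken."""
    correct = wp in truly_safe
    return correct if random.random() < 0.9 else not correct

def bayes_update(prior, says_safe, accuracy=0.9):
    """Update P(safe) given a noisy human label with known accuracy."""
    like_safe = accuracy if says_safe else 1 - accuracy
    like_unsafe = (1 - accuracy) if says_safe else accuracy
    evidence = like_safe * prior + like_unsafe * (1 - prior)
    return like_safe * prior / evidence

for step in range(10):
    # Query the human about the waypoint the robot is least certain about.
    wp = min(belief, key=lambda w: abs(belief[w] - 0.5))
    belief[wp] = bayes_update(belief[wp], human_feedback(wp))

plan = [wp for wp in waypoints if belief[wp] > 0.8]   # keep only trusted waypoints
print("belief:", {w: round(p, 2) for w, p in belief.items()})
print("planned route through:", plan)
```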

  • in

    Processing photons in picoseconds

    Light has long been used to transmit information in many of our everyday electronic devices. Because light is made of quantum particles called photons, it will also play an important role in information processing in the coming generation of quantum devices. But first, researchers need to gain control of individual photons. Writing in Optica, Columbia Engineers propose using a time lens.
    “Just like an ordinary magnifying glass can zoom in on some spatial phenomena you wouldn’t otherwise be able to see, a time lens lets you resolve details on a temporal scale,” said Chaitali Joshi, the first author of the work and former PhD student in Alexander Gaeta’s lab. A laser is a focused beam of many, many, many photons oscillating through space at a particular frequency; the team’s time lens lets them pick out individual particles of light faster than ever before.
    The experimental setup consists of two laser beams that “mix” with a signal photon to create another packet of light at a different frequency. With their time lens, Joshi and her colleagues were able to identify single photons from a larger beam with picosecond resolution. That’s 10⁻¹² of a second, and about 70 times faster than has been observed with other single-photon detectors, said Joshi, now a postdoc at Caltech. Such a time lens allows for temporally resolving individual photons with a precision that can’t be achieved with current photon detectors.
    In addition to seeing single photons, the team can also manipulate their spectra (i.e., their composite colors), reshaping the path along which they traveled. This is an important step for building quantum information networks. “In such a network, all the nodes need to be able to talk to each other. When those nodes are photons, you need their spectral and temporal bandwidths to match, which we can achieve with a time lens,” said Joshi.
    With future tweaks, the Gaeta lab hopes to further reduce the time resolution by more than a factor of three and will continue exploring how they can control individual photons. “With our time lens, the bandwidth is tunable to whatever application you have in mind: you just need to adjust the magnification,” said Joshi. Potential applications include information processing, quantum key distribution, quantum sensing, and more.
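    For readers who want the quantitative picture behind “magnification,” the standard space-time-duality relations from textbook temporal imaging (not equations quoted from the Optica paper) mirror the familiar thin-lens law, with group-delay dispersion playing the role of propagation distance:

```latex
% Textbook temporal-imaging relations (space-time duality), not taken from the
% Optica paper: group-delay dispersion D plays the role of propagation distance,
% and a quadratic temporal phase plays the role of a thin lens with "focal" dispersion D_f.
\[
\frac{1}{D_{\mathrm{in}}} + \frac{1}{D_{\mathrm{out}}} = \frac{1}{D_{f}},
\qquad
M = -\,\frac{D_{\mathrm{out}}}{D_{\mathrm{in}}},
\]
% analogous to the spatial thin-lens law 1/d_o + 1/d_i = 1/f.
```

    Changing the output dispersion rescales temporal features by M, which is the sense in which the bandwidth can be tuned by adjusting the magnification.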
    For now, the work is done with optical fibers, though the lab hopes to one day incorporate time lenses into integrated photonic chips, like electronic chips, and scale the system to many devices on a chip to allow for processing many photons simultaneously.
    Story Source:
    Materials provided by Columbia University School of Engineering and Applied Science. Original written by Ellen Neff. Note: Content may be edited for style and length. More

  • in

    Making dark semiconductors shine

    Whether or not a solid can emit light, for instance as a light-emitting diode (LED), depends on the energy levels of the electrons in its crystalline lattice. An international team of researchers led by University of Oldenburg physicists Dr. Hangyong Shan and Prof. Dr. Christian Schneider has succeeded in manipulating the energy levels in an ultra-thin sample of the semiconductor tungsten diselenide in such a way that this material, which normally has a low luminescence yield, began to glow. The team has now published an article on its research in the science journal Nature Communications.
    According to the researchers, their findings constitute a first step towards controlling the properties of matter through light fields. “The idea has been discussed for years, but had not yet been convincingly implemented,” said Schneider. The light effect could be used to optimize the optical properties of semiconductors and thus contribute to the development of innovative LEDs, solar cells, optical components and other applications. In particular the optical properties of organic semiconductors — plastics with semiconducting properties that are used in flexible displays and solar cells or as sensors in textiles — could be enhanced in this way.
    Tungsten diselenide belongs to an unusual class of semiconductors consisting of a transition metal and one of the three elements sulphur, selenium or tellurium. For their experiments the researchers used a sample consisting of a single crystalline layer of tungsten and selenium atoms arranged in a sandwich-like structure. In physics, such materials, which are only a few atoms thick, are also known as two-dimensional (2D) materials. They often have unusual properties because the charge carriers they contain behave in a completely different manner from those in thicker solids; such materials are sometimes referred to as “quantum materials.”
    The team led by Shan and Schneider placed the tungsten diselenide sample between two specially prepared mirrors and used a laser to excite the material. With this method they were able to create a coupling between light particles (photons) and excited electrons. “In our study, we demonstrate that via this coupling the structure of the electronic transitions can be rearranged such that a dark material effectively behaves like a bright one,” Schneider explained. “The effect in our experiment is so strong that the lower state of tungsten diselenide becomes optically active.” The team was also able to show that the experimental results matched the predictions of a theoretical model to a high degree.
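    As a generic illustration of the mechanism (not the specific model used in the Nature Communications paper), strong light-matter coupling is often captured by a two-level coupled-oscillator Hamiltonian whose mixed eigenstates, the polaritons, sit at shifted energies:

```latex
% Generic two-level coupled-oscillator model of strong light-matter coupling;
% an illustration of the mechanism only, not the model used in the paper.
\[
H = \begin{pmatrix} E_{\mathrm{exc}} & \hbar g \\ \hbar g & E_{\mathrm{cav}} \end{pmatrix},
\qquad
E_{\pm} = \frac{E_{\mathrm{exc}} + E_{\mathrm{cav}}}{2}
\pm \sqrt{\left(\frac{E_{\mathrm{exc}} - E_{\mathrm{cav}}}{2}\right)^{2} + (\hbar g)^{2}},
\]
% E_exc: excitonic transition energy; E_cav: cavity-photon energy set by the mirrors;
% g: light-matter coupling rate. The eigenstates (polaritons) mix light and matter.
```

    In this generic picture, a sufficiently strong coupling pushes the mixed states to new energies and gives them a photonic component, which is the loose sense in which an otherwise weakly emitting transition can brighten.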
    The current study is the result of a collaboration between the researchers at the Carl von Ossietzky University of Oldenburg (Germany) and colleagues from Reykjavik University (Iceland), the University of Würzburg (Germany), Friedrich Schiller University (Germany), Arizona State University (USA) and the National Institute for Materials Science in Tsukuba (Japan). Parts of the theory were developed by colleagues at ITMO University in St. Petersburg (Russia) before the universities terminated their collaboration.
    Story Source:
    Materials provided by University of Oldenburg. Note: Content may be edited for style and length. More

  • in

    Who's really in control?

    Humans have long been known to sympathize with the machines or computer representations they operate. Whether driving a car or directing a video game avatar, people are more likely to identify with something they perceive themselves to be in control of. However, it has remained unknown how the attitudes expressed in a robot’s autonomous behavior affect its operator. Now, researchers from Japan have found that when a person controls only a part of the body of a semi-autonomous robot, they are influenced by the robot’s expressed “attitudes.”
    Researchers at the Department of Systems Innovation at Osaka University tested the psychological impact of remotely operating certain semi-autonomous robots on humans. These “telepresence” robots are designed to transmit the human voice and mannerisms as a way of alleviating labor shortages and minimizing commuting costs. For example, a human operator may control the voice, while the body movements are handled automatically by a computer. “Semi-autonomous robots have shown potential for practical applications in which a robot’s autonomous actions and human teleoperation are jointly used to accomplish difficult tasks. A system that combines the ‘intentions’ of different agents, such as an algorithm and a human user, that are collectively used to operate a single robot is called collaborative control,” first author Tomonori Kubota says.
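    To make “collaborative control” concrete, here is a deliberately simplified, hypothetical sketch (not the Osaka University system): a human callback drives one channel of the robot, an autonomous routine drives the rest, and the two streams are merged into a single command every control tick.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Toy illustration of collaborative control (not the Osaka University system):
# a human teleoperates one channel of a semi-autonomous robot while an
# autonomous policy drives the remaining channels; both are merged per tick.

@dataclass
class RobotCommand:
    hand_angle: float   # channel the human controls in this example
    smile: float        # channel driven autonomously
    gaze: float         # channel driven autonomously

def autonomous_policy(t: float) -> Dict[str, float]:
    """Placeholder autonomy: hold a mild smile and slowly sweep the gaze."""
    return {"smile": 0.4, "gaze": 0.1 * (t % 10)}

def collaborative_step(t: float, human_input: Callable[[], float]) -> RobotCommand:
    """Merge the human-controlled channel with autonomously generated ones."""
    auto = autonomous_policy(t)
    return RobotCommand(hand_angle=human_input(), smile=auto["smile"], gaze=auto["gaze"])

if __name__ == "__main__":
    operator = lambda: 30.0          # simulated operator: holds the hand at a fixed angle
    for tick in range(3):
        print(collaborative_step(float(tick), operator))
```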
    In the experiment, the team investigated whether the attitude of the teleoperator would align more with that expressed by the semi-autonomous robot when they controlled a part of the robot’s body. Beforehand, experimental participants were asked to rank a set of 10 paintings. They were then assigned to one of three conditions for controlling a human-like robot: they either operated the robot’s hand movements, controlled its ability to smile, or did not control the robot at all. They were then shown the android speaking to another participant who was actually collaborating with the experimenters. The android recommended the painting that had been ranked sixth, and the experimenters recorded how much this influenced the robot operator’s subsequent ranking of that painting. “This study reveals that when a person operates a part of the body of an android robot that autonomously interacts with a human, the person’s attitudes come to closely align with the robot’s attitudes,” senior author Hiroshi Ishiguro says.
    This research indicates that in future implementations of “human-robot collaborations,” designers need to be mindful of the ways an operator’s role can subconsciously shift their attitudes.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length. More

  • in

    Supernumerary virtual robotic arms can feel like part of our body

    Research teams at the University of Tokyo, Keio University and Toyohashi University of Technology in Japan have developed a virtual robotic limb system that can be operated by users’ feet in a virtual environment as extra, or supernumerary, limbs. After training, users reported feeling like the virtual robotic arms had become part of their own body. This study focused on the perceptual changes of the participants, an understanding of which can contribute to designing real physical supernumerary robotic limb systems that people can use naturally and freely, just like their own bodies.
    What would you do with an extra arm, or if like Spider-Man’s nemesis Doctor Octopus, you could have an extra four? Research into extra, or supernumerary, robotic limbs looks at how we might adapt, mentally and physically, to having additional limbs added to our bodies.
    Doctoral student Ken Arai from the Research Center for Advanced Science and Technology (RCAST) at the University of Tokyo became interested in this research as a way to explore the limits of human “plasticity” — in other words, our brain’s ability to alter and adapt to external and internal changes. One example of plasticity is the way that we can learn to use new tools and sometimes even come to see them as extensions of ourselves, referred to as “tool embodiment,” whether it’s an artist’s paintbrush or hairdresser’s scissors.
    To explore these concepts in action, teams at the University of Tokyo, Keio University and Toyohashi University of Technology in Japan collaborated to create a virtual robotic limb system. They then asked participants to perform tasks in virtual reality (VR) using the virtual limbs.
    “We investigated whether virtual robotic arms, as supernumerary limbs, could be perceived as part of one’s own body, and whether perceptual changes would occur regarding the proximal space around the robotic arm,” said Arai.
    Participants wore a head-mounted display to give them a first-person view of their own arms represented in VR, as well as the additional virtual robotic arms. They then had to perform tasks using only the virtual robotic arms, which were controlled by moving their toes. Tactile devices returned sensations from the virtual robotic arms to the tops and soles of their feet when they touched an object, like a virtual ball.
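    As a minimal, invented sketch of the kind of mapping such a system needs (not the study’s actual implementation), the snippet below converts toe flexion into a virtual arm joint angle and returns a haptic signal when the arm contacts a virtual ball.

```python
# Toy sketch of a foot-driven supernumerary-arm mapping (not the study's system):
# toe flexion drives a virtual arm joint, and contact with a virtual object
# triggers haptic feedback routed back to the foot.

def toe_to_joint_angle(toe_flexion: float, max_angle: float = 90.0) -> float:
    """Map normalized toe flexion (0..1) to a virtual arm joint angle in degrees."""
    return max(0.0, min(1.0, toe_flexion)) * max_angle

def haptic_feedback(arm_tip_pos: float, ball_pos: float, radius: float = 0.05) -> float:
    """Return vibration intensity (0 or 1) when the virtual arm tip touches the ball."""
    return 1.0 if abs(arm_tip_pos - ball_pos) < radius else 0.0

for flexion in (0.0, 0.5, 1.0):
    angle = toe_to_joint_angle(flexion)
    tip = angle / 90.0                     # crude forward kinematics onto a line
    print(f"flexion={flexion:.1f} -> joint={angle:.0f} deg, haptics={haptic_feedback(tip, 1.0)}")
```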
    Once the participants learned how to use the virtual system, they reported feeling like the virtual robotic arms had become their own extra arms and not just extensions of their real arms or feet. “The scores of subjective evaluation statistically became significantly higher for ‘sense of body ownership,’ ‘sense of agency’ and ‘sense of self-location,’ which are important measures of embodiment, where the supernumerary robotic limb is able to become part of the body,” said Arai.
    The team also found that the participants’ “peripersonal space” (the area around our bodies which we perceive as being our personal space) extended to include the area around the virtual robotic arms. As Arai explained, “We succeeded in capturing the positive association between the perceptual change in visuo-tactile integration around the supernumerary robotic limbs (peripersonal space), and the score change of subjective evaluation of feeling the number of one’s arms increased (supernumerary limb sensation).”
    Next, the team wants to look at the potential for cooperative behavior between participants’ own arms in virtual reality and the virtual robotic arms. “Investigating the mechanisms and dynamics of the supernumerary limb sensation reported here from the standpoint of cognitive neuroscience will be important in exploring human plasticity limits and the design of supernumerary robotic limb systems,” said Arai. The hope is that understanding the perceptual changes and cognitive effort required to operate a supernumerary robotic limb system in VR will aid in designing real-life systems in the future that people can use naturally, just like their own bodies.
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length. More

  • in

    The heat is on: Traces of fire uncovered dating back at least 800,000 years

    They say that where there’s smoke, there’s fire, and Weizmann Institute of Science researchers are working hard to investigate that claim, or at least elucidate what constitutes “smoke.” In an article published today in PNAS, the scientists reveal an advanced, innovative method that they have developed and used to detect nonvisual traces of fire dating back at least 800,000 years — one of the earliest known pieces of evidence for the use of fire. The newly developed technique may provide a push toward a more scientific, data-driven type of archaeology, but — perhaps more importantly — it could help us better understand the origins of the human story, our most basic traditions and our experimental and innovative nature.
    The controlled use of fire by ancient hominins — a group that includes humans and some of our extinct family members — is hypothesized to date back at least a million years, to around the time that archaeologists believe Homo habilis began its transition to Homo erectus. That is no coincidence, as the working theory, called the “cooking hypothesis,” is that the use of fire was instrumental in our evolution, not only for allowing hominins to stay warm, craft advanced tools and ward off predators but also for acquiring the ability to cook. Cooking meat not only eliminates pathogens but also increases protein digestibility and nutritional value, paving the way for the growth of the brain. The only problem with this hypothesis is a lack of data: since finding archaeological evidence of pyrotechnology primarily relies on visual identification of modifications resulting from the combustion of objects (mainly, a color change), traditional methods have managed to find widespread evidence of fire use no older than 200,000 years. While there is some evidence of fire dating back to 500,000 years ago, it remains sparse, with only five archaeological sites around the world providing reliable evidence of ancient fire.
    “We may have just found the sixth site,” says Dr. Filipe Natalio of Weizmann’s Plant and Environmental Sciences Department, whose previous collaboration with Dr. Ido Azuri, of Weizmann’s Life Core Facilities Department, and colleagues provided the basis for this project. Together they pioneered the application of AI and spectroscopy in archaeology to find indications of controlled burning of stone tools dating back to between 200,000 and 420,000 years ago in Israel. Now they’re back, joined by PhD student Zane Stepka, Dr. Liora Kolska Horwitz from the Hebrew University of Jerusalem and Prof. Michael Chazan from the University of Toronto, Canada. The team upped the ante by taking a “fishing expedition” — casting far out into the water and seeing what they could reel back in. “When we started this project,” says Natalio, “the archaeologists who’ve been analyzing the findings from Evron Quarry told us we wouldn’t find anything. We should have made a bet.”
    Evron Quarry, located in the Western Galilee, is an open-air archaeological site that was first discovered in the mid-1970s. During a series of excavations that took place at that time and were led by Prof. Avraham Ronen, archaeologists dug down 14 meters and uncovered a large array of animal fossils and Paleolithic tools dating back to between 800,000 and 1 million years ago, making it one of the oldest sites in Israel. None of the finds from the site or the soil in which they were found had any visual evidence of heat: ash and charcoal degrade over time, eliminating the chances of finding visual evidence of burning. Thus, if the Weizmann scientists wanted to find evidence of fire, they had to search farther afield.
    The “fishing” expedition began with the development of a more advanced AI model than they had previously used. “We tested a variety of methods, among them traditional data analysis methods, machine learning modeling and more advanced deep learning models,” says Azuri, who headed the development of the models. “The deep learning models that prevailed had a specific architecture that outperformed the others and successfully gave us the confidence we needed to further use this tool in an archaeological context having no visual signs of fire use.” The advantage of AI is that it can find hidden patterns across a multitude of scales. By pinpointing the chemical composition of materials down to the molecular level, the output of the model can estimate the temperature to which the stone tools were heated, ultimately providing information about past human behaviors.
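    The study’s actual architecture and spectra are not reproduced here, so purely as an illustration of the kind of pipeline described, the sketch below trains a small regressor on synthetic “spectra” labeled with heating temperatures and then estimates the temperature of an unseen sample; all data and parameters are fabricated for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative only: a toy "spectrum -> heating temperature" regressor.
# The real study applied deep learning to spectroscopic measurements of flint;
# the data generator and model below are invented for this sketch.

rng = np.random.default_rng(0)

def synthetic_spectrum(temp_c: float, n_channels: int = 64) -> np.ndarray:
    """Fake spectrum whose peak position drifts with heating temperature."""
    x = np.linspace(0, 1, n_channels)
    peak = 0.2 + 0.6 * (temp_c / 800.0)          # peak shifts with temperature
    return np.exp(-((x - peak) ** 2) / 0.01) + 0.05 * rng.standard_normal(n_channels)

temps = rng.uniform(20, 800, size=300)            # training temperatures in deg C
X = np.stack([synthetic_spectrum(t) for t in temps])

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, temps)

unknown = synthetic_spectrum(600.0)               # "artifact" with unknown heating history
print(f"estimated heating temperature: {model.predict(unknown[None, :])[0]:.0f} °C")
```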
    With an accurate AI method in hand, the team could start fishing for molecular signals from the stone tools used by the inhabitants of the Evron Quarry almost a million years ago. To this end, the team assessed the heat exposure of 26 flint tools found at the site almost half a century ago. The results revealed that the tools had been heated to a wide range of temperatures — some exceeding 600°C. In addition, using a different spectroscopic technique, they analyzed 87 faunal remains and discovered that the tusk of an extinct elephant also exhibited structural changes resulting from heating. While cautious in their claim, the presence of hidden heat suggests that our ancient ancestors, not unlike the scientists themselves, were experimentalists.
    According to the research team, by looking at the archaeology from a different perspective, using new tools, we may find much more than we initially thought. The methods they’ve developed could be applied, for example, at other Lower Paleolithic sites to identify nonvisual evidence of fire use. Furthermore, this method could perhaps offer a renewed spatiotemporal perspective on the origins and controlled use of fire, helping us to better understand how hominins’ pyrotechnology-related behaviors evolved and drove other behaviors. “Especially in the case of early fire,” says Stepka, “if we use this method at archaeological sites that are one or two million years old, we might learn something new.”
    By all accounts, the fishing expedition was a resounding success. “It was not only a demonstration of exploration and being rewarded in terms of the knowledge gained,” says Natalio, “but of the potential that lies in combining different disciplines: Ido has a background in quantum chemistry, Zane is a scientific archaeologist, and Liora and Michael are prehistorians. By working together, we have learned from each other. For me, it’s a demonstration of how scientific research across the humanities and science should work.”
    Dr. Natalio’s research is supported by the Yeda-Sela Center for Basic Research. More

  • in

    People less outraged by gender discrimination caused by algorithms

    People are less morally outraged when gender discrimination occurs because of an algorithm rather than direct human involvement, according to research published by the American Psychological Association.
    In the study, researchers coined the phrase “algorithmic outrage deficit” to describe their findings from eight experiments conducted with a total of more than 3,900 participants from the United States, Canada and Norway.
    When presented with various scenarios about gender discrimination in hiring decisions caused by algorithms and humans, participants were less morally outraged about those caused by algorithms. Participants also believed companies were less legally liable for discrimination when it was due to an algorithm.
    “It’s concerning that companies could use algorithms to shield themselves from blame and public scrutiny over discriminatory practices,” said lead researcher Yochanan Bigman, PhD, a post-doctoral research fellow at Yale University and incoming assistant professor at Hebrew University. The findings could have broader implications and affect efforts to combat discrimination, Bigman said. The research was published online in the Journal of Experimental Psychology: General.
    “People see humans who discriminate as motivated by prejudice, such as racism or sexism, but they see algorithms that discriminate as motivated by data, so they are less morally outraged,” Bigman said. “Moral outrage is an important societal mechanism to motivate people to address injustices. If people are less morally outraged about discrimination, then they might be less motivated to do something about it.”
    Some of the experiments used a scenario based on a real-life example of alleged algorithm-based gender discrimination by Amazon that penalized female job applicants. While the research focused on gender discrimination, one of the eight experiments was replicated to examine racial and age discrimination and had similar findings.
    Knowledge about artificial intelligence didn’t appear to make a difference. In one experiment with more than 150 tech workers in Norway, participants who reported greater knowledge about artificial intelligence were still less outraged by gender discrimination caused by algorithms.
    When people learn more about a specific algorithm it may affect their outlook, the researchers found. In another study, participants were more outraged when a hiring algorithm that caused gender discrimination was created by male programmers at a company known for sexist practices.
    Programmers should be aware of the possibility of unintended discrimination when designing new algorithms, Bigman said. Public education campaigns also could stress that discrimination caused by algorithms may be a result of existing inequities, he said.
    Story Source:
    Materials provided by American Psychological Association. Note: Content may be edited for style and length. More

  • in

    Flexing the power of a conductive polymer

    For decades, field-effect transistors enabled by silicon-based semiconductors have powered the electronics revolution. But in recent years, manufacturers have come up against hard physical limits to further size reductions and efficiency gains of silicon chips. That has scientists and engineers looking for alternatives to conventional complementary metal-oxide-semiconductor (CMOS) transistors.
    “Organic semiconductors offer several distinct advantages over conventional silicon-based semiconducting devices: they are made from abundantly available elements, such as carbon, hydrogen and nitrogen; they offer mechanical flexibility and low cost of manufacture; and they can be fabricated easily at scale,” notes UC Santa Barbara engineering professor Yon Visell, part of a group of researchers working with the new materials. “Perhaps more importantly, the polymers themselves can be crafted using a wide variety of chemistry methods to endow the resulting semiconducting devices with interesting optical and electrical properties. These properties can be designed, tuned or selected in many more ways than can inorganic (e.g., silicon-based) transistors.”
    The design flexibility that Visell describes is exemplified in the reconfigurability of the devices reported by UCSB researchers and others in the journal Advanced Materials.
    Reconfigurable logic circuits are of particular interest as candidates for post-CMOS electronics, because they make it possible to simplify circuit design while increasing energy efficiency. One recently developed class of carbon-based (as opposed to, say, silicon- or gallium-nitride-based) transistors, called organic electrochemical transistors (OECTs), has been shown to be well-suited for reconfigurable electronics.
    In the recent paper, chemistry professor Thuc-Quyen Nguyen, who leads the UCSB Center for Polymers and Organic Solids, and co-authors including Visell describe a breakthrough material — a soft, semiconducting carbon-based polymer — that can provide unique advantages over the inorganic semiconductors currently found in conventional silicon transistors.
    “Reconfigurable organic logic devices are promising candidates for the next generations of efficient computing systems and adaptive electronics,” the researchers write. “Ideally, such devices would be of simple structure and design, [as well as] power-efficient and compatible with high-throughput microfabrication techniques.”
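    As a purely conceptual sketch of what “reconfigurable logic” means (no OECT device physics, just the programming idea), the toy model below is a single gate whose Boolean function is selected at run time by a programming input rather than being fixed at fabrication:

```python
# Conceptual toy model of a reconfigurable logic gate (not an OECT device model):
# the same element computes different Boolean functions depending on a
# programming input, instead of the function being fixed at fabrication time.

GATE_TABLE = {
    "NAND": lambda a, b: not (a and b),
    "NOR":  lambda a, b: not (a or b),
    "XOR":  lambda a, b: a != b,
}

class ReconfigurableGate:
    def __init__(self, mode: str = "NAND"):
        self.mode = mode                 # "programming" input selecting the function

    def reconfigure(self, mode: str) -> None:
        self.mode = mode

    def __call__(self, a: bool, b: bool) -> bool:
        return GATE_TABLE[self.mode](a, b)

gate = ReconfigurableGate("NAND")
print([gate(a, b) for a in (0, 1) for b in (0, 1)])   # NAND truth table
gate.reconfigure("NOR")                                # same element, new function
print([gate(a, b) for a in (0, 1) for b in (0, 1)])   # NOR truth table
```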
    Conjugating for Conductivity More