More stories

  • Investigating positron scattering from giant molecular targets

    Particle scattering is an important test of the quantum properties of atoms and larger molecules. While electrons have historically dominated these experiments, their positively charged antimatter counterparts, positrons, can be used in promising applications when the negatively charged particles aren’t suitable.
    A new paper published in EPJ D examines the scattering of positrons from rare gas atoms stuffed inside fullerenes — so-called “rare gas endohedrals.” The paper is authored by Km Akanksha Dubey of the Indian Institute of Technology Patna, Bihta, India, and Marcelo Ciappina of the Guangdong Technion-Israel Institute of Technology, Shantou, China.
    “Our focus was to investigate positron scattering processes with rare gas endohedrals. As a reference to the endohedral system, we also considered positron scattering from bare C60 targets,” Ciappina says. “In our study, we chose rare gas atoms for encapsulation inside carbon 60 (C60), as they are probably the most popular and studied endohedrals. Rare gas endohedrals are very stable formations; the encapsulated atoms find their equilibrium position at almost the geometrical centre of the C60.”
    The study builds upon the findings of previous studies involving the collision of positrons with giant targets like C60 and rare gas endohedrals. The major difference is that the resonance scattering for different sizes of the encaged atoms is elucidated in comparison to bare C60 scattering; resonances are also tested under the different scattering fields of the projectile-target complex.
    “To our surprise, resonance formations in the rare gas endohedrals are altered as compared to the case of positron-C60 collision, despite the dominant scattering field in positron scattering being repulsive in nature,” Ciappina says. The lower-energy resonances are significantly affected by each of the scattering fields considered in turn.
    “Thus, scattering resonances in the positron scattering find their natural abode in the C60 and rare gas endohedrals, and the resonance states can be favourably manipulated by keeping the rare gas atoms inside it.”
    With insights into many aspects of such collision processes, the paper’s findings could have potential applications ranging from positron beam spectroscopy to the investigation of nanomaterials.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • Is AI good or bad for the climate? It's complicated

    As the world fights climate change, will the increasingly widespread use of artificial intelligence (AI) be a help or a hindrance? In a paper published this week in Nature Climate Change, a team of experts in AI, climate change, and public policy present a framework for understanding the complex and multifaceted relationship of AI with greenhouse gas emissions, and suggest ways to better align AI with climate change goals.
    “AI affects the climate in many ways, both positive and negative, and most of these effects are poorly quantified,” said David Rolnick, Assistant Professor of Computer Science at McGill University and a Core Academic Member of Mila — Quebec AI Institute, who co-authored the paper. “For example, AI is being used to track and reduce deforestation, but AI-based advertising systems are likely making climate change worse by increasing the amount that people buy.”
    The paper divides the impacts of AI on greenhouse gas emissions into three categories: 1) impacts from the computational energy and hardware used to develop, train, and run AI algorithms; 2) immediate impacts caused by the applications of AI — such as optimizing energy use in buildings (which decreases emissions) or accelerating fossil fuel exploration (which increases emissions); and 3) system-level impacts caused by the ways in which AI applications affect behaviour patterns and society more broadly, such as via advertising systems and self-driving cars.
    “Climate change should be a key consideration when developing and assessing AI technologies,” said Lynn Kaack, Assistant Professor of Computer Science and Public Policy at the Hertie School, and lead author on the report. “We find that those impacts that are easiest to measure are not necessarily those with the largest impacts. So, evaluating the effect of AI on the climate holistically is important.”
    AI’s impacts on greenhouse gas emissions — a matter of choice
    The authors emphasize the ability of researchers, engineers, and policymakers to shape the impacts of AI, writing that its “… ultimate effect on the climate is far from predestined, and societal decisions will play a large role in shaping its overall impacts.” For example, the paper notes that AI-enabled autonomous vehicle technologies can help lower emissions if they are designed to facilitate public transportation, but they can increase emissions if they are used in personal cars and result in people driving more.
    The researchers also note that machine learning expertise is often concentrated among a limited set of actors. This raises potential challenges with respect to the governance and implementation of machine learning in the context of climate change, since it may create or widen the digital divide, or shift power from public to large private entities by virtue of who controls relevant data or intellectual capital.
    “The choices that we make implicitly as technologists can matter a lot,” said Prof. Rolnick. “Ultimately, AI for Good shouldn’t just be about adding beneficial applications on top of business as usual, it should be about shaping all the applications of AI to achieve the impact we want to see.”
    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.

  • Microfluidic-based soft robotic prosthetics promise relief for diabetic amputees

    Every 30 seconds, a leg is amputated somewhere in the world due to diabetes. These patients often suffer from neuropathy, a loss of sensation in the lower extremities, and are therefore unable to detect damage from an ill-fitting prosthesis; left unchecked, that damage can cost them the limb.
    In Biomicrofluidics, by AIP Publishing, Canadian scientists reveal their development of a new type of prosthetic using microfluidics-enabled soft robotics that promises to greatly reduce skin ulcerations and pain in patients who have had an amputation between the ankle and knee.
    More than 80% of lower-limb amputations in the world are the result of diabetic foot ulcers, and the lower limb is known to swell at unpredictable times, resulting in volume changes of 10% or more.
    Typically, the prosthesis used after amputation includes fabric and silicone liners that can be added or removed to improve fit. The amputee needs to manually change the liners, but neuropathy leading to poor sensation makes this difficult and can lead to more damage to the remaining limb.
    “Rather than creating a new type of prosthetic socket, the typical silicone/fabric limb liner is replaced with a single layer of liner with integrated soft fluidic actuators as an interfacing layer,” said author Carolyn Ren, from the University of Waterloo. “These actuators are designed to be inflated to varying pressures based on the anatomy of the residual limb to reduce pain and prevent pressure ulcerations.”
    The scientists started with a recently developed device using pneumatic actuators to adjust the pressure of the prosthetic socket. This initial device was quite heavy, limiting its use in real-world situations.
    To address this problem, the group developed a way to miniaturize the actuators. They designed a microfluidic chip with 10 integrated pneumatic valves to control each actuator. The full system is controlled by a miniature air pump and two solenoid valves that provide air to the microfluidic chip. The control box is small and light enough to be worn as part of the prosthesis.
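    As a rough illustration of how such a valve-per-actuator scheme works, here is a minimal Python sketch of a pressure-control loop. The target pressures, tolerance, and simple valve model are hypothetical placeholders, not values from the paper.

      # Hypothetical sketch of a valve-per-actuator pressure loop.
      # All pressures and step sizes are illustrative assumptions.
      N_ACTUATORS = 10
      TARGET_KPA = [12, 10, 8, 8, 6, 6, 8, 8, 10, 12]   # assumed pressure map
      TOLERANCE_KPA = 0.5
      STEP_KPA = 0.2            # pressure change per control tick (assumed)

      def control_tick(pressures, targets):
          """Drive each actuator toward its target (bang-bang control)."""
          for i, (p, t) in enumerate(zip(pressures, targets)):
              if p < t - TOLERANCE_KPA:
                  pressures[i] = p + STEP_KPA    # valve admits air from the pump
              elif p > t + TOLERANCE_KPA:
                  pressures[i] = p - STEP_KPA    # valve vents air
          return pressures

      pressures = [0.0] * N_ACTUATORS
      for _ in range(100):      # iterate until all actuators settle
          pressures = control_tick(pressures, TARGET_KPA)
      print([round(p, 1) for p in pressures])

    In the actual device, a single pump and two solenoid valves feed the chip, so the 10 on-chip valves would be multiplexed from a shared air supply rather than driven independently as in this sketch.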
    Medical personnel with extensive experience in prosthetic devices were part of the team and provided a detailed map of desired pressures for the prosthetic socket. The group carried out extensive measurements of the contact pressure provided by each actuator and compared these to the desired pressure for a working prosthesis.
    All 10 actuators were found to produce pressures in the desired range, suggesting the new device will work well in the field. Future research will test the approach on a more accurate biological model.
    The group plans additional research to integrate pressure sensors directly into the prosthetic liner, perhaps using newly available knitted soft fabric that incorporates pressure sensing material.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Electrospinning promises major improvements in wearable technology

    Wearable technology has exploded in recent years. Spurred by advances in flexible sensors, transistors, energy storage, and harvesting devices, wearables encompass miniaturized electronic devices worn directly on the human skin for sensing a range of biophysical and biochemical signals or, as with smart watches, for providing convenient human-machine interfaces.
    Engineering wearables for optimal skin conformity, breathability, and biocompatibility without compromising the tunability of their mechanical, electrical, and chemical properties is no small task. The emergence of electrospinning — the fabrication of nanofibers with tunable properties from a polymer base — is an exciting development in the field.
    In APL Bioengineering, by AIP Publishing, researchers from Tufts University examined some of the latest advances in wearable electronic devices and systems being developed using electrospinning.
    “We show how the scientific community has realized many remarkable things using electrospun nanomaterials,” said author Sameer Sonkusale. “They have applied them for physical activity monitoring, motion tracking, measuring biopotentials, chemical and biological sensing, and even batteries, transistors, and antennas, among others.”
    Sonkusale and his colleagues showcase the many advantages electrospun materials have over conventional bulk materials.
    Their high surface-to-volume ratio endows them with enhanced porosity and breathability, which is important for long-term wearability. Also, with the appropriate blend of polymers, they can achieve superior biocompatibility.
    Conductive electrospun nanofibers provide high surface area electrodes, enabling both flexibility and performance improvements, including rapid charging and high energy storage capacities.
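    The surface-to-volume advantage is easy to quantify. For an idealized cylindrical fiber of radius r and length L, a textbook estimate (not a figure from the paper) gives

      % Surface-to-volume ratio of an idealized cylindrical fiber
      \[
        \frac{S}{V} = \frac{2\pi r L}{\pi r^2 L} = \frac{2}{r},
      \]

    so shrinking the radius from, say, 10 μm to 100 nm multiplies the available surface per unit volume by a factor of 100.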
    “Also, their nanoscale features mean they adhere well to the skin without need for chemical adhesives, which is important if you are interested in measuring biopotentials, like heart activity using electrocardiography or brain activity using electroencephalography,” said Sonkusale.
    Electrospinning is considerably less expensive and more user-friendly than photolithography for realizing nanoscale transistor morphologies with superior electronic transport.
    The researchers are confident electrospinning will further establish its claim as a versatile, feasible, and inexpensive technique for the fabrication of wearable devices in the coming years. They note there are areas for improvement to be considered, including broadening the choice for materials and improving the ease of integration with human physiology.
    They suggest the aesthetics of wearables may be improved by making them smaller and, perhaps, with the incorporation of transparent materials, “almost invisible.”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Humans in the loop help robots find their way

    Just like us, robots can’t see through walls. Sometimes they need a little help to get where they’re going.
    Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks.
    The strategy called Bayesian Learning IN the Dark — BLIND, for short — is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.
    The peer-reviewed study led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in late May.
    The algorithm developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion,” according to the study.
    To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have “high degrees of freedom” — that is, a lot of moving parts.
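    The paper's exact algorithm is more involved, but the human-in-the-loop idea can be sketched schematically. The toy Python example below swaps the authors' Bayesian inverse reinforcement learning for a simple penalty update: the robot plans a path over a waypoint graph, a human flags a segment that crosses an unseen obstacle, and the planner reroutes. The graph, costs, and feedback model are all illustrative assumptions.

      # Schematic human-in-the-loop planning; not the authors' algorithm.
      import heapq

      def plan(graph, start, goal, penalty):
          """Dijkstra search that penalizes human-flagged edges."""
          frontier = [(0.0, start, [start])]
          seen = set()
          while frontier:
              cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, weight in graph[node]:
                  extra = penalty.get((node, nxt), 0.0)   # human feedback
                  heapq.heappush(frontier, (cost + weight + extra, nxt, path + [nxt]))
          return None

      # Toy waypoint graph; the A->C shortcut crosses an occluded region.
      graph = {"A": [("B", 1.0), ("C", 0.5)], "B": [("D", 1.0)],
               "C": [("D", 0.5)], "D": []}
      penalty = {}
      print(plan(graph, "A", "D", penalty))   # robot proposes A -> C -> D
      penalty[("A", "C")] = 100.0             # human flags the unsafe segment
      print(plan(graph, "A", "D", penalty))   # replans to A -> B -> D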

  • Processing photons in picoseconds

    Light has long been used to transmit information in many of our everyday electronic devices. Because light is made of quantum particles called photons, it will also play an important role in information processing in the coming generation of quantum devices. But first, researchers need to gain control of individual photons. Writing in Optica, Columbia Engineers propose using a time lens.
    “Just like an ordinary magnifying glass can zoom in on some spatial phenomena you wouldn’t otherwise be able to see, a time lens lets you resolve details on a temporal scale,” said Chaitali Joshi, the first author of the work and former PhD student in Alexander Gaeta’s lab. A laser is a focused beam of many, many, many photons oscillating through space at a particular frequency; the team’s time lens lets them pick out individual particles of light faster than ever before.
    The experimental set-up consists of two laser beams that “mix” with a signal photon to create another packet of light at a different frequency. With their time lens, Joshi and her colleagues were able to identify single photons from a larger beam with picosecond resolution. That’s 10⁻¹² of a second, and about 70x faster than has been observed with other single-photon detectors, said Joshi, now a postdoc at Caltech. Such a time lens allows for temporally resolving individual photons with a precision that can’t be achieved with current photon detectors.
    In addition to seeing single photons, the team can also manipulate their spectra (i.e., their composite colors), reshaping the path along which they traveled. This is an important step for building quantum information networks. “In such a network, all the nodes need to be able to talk to each other. When those nodes are photons, you need their spectral and temporal bandwidths to match, which we can achieve with a time lens,” said Joshi.
    With future tweaks, the Gaeta lab hopes to further reduce the time resolution by more than a factor of three and will continue exploring how they can control individual photons. “With our time lens, the bandwidth is tunable to whatever application you have in mind: you just need to adjust the magnification,” said Joshi. Potential applications include information processing, quantum key distribution, quantum sensing, and more.
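    For orientation, the magnification language maps onto the standard temporal-imaging relations from the broader time-lens literature; these are textbook expressions, not equations quoted from the Optica paper.

      % Temporal thin-lens equation and magnification, with D_1 and D_2 the
      % input and output group-delay dispersions and D_f the focal GDD.
      \[
        \frac{1}{D_1} + \frac{1}{D_2} = \frac{1}{D_f},
        \qquad
        M = -\frac{D_2}{D_1}
      \]

    A feature of duration τ at the input thus emerges stretched to |M|τ at the output, slow enough for an ordinary detector to resolve.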
    For now, the work is done with optical fibers, though the lab hopes to one day incorporate time lenses into integrated photonic chips, like electronic chips, and scale the system to many devices on a chip to allow for processing many photons simultaneously.
    Story Source:
    Materials provided by Columbia University School of Engineering and Applied Science. Original written by Ellen Neff. Note: Content may be edited for style and length.

  • Making dark semiconductors shine

    Whether or not a solid can emit light, for instance as a light-emitting diode (LED), depends on the energy levels of the electrons in its crystalline lattice. An international team of researchers led by University of Oldenburg physicists Dr Hangyong Shan and Prof. Dr Christian Schneider has succeeded in manipulating the energy levels in an ultra-thin sample of the semiconductor tungsten diselenide in such a way that this material, which normally has a low luminescence yield, began to glow. The team has now published an article on its research in the science journal Nature Communications.
    According to the researchers, their findings constitute a first step towards controlling the properties of matter through light fields. “The idea has been discussed for years, but had not yet been convincingly implemented,” said Schneider. The light effect could be used to optimize the optical properties of semiconductors and thus contribute to the development of innovative LEDs, solar cells, optical components and other applications. In particular the optical properties of organic semiconductors — plastics with semiconducting properties that are used in flexible displays and solar cells or as sensors in textiles — could be enhanced in this way.
    Tungsten diselenide belongs to an unusual class of semiconductors consisting of a transition metal and one of the three elements sulphur, selenium or tellurium. For their experiments the researchers used a sample that consisted of a single crystalline layer of tungsten and selenium atoms with a sandwich-like structure. In physics, such materials, which are only a few atoms thick, are also known as two-dimensional (2D) materials. They often have unusual properties because the charge carriers they contain behave in a completely different manner to those in thicker solids and are sometimes referred to as “quantum materials.”
    The team led by Shan and Schneider placed the tungsten diselenide sample between two specially prepared mirrors and used a laser to excite the material. With this method they were able to create a coupling between light particles (photons) and excited electrons. “In our study, we demonstrate that via this coupling the structure of the electronic transitions can be rearranged such that a dark material effectively behaves like a bright one,” Schneider explained. “The effect in our experiment is so strong that the lower state of tungsten diselenide becomes optically active.” The team was also able to show that the experimental results matched the predictions of a theoretical model to a high degree.
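    The underlying physics is the textbook strong-coupling picture of two coupled oscillators, a cavity photon at energy E_c and an exciton at E_x; this is a generic model, not the paper's full theory.

      % Polariton energies of a coupled photon-exciton system with coupling g
      \[
        E_{\pm} = \frac{E_c + E_x}{2}
          \pm \frac{1}{2}\sqrt{\,4g^2 + (E_c - E_x)^2\,}
      \]

    When the coupling g is strong enough, the new eigenstates mix photonic and excitonic character, which is how an otherwise dark transition can borrow optical activity from the cavity mode.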
    The current study is the result of a collaboration between the researchers at the Carl von Ossietzky University of Oldenburg (Germany) and colleagues from Reykjavik University (Iceland), the University of Würzburg (Germany), Friedrich Schiller University (Germany), Arizona State University (USA) and the National Institute for Materials Science in Tsukuba (Japan). Parts of the theory were developed by colleagues at ITMO University in St. Petersburg (Russia) before the universities terminated their collaboration.
    Story Source:
    Materials provided by University of Oldenburg. Note: Content may be edited for style and length.

  • Who's really in control?

    Humans have long been known to sympathize with the machines or computer representations they operate. Whether driving a car or directing a video game avatar, people are more likely to identify with something they perceive themselves to be in control of. However, it remains unknown how attitudes expressed in the autonomous behavior of robots affect their operators. Now, researchers from Japan have found that when a person controls only a part of the body of a semi-autonomous robot, they are influenced by the robot’s expressed “attitudes.”
    Researchers at the Department of Systems Innovation at Osaka University tested the psychological impact of remotely operating certain semi-autonomous robots on humans. These “telepresence” robots are designed to transmit the human voice and mannerisms as a way of alleviating labor shortages and minimizing commuting costs. For example, a human operator may control the voice, while the body movements are handled automatically by a computer. “Semi-autonomous robots have shown potential for practical applications in which a robot’s autonomous actions and human teleoperation are jointly used to accomplish difficult tasks. A system that combines the ‘intentions’ of different agents, such as an algorithm and a human user, that are collectively used to operate a single robot is called collaborative control,” first author Tomonori Kubota says.
    In the experiment, the team investigated whether the attitude of the teleoperator would align more with that expressed by the semi-autonomous robot when they controlled a part of the robot’s body. Beforehand, experimental participants were asked to rank a set of 10 paintings. They were then assigned to one of three conditions for controlling a human-like robot. Either they operated the robot’s hand movement, ability to smile, or did not control the robot at all. They were then shown the android speaking to another participant who was actually collaborating with the experimenters. The android recommended the painting that had been ranked sixth, and the experimenters recorded how much this influenced the robot operator’s subsequent ranking of that painting. “This study reveals that when a person operates a part of the body of an android robot that autonomously interacts with a human, the person’s attitudes come to closely align with the robot’s attitudes,” senior author Hiroshi Ishiguro says.
    This research indicates that in future implementations of “human-robot collaborations,” designers need to be mindful of the ways an operator’s role can subconsciously shift their attitudes.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.