More stories

  • A simpler way to connect quantum computers

    Researchers have a new way to connect quantum devices over long distances, a necessary step toward allowing the technology to play a role in future communications systems.
    While today’s classical data signals can be amplified across a city or an ocean, quantum signals cannot. They must instead be relayed at intervals — captured, stored briefly and passed on by specialized machines called quantum repeaters. Many experts believe these quantum repeaters will play a key role in future communication networks, allowing enhanced security and enabling connections between remote quantum computers.
    The Princeton study, published Aug. 30 in Nature, lays the groundwork for a new approach to building quantum repeaters: a device that sends telecom-ready light emitted from a single ion implanted in a crystal. The effort, many years in the making, combined advances in photonic design and materials science, according to Jeff Thompson, the study’s principal author.
    Other leading quantum repeater designs emit light in the visible spectrum, which degrades quickly over optical fiber and must be converted before traveling long distances. The new device is based on a single rare earth ion implanted in a host crystal, and because this ion emits light at an ideal infrared wavelength, it requires no such signal conversion, which can lead to simpler and more robust networks (the sketch at the end of this item gives a rough sense of the difference in fiber loss).
    The device has two parts: a calcium tungstate crystal doped with just a handful of erbium ions, and a nanoscopic piece of silicon etched into a J-shaped channel. Pulsed with a special laser, the ion emits light up through the crystal. But the silicon piece, a wisp of a semiconductor bonded to the top of the crystal, catches that light and guides individual photons into an optical fiber.
    Ideally, this photon would be encoded with information from the ion, Thompson said, or more specifically, from a quantum property of the ion called spin. In a quantum repeater, collecting and interfering the signals from distant nodes would create entanglement between their spins, allowing end-to-end transmission of quantum states despite losses along the way.
    Thompson’s team first started working with erbium ions several years earlier, but the first versions used different host crystals that harbored too much noise. In particular, this noise caused the frequency of the emitted photons to jump around randomly in a process known as spectral diffusion, which prevented the delicate quantum interference needed to operate quantum networks. To solve this problem, his lab started working with Nathalie de Leon, associate professor of electrical and computer engineering, and Robert Cava, a leading solid-state materials scientist and Princeton’s Russell Wellman Moore Professor of Chemistry, to explore new materials that could host single erbium ions with much less noise.
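    To make the wavelength point above concrete, here is a minimal, illustrative Python sketch of single-photon survival in optical fiber. The attenuation coefficients are assumed, order-of-magnitude values (roughly 0.2 dB/km in the telecom band versus a few dB/km for visible light in silica fiber), not figures from the study; the point is only that direct transmission decays exponentially with distance, which is why telecom-band emission and repeaters matter.

    ```python
    # Illustrative only: assumed fiber attenuation coefficients (dB/km).
    TELECOM_DB_PER_KM = 0.2   # rough order of magnitude for the telecom C-band
    VISIBLE_DB_PER_KM = 3.0   # visible light in silica fiber is far lossier

    def survival_probability(distance_km: float, loss_db_per_km: float) -> float:
        """Probability that a single photon survives the given length of fiber."""
        return 10 ** (-loss_db_per_km * distance_km / 10)

    for d in (1, 10, 50, 100):
        print(f"{d:>4} km   telecom: {survival_probability(d, TELECOM_DB_PER_KM):.2e}   "
              f"visible: {survival_probability(d, VISIBLE_DB_PER_KM):.2e}")
    ```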

  • Unveiling global warming’s impact on daily precipitation with deep learning

    A collaborative international research team led by Professor Yoo-Geun Ham from Chonnam National University and Professor Seung-Ki Min from Pohang University of Science and Technology (POSTECH) has made a discovery about the impact of global warming on global daily precipitation. Using a deep learning approach, the team unveiled a significant change in the characteristics of global daily precipitation for the first time. The findings were published on August 30 in the online version of Nature.
    The research team devised a deep learning model to quantify the relationship between the intensity of global warming and global daily precipitation patterns, and then applied it to satellite-based precipitation observations. The results revealed that since 2015, the daily precipitation pattern has deviated clearly from natural variability on more than 50% of all days, a change influenced by human-induced global warming.
    In contrast to conventional studies, which primarily focus on long-term trends in monthly or annual precipitation, the researchers employed explainable artificial intelligence to demonstrate that day-to-day precipitation variability was gradually intensifying on weather timescales. These fluctuations in rainfall at weather timescales served as the most conspicuous indicators of global warming. The study further found that the most evident changes in daily precipitation variability were observed over the sub-tropical East Pacific and the mid-latitude storm track regions.
    The researchers explained that the traditional linear statistical methods used in previous climate change detection research had limitations in discerning non-linear responses, such as the intensified variability in daily precipitation. Deep learning overcame these limitations by employing non-linear activation functions. Moreover, while previous methods primarily investigated global patterns of precipitation change, convolutional deep learning offered a distinct advantage in detecting the regional patterns of change caused by global warming (a minimal sketch of such a convolutional model appears at the end of this item).
    Professor Yoo-Geun Ham explained, “Intensification of day-to-day precipitation variability implies an increase in the frequency of extreme precipitation events as well as a higher occurrence of heatwaves during the summer due to extended dry spells.” Professor Seung-Ki Min added, “Given the ongoing trajectory of global warming, it is imperative to develop countermeasures, as the consecutive occurrence of extreme precipitation and heatwaves is likely to become more frequent in the future.”
    This study was conducted with support from the Ministry of Environment and the National Research Foundation of Korea.
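    To make the phrase “convolutional deep learning” above concrete, the sketch below shows a small CNN that maps one day’s gridded precipitation field to a scalar warming-intensity index. The grid resolution, layer sizes and non-linear activations are assumptions chosen for illustration; this is not the authors’ architecture or data.

    ```python
    import torch
    from torch import nn

    # A minimal sketch: a small CNN mapping one day's gridded precipitation
    # anomaly field to a scalar "warming intensity" index. The 72 x 144 grid
    # (~2.5 degrees) and all layer sizes are assumed for illustration.
    class PrecipToWarmingIndex(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),  # non-linear activation
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 18 * 36, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x)).squeeze(-1)

    model = PrecipToWarmingIndex()
    daily_precip = torch.randn(8, 1, 72, 144)   # batch of 8 synthetic daily maps
    print(model(daily_precip).shape)            # torch.Size([8])
    ```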

  • Challenge accepted: High-speed AI drone overtakes world-champion drone racers

    Remember when IBM’s Deep Blue defeated Garry Kasparov at chess in 1997, or Google DeepMind’s AlphaGo crushed the top champion Lee Sedol at Go, a much more complex game, in 2016? These competitions where machines prevailed over human champions are key milestones in the history of artificial intelligence. Now a group of researchers from the University of Zurich and Intel has set a new milestone with the first autonomous system capable of beating human champions at a physical sport: drone racing.
    The AI system, called Swift, won multiple races against three world-class champions in first-person view (FPV) drone racing, where pilots fly quadcopters at speeds exceeding 100 km/h, controlling them remotely while wearing a headset linked to an onboard camera.
    Learning by interacting with the physical world
    “Physical sports are more challenging for AI because they are less predictable than board or video games. We don’t have a perfect knowledge of the drone and environment models, so the AI needs to learn them by interacting with the physical world,” says Davide Scaramuzza, head of the Robotics and Perception Group at the University of Zurich — and newly minted drone racing team captain.
    Until very recently, autonomous drones took twice as long as human-piloted ones to fly through a racetrack, unless they relied on an external position-tracking system to precisely control their trajectories. Swift, however, reacts in real time to the data collected by an onboard camera, like the one used by human racers. Its integrated inertial measurement unit measures acceleration and speed, while an artificial neural network uses the camera data to localize the drone in space and detect the gates along the racetrack. This information is fed to a control unit, also based on a deep neural network, which chooses the best action to finish the circuit as fast as possible (see the sketch at the end of this item).
    Training in an optimised simulation environment
    Swift was trained in a simulated environment where it taught itself to fly by trial and error, using a type of machine learning called reinforcement learning. Simulation helped avoid destroying multiple drones in the early stages of learning, when the system often crashes. “To make sure that the consequences of actions in the simulator were as close as possible to the ones in the real world, we designed a method to optimize the simulator with real data,” says Elia Kaufmann, first author of the paper. In this phase, the drone flew autonomously thanks to very precise positions provided by an external position-tracking system, while also recording data from its camera. This way it learned to correct the errors it made when interpreting data from the onboard sensors.
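    As a rough illustration of the kind of learned controller described in this story (not Swift’s actual architecture), the sketch below defines a small policy network that maps an observation (an IMU-derived state plus detected gate corners) to low-level commands such as collective thrust and body rates. The observation and action layouts and the network sizes are assumptions; in training, such a policy would be updated by trial and error in a simulator whose dynamics are tuned against real flight data.

    ```python
    import torch
    from torch import nn

    # Assumed, illustrative observation/action layouts (not Swift's).
    OBS_DIM = 9 + 4 * 2 * 4   # IMU-derived state (9) + 4 corners x (x, y) for 4 gates
    ACT_DIM = 4               # collective thrust + 3 body rates

    policy = nn.Sequential(
        nn.Linear(OBS_DIM, 128), nn.Tanh(),
        nn.Linear(128, 128), nn.Tanh(),
        nn.Linear(128, ACT_DIM),
    )

    def control_step(observation: torch.Tensor) -> torch.Tensor:
        """Map one observation to low-level commands (the real-time onboard step)."""
        with torch.no_grad():
            return policy(observation)

    # In training, a simulator would generate rollouts and the policy would be
    # improved by reinforcement learning before being deployed on the drone.
    obs = torch.randn(OBS_DIM)   # synthetic stand-in for one observation
    print(control_step(obs))
    ```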

  • Surpassing the human eye: Machine learning image analysis rapidly determines chemical mixture composition

    Machine learning model provides quick method for determining the composition of solid chemical mixtures using only photographs of the sample.
    Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Due to their similar appearance, it’s an easy mistake to make. Visual inspection is likewise used in chemistry labs for quick, initial assessments of reactions; however, just as in the kitchen, the human eye has its limitations and can be unreliable.
    To address this, researchers at the Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) at Hokkaido University, led by Professor Yasuhide Inokuma, have developed a machine learning model that can distinguish the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.
    The model was designed and developed using mixtures of sugar and salt as a test case. The team used a combination of random cropping, flipping and rotating of the original photographs to create a larger number of sub-images for training and testing (a minimal sketch of this augmentation step appears at the end of this item). This enabled the model to be developed from only 300 original training images. The trained model was roughly twice as accurate as the naked eye of even the most expert member of the team.
    “I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”
    After the successful test case, researchers applied this model to the evaluation of different chemical mixtures. The model successfully distinguished different polymorphs and enantiomers, both of which are extremely similar versions of the same molecule with subtle differences in atomic or molecular arrangement. Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.
    The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. It also analyzed reaction yield, determining the progress of a thermal decarboxylation reaction.
    The team further demonstrated the versatility of their model, showing that it could accurately analyze images taken with a mobile phone, after supplemental training was performed. The researchers anticipate a wide variety of applications, both in the research lab and in industry.
    “We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide. “Additionally, this could act as an observation tool for those who have impaired vision.”
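    The augmentation strategy described in this story (random cropping, flipping and rotating to multiply a small set of photographs) is standard practice, and the sketch below shows one way it might look. The crop size, rotation range and number of sub-images per photograph are assumptions for illustration, not values reported by the team, and the blank stand-in image takes the place of a real photograph of a sample.

    ```python
    from PIL import Image
    from torchvision import transforms

    # Assumed augmentation parameters, for illustration only.
    augment = transforms.Compose([
        transforms.RandomCrop(224),          # random sub-region of the photo
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),
        transforms.RandomRotation(degrees=90),
    ])

    photo = Image.new("RGB", (640, 480), "white")     # stand-in for a sample photograph
    sub_images = [augment(photo) for _ in range(20)]  # 20 augmented sub-images per photo
    print(len(sub_images), sub_images[0].size)
    ```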

  • No worries: Online course to help you stop ruminating

    An online course designed to curb negative thinking has had strong results in helping people reduce the time they spend ruminating and worrying, a new study from UNSW Sydney has shown.
    Researchers say the online course, which will soon be hosted on the Australian Government-funded online clinic This Way Up and is free with a prescription from a clinician, significantly improved the mental health of the people who participated in the study. The trial was part of a collaboration between UNSW, the Black Dog Institute and the Clinical Research Unit for Anxiety and Depression at St Vincent’s Health Network.
    The Managing Rumination and Worry Program features three lessons to be completed over a six-week period. It aims to help participants reduce their levels of rumination, which is dwelling on past negative experiences, and worry, which is thinking over and over about bad things happening in the future.
    Professor Jill Newby, who is a clinical psychologist with UNSW’s School of Psychology and the affiliated Black Dog Institute, says when the call went out to recruit people for the randomised controlled trial, the team was inundated with applications.
    “Out of all the research we’ve done on online therapies, this is by far the most popular program we’ve done,” Prof. Newby says.
    “We got way more applicants than we could manage in a very quick timeframe. So it’s clear there is a community need for help with rumination and worry.”
    The researchers recruited 137 adults who were experiencing elevated levels of repetitive negative thinking. They were randomly allocated to one of three groups: a clinician-guided, three-lesson online course delivered over six weeks; the same course without the assistance of a clinician; or a control group that received the online course after an 18-week waiting period.

  • People hold smart AI assistants responsible for outcomes

    Even when humans see AI-based assistants purely as tools, they ascribe partial responsibility for decisions to them, as a new study shows.
    Future AI-based systems may navigate autonomous vehicles through traffic with no human input. Research has shown that people judge such futuristic AI systems to be just as responsible as humans when they make autonomous traffic decisions. However, real-life AI assistants are far removed from this kind of autonomy. They provide human users with supportive information such as navigation and driving aids. So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? A team led by Louis Longin from the Chair of Philosophy of Mind has now investigated how people assess responsibility in these cases.
    “We all have smart assistants in our pockets,” says Longin. “Yet a lot of the experimental evidence we have on responsibility gaps focuses on robots or autonomous vehicles where AI is literally in the driver’s seat, deciding for us. Investigating cases where we are still the ones making the final decision, but use AI more like a sophisticated instrument, is essential.”
    A philosopher specializing in the interaction between humans and AI, Longin, working in collaboration with his colleague Dr. Bahador Bahrami and Prof. Ophelia Deroy, Chair of Philosophy of Mind, investigated how 940 participants judged a human driver using either a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation instrument. Participants also indicated whether they saw the navigation aid as responsible, and to what degree they saw it as a tool.
    Ambivalent status of smart assistants
    The results reveal an ambivalence: participants strongly asserted that smart assistants were just tools, yet they saw them as partly responsible for the success or failure of the human drivers who consulted them. No such division of responsibility occurred for the non-AI-powered instrument.
    No less surprising for the authors was that the smart assistants were also considered more responsible for positive than for negative outcomes. “People might apply different moral standards for praise and blame. When a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems,” suggests Dr. Bahrami, who is an expert on collective responsibility.

  • Tiny, shape-shifting robot can squish itself into tight spaces

    Coming to a tight spot near you: CLARI, the little, squishable robot that can passively change its shape to squeeze through narrow gaps — with a bit of inspiration from the world of bugs.
    CLARI, which stands for Compliant Legged Articulated Robotic Insect, comes from a team of engineers at the University of Colorado Boulder. It also has the potential to aid first responders after major disasters in an entirely new way.
    Several of these robots can easily fit in the palm of your hand, and each weighs less than a Ping Pong ball. CLARI can transform its shape from square to long and slender when its surroundings become cramped, said Heiko Kabutz, a doctoral student in the Paul M. Rady Department of Mechanical Engineering.
    Kabutz and his colleagues introduced the miniature robot in a study published Aug. 30 in the journal Advanced Intelligent Systems.
    Right now, CLARI has four legs. But the machine’s design allows engineers to mix and match its appendages, potentially giving rise to some wild and wriggly robots.
    “It has a modular design, which means it’s very easy to customize and add more legs,” Kabutz said. “Eventually, we’d like to build an eight-legged, spider-style robot that could walk over a web.”
    CLARI is still in its infancy, added Kaushik Jayaram, co-author of the study and an assistant professor of mechanical engineering at CU Boulder. The robot, for example, is tethered to wires, which supply it with power and send it basic commands. But he hopes that, one day, these petite machines could crawl independently into spaces where no robot has crawled before — like the insides of jet engines or the rubble of collapsed buildings.

  • Brain tumors ‘hack’ the communication between neurons, pioneering study finds

    Nearly half of all patients with brain metastasis experience cognitive impairment. Until now, it was thought that this was due to the physical presence of the tumour pressing on neural tissue. But this ‘mass effect’ hypothesis is flawed because there is often no relationship between the size of the tumour and its cognitive impact. Small tumours can cause significant changes, and large tumours can produce mild effects. Why is this?
    The explanation may lie in the fact that brain metastasis ‘hacks’ the brain’s activity, as a study featured on the cover of Cancer Cell shows for the first time.
    The authors, from the Spanish National Research Council (CSIC) and the Spanish National Cancer Research Centre (CNIO), have discovered that when cancer spreads (metastasises) in the brain, it changes the brain’s chemistry and disrupts neuronal communication — neurons communicate through electrical impulses generated and transmitted by biochemical changes in the cells and their surroundings.
    In this study, the laboratories of Manuel Valiente (CNIO) and Liset Menéndez de La Prida (Cajal Institute, CSIC) collaborated within the EU-funded NanoBRIGHT project, which aims to develop new technologies for studying the brain, and with support from other funding agencies, including MICINN, AECC, ERC, NIH and EMBO.
    Demonstration with artificial intelligence
    The researchers measured the electrical activity of the brains of mice with and without metastases and observed that the electrophysiological recordings of the two groups of animals were clearly different. To be sure that this difference was attributable to the metastases, they turned to artificial intelligence: they trained an automatic algorithm on numerous electrophysiological recordings, and the model was indeed able to identify the presence of metastases (a toy sketch of this kind of analysis appears below). The system was even able to distinguish metastases originating from different primary tumours — skin, lung and breast cancer.
    These results show that metastasis does indeed affect the brain’s electrical activity in a specific way, leaving clear and recognizable signatures.
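    As a purely illustrative sketch of recording-based classification (not the authors’ pipeline), the toy example below extracts simple band-power features from synthetic electrophysiological traces and trains an off-the-shelf classifier to separate a “metastasis” group from a “control” group. The sampling rate, frequency bands and the synthetic group difference are all assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    FS = 1000  # Hz, assumed sampling rate
    BANDS = [(1, 4), (4, 8), (8, 12), (12, 30), (30, 100)]  # assumed band edges (Hz)

    def band_powers(trace: np.ndarray) -> np.ndarray:
        """Average spectral power in each frequency band of one recording."""
        freqs = np.fft.rfftfreq(trace.size, d=1 / FS)
        power = np.abs(np.fft.rfft(trace)) ** 2
        return np.array([power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS])

    def synthetic_trace(metastasis: bool) -> np.ndarray:
        """Toy 2-second trace; the 'metastasis' group gets a stronger slow oscillation."""
        t = np.arange(2 * FS) / FS
        amp = 1.5 if metastasis else 1.0
        return amp * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 1, t.size)

    X = np.array([band_powers(synthetic_trace(m)) for m in [True] * 100 + [False] * 100])
    y = np.array([1] * 100 + [0] * 100)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```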