More stories

  • Challenge accepted: High-speed AI drone overtakes world-champion drone racers

    Remember when IBM’s Deep Blue won against Garry Kasparov at chess in 1997, or Google’s AlphaGo crushed the top champion Lee Sedol at Go, a much more complex game, in 2016? These competitions, where machines prevailed over human champions, are key milestones in the history of artificial intelligence. Now a group of researchers from the University of Zurich and Intel has set a new milestone with the first autonomous system capable of beating human champions at a physical sport: drone racing.
    The AI system, called Swift, won multiple races against three world-class champions in first-person view (FPV) drone racing, where pilots fly quadcopters at speeds exceeding 100 km/h, controlling them remotely while wearing a headset linked to an onboard camera.
    Learning by interacting with the physical world
    “Physical sports are more challenging for AI because they are less predictable than board or video games. We don’t have a perfect knowledge of the drone and environment models, so the AI needs to learn them by interacting with the physical world,” says Davide Scaramuzza, head of the Robotics and Perception Group at the University of Zurich — and newly minted drone racing team captain.
    Until very recently, autonomous drones took twice as long as those piloted by humans to fly through a racetrack, unless they relied on an external position-tracking system to precisely control their trajectories. Swift, however, reacts in real time to the data collected by an onboard camera, like the one used by human racers. Its integrated inertial measurement unit measures acceleration and angular velocity, while an artificial neural network uses data from the camera to localize the drone in space and detect the gates along the racetrack. This information is fed to a control unit, also based on a deep neural network, which chooses the best action to finish the circuit as fast as possible.
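    To make the division of labour concrete, here is a minimal sketch of how such a camera-plus-IMU pipeline might be wired up. The class names, layer sizes, and input shapes below are illustrative assumptions, not the architecture published by the Zurich team.

    ```python
    # Toy sketch of a Swift-style pipeline: a perception network turns a camera
    # frame into gate/pose features, and a control network maps those features
    # plus IMU readings to flight commands. All sizes are assumptions.
    import torch
    import torch.nn as nn

    class PerceptionNet(nn.Module):
        """Stand-in for the network that localizes the drone and detects gates."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 12),   # e.g. relative poses of upcoming gates
            )

        def forward(self, frame):
            return self.net(frame)

    class ControlNet(nn.Module):
        """Stand-in for the control network that picks the next action."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(12 + 6, 64), nn.ReLU(),   # gate features + 6-axis IMU
                nn.Linear(64, 4),                   # collective thrust + body rates
            )

        def forward(self, gate_features, imu):
            return self.net(torch.cat([gate_features, imu], dim=-1))

    perception, control = PerceptionNet(), ControlNet()
    frame = torch.rand(1, 3, 64, 64)   # one onboard camera frame
    imu = torch.rand(1, 6)             # accelerometer + gyroscope readings
    command = control(perception(frame), imu)   # shape (1, 4)
    ```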
    Training in an optimised simulation environment
    Swift was trained in a simulated environment where it taught itself to fly by trial and error, using a type of machine learning called reinforcement learning. The use of simulation helped avoid destroying multiple drones in the early stages of learning when the system often crashes. “To make sure that the consequences of actions in the simulator were as close as possible to the ones in the real world, we designed a method to optimize the simulator with real data,” says Elia Kaufmann, first author of the paper. In this phase, the drone flew autonomously thanks to very precise positions provided by an external position-tracking system, while also recording data from its camera. This way it learned to autocorrect errors it made interpreting data from the onboard sensors.
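    The simulator-optimisation step lends itself to a short sketch. Assuming, consistently with the description above, that logged real flights yield (state, action, next state) triples, one can fit a residual model that corrects the simulator's predictions toward reality and then train the policy inside the corrected simulator. Everything below, from the toy simulator to the tensor shapes, is a hypothetical illustration, not the authors' code.

    ```python
    # Fit a residual correction so the simulator's predicted next state matches
    # logged real flights; the racing policy is then trained in the corrected sim.
    import torch
    import torch.nn as nn

    class ResidualDynamics(nn.Module):
        """Learns the gap between simulated and real state transitions."""
        def __init__(self, state_dim=12, action_dim=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                nn.Linear(64, state_dim),
            )

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def corrected_step(sim_step, residual, state, action):
        """Simulator prediction plus the learned real-world correction."""
        return sim_step(state, action) + residual(state, action)

    # Toy simulator and stand-in "real" data; actual triples would come from
    # flights logged with the external position-tracking system.
    sim_step = lambda s, a: s + 0.01 * torch.tanh(a @ torch.ones(4, 12))
    residual = ResidualDynamics()
    opt = torch.optim.Adam(residual.parameters(), lr=1e-3)
    for _ in range(100):
        s, a = torch.rand(32, 12), torch.rand(32, 4)
        real_next = sim_step(s, a) + 0.05 * torch.randn(32, 12)  # stand-in logs
        loss = ((corrected_step(sim_step, residual, s, a) - real_next) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    ```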

  • Surpassing the human eye: Machine learning image analysis rapidly determines chemical mixture composition

    Machine learning model provides quick method for determining the composition of solid chemical mixtures using only photographs of the sample.
    Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Due to their similar appearance, it’s an easy mistake to make. Chemists likewise rely on visual inspection for quick, initial assessments of reactions; however, just as in the kitchen, the human eye has its limitations and can be unreliable.
    To address this, researchers at the Institute of Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University, led by Professor Yasuhide Inokuma, have developed a machine learning model that can determine the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.
    The model was designed and developed using mixtures of sugar and salt as a test case. The team employed a combination of random cropping, flipping and rotating of the original photographs to create a larger number of sub-images for training and testing. This enabled the model to be developed using only 300 original images for training. The trained model was roughly twice as accurate as the naked eye of even the most expert member of the team.
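    The augmentation recipe described above is standard and easy to reproduce. A minimal sketch with torchvision follows; the file name, crop size, and number of crops per photo are hypothetical.

    ```python
    # Expand a small set of sample photographs into many training sub-images by
    # random rotation, cropping, and flipping, as described above.
    from PIL import Image
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomRotation(degrees=180),  # mixtures have no preferred orientation
        transforms.RandomCrop(224),              # random sub-image of the photograph
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomVerticalFlip(p=0.5),
        transforms.ToTensor(),
    ])

    photo = Image.open("mixture_photo.jpg")            # hypothetical sample photograph
    sub_images = [augment(photo) for _ in range(20)]   # 300 photos -> thousands of crops
    ```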
    “I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”
    After the successful test case, researchers applied this model to the evaluation of different chemical mixtures. The model successfully distinguished different polymorphs and enantiomers, both of which are extremely similar versions of the same molecule with subtle differences in atomic or molecular arrangement. Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.
    The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. Reaction yield was also analyzed, determining the progress of a thermal decarboxylation reaction.
    The team further demonstrated the versatility of their model, showing that it could accurately analyze images taken with a mobile phone, after supplemental training was performed. The researchers anticipate a wide variety of applications, both in the research lab and in industry.
    “We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide. “Additionally, this could act as an observation tool for those who have impaired vision.”

  • No worries: Online course to help you stop ruminating

    An online course designed to curb negative thinking has had strong results in helping people reduce the time they spend ruminating and worrying, a new study from UNSW Sydney has shown.
    The researchers say the online course, which will soon be hosted on the Australian Government-funded online clinic This Way Up and will be free with a prescription from a clinician, significantly improved the mental health of the people who took part in the study. The trial was part of a collaboration between UNSW, the Black Dog Institute and The Clinical Research Unit for Anxiety and Depression at St Vincent’s Health Network.
    The Managing Rumination and Worry Program features three lessons to be completed over a six-week period. It aims to help participants reduce their levels of rumination, which is dwelling on past negative experiences, and worry, which is thinking over and over about bad things happening in the future.
    Professor Jill Newby, who is a clinical psychologist with UNSW’s School of Psychology and the affiliated Black Dog Institute, says when the call went out to recruit people for the randomised controlled trial, the team was inundated with applications.
    “Out of all the research we’ve done on online therapies, this is by far the most popular program we’ve done,” Prof. Newby says.
    “We got way more applicants than we could manage in a very quick timeframe. So it’s clear there is a community need for help with rumination and worry.”
    The researchers recruited 137 adults who were experiencing elevated levels of repetitive negative thinking. They were randomly allocated to one of three groups: a clinician-guided, three-lesson online course delivered over six weeks; the same course without the assistance of a clinician; or a control group that received the online course after an 18-week waiting period.

  • People hold smart AI assistants responsible for outcomes

    Even when humans see AI-based assistants purely as tools, they ascribe partial responsibility for decisions to them, as a new study shows.
    Future AI-based systems may navigate autonomous vehicles through traffic with no human input. Research has shown that people judge such futuristic AI systems to be just as responsible as humans when they make autonomous traffic decisions. However, real-life AI assistants are far removed from this kind of autonomy. They provide human users with supportive information such as navigation and driving aids. So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? A team led by Louis Longin from the Chair of Philosophy of Mind has now investigated how people assess responsibility in these cases.
    “We all have smart assistants in our pockets,” says Longin. “Yet a lot of the experimental evidence we have on responsibility gaps focuses on robots or autonomous vehicles where AI is literally in the driver’s seat, deciding for us. Investigating cases where we are still the ones making the final decision, but use AI more like a sophisticated instrument, is essential.”
    A philosopher specializing in the interaction between humans and AI, Longin, in collaboration with his colleague Dr. Bahador Bahrami and Prof. Ophelia Deroy, Chair of Philosophy of Mind, investigated how 940 participants judged a human driver using either a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation instrument. Participants also indicated whether they saw the navigation aid as responsible, and to what degree they considered it a tool.
    Ambivalent status of smart assistants
    The results reveal an ambivalence: participants strongly asserted that smart assistants were just tools, yet they saw them as partly responsible for the successes or failures of the human drivers who consulted them. No such division of responsibility occurred for the non-AI-powered instrument.
    No less surprising for the authors was that the smart assistants were also considered more responsible for positive than for negative outcomes. “People might apply different moral standards for praise and blame. When a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems,” suggests Dr. Bahrami, who is an expert on collective responsibility.

  • Tiny, shape-shifting robot can squish itself into tight spaces

    Coming to a tight spot near you: CLARI, the little, squishable robot that can passively change its shape to squeeze through narrow gaps — with a bit of inspiration from the world of bugs.
    CLARI, which stands for Compliant Legged Articulated Robotic Insect, comes from a team of engineers at the University of Colorado Boulder. It also has the potential to aid first responders after major disasters in an entirely new way.
    Several of these robots can easily fit in the palm of your hand, and each weighs less than a Ping Pong ball. CLARI can transform its shape from square to long and slender when its surroundings become cramped, said Heiko Kabutz, a doctoral student in the Paul M. Rady Department of Mechanical Engineering.
    Kabutz and his colleagues introduced the miniature robot in a study published Aug. 30 in the journal “Advanced Intelligent Systems.”
    Right now, CLARI has four legs. But the machine’s design allows engineers to mix and match its appendages, potentially giving rise to some wild and wriggly robots.
    “It has a modular design, which means it’s very easy to customize and add more legs,” Kabutz said. “Eventually, we’d like to build an eight-legged, spider-style robot that could walk over a web.”
    CLARI is still in its infancy, added Kaushik Jayaram, co-author of the study and an assistant professor of mechanical engineering at CU Boulder. The robot, for example, is tethered to wires, which supply it with power and send it basic commands. But he hopes that, one day, these petite machines could crawl independently into spaces where no robot has crawled before — like the insides of jet engines or the rubble of collapsed buildings.

  • Brain tumors ‘hack’ the communication between neurons, pioneering study finds

    Nearly half of all patients with brain metastasis experience cognitive impairment. Until now, it was thought that this was due to the physical presence of the tumour pressing on neural tissue. But this ‘mass effect’ hypothesis is flawed because there is often no relationship between the size of the tumour and its cognitive impact. Small tumours can cause significant changes, and large tumours can produce mild effects. Why is this?
    The explanation may lie in the fact that brain metastasis hacks the brain’s activity, a study featured on Cancer Cell’s cover shows for the first time.
    The authors, from the Spanish National Research Council (CSIC) and the Spanish National Cancer Research Centre (CNIO), have discovered that when cancer spreads (metastasises) in the brain, it changes the brain’s chemistry and disrupts neuronal communication — neurons communicate through electrical impulses generated and transmitted by biochemical changes in the cells and their surroundings.
    In this study, the laboratories of Manuel Valiente (CNIO) and Liset Menéndez de La Prida (Cajal Institute, CSIC) collaborated within the EU-funded NanoBRIGHT project, which aims to develop new technologies for studying the brain, with additional support from funding agencies including MICINN, AECC, ERC, NIH and EMBO.
    Demonstration with artificial intelligence
    The researchers measured the electrical activity of the brains of mice with and without metastases and observed that the electrophysiological recordings of the two groups of animals differed. To be sure that this difference was attributable to metastases, they turned to artificial intelligence. They trained a machine-learning algorithm on numerous electrophysiological recordings, and the model was indeed able to identify the presence of metastases. The system was even able to distinguish metastases originating from different primary tumours — skin, lung and breast cancer.
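    As a rough illustration of this kind of analysis (not the authors' pipeline), one can train an off-the-shelf classifier on feature vectors extracted from the recordings and check its held-out accuracy; the feature set, sample counts, and labels below are stand-ins.

    ```python
    # Train a generic classifier on features derived from electrophysiological
    # recordings to flag metastasis-bearing animals, then cross-validate it.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))      # rows: recordings; cols: e.g. band powers
    y = rng.integers(0, 2, size=200)    # 0 = control, 1 = metastasis (toy labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)   # held-out accuracy per fold
    print(scores.mean())
    ```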
    These results show that metastasis does indeed affect the brain’s electrical activity in a specific way, leaving clear and recognizable signatures.

  • New ‘droplet battery’ could pave the way for miniature bio-integrated devices

    University of Oxford researchers have taken a significant step towards realising miniature bio-integrated devices capable of directly stimulating cells. The work has been published today in the journal Nature.
    Small bio-integrated devices that can interact with and stimulate cells could have important therapeutic applications, including the delivery of targeted drug therapies and the acceleration of wound healing. However, all such devices need a power source to operate, and to date there has been no efficient means of providing power at the microscale.
    To address this, researchers from the University of Oxford’s Department of Chemistry have developed a miniature power source capable of altering the activity of cultured human nerve cells. Inspired by how electric eels generate electricity, the device uses internal ion gradients to generate energy.
    The miniaturized soft power source is produced by depositing a chain of five nanolitre-sized droplets of a conductive hydrogel (a 3D network of polymer chains containing a large quantity of absorbed water). Each droplet has a different composition so that a salt concentration gradient is created across the chain. The droplets are separated from their neighbours by lipid bilayers, which provide mechanical support while preventing ions from flowing between the droplets.
    The power source is turned on by cooling the structure to 4°C and changing the surrounding medium: this disrupts the lipid bilayers and causes the droplets to form a continuous hydrogel. This allows the ions to move through the conductive hydrogel, from the high-salt droplets at the two ends to the low-salt droplet in the middle. By connecting the end droplets to electrodes, the energy released from the ion gradients is transformed into electricity, enabling the hydrogel structure to act as a power source for external components.
    In the study, the activated droplet power source produced a current which persisted for over 30 minutes. The maximum output power of a unit made of 50 nanolitre droplets was around 65 nanowatts (nW). The devices produced a similar amount of current after being stored for 36 hours.
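    Those figures support a quick back-of-envelope check. Assuming (hypothetically) a 20-fold salt-concentration ratio between droplets, the Nernst equation gives the per-junction gradient potential, and the quoted power and runtime bound the delivered energy:

    ```python
    # Back-of-envelope numbers for the figures quoted above, under assumed
    # salt concentrations (the paper's exact values are not given here).
    import math

    R, T, F = 8.314, 277.0, 96485.0   # gas constant, 4 degC in kelvin, Faraday
    c_high, c_low = 1.0, 0.05         # assumed molar salt concentrations
    nernst = (R * T / F) * math.log(c_high / c_low)
    print(f"per-junction gradient potential ~ {nernst*1e3:.0f} mV")   # ~72 mV

    power, duration = 65e-9, 30 * 60  # 65 nW peak, >30 min of current
    print(f"energy delivered >= {power * duration * 1e6:.0f} microjoules")  # ~117 uJ
    ```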
    The research team then demonstrated how living cells could be attached to one end of the device so that their activity could be directly regulated by the ionic current. The team attached the device to droplets containing human neural progenitor cells, which had been stained with a fluorescent dye to indicate their activity. When the power source was turned on, time-lapse recording demonstrated waves of intercellular calcium signalling in the neurons, induced by the local ionic current.

  • Paving the way for advanced quantum sensors

    Quantum physics has allowed for the creation of sensors far surpassing the precision of classical devices. Now, several studies in Nature show that the precision of these quantum sensors can be significantly improved using entanglement produced by finite-range interactions. Innsbruck researchers led by Christian Roos were able to demonstrate this enhancement using entangled ion chains of up to 51 particles.
    Metrological institutions around the world keep our time using atomic clocks based on the natural oscillations of atoms. These clocks, pivotal for applications like satellite navigation and data transfer, have recently been improved by moving to the much higher oscillation frequencies of optical atomic clocks. Now, scientists at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences, led by Christian Roos, show how a particular way of creating entanglement can be used to further improve the accuracy of measurements integral to an optical atomic clock’s function.
    Measurement error halved in experiment
    Observations of quantum systems are always subject to a certain statistical uncertainty. “This is due to the nature of the quantum world,” explains Johannes Franke from Christian Roos’ team. “Entanglement can help us reduce these errors.” With the support of theorist Ana Maria Rey from JILA in Boulder, USA, the Innsbruck physicists tested the measurement accuracy on an entangled ensemble of particles in the laboratory. The researchers used lasers to tune the interaction of ions lined up in a vacuum chamber and entangled them. “The interaction between neighboring particles decreases with the distance between the particles. Therefore, we used spin-exchange interactions to allow the system to behave more collectively,” explains Raphael Kaubrügger from the Department of Theoretical Physics at the University of Innsbruck. Thus, all particles in the chain were entangled with each other, producing a so-called squeezed quantum state. Using this, the physicists were able to show that entangling 51 ions roughly halves the measurement error relative to the same number of unentangled particles. Previously, entanglement-enhanced sensing relied mainly on infinite-range interactions, limiting its applicability to only certain quantum platforms.
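    To put numbers on that claim: for N uncorrelated atoms, the phase uncertainty of a single clock interrogation scales as the standard quantum limit 1/sqrt(N), while entanglement allows scaling down toward the 1/N Heisenberg bound. The short sketch below simply evaluates those bounds for 51 ions; the factor-of-two reduction is taken from the result reported above, not derived.

    ```python
    # Projection-noise bounds for a 51-ion clock measurement: standard quantum
    # limit (uncorrelated ions), the reported ~2x squeezing gain, and the
    # ultimate Heisenberg bound for fully entangled ions.
    import math

    N = 51
    sql = 1 / math.sqrt(N)    # standard quantum limit, ~0.140 rad per shot
    squeezed = sql / 2        # roughly halved error, as reported
    heisenberg = 1 / N        # Heisenberg limit, ~0.020 rad per shot
    print(f"SQL: {sql:.3f}  squeezed: {squeezed:.3f}  Heisenberg: {heisenberg:.3f}")
    ```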
    Even more accurate clocks
    With their experiments, the Innsbruck quantum physicists were able to show that quantum entanglement makes sensors even more sensitive. “We used an optical transition in our experiments that is also employed in atomic clocks,” says Christian Roos. This technology could improve areas where atomic clocks are currently used, such as satellite-based navigation or data transfer. Moreover, these advanced clocks could open new possibilities in pursuits like the search for dark matter or the determination of time-variations of fundamental constants.
    Christian Roos and his team now want to test the new method in two-dimensional ion ensembles. The current results were published in the journal Nature. In the same issue, researchers published very similar results using neutral atoms. The research in Innsbruck was financially supported by the Austrian Science Fund FWF and the Federation of Austrian Industries Tyrol, among others.