More stories

    Machining the heart: New predictor for helping to beat chronic heart failure

    Tens of millions of people worldwide have chronic heart failure, and only a little over half of them survive 5 years beyond their diagnosis. Now, researchers from Japan are helping doctors assign patients to groups based on their specific needs, to improve medical outcomes.
    In a study recently published in the Journal of Nuclear Cardiology, researchers from Kanazawa University have used computer science to disentangle patients most at risk of sudden arrhythmic cardiac death from patients most at risk of heart failure death.
    Doctors have many methods at their disposal for diagnosing chronic heart failure. However, there’s a need to better identify which treatment to pursue, in line with the risks of each approach. When combined with conventional clinical tests, imaging with a molecule known as iodine-123-labelled MIBG can help discriminate between high-risk and low-risk patients. Until now, though, there has been no way to assess the risk of arrhythmic death separately from the risk of heart failure death, something the researchers at Kanazawa University aimed to address.
    “We used artificial intelligence to show that numerous variables work in synergy to better predict chronic heart failure outcomes,” explains lead author of the study Kenichi Nakajima. “No single variable, in and of itself, is quite up to the task.”
    To do this, the researchers examined the medical records of 526 consecutive patients with chronic heart failure who underwent iodine-123-MIBG imaging and standard clinical testing. Conventional medical care proceeded as normal after imaging.
    “The results were clear,” says Nakajima. “Heart failure death was most common in older adult patients with very low MIBG activity, worse New York Heart Association class, and comorbidities.”
    Furthermore, arrhythmia was most common in younger patients with moderately low iodine-123-MIBG activity and less serious heart failure. Doctors can use the Kanazawa University researchers’ results to tailor medical care, for example by choosing the type of implantable defibrillator most likely to meet the needs of the patient.
    “It’s important to note that our results need to be confirmed in a larger study,” explains Nakajima. “In particular, the arrhythmia outcomes were perhaps too infrequent to be clinically reliable.”
    Given that chronic heart failure is a global problem that, if not treated appropriately, frequently kills within a few years of diagnosis, it’s essential to start the most appropriate medical care as soon as possible. With a reliable test that predicts which patients are most likely to need which treatments, more patients are likely to live longer.

    Story Source:
    Materials provided by Kanazawa University. Note: Content may be edited for style and length.

    Recognizing fake images using frequency analysis

    They look deceptively real, but they are made by computers: so-called deep-fake images are generated by machine learning algorithms, and humans are pretty much unable to distinguish them from real photos. Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence “Cyber Security in the Age of Large-Scale Adversaries” (Casa) have developed a new method for efficiently identifying deep-fake images. To this end, they analyse the images in the frequency domain, an established signal processing technique.
    The team presented their work at the International Conference on Machine Learning (ICML) on 15 July 2020, one of the leading conferences in the field of machine learning. Additionally, the researchers make their code freely available online at https://github.com/RUB-SysSec/GANDCTAnalysis, so that other groups can reproduce their results.
    Interaction of two algorithms results in new images
    Deep-fake images — a portmanteau of “deep learning” and “fake” — are generated with the help of computer models known as Generative Adversarial Networks, or GANs for short. Two algorithms work together in these networks: the first creates random images based on certain input data, and the second decides whether each image is a fake. If the image is found to be a fake, the second algorithm tells the first to revise the image, and the cycle repeats until the second algorithm no longer recognises the result as a fake.
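    In code, the adversarial loop described above can be sketched as a generic GAN training step. The PyTorch snippet below is an illustration only; the tiny network sizes, optimiser settings, and image dimensions are arbitrary assumptions, not the models behind the data sets discussed below.

        # Minimal, generic GAN training step (illustrative sketch, not the networks behind the study).
        import torch
        import torch.nn as nn

        latent_dim, img_dim = 64, 28 * 28
        generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                  nn.Linear(256, img_dim), nn.Tanh())
        discriminator = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                                      nn.Linear(256, 1))
        opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
        loss_fn = nn.BCEWithLogitsLoss()

        def training_step(real_images):                      # real_images: (batch, img_dim)
            batch = real_images.size(0)
            # 1) The second algorithm (discriminator) learns to tell real images from generated ones.
            noise = torch.randn(batch, latent_dim)
            fake_images = generator(noise).detach()
            d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
                      loss_fn(discriminator(fake_images), torch.zeros(batch, 1)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # 2) The first algorithm (generator) revises its images until the discriminator
            #    no longer flags them as fake.
            noise = torch.randn(batch, latent_dim)
            g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
            return d_loss.item(), g_loss.item()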
    In recent years, this technique has helped make deep-fake images more and more authentic. On the website www.whichfaceisreal.com, users can check if they’re able to distinguish fakes from original photos. “In the era of fake news, it can be a problem if users don’t have the ability to distinguish computer-generated images from originals,” says Professor Thorsten Holz from the Chair for Systems Security.
    For their analysis, the Bochum-based researchers used the data sets that also form the basis of the above-mentioned page “Which face is real.” In this interdisciplinary project, Joel Frank, Thorsten Eisenhofer and Professor Thorsten Holz from the Chair for Systems Security cooperated with Professor Asja Fischer from the Chair of Machine Learning as well as Lea Schönherr and Professor Dorothea Kolossa from the Chair of Digital Signal Processing.
    Frequency analysis reveals typical artefacts
    To date, deep-fake images have been analysed using complex statistical methods. The Bochum group chose a different approach by converting the images into the frequency domain using the discrete cosine transform. The generated image is thus expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions.
    The analysis has shown that images generated by GANs exhibit artefacts in the high-frequency range. For example, a typical grid structure emerges in the frequency representation of fake images. “Our experiments showed that these artefacts do not occur only in GAN-generated images. They are a structural problem of all deep learning algorithms,” explains Joel Frank from the Chair for Systems Security. “We assume that the artefacts described in our study will always tell us whether the image is a deep-fake image created by machine learning,” adds Frank. “Frequency analysis is therefore an effective way to automatically recognise computer-generated images.”
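    As a rough sketch of the approach (the group’s reference implementation is the GANDCTAnalysis code linked above), one can apply a 2D discrete cosine transform to an image and measure how much of its spectral energy sits in the high-frequency coefficients. The cutoff value and helper function below are arbitrary assumptions, not the paper’s exact pipeline.

        # Sketch: check an image's discrete cosine transform for unusually strong high-frequency energy.
        import numpy as np
        from scipy.fft import dctn
        from PIL import Image

        def high_frequency_ratio(path, cutoff=0.75):
            """Fraction of DCT energy beyond `cutoff` of the normalised frequency index."""
            img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
            spectrum = np.abs(dctn(img, norm="ortho"))       # 2D DCT-II; low frequencies sit at the top-left
            h, w = spectrum.shape
            yy, xx = np.mgrid[0:h, 0:w]
            high = (yy / h + xx / w) / 2.0 > cutoff          # mask selecting high-frequency coefficients
            return spectrum[high].sum() / spectrum.sum()

        # Natural photos concentrate energy at low frequencies, so a GAN image's grid-like
        # high-frequency artefacts tend to push this ratio up relative to real photos.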

    Story Source:
    Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

    Marine drifters: Interdisciplinary study explores plankton diversity

    Ocean plankton are the drifters of the marine world. They’re algae, animals, bacteria, or protists that are at the mercy of the tides and currents. Many are microscopic and hidden from view, barely observable with the naked eye, though others, like jellyfish, can grow relatively large.
    There’s one thing about these drifting critters that has puzzled ecologists for decades — the diversity among ocean plankton is much higher than expected. Generally, in any given ocean sample, there are many rare species of plankton and a small number of abundant species. Researchers from the Okinawa Institute of Science and Technology Graduate University (OIST) have published a paper in Science Advances that combines mathematical models with metagenomics and marine science to uncover why this might be the case.
    “For years, scientists have been asking why there are so many species in the ocean,” said Professor Simone Pigolotti, who leads OIST’s Biological Complexity Unit. Professor Pigolotti explained that plankton can be transported across very large distances by currents, so they don’t seem to be limited by dispersal. This would suggest that niche preference is the factor that determines species diversity — in other words, a single species will outcompete all others if the environment suits it best, leading to communities with only a few, highly abundant species.
    “Our research explored the theory that ocean currents promote species diversity, not because they help plankton to disperse, but because they can actually limit dispersal by creating barriers,” said Professor Pigolotti. “In contrast, when we looked at samples from lakes, where there are few or no currents, we found more abundant species, but fewer species altogether.”
    At first glance, this might seem counter-intuitive. But while currents may carry plankton from one area to another, they also prevent the plankton from crossing to the other side of the current. Thus, these currents reduce competition and force each species of plankton to coexist with other species, albeit in small numbers.
    Combining DNA tests with mathematical models
    For over a century, ecologists have measured diversity by counting the number of species, such as birds or insects, in an area. This allowed them to find the proportions of abundant species versus rare species. Today, the task is streamlined through both quantitative modelling that can predict species distributions and metagenomics — instead of just counting species, researchers can efficiently collect all the DNA in a sample.
    “Simply counting the number of species in a sample is very time-consuming,” said Professor Tom Bourguignon, who leads OIST’s Evolutionary Genomics Unit. “With advancements in sequencing technologies, we can run just one test and have several thousand DNA sequences that represent a good estimate of planktonic diversity.”
    For this study, the researchers were particularly interested in protists — microscopic, usually single-celled, planktonic organisms. The group created a mathematical model that used simulations to explore how ocean currents shape the genealogy of protists. They couldn’t simulate an entire protist community at the DNA level because there would be far too many individuals, so instead they simulated the individuals in a given sample from the ocean.
    To find out how closely related the individuals were, and whether they were of the same species, the researchers then looked back in time. “We created a trajectory that went back one hundred years,” said Professor Pigolotti. “If two individuals came from a common ancestor in the timescale of our simulation, then we classed them as the same species.”
    Specifically, they measured the number of species and the number of individuals per species. The model was run with and without ocean currents. As the researchers had hypothesized, the presence of ocean currents caused a sharp increase in the number of protist species, but a decline in the number of individuals per species.
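    The species rule quoted above lends itself to a small illustration. The toy code below is not the OIST model itself, and the coalescence events are made-up inputs; it simply groups sampled individuals whose simulated lineages merge within the 100-year window and reports the resulting species count and abundances.

        # Toy version of the classification rule: individuals whose lineages share a common
        # ancestor within the simulated timescale are counted as one species.
        from collections import Counter

        def count_species(coalescence_events, n_individuals, window_years=100):
            """coalescence_events: list of (years_before_present, individual_i, individual_j)."""
            parent = list(range(n_individuals))              # union-find over the sampled individuals

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            for years_ago, i, j in coalescence_events:
                if years_ago <= window_years:                # common ancestor inside the window
                    parent[find(i)] = find(j)

            abundances = Counter(find(i) for i in range(n_individuals))
            return len(abundances), sorted(abundances.values(), reverse=True)

        # Six sampled individuals, three mergers inside the window -> 3 species, abundances [3, 2, 1].
        print(count_species([(10, 0, 1), (40, 1, 2), (80, 3, 4), (250, 4, 5)], n_individuals=6))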
    To confirm the results of this model, the researchers then analyzed datasets from two studies of aquatic protists. The first dataset was of oceanic protists’ DNA sequences and the second, freshwater protists’ DNA sequences. They found that, on average, oceanic samples contained more rare species and fewer abundant species and, overall, had a larger number of species. This agreed with the model’s predictions.
    “Our results support the theory that ocean currents positively impact the diversity of rare aquatic protists by creating these barriers,” said Professor Pigolotti. “The project was very interdisciplinary. By combining theoretical physics, marine science, and metagenomics, we’ve shed new light on a classic problem in ecology, which is of relevance for marine biodiversity.”

    Scientists identify new material with potential for brain-like computing

    The most powerful and advanced computing is still primitive compared to the power of the human brain, says Chinedu E. Ekuma, Assistant Professor in Lehigh University’s Department of Physics.
    Ekuma’s lab, which aims to gain an understanding of the physical properties of materials, develops models at the interface of computation, theory, and experiment. One area of focus: 2-Dimensional (2D) materials. Also dubbed low-dimensional, these are crystalline nanomaterials that consist of a single layer of atoms. Their novel properties make them especially useful for the next generation of AI-powered electronics, known as neuromorphic, or brain-like, devices.
    Neuromorphic devices attempt to mimic how the human brain processes information more closely than current computing methods do. A key challenge in neuromorphic research is matching the human brain’s flexibility and its ability to learn from unstructured inputs, while remaining energy efficient. According to Ekuma, early successes in neuromorphic computing relied mainly on conventional silicon-based materials that are energy inefficient.
    “Neuromorphic materials have a combination of computing memory capabilities and energy efficiency for brain-like applications,” he says.
    Now Ekuma and his colleagues at the Sensors and Electron Devices Directorate of the U.S. Army Research Laboratory have developed a new complex material design strategy for potential use in neuromorphic computing, using metallocene intercalation in hafnium disulfide (HfS2). The work is the first to demonstrate the effectiveness of a design strategy that functionalizes a 2D material with an organic molecule. It has been published in an article called “Dynamically reconfigurable electronic and phononic properties in intercalated HfS2” in Materials Today. Additional authors: Sina Najmaei, Adam A. Wilson, Asher C. Leff and Madan Dubey of the United States Army Research Laboratory.
    “We knew that low-dimensional materials showed novel properties, but we did not expect such high tunability of the HfS2-based system,” says Ekuma. “The strategy was a concerted effort and synergy between experiment and computation. It started with an afternoon coffee chat where my colleagues and I discussed exploring the possibility of introducing organic molecules into a gap, known as van der Waals gap, in 2D materials. This was followed by the material design and rigorous computations to test the feasibility. Based on the encouraging computational data, we proceeded to make the sample, characterize the properties, and then made a prototype device with the designed material.”
    This research may be of particular interest to scholars in search of energy-efficient materials, as well as to industry, especially semiconductor companies designing logic gates and other electronic devices.
    “The key takeaway here is that complex materials design based on 2D materials is a promising route to achieving high-performing and energy-efficient materials,” says Ekuma.

    Story Source:
    Materials provided by Lehigh University. Note: Content may be edited for style and length.

    A GoPro for beetles: Researchers create a robotic camera backpack for insects

    In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.
    The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams — about one-tenth the weight of a playing card — the team mounted it on top of live beetles and insect-sized robots.
    The results will be published July 15 in Science Robotics.
    “We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”
    Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries needed to power them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.
    “Similar to cameras, vision in animals requires a lot of power,” said co-author Sawyer Fuller, a UW assistant professor of mechanical engineering. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”
    To mimic an animal’s vision, the researchers used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm. The arm moves when the team applies a high voltage, which makes the material bend and move the camera to the desired position. Unless the team applies more power, the arm stays at that angle for about a minute before relaxing back to its original position. This is similar to how people can keep their head turned in one direction for only a short period of time before returning to a more neutral position.
    “One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering. “We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”
    The camera and arm are controlled via Bluetooth from a smartphone at distances of up to 120 meters, just a little longer than a football field.
    The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.
    “We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”
    The beetles also lived for at least a year after the experiment ended.
    “We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera just streamed continuously without this accelerometer, we could record for one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
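    The power-saving trick Iyer describes amounts to gating image capture on accelerometer activity. A minimal sketch of that control loop follows; the threshold, frame interval, and the read_accelerometer, capture_frame and send_over_bluetooth helpers are hypothetical stand-ins rather than the team’s firmware.

        import time

        MOTION_THRESHOLD = 0.05   # hypothetical activity threshold, tuned per sensor
        FRAME_INTERVAL = 0.25     # seconds between frames, i.e. roughly 4 frames per second

        def camera_loop(read_accelerometer, capture_frame, send_over_bluetooth):
            """Capture and stream frames only while the accelerometer says the beetle is moving."""
            while True:
                if abs(read_accelerometer()) > MOTION_THRESHOLD:
                    send_over_bluetooth(capture_frame())     # stream at 1-5 fps during motion
                    time.sleep(FRAME_INTERVAL)
                else:
                    time.sleep(1.0)                          # idle: skip capture to stretch battery life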
    The researchers also used their camera system to design the world’s smallest terrestrial, power-autonomous robot with wireless vision. This insect-sized robot uses vibrations to move and consumes almost the same power as low-power Bluetooth radios need to operate.
    The team found, however, that the vibrations shook the camera and produced distorted images. The researchers solved this issue by having the robot stop momentarily, take a picture and then resume its journey. With this strategy, the system was still able to move about 2 to 3 centimeters per second — faster than any other tiny robot that uses vibrations to move — and had a battery life of about 90 minutes.
    While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.
    “As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.
    Applications could range from biology to exploring novel environments, the researchers said. The team hopes that future versions of the camera will require even less power and be battery free, potentially solar-powered.
    “This is the first time that we’ve had a first-person view from the back of a beetle while it’s walking around. There are so many questions you could explore, such as how does the beetle respond to different stimuli that it sees in the environment?” Iyer said. “But also, insects can traverse rocky environments, which is really challenging for robots to do at this scale. So this system can also help us out by letting us see or collect samples from hard-to-navigate spaces.”
    This research was funded by a Microsoft fellowship and the National Science Foundation.
    Video: https://www.youtube.com/watch?v=115BGUZopHs&feature=emb_logo

    Love-hate relationship of solvent and water leads to better biomass breakup

    Scientists at the Department of Energy’s Oak Ridge National Laboratory used neutron scattering and supercomputing to better understand how an organic solvent and water work together to break down plant biomass, creating a pathway to significantly improve the production of renewable biofuels and bioproducts.
    The discovery, published in the Proceedings of the National Academy of Sciences, sheds light on a previously unknown nanoscale mechanism that occurs during biomass deconstruction and identifies optimal temperatures for the process.
    “Understanding this fundamental mechanism can aid in the rational design of even more efficient technologies for processing biomass,” said Brian Davison, ORNL chief scientist for systems biology and biotechnology.
    Producing biofuels from plant material requires breaking its polymeric cellulose and hemicellulose components into fermentable sugars while removing the intact lignin — a structural polymer also found in plant cell walls — for use in value-added bioproducts such as plastics. Liquid chemicals known as solvents are often employed in this process to dissolve the biomass into its molecular components.
    Paired with water, a solvent called tetrahydrofuran, or THF, is particularly effective at breaking down biomass. Discovered by Charles Wyman and Charles Cai of the University of California, Riverside, during a study supported by DOE’s BioEnergy Science Center at ORNL, the THF-water mixture produces high yields of sugars while preserving the structural integrity of lignin for use in bioproducts. The success of these cosolvents intrigued ORNL scientists.
    “Using THF and water to pretreat biomass was a very important technological advance,” said ORNL’s Loukas Petridis of the University of Tennessee/ORNL Center for Molecular Biophysics. “But the science behind it was not known.”
    Petridis and his colleagues first ran a series of molecular dynamics simulations on the Titan and Summit supercomputers at the Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility at ORNL. Their simulations showed that THF and water, which stay mixed in bulk, separate at the nanoscale to form clusters on biomass.
    THF selectively forms nanoclusters around the hydrophobic, or water-repelling, portions of lignin and cellulose while complementary water-rich nanoclusters form on the hydrophilic, or water-loving, portions. This dual action drives the deconstruction of biomass as each of the solvents dissolves portions of the cellulose while preventing lignin from forming clumps that would limit access to the cellulosic sugars — a common occurrence when biomass is mixed in water alone.
    “This was an interesting finding,” Petridis said. “But it is always important to validate simulations with experiments to make sure that what the simulations report corresponds to reality.”
    This phenomenon occurs at the tiny scale of three to four nanometers. For comparison, a human hair is typically 80,000 to 100,000 nanometers wide. These reactions presented a significant challenge to demonstrate in a physical experiment.
    Scientists at the High Flux Isotope Reactor, a DOE Office of Science user facility at ORNL, overcame this challenge using neutron scattering and a technique called contrast matching. This technique selectively replaces hydrogen atoms with deuterium, a form of hydrogen with an added neutron, to make certain components of the complex mixture in the experiment more visible to neutrons than others.
    “Neutrons see a hydrogen atom and a deuterium atom very differently,” said ORNL’s Sai Venkatesh Pingali, a Bio-SANS instrument scientist who performed the neutron scattering experiments. “We use this approach to selectively highlight parts of the whole system, which otherwise would not be visible, especially when they’re really small.”
    The use of deuterium rendered the cellulose invisible to neutrons and made the THF nanoclusters visually pop out against the cellulose like the proverbial needle in a haystack.
    To mimic biorefinery processing, researchers developed an experimental setup to heat the mixture of biomass and solvents and observe the changes in real time. The team found the action of the THF-water mix on biomass effectively kept lignin from clumping at all temperatures, enabling easier deconstruction of the cellulose. Increasing the temperature to 150 degrees Celsius triggered cellulose microfibril breakdown. These data provide new insights into the ideal processing temperature for these cosolvents to deconstruct biomass.
    “This was a collaborative effort with biologists, computational experts and neutron scientists working in tandem to answer the scientific challenge and provide industry-relevant knowledge,” Davison said. “The method could fuel further discoveries about other solvents and help grow the bioeconomy.”

    Giving robots human-like perception of their physical environments

    Wouldn’t we all appreciate a little help around the house, especially if that help came in the form of a smart, adaptable, uncomplaining robot? Sure, there are the one-trick Roombas of the appliance world. But MIT engineers are envisioning robots more like home helpers, able to follow high-level, Alexa-type commands, such as “Go to the kitchen and fetch me a coffee cup.”
    To carry out such high-level tasks, researchers believe robots will have to be able to perceive their physical environment as humans do.
    “In order to make any decision in the world, you need to have a mental model of the environment around you,” says Luca Carlone, assistant professor of aeronautics and astronautics at MIT. “This is something so effortless for humans. But for robots it’s a painfully hard problem, where it’s about transforming pixel values that they see through a camera into an understanding of the world.” Now Carlone and his students have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world.
    The new model, which they call 3D Dynamic Scene Graphs, enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls, and other structures that the robot is likely seeing in its environment.
    The model also allows the robot to extract relevant information from the 3D map, to query the location of objects and rooms, or the movement of people in its path.
    “This compressed representation of the environment is useful because it allows our robot to quickly make decisions and plan its path,” Carlone says. “This is not too far from what we do as humans. If you need to plan a path from your home to MIT, you don’t plan every single position you need to take. You just think at the level of streets and landmarks, which helps you plan your route faster.”
    Beyond domestic helpers, Carlone says robots that adopt this new kind of mental model of the environment may also be suited for other high-level jobs, such as working side by side with people on a factory floor or exploring a disaster site for survivors.
    He and his students, including lead author and MIT graduate student Antoni Rosinol, will present their findings this week at the Robotics: Science and Systems virtual conference.
    A mapping mix
    At the moment, robotic vision and navigation have advanced mainly along two routes: 3D mapping that enables robots to reconstruct their environment in three dimensions as they explore in real time; and semantic segmentation, which helps a robot classify features in its environment as semantic objects, such as a car versus a bicycle, which so far is mostly done on 2D images.
    Carlone and Rosinol’s new model of spatial perception is the first to generate a 3D map of the environment in real time, while also labeling objects, people (who, unlike objects, are dynamic), and structures within that 3D map.
    The key component of the team’s new model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment, while encoding the likelihood that an object is, say, a chair versus a desk.
    “Like the mythical creature that is a mix of different animals, we wanted Kimera to be a mix of mapping and semantic understanding in 3D,” Carlone says.
    Kimera works by taking in streams of images from a robot’s camera, as well as inertial measurements from onboard sensors, to estimate the trajectory of the robot or camera and to reconstruct the scene as a 3D mesh, all in real-time.
    To generate a semantic 3D mesh, Kimera uses an existing neural network trained on millions of real-world images, to predict the label of each pixel, and then projects these labels in 3D using a technique known as ray-casting, commonly used in computer graphics for real-time rendering.
    The result is a map of a robot’s environment that resembles a dense, three-dimensional mesh, where each face is color-coded as part of the objects, structures, and people within the environment.
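    Kimera ray-casts labels onto the mesh it builds on the fly; as a simpler stand-in for that step, the sketch below lifts 2D labels into 3D by back-projecting each labelled pixel along its camera ray using a depth image and pinhole intrinsics. It is a generic illustration under those assumptions, not Kimera’s API.

        import numpy as np

        def back_project_labels(depth, labels, fx, fy, cx, cy, pose=None):
            """Cast each pixel along its camera ray to its depth and attach its semantic label.

            depth:  (H, W) metric depth image
            labels: (H, W) integer class id per pixel (from the 2D segmentation network)
            returns (N, 3) world-frame points and (N,) labels
            """
            pose = np.eye(4) if pose is None else pose       # camera-to-world transform
            h, w = depth.shape
            v, u = np.mgrid[0:h, 0:w]
            z = depth.reshape(-1)
            valid = z > 0
            x = (u.reshape(-1) - cx) / fx * z                # pinhole model: ray direction scaled by depth
            y = (v.reshape(-1) - cy) / fy * z
            pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
            pts_world = (pose @ pts_cam.T).T[:, :3]          # move labelled points into the world/mesh frame
            return pts_world, labels.reshape(-1)[valid]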
    A layered scene
    If a robot were to rely on this mesh alone to navigate through its environment, it would be a computationally expensive and time-consuming task. So the researchers built off Kimera, developing algorithms to construct 3D dynamic “scene graphs” from Kimera’s initial, highly dense, 3D semantic mesh.
    Scene graphs are popular computer graphics models that manipulate and render complex scenes, and are typically used in video game engines to represent 3D environments.
    In the case of the 3D dynamic scene graphs, the associated algorithms abstract, or break down, Kimera’s detailed 3D semantic mesh into distinct semantic layers, such that a robot can “see” a scene through a particular layer, or lens. The layers progress in hierarchy from objects and people, to open spaces and structures such as walls and ceilings, to rooms, corridors, and halls, and finally whole buildings.
    Carlone says this layered representation avoids a robot having to make sense of billions of points and faces in the original 3D mesh.
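    One way to picture the layered representation (a sketch of the idea, not the actual 3D Dynamic Scene Graph data structures) is as a tree of nodes tagged by layer, so that a query can stop at whatever level of detail it needs.

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            layer: str                                  # e.g. "building", "room", "structure", "object", "agent"
            children: list = field(default_factory=list)
            position: tuple = (0.0, 0.0, 0.0)

            def find(self, layer):
                """Collect every descendant at the requested layer, e.g. all objects in a building."""
                found = [self] if self.layer == layer else []
                for child in self.children:
                    found.extend(child.find(layer))
                return found

        # Toy graph: building -> rooms -> objects and people (agents).
        mug = Node("red mug", "object", position=(2.1, 0.4, 0.9))
        kitchen = Node("kitchen", "room", children=[mug, Node("person_1", "agent")])
        office = Node("office", "room", children=[Node("chair", "object")])
        building = Node("building_32", "building", children=[kitchen, office])

        print([n.name for n in building.find("object")])   # ['red mug', 'chair']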
    Within the layer of objects and people, the researchers have also been able to develop algorithms that track the movement and the shape of humans in the environment in real time.
    The team tested their new model in a photo-realistic simulator, developed in collaboration with MIT Lincoln Laboratory, that simulates a robot navigating through a dynamic office environment filled with people moving around.
    “We are essentially enabling robots to have mental models similar to the ones humans use,” Carlone says. “This can impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robotics. Another domain is virtual and augmented reality (AR). Imagine wearing AR goggles that run our algorithm: The goggles would be able to assist you with queries such as ‘Where did I leave my red mug?’ and ‘What is the closest exit?’ You can think about it as an Alexa that is aware of the environment around you and understands objects, humans, and their relations.”
    “Our approach has just been made possible thanks to recent advances in deep learning and decades of research on simultaneous localization and mapping,” Rosinol says. “With this work, we are making the leap toward a new era of robotic perception called spatial-AI, which is just in its infancy but has great potential in robotics and large-scale virtual and augmented reality.”
    This research was funded, in part, by the Army Research Laboratory, the Office of Naval Research, and MIT Lincoln Laboratory.
    Paper: “3D Dynamic scene graphs: Actionable spatial perception with places, objects, and humans” https://roboticsconference.org/program/papers/79/
    Video: https://www.youtube.com/watch?v=SWbofjhyPzI

    Researchers give robots intelligent sensing abilities to carry out complex tasks

    Picking up a can of soft drink may be a simple task for humans, but it is a complex one for a robot — it has to locate the object, deduce its shape, determine the right amount of strength to use, and grasp the object without letting it slip. Most of today’s robots operate solely based on visual processing, which limits their capabilities. In order to perform more complex tasks, robots have to be equipped with an exceptional sense of touch and the ability to process sensory information quickly and intelligently.
    A team of computer scientists and materials engineers from the National University of Singapore (NUS) has recently demonstrated an exciting approach to make robots smarter. They developed a sensory integrated artificial brain system that mimics biological neural networks, which can run on a power-efficient neuromorphic processor, such as Intel’s Loihi chip. This novel system integrates artificial skin and vision sensors, equipping robots with the ability to draw accurate conclusions about the objects they are grasping based on the data captured by the vision and touch sensors in real-time.
    “The field of robotic manipulation has made great progress in recent years. However, fusing both vision and tactile information to provide a highly precise response in milliseconds remains a technology challenge. Our recent work combines our ultra-fast electronic skins and nervous systems with the latest innovations in vision sensing and AI for robots so that they can become smarter and more intuitive in physical interactions,” said Assistant Professor Benjamin Tee from the NUS Department of Materials Science and Engineering. He co-leads this project with Assistant Professor Harold Soh from the Department of Computer Science at the NUS School of Computing.
    The findings of this cross-disciplinary work were presented at the Robotics: Science and Systems conference in July 2020.
    Human-like sense of touch for robots
    Enabling a human-like sense of touch in robotics could significantly improve current functionality, and even lead to new uses. For example, on the factory floor, robotic arms fitted with electronic skins could easily adapt to different items, using tactile sensing to identify and grip unfamiliar objects with the right amount of pressure to prevent slipping.
    In the new robotic system, the NUS team applied an advanced artificial skin known as Asynchronous Coded Electronic Skin (ACES) developed by Asst Prof Tee and his team in 2019. This novel sensor detects touches more than 1,000 times faster than the human sensory nervous system. It can also identify the shape, texture and hardness of objects 10 times faster than the blink of an eye.
    “Making an ultra-fast artificial skin sensor solves about half the puzzle of making robots smarter. They also need an artificial brain that can ultimately achieve perception and learning as another critical piece in the puzzle,” added Asst Prof Tee, who is also from the NUS Institute for Health Innovation & Technology.
    A human-like brain for robots
    To break new ground in robotic perception, the NUS team explored neuromorphic technology — an area of computing that emulates the neural structure and operation of the human brain — to process sensory data from the artificial skin. As Asst Prof Tee and Asst Prof Soh are members of the Intel Neuromorphic Research Community (INRC), it was a natural choice to use Intel’s Loihi neuromorphic research chip for their new robotic system.
    In their initial experiments, the researchers fitted a robotic hand with the artificial skin, and used it to read braille, passing the tactile data to Loihi via the cloud to convert the micro bumps felt by the hand into a semantic meaning. Loihi achieved over 92 per cent accuracy in classifying the Braille letters, while using 20 times less power than a normal microprocessor.
    Asst Prof Soh’s team improved the robot’s perception capabilities by combining both vision and touch data in a spiking neural network. In their experiments, the researchers tasked a robot equipped with both artificial skin and vision sensors to classify various opaque containers containing differing amounts of liquid. They also tested the system’s ability to identify rotational slip, which is important for stable grasping.
    In both tests, the spiking neural network that used both vision and touch data was able to classify objects and detect object slippage. The classification was 10 per cent more accurate than a system that used only vision. Moreover, using a technique developed by Asst Prof Soh’s team, the neural networks could classify the sensory data while it was being accumulated, unlike the conventional approach where data is classified after it has been fully gathered. In addition, the researchers demonstrated the efficiency of neuromorphic technology: Loihi processed the sensory data 21 per cent faster than a top performing graphics processing unit (GPU), while using more than 45 times less power.
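    The idea of classifying while the data is still arriving can be illustrated with a toy evidence-accumulation loop. The sketch below is not the NUS spiking network; the readout weights are random and the spike streams are simulated, but it shows how a running class estimate is available at every timestep instead of only after a full recording.

        import numpy as np

        rng = np.random.default_rng(0)
        n_classes, n_touch, n_vision, n_steps = 4, 32, 64, 50

        # Hypothetical linear readout weights; in the real system these would be learned.
        w_touch = rng.normal(size=(n_classes, n_touch))
        w_vision = rng.normal(size=(n_classes, n_vision))

        scores = np.zeros(n_classes)
        for t in range(n_steps):
            touch_spikes = rng.poisson(0.2, size=n_touch)    # stand-in for tactile (ACES) spike events
            vision_spikes = rng.poisson(0.2, size=n_vision)  # stand-in for event-based vision output
            scores += w_touch @ touch_spikes + w_vision @ vision_spikes
            running_guess = int(np.argmax(scores))           # a usable estimate before the trial ends
        print("final class estimate:", running_guess)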
    Asst Prof Soh shared, “We’re excited by these results. They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step towards building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations.”
    “This research from the National University of Singapore provides a compelling glimpse into the future of robotics, where information is both sensed and processed in an event-driven manner, combining multiple modalities. The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture,” said Mr Mike Davies, Director of Intel’s Neuromorphic Computing Lab.
    This research was supported by the National Robotics R&D Programme Office (NR2PO), a set-up that nurtures the robotics ecosystem in Singapore through funding research and development (R&D) to enhance the readiness of robotics technologies and solutions. Key considerations for NR2PO’s R&D investments include the potential for impactful applications in the public sector, and the potential to create differentiated capabilities for our industry.
    Next steps
    Moving forward, Asst Prof Tee and Asst Prof Soh plan to further develop their novel robotic system for applications in the logistics and food manufacturing industries, where there is a high demand for robotic automation, especially in the post-COVID era.
    Video: https://www.youtube.com/watch?v=08XyaXlxWno&feature=emb_logo