More stories

  • Artificial intelligence that understands object relationships

    When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor.
    Many deep learning models struggle to see the world this way because they don’t understand the entangled relationships between individual objects. Without knowledge of these relationships, a robot designed to help someone in a kitchen would have difficulty following a command like “pick up the spatula that is to the left of the stove and place it on top of the cutting board.”
    In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene. Their model represents individual relationships one at a time, then combines these representations to describe the overall scene. This enables the model to generate more accurate images from text descriptions, even when the scene includes several objects that are arranged in different relationships with one another.
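The compositional idea above can be illustrated with a minimal sketch (not the authors' code; the relation names and the energy functions are hypothetical): describe each relationship with its own scoring function, then describe the whole scene by combining them, here by summing per-relation "energies" that are zero when a relation holds.

```python
# Hedged sketch of composing one-relation representations into a scene
# description. All names and the toy 2D geometry are illustrative.

def relation_energy(scene, subj, obj, relation):
    """Toy energy: 0 when the stated relation holds between 2D positions."""
    dx = scene[obj][0] - scene[subj][0]
    dy = scene[obj][1] - scene[subj][1]
    if relation == "left_of":      # subj left of obj -> dx should be positive
        return max(0.0, -dx)
    if relation == "in_front_of":  # subj in front of obj -> dy should be positive
        return max(0.0, -dy)
    raise ValueError(relation)

def scene_energy(scene, relations):
    # Combine individual relationship representations by summing:
    # a scene satisfying every stated relation has total energy 0.
    return sum(relation_energy(scene, s, o, r) for s, o, r in relations)

scene = {"laptop": (0.0, 0.0), "phone": (1.0, 0.0), "monitor": (1.0, 1.0)}
relations = [("laptop", "phone", "left_of"),
             ("phone", "monitor", "in_front_of")]
print(scene_energy(scene, relations))  # 0.0 when all relations hold
```

Because each relation contributes its own term, adding one more relationship to the description only adds one more term to the sum, which is what lets such a model handle scenes with several objects in different relationships.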
    This work could be applied in situations where industrial robots must perform intricate, multistep manipulation tasks, like stacking items in a warehouse or assembling appliances. It also moves the field one step closer to enabling machines that can learn from and interact with their environments more like humans do.
    “When I look at a table, I can’t say that there is an object at XYZ location. Our minds don’t work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments,” says Yilun Du, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.
    Du wrote the paper with co-lead authors Shuang Li, a CSAIL PhD student, and Nan Liu, a graduate student at the University of Illinois at Urbana-Champaign; as well as Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; and senior author Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems in December.

  • Team builds first living robots that can reproduce

    To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.
    Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction — and applied their discovery to create the first-ever, self-replicating living robots.
    The same team that built the first living robots (“Xenobots,” assembled from frog cells — reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth” — that, a few days later, become new Xenobots that look and move just like themselves.
    And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.
    “With the right design — they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.
    The results of the new research were published November 29, 2021, in the Proceedings of the National Academy of Sciences.
    Into the Unknown

  • 'Transformational' approach to machine learning could accelerate search for new disease treatments

    Researchers have developed a new approach to machine learning that ‘learns how to learn’ and outperforms current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.
    The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves performance while it learns.
    TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.
    Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.
    “It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works — it deals with problems one at a time.”
    However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.
    “To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal — say a kitten — TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”
    The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.
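The core idea, as described above, can be sketched in a few lines (illustrative names and toy data, not the paper's code): instead of representing each example by its intrinsic features, represent it by the outputs of models trained on other, related problems, and learn the new problem on those meta-features.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares linear model with a bias term, standing in for
    # any model trained on one prior problem.
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict(w, X):
    Xb = np.column_stack([X, np.ones(len(X))])
    return Xb @ w

# Three related "previous problems", each yielding a trained model.
prior_models = []
for shift in (0.5, 1.0, 1.5):
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(float)
    prior_models.append(fit_linear(X, y))

def tml_features(X):
    # Each column is one prior model's output for these examples:
    # the example is now described by what other models say about it.
    return np.column_stack([predict(w, X) for w in prior_models])

# A new, related problem, learned in the prior models' output space.
X_new = rng.normal(size=(300, 5))
y_new = (X_new[:, 0] + 1.2 * X_new[:, 1] > 0).astype(float)
w_new = fit_linear(tml_features(X_new), y_new)

accuracy = ((predict(w_new, tml_features(X_new)) > 0.5) == (y_new > 0.5)).mean()
print(accuracy)
```

When the prior problems are related to the new one, as here, the prior models' outputs are informative features, which is the intuition behind using the "connection of the drugs to other drug discovery problems" rather than molecular shape alone.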
    “I was surprised how well it works — better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are — and without the best science, we won’t get the best results.”
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • New discovery opens the way for brain-like computers

    Researchers have long strived to develop computers that work as energy efficiently as our brains. A study, led by researchers at the University of Gothenburg, has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technology in everything from mobile phones to self-driving cars.
    In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.
    “Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.
    Important breakthrough
    Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step forward in achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.
    Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory. This makes them comparable to memory cells. Integrating the two is a major advancement by the researchers.

  • A new book shows how animals are already coping with climate change

    Hurricane Lizards and Plastic Squid, by Thor Hanson. Basic Books, $28.

    As a conservation biologist, Thor Hanson has seen firsthand the effects of climate change on plants and animals in the wild: the green macaws of Central America migrating along with their food sources, the brown bears of Alaska fattening up on early-ripening berry crops, the conifers of New England seeking refuge from vanishing habitats. And as an engaging author who has celebrated the wonders of nature in books about feathers, seeds, forests and bees (SN: 7/21/18, p. 28), he’s an ideal guide to a topic that might otherwise send readers down a well of despair.

    Hanson does not despair in his latest book, Hurricane Lizards and Plastic Squid. Though he outlines the many ways that global warming is changing life on our planet, his tone is not one of hand-wringing. Instead, Hanson invites the reader into the stories of particular people, places and creatures of all sorts. He draws these tales from his own experiences and those of other scientists, combining reporting with narrative tales of species that serve as examples of broader trends in the natural world.


    A trip to La Selva Biological Station in Costa Rica, for example, has Hanson reliving the experience of tropical ecologist and climatologist Leslie Holdridge, who founded the research station in the 1950s and described, among other things, how climate creates different habitats, or life zones, as elevation increases. As Hanson sweats his way up a tropical mountainside so he can witness a shift in life zones, he notes, “I had to earn every foot of elevation gain the hard way.” I could almost feel the heat that he describes as “a steaming towel draped over my head.” His vivid descriptions bring home the reason why so many species have now been documented moving upslope to cooler climes.

    Hanson doesn’t waste much breath trying to convince doubters of the reality of climate change, instead showing by example after example how it is already playing out. The book moves quickly from the basic science of climate change to the challenges and opportunities that species face — from shifts in seasonal timing to ocean acidification — and the ways that species are responding.

    As Hanson notes, the acronym MAD, for “move, adapt or die,” is often used to describe species’ options for responding. But that pithy phrase doesn’t capture the complexity of the situation. For instance, one of his titular characters, a lizard slammed by back-to-back Caribbean hurricanes in 2017, illustrates a different response. Instead of individual lizards adjusting, or adapting, to increasingly stormy conditions, the species evolved through natural selection. Biologists monitoring the lizards on two islands noticed that after the hurricanes, the lizard populations had longer front legs, shorter back legs and grippier toe pads on average than they had before. An experiment with a leaf blower showed that these traits help the lizards cling to branches better — survival of the fittest in action.

    In the end, the outcomes for species will probably be as varied as their circumstances. Some organisms have already moved, adapted or died as a result of the warming, and many more will face challenges from changes that are yet to come. But Hanson hasn’t given up hope. When it comes to preventing the worst-case scenarios, he quotes ecologist Gordon Orians, who is in the seventh decade of a career witnessing environmental change. When asked what a concerned citizen should do to combat climate change, he responded succinctly: “Everything you can.” And as Hanson points out, this is exactly how plants and animals are responding to climate change: by doing everything they can. The challenge feels overwhelming, and as a single concerned citizen, much feels out of my hands. Yet Hanson’s words did inspire me to take a cue from the rest of the species on this warming world to do what I can.

    Buy Hurricane Lizards and Plastic Squid from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article.

  • Development of an artificial vision device capable of mimicking human optical illusions

    NIMS has developed an ionic artificial vision device capable of increasing the edge contrast between the darker and lighter areas of an image in a manner similar to that of human vision. This first-ever synthetic mimicry of human optical illusions was achieved using ionic migration and interaction within solids. It may be possible to use the device to develop compact, energy-efficient visual sensing and image processing hardware systems capable of processing analog signals.
    Numerous artificial intelligence (AI) systems developers have recently shown a great deal of interest in research on various sensors and analog information processing systems inspired by human sensory mechanisms. Most AI systems on which research is being conducted require sophisticated software/programs and complex circuit configurations, including custom-designed processing modules equipped with arithmetic circuits and memory. These systems have disadvantages, however, in that they are large and consume a great deal of power.
    The NIMS research team recently developed an ionic artificial vision device composed of an array of mixed conductor channels placed on a solid electrolyte at regular intervals. This device simulates the way in which human retinal neurons (i.e., photoreceptors, horizontal cells and bipolar cells) process visual signals by responding to input voltage pulses (equivalent to electrical signals from photoreceptors). This causes ions within the solid electrolyte (equivalent to a horizontal cell) to migrate across the mixed conductor channels, which then changes the output channel current (equivalent to a bipolar cell response). By employing such steps, the device, independent of software, was able to process input image signals and produce an output image with increased edge contrast between darker and lighter areas in a manner similar to the way in which the human visual system can increase edge contrast between different colors and shapes by means of visual lateral inhibition.
    The human eye produces various optical illusions associated with tilt angle, size, color and movement, in addition to darkness/lightness, and this process is believed to play a crucial role in the visual identification of different objects. The ionic artificial vision device described here may potentially be used to reproduce these other types of optical illusions. The research team involved hopes to develop visual sensing systems capable of performing human retinal functions by integrating the subject device with other components, including photoreceptor circuits.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.

  • Corals may store a surprising amount of microplastics in their skeletons

    A surprising amount of plastic pollution in the ocean may wind up in a previously overlooked spot: the skeletons of living corals. 

    Up to about 20,000 metric tons of tiny fragments called microplastics may be stored in coral skeletons worldwide every year, says ecologist Jessica Reichert of Justus Liebig University Giessen in Germany. That corresponds to nearly 3 percent of the microplastics estimated to be in the shallow, tropical waters where corals thrive.

    Corals have been observed eating or otherwise incorporating microplastics into their bodies (SNS: 3/18/15). But scientists don’t know how much of the debris reefs take up globally. So Reichert and colleagues exposed corals in the lab to microplastics to find out where the particles are stored inside corals and estimate how much is tucked away.

    Corals consumed some of the trash, or grew their skeletons over particles. After 18 months, most of the debris inside corals was in their skeletons rather than tissues, the researchers report October 28 in Global Change Biology. After counting the number of trapped particles, the researchers estimate that between nearly 6 billion and 7 quadrillion microplastic particles may be permanently stored in corals worldwide annually.

    Tiny plastic particles (black spots in this image of coral that has had its tissue removed) end up trapped in coral skeletons when corals grow over the fragments or ingest them. Credit: J. Reichert

    It’s the first time that a living microplastic “sink,” or long-term storage site, has been quantified, Reichert says.

    Scientists are learning how much microplastic is being introduced to the oceans. But researchers don’t know where it all ends up (SN: 6/6/19). Other known microplastic sinks, such as sea ice and seafloor sediments, need better quantification, and other sinks may not yet be known.

    Reefs are typically found near coasts where polluted waterways can drain to the sea, placing corals in potential microplastic hot spots.

    “We don’t know what consequences this [storage] might have for the coral organisms, [or for] reef stability and integrity,” Reichert says. It “might pose an additional threat to coral reefs worldwide.”  


  • 'Magic wand' reveals a colorful nano-world

    Scientists have developed new materials for next-generation electronics so tiny that they are not only indistinguishable when closely packed, but they also don’t reflect enough light to show fine details, such as colors, with even the most powerful optical microscopes. Under an optical microscope, carbon nanotubes, for example, look grayish. The inability to distinguish fine details and differences between individual pieces of nanomaterials makes it hard for scientists to study their unique properties and discover ways to perfect them for industrial use.
    In a new report in Nature Communications, researchers from UC Riverside describe a revolutionary imaging technology that compresses lamp light into a nanometer-sized spot. It holds that light at the end of a silver nanowire like a Hogwarts student practicing the “Lumos” spell, and uses it to reveal previously invisible details, including colors.
    The advance, improving color-imaging resolution to an unprecedented 6 nanometer level, will help scientists see nanomaterials in enough detail to make them more useful in electronics and other applications.
    Ming Liu and Ruoxue Yan, associate professors in UC Riverside’s Marlan and Rosemary Bourns College of Engineering, built this unique tool around a superfocusing technique the team developed earlier. That technique has been used in previous work to observe the vibration of molecular bonds at 1-nanometer spatial resolution without the need of any focusing lens.
    In the new report, Liu and Yan modified the tool to measure signals spanning the whole visible wavelength range, which can be used to render the color and depict the electronic band structures of the object instead of only molecule vibrations. The tool squeezes the light from a tungsten lamp into a silver nanowire with near-zero scattering or reflection, where light is carried by the oscillation wave of free electrons at the silver surface.
    The condensed light leaves the silver nanowire tip, which has a radius of just 5 nanometers, in a conical path, like the light beam from a flashlight. When the tip passes over an object, its influence on the beam shape and color is detected and recorded.
    “It is like using your thumb to control the water spray from a hose,” Liu said, “You know how to get the desired spraying pattern by changing the thumb position, and likewise, in the experiment, we read the light pattern to retrieve the details of the object blocking the 5 nm-sized light nozzle.”
    The light is then focused into a spectrometer, where it forms a tiny ring shape. By scanning the probe over an area and recording two spectra for each pixel, the researchers can formulate the absorption and scattering images with colors. The originally grayish carbon nanotubes receive their first color photograph, and an individual carbon nanotube now has the chance to exhibit its unique color.
    “The atomically smooth sharp-tip silver nanowire and its nearly scatterless optical coupling and focusing is critical for the imaging,” Yan said. “Otherwise there would be intense stray light in the background that ruins the whole effort.”
    The researchers expect that the new technology can be an important tool to help the semiconductor industry make uniform nanomaterials with consistent properties for use in electronic devices. The new full-color nano-imaging technique could also be used to improve understanding of catalysis, quantum optics, and nanoelectronics.
    Liu and Yan were joined in the research by Xuezhi Ma, a postdoctoral scholar at Temple University who worked on the project as part of his doctoral research at UC Riverside. Researchers also included UCR students Qiushi Liu, Ning Yu, Da Xu, Sanggon Kim, Zebin Liu, Kaili Jiang, and professor Bryan Wong. The paper, titled “6 nm super-resolution optical transmission and scattering spectroscopic imaging of carbon nanotubes using a nanometer-scale white light source,” is available here.
    Story Source:
    Materials provided by University of California – Riverside. Original written by Holly Ober. Note: Content may be edited for style and length.