More stories

  • Nonverbal social interactions – even with unfriendly avatars – boost cooperation

    Scientists used animated humanoid avatars to study how nonverbal cues influence people’s behavior. Reported in the Journal of Cognitive Neuroscience, the research offers insight into the brain mechanisms that drive social and economic decision-making.
    The study revealed that participants were more willing to cooperate with animated avatars than with static figures representing their negotiation partners. It also found — somewhat surprisingly — that people were more willing to accept unfair offers from unfriendly avatars than from friendly ones.
    “This work is an extension of previous studies exploring how nonverbal cues influence people’s perceptions of one another,” said Matthew Moore, who led the research at the University of Illinois Urbana-Champaign with psychology professors Florin Dolcos and Sanda Dolcos. The new research was conducted at the U. of I.’s Beckman Institute for Advanced Science and Technology, where Moore was a postdoctoral fellow.
    “Nonverbal interactions represent a huge part of human communication,” Sanda Dolcos said. “We might not be aware of this, but much of the information that we take in is through these nonverbal channels.”
    Previous studies often used still photos or other static representations of people engaged in social interactions to study how people form opinions or make decisions, Florin Dolcos said.
    “By animating the avatars, we’re capturing interactions that are much closer to what happens in real-life situations,” he said. More

  • Quantum computers getting connected

    A promising route towards larger quantum computers is to orchestrate multiple smaller, task-optimised systems. Photonic interference emerges as a powerful method for dynamically connecting and entangling any two such systems, thanks to its compatibility with on-chip devices and long-distance propagation in quantum networks.
    One of the main obstacles to the commercialization of quantum photonics remains the nanoscale fabrication and integration of scalable quantum systems, owing to their notorious sensitivity to even the smallest disturbances in their immediate environment. This has made it an extraordinary challenge to develop systems that can be used for quantum computing while simultaneously offering an efficient optical interface.
    A recent result published in Nature Materials shows how this integration obstacle can be overcome. The work is based on a multinational collaboration with researchers from the University of Stuttgart (Physics 3), the University of California, Davis, and the Universities of Linköping and Kyoto, as well as the Fraunhofer Institute in Erlangen, the Helmholtz Centre in Dresden and the Leibniz Institute in Leipzig.
    The researchers followed a two-step approach. First, they chose as their quantum system the so-called silicon vacancy centre in silicon carbide, which is known to possess particularly robust spin-optical properties. Second, they fabricated nanophotonic waveguides around these colour centres using gentle processing methods that keep the host material essentially free of damage.
    “With our approach, we could demonstrate that the excellent spin-optical properties of our colour centres are maintained after nanophotonic integration,” says Florian Kaiser, Assistant Professor at the University of Stuttgart and supervisor of the project. “Thanks to the robustness of our quantum devices, we gained enough headroom to perform quantum gates on multiple nuclear spin qubits. As these spins show very long coherence times, they are excellent for implementing small quantum computers.”
    “In this project, we explored the peculiar triangular shape of photonic devices. While this geometry is commercially appealing because it provides the versatility needed for scalable production, little has been known about its utility for high-performing quantum hardware. Our studies reveal that light emitted by the colour centre, which carries quantum information across the chip, can be propagated efficiently through a single optical mode. This is a key conclusion for the viability of integrating colour centres with other photonic devices, such as nanocavities, optical fibres and single-photon detectors, needed to realize the full functionality of quantum networking and computing,” says Marina Radulaski, Assistant Professor at the University of California, Davis.
    What makes the silicon carbide platform particularly interesting are its CMOS compatibility and its heavy use as a high-power semiconductor in electric mobility. The researchers now want to build on these strengths to enable scalable production of spin-photonic chips. They also want to implement semiconductor circuitry to electrically initialise and read out the quantum states of their spin qubits. “Maximising electrical control, instead of traditional optical control via lasers, is an important step towards system simplification. The combination of efficient nanophotonics with electrical control will allow us to reliably integrate more quantum systems on one chip, which will result in significant performance gains,” adds Florian Kaiser. “In this sense, we are only at the dawn of quantum technologies with colour centres in silicon carbide. Our successful nanophotonic integration is not only an exciting enabler for distributed quantum computing, but it can also boost the performance of compact quantum sensors.”
    Story Source:
    Materials provided by Universitaet Stuttgart. Note: Content may be edited for style and length. More

  • Fungi may be crucial to storing carbon in soil as the Earth warms

    When it comes to storing carbon in the ground, fungi may be key.

    Soils are a massive reservoir of carbon, holding about three times as much carbon as Earth’s atmosphere. The secret behind this carbon storage is microbes, such as bacteria and some fungi, which transform dead and decaying matter into carbon-rich soil.

    But not all carbon compounds made by soil microbes are equal. Some can last for decades or even centuries in the soil, while others are quickly consumed by microbes and converted into carbon dioxide that’s lost to the atmosphere. Now, a study shows that fungi-rich soils grown in laboratory experiments released less carbon dioxide when heated than other soils.

    The result suggests that fungi are essential for making soil that sequesters carbon in the earth, microbial ecologist Luiz Domeignoz-Horta and colleagues report November 6 in ISME Communications.

    Who is making soil matters, Domeignoz-Horta says.

    The study comes as some scientists warn that climate change threatens to release more carbon out of the ground and into the atmosphere, further worsening global warming. Researchers have found that rising temperatures can lead to population booms in soil microbes, which quickly exhaust easily digestible carbon compounds. This forces the organisms to turn to older, more resilient carbon stores, converting carbon stored away long ago into carbon dioxide.

    With the combined threat of rising temperatures and damage to soil microbe communities from intensive farming and disappearing forests, some computer models indicate that 40 percent less carbon will remain in the soil by 2100 than previous simulations anticipated (SN: 9/22/16).

    To see if scientists can coax soils to store more carbon, researchers need to understand what makes soil microbes tick. But that is no simple task. “Some say soil is the most complex matrix on the planet,” says Kirsten Hofmockel, an ecologist at the Pacific Northwest National Laboratory in Richland, Wash., who was not involved in the research.

    To simplify matters, Domeignoz-Horta, of the University of Zurich, and colleagues grew their own dirt in the lab. The researchers separated fungi and bacteria from forest soil and grew five combinations of these communities in petri dishes, including some that were home only to bacteria or fungi. The researchers sustained the microbes on a diet of simple sugar and left them to churn out soil for four months. The team then heated the different soils to see how much carbon dioxide was produced.

    Bacteria were the main drivers behind making soil, but fungi-rich soils produced less carbon dioxide when heated than soils made solely by bacteria, the researchers found. Why is still unclear. One possibility is that fungi could be producing enzymes — proteins that build or break up other molecules — that bacteria aren’t capable of making on their own, Domeignoz-Horta says. These fungi-derived compounds may give bacteria different building blocks for making soil, potentially creating carbon compounds with a longer shelf life.

    What happens in lab-grown soil may not play out the same in the real world. But the new research is an important step in understanding how carbon is locked away long-term, Hofmockel says. This kind of information could one day help researchers develop techniques to ensure that more carbon stays in the ground for longer, which could help mitigate the amount of carbon dioxide in the atmosphere.

    “If we can get carbon in the ground for five years, that’s a step in the right direction,” Hofmockel says. “But if we can have stable carbon in the soil for centuries or even millennia, that’s a solution.” More

  • Artificial intelligence that understands object relationships

    When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor.
    Many deep learning models struggle to see the world this way because they don’t understand the entangled relationships between individual objects. Without knowledge of these relationships, a robot designed to help someone in a kitchen would have difficulty following a command like “pick up the spatula that is to the left of the stove and place it on top of the cutting board.”
    In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene. Their model represents individual relationships one at a time, then combines these representations to describe the overall scene. This enables the model to generate more accurate images from text descriptions, even when the scene includes several objects that are arranged in different relationships with one another.
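    To make the “one relationship at a time, then combine” idea concrete, here is a minimal, purely illustrative sketch in PyTorch: each stated relation gets its own score, and the scene as a whole is judged by combining those per-relation scores. The class names, dimensions and scoring scheme are assumptions made for illustration, not the authors’ actual architecture.

```python
# Illustrative sketch only: score each relation separately, then combine.
import torch
import torch.nn as nn

class RelationScorer(nn.Module):
    """Scores how well one relation (e.g. 'laptop left-of phone') fits an image."""
    def __init__(self, image_dim=512, relation_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + relation_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, image_feat, relation_emb):
        return self.net(torch.cat([image_feat, relation_emb], dim=-1))

def scene_score(scorer, image_feat, relation_embs):
    # Treat a scene with several relations as the conjunction of all of them
    # by summing the individual relation scores.
    return sum(scorer(image_feat, r) for r in relation_embs)

# Usage with random stand-in features for one image and two relations.
scorer = RelationScorer()
image_feat = torch.randn(1, 512)
relations = [torch.randn(1, 64), torch.randn(1, 64)]  # e.g. "left of", "in front of"
print(scene_score(scorer, image_feat, relations))
```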
    This work could be applied in situations where industrial robots must perform intricate, multistep manipulation tasks, like stacking items in a warehouse or assembling appliances. It also moves the field one step closer to enabling machines that can learn from and interact with their environments more like humans do.
    “When I look at a table, I can’t say that there is an object at XYZ location. Our minds don’t work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments,” says Yilun Du, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.
    Du wrote the paper with co-lead authors Shuang Li, a CSAIL PhD student, and Nan Liu, a graduate student at the University of Illinois at Urbana-Champaign; as well as Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; and senior author Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems in December. More

  • Team builds first living robots that can reproduce

    To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.
    Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction — and applied their discovery to create the first-ever, self-replicating living robots.
    The same team that built the first living robots (“Xenobots,” assembled from frog cells and reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouths.” A few days later, these babies become new Xenobots that look and move just like their parents.
    And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.
    “With the right design — they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.
    The results of the new research were published November 29, 2021, in the Proceedings of the National Academy of Sciences.
    More

  • 'Transformational' approach to machine learning could accelerate search for new disease treatments

    Researchers have developed a new approach to machine learning that ‘learns how to learn’ and outperforms current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.
    The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves its performance as it learns.
    TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.
    Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.
    “It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works — it deals with problems one at a time.”
    However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.
    “To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal — say a kitten — TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”
    The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.
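    The mechanics King describes can be sketched in a few lines of code: train models on earlier problems, then represent each example of a new problem by what those earlier models predict about it, and learn from that representation. The sketch below uses synthetic data and scikit-learn as a stand-in; it illustrates the general idea rather than the team’s actual TML system.

```python
# Illustrative sketch: re-describe new examples by the predictions of models
# trained on earlier, related problems, then learn the new problem from that.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_features = 20

# Pretend we already solved several related problems (e.g. earlier drug targets).
previous_models = []
for _ in range(5):
    X = rng.normal(size=(200, n_features))
    y = X @ rng.normal(size=n_features) + rng.normal(scale=0.1, size=200)
    previous_models.append(
        RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    )

def transformational_features(X):
    # Each example is described by what the old models predict for it.
    return np.column_stack([m.predict(X) for m in previous_models])

# New problem: learn it from the transformed representation.
X_new = rng.normal(size=(100, n_features))
y_new = X_new @ rng.normal(size=n_features)
X_train, y_train, X_test, y_test = X_new[:80], y_new[:80], X_new[80:], y_new[80:]

tml_model = RandomForestRegressor(n_estimators=50, random_state=0)
tml_model.fit(transformational_features(X_train), y_train)
print("held-out R^2:", tml_model.score(transformational_features(X_test), y_test))
```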
    “I was surprised how well it works — better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are — and without the best science, we won’t get the best results.”
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length. More

  • New discovery opens the way for brain-like computers

    Researchers have long strived to develop computers that work as energy efficiently as our brains. A study led by researchers at the University of Gothenburg has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technologies in everything from mobile phones to self-driving cars.
    In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or superhuman chess play, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform such tasks effectively and energy efficiently.
    “Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.
    Important breakthrough
    Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step forward in achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.
    Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory, making them comparable to memory cells. Integrating the two is the researchers’ major advance. More
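    To give a concrete feel for what “a programmable resistor with integrated memory” means, here is a toy numerical sketch in Python of the textbook memristor concept: the device’s resistance depends on the current that has flowed through it earlier, so a later read-out reveals that history. This is a conceptual illustration only, not a model of the spintronic oscillators and memristors used in the study.

```python
# Toy memristor: resistance interpolates between r_on and r_off according to an
# internal state variable, and the state is nudged by whatever current flows.
# Purely illustrative; real devices follow much richer physics.
class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16_000.0, state=0.5, gain=1_000.0):
        self.r_on, self.r_off = r_on, r_off
        self.state = state   # internal memory variable in [0, 1]
        self.gain = gain     # toy constant setting how fast the state moves

    def resistance(self):
        return self.r_on * self.state + self.r_off * (1.0 - self.state)

    def step(self, voltage, dt=1e-3):
        """Apply a voltage for dt seconds; the current that flows updates the state."""
        current = voltage / self.resistance()
        self.state = min(1.0, max(0.0, self.state + self.gain * current * dt))
        return current

m = ToyMemristor()
print("read current before programming:", m.step(0.1))   # small read voltage
for _ in range(1_000):                                    # programming pulses
    m.step(5.0)
print("read current after programming:", m.step(0.1))    # same read, larger current
```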

  • A new book shows how animals are already coping with climate change

    Hurricane Lizards and Plastic Squid
    Thor Hanson
    Basic Books, $28

    As a conservation biologist, Thor Hanson has seen firsthand the effects of climate change on plants and animals in the wild: the green macaws of Central America migrating along with their food sources, the brown bears of Alaska fattening up on early-ripening berry crops, the conifers of New England seeking refuge from vanishing habitats. And as an engaging author who has celebrated the wonders of nature in books about feathers, seeds, forests and bees (SN: 7/21/18, p. 28), he’s an ideal guide to a topic that might otherwise send readers down a well of despair.

    Hanson does not despair in his latest book, Hurricane Lizards and Plastic Squid. Though he outlines the many ways that global warming is changing life on our planet, his tone is not one of hand-wringing. Instead, Hanson invites the reader into the stories of particular people, places and creatures of all sorts. He draws these tales from his own experiences and those of other scientists, combining reporting with narrative tales of species that serve as examples of broader trends in the natural world.

    A trip to La Selva Biological Station in Costa Rica, for example, has Hanson reliving the experience of tropical ecologist and climatologist Leslie Holdridge, who founded the research station in the 1950s and described, among other things, how climate creates different habitats, or life zones, as elevation increases. As Hanson sweats his way up a tropical mountainside so he can witness a shift in life zones, he notes, “I had to earn every foot of elevation gain the hard way.” I could almost feel the heat that he describes as “a steaming towel draped over my head.” His vivid descriptions bring home the reason why so many species have now been documented moving upslope to cooler climes.

    Hanson doesn’t waste much breath trying to convince doubters of the reality of climate change, instead showing by example after example how it is already playing out. The book moves quickly from the basic science of climate change to the challenges and opportunities that species face — from shifts in seasonal timing to ocean acidification — and the ways that species are responding.

    As Hanson notes, the acronym MAD, for “move, adapt or die,” is often used to describe species’ options for responding. But that pithy phrase doesn’t capture the complexity of the situation. For instance, one of his titular characters, a lizard slammed by back-to-back Caribbean hurricanes in 2017, illustrates a different response. Instead of individual lizards adjusting, or adapting, to increasingly stormy conditions, the species evolved through natural selection. Biologists monitoring the lizards on two islands noticed that after the hurricanes, the lizard populations had longer front legs, shorter back legs and grippier toe pads on average than they had before. An experiment with a leaf blower showed that these traits help the lizards cling to branches better — survival of the fittest in action.

    In the end, the outcomes for species will probably be as varied as their circumstances. Some organisms have already moved, adapted or died as a result of the warming, and many more will face challenges from changes that are yet to come. But Hanson hasn’t given up hope. When it comes to preventing the worst-case scenarios, he quotes ecologist Gordon Orians, who is in the seventh decade of a career witnessing environmental change. When asked what a concerned citizen should do to combat climate change, he responded succinctly: “Everything you can.” And as Hanson points out, this is exactly how plants and animals are responding to climate change: by doing everything they can. The challenge feels overwhelming, and as a single concerned citizen, much feels out of my hands. Yet Hanson’s words did inspire me to take a cue from the rest of the species on this warming world to do what I can.

    Buy Hurricane Lizards and Plastic Squid from Bookshop.org. Science News is a Bookshop.org affiliate and will earn a commission on purchases made from links in this article. More