More stories

  • Taking lessons from a sea slug, study points to better hardware for artificial intelligence

    For artificial intelligence to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.
    A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.
    The study, publishing this week in the Proceedings of the National Academy of Sciences, was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and Argonne National Laboratory.
    “Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of materials engineering. “We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”
    Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.
    AI struggles to learn and store new information without overwriting what it has already learned, a problem that researchers studying brain-inspired computing call the “stability-plasticity dilemma.” Habituation would allow AI to “forget” unneeded information (achieving more stability), while sensitization could help it retain new and important information (enabling plasticity).
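As an illustration of why these two behaviors matter for the stability-plasticity dilemma, here is a toy sketch (my own, not the study's device model): a response that decays with repeated exposure provides stability, while an amplified response to novelty provides plasticity.

```python
# Toy model of habituation and sensitization (illustrative only; the
# study's actual material physics is not modeled here).

def respond(history, stimulus, decay=0.7, novelty_boost=1.5):
    """Response strength for a stimulus, given the list of past stimuli."""
    seen = history.count(stimulus)
    if seen == 0:
        return novelty_boost   # sensitization: strong reaction to the new
    return decay ** seen       # habituation: fading reaction to the familiar

history = []
for s in ["traffic noise", "traffic noise", "traffic noise", "bad food"]:
    print(s, round(respond(history, s), 3))
    history.append(s)
```

Repeated "traffic noise" responses fall 1.5 → 0.7 → 0.49 (habituation), while the novel "bad food" stimulus jumps back to 1.5 (sensitization).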

  • How AI can help forecast how much Arctic sea ice will shrink

    In the next week or so, the sea ice floating atop the Arctic Ocean will shrink to its smallest size this year, as summer-warmed waters eat away at the ice’s submerged edges.

    Record lows for sea ice levels will probably not be broken this year, scientists say. In 2020, the ice covered 3.74 million square kilometers of the Arctic at its lowest point, coming nail-bitingly close to an all-time record low. Currently, sea ice is present in just under 5 million square kilometers of Arctic waters, putting it on track to become the 10th-lowest extent of sea ice in the area since satellite record keeping began in 1979. It’s an unexpected finish considering that in early summer, sea ice hit a record low for that time of year.

    The surprise comes in part because the best current statistical- and physics-based forecasting tools can closely predict sea ice extent only a few weeks in advance, but the accuracy of long-range forecasts falters. Now, a new tool that uses artificial intelligence to create sea ice forecasts promises to boost their accuracy — and can do the analysis relatively quickly, researchers report August 26 in Nature Communications.

    IceNet, a sea ice forecasting system developed by the British Antarctic Survey, or BAS, is “95 percent accurate in forecasting sea ice two months ahead — higher than the leading physics-based model SEAS5 — while running 2,000 times faster,” says Tom Andersson, a data scientist with BAS’s Artificial Intelligence lab. Whereas SEAS5 takes about six hours on a supercomputer to produce a forecast, IceNet can do the same in less than 10 seconds on a laptop. The system also shows a surprising ability to predict anomalous ice events — unusual highs or lows — up to four months in advance, Andersson and his colleagues found.
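The quoted speedup checks out as simple arithmetic: six hours on a supercomputer versus roughly ten seconds on a laptop.

```python
# Rough speedup implied by the article's figures.
seas5_seconds = 6 * 3600    # ~6 hours for a SEAS5 forecast
icenet_seconds = 10         # <10 seconds for IceNet on a laptop
print(seas5_seconds / icenet_seconds)   # → 2160.0, in line with "2,000 times faster"
```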


    Tracking sea ice is crucial to keeping tabs on the impacts of climate change. While that’s more of a long game, the advance notice provided by IceNet could have more immediate benefits, too. For instance, it could give scientists the lead time needed to assess, and plan for, the risks of Arctic fires or wildlife-human conflicts, and it could provide data that Indigenous communities need to make economic and environmental decisions.

    Arctic sea ice extent has steadily declined in all seasons since satellite records began in 1979 (SN: 9/25/19). Scientists have been trying to improve sea ice forecasts for decades, but success has proved elusive. “Forecasting sea ice is really hard because sea ice interacts in complex ways with the atmosphere above and ocean below,” Andersson says.

    In 2020, the sea ice in the Arctic shrank to its second lowest extent since satellite monitoring began in 1979. This animation uses those observations to show the change in sea ice coverage from March 5, when the ice was at its maximum, through September 15, when the ice reached its lowest point. The yellow line represents the average minimum extent from 1981 to 2010. Current forecasting tools can accurately predict these changes weeks in advance. A new AI-based tool can predict these changes with nearly 95 percent accuracy several months in advance.

    Existing forecast tools put the laws of physics into computer code to predict how sea ice will change in the future. But partly due to uncertainties in the physical systems governing sea ice, these models struggle to produce accurate long-range forecasts.

    Using deep learning, Andersson and his colleagues trained IceNet on observational sea ice data from 1979 to 2011 and on climate simulations covering 1850 to 2100, teaching the system to predict the state of future sea ice from data about the past.
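The article does not detail IceNet's architecture, but the basic framing — learn a mapping from past climate fields to a future ice/no-ice grid — can be sketched with a toy per-cell classifier on random stand-in data (everything below is a hypothetical illustration, not the actual IceNet model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: per grid cell, a few "past climate" features and a
# future ice (1) / open-water (0) label, generated synthetically.
n_cells, n_features = 500, 8
X = rng.normal(size=(n_cells, n_features))
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(float)

# Toy model: logistic regression trained by gradient descent, standing in
# for the deep network that maps climate history to an ice probability map.
w = np.zeros(n_features)
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w)))       # predicted ice probability per cell
    w -= 0.1 * X.T @ (p - y) / n_cells   # cross-entropy gradient step

forecast = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = (forecast == y).mean()
print(f"toy training accuracy: {accuracy:.2f}")
```

The real system is evaluated exactly this way: its binary ice/no-ice map is compared cell by cell against observations.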

    To determine the accuracy of its forecasts, the team compared IceNet’s outputs to the observed sea ice extent from 2012 to 2020, and to the forecasts made by SEAS5, the widely cited tool used by the European Centre for Medium-Range Weather Forecasts. IceNet was as much as 2.9 percent more accurate than SEAS5, corresponding to a further 360,000 square kilometers of ocean being correctly labeled as “ice” or “no ice.”
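Those two numbers pin down the size of the forecast domain: if a 2.9 percent accuracy gain corresponds to 360,000 square kilometers, the evaluated ocean area must be about 12.4 million square kilometers (this back-of-the-envelope inference is mine, not a figure from the article):

```python
# Convert the quoted accuracy gain into the implied forecast domain area.
extra_correct_km2 = 360_000   # additional ocean correctly labeled "ice"/"no ice"
accuracy_gain = 0.029         # IceNet's edge over SEAS5
implied_domain_km2 = extra_correct_km2 / accuracy_gain
print(f"implied domain: {implied_domain_km2:,.0f} km^2")   # ~12.4 million km^2
```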

    What’s more, in 2012, a sudden crash in summer sea ice extent heralded a new record low that September. Running on past data, IceNet saw the dip coming months in advance. SEAS5 had inklings too, but its projections that far out were off by a few hundred thousand square kilometers.

    “This is a significant step forward in sea ice forecasting, boosting our ability to produce accurate forecasts that were typically not thought possible and run them thousands of times faster,” says Andersson. He believes IceNet may have learned from its training data the physical processes that determine the evolution of sea ice better than physics-based models have managed to capture them.

    “These machine learning techniques have only begun contributing to [forecasting] in the last couple years, and they’ve been doing amazingly well,” says Uma Bhatt, an atmospheric scientist at the University of Alaska Fairbanks Geophysical Institute who was not involved in the new study. She also leads the Sea Ice Prediction Network, a group of multidisciplinary scientists working to improve forecasting.

    Bhatt says that good seasonal ice forecasts are important for assessing the risk of Arctic wildfires, which are tied strongly to the presence of sea ice (SN: 6/23/20). “Knowing where the sea ice is going to be in the spring could potentially help you figure out where you’re likely to have fires — in Siberia, for example, as soon as the sea ice moves away from the shore, the land can warm up very quickly and help set the stage for a bad fire season.”

    Any improvement in sea ice forecasting can also help economic, safety and environmental planning in northern and Indigenous communities. For example, tens of thousands of walruses haul out on land to rest when the sea ice disappears (SN: 10/2/14). Human disturbances can trigger deadly stampedes and lead to high walrus mortality. With seasonal ice forecasts, biologists can anticipate rapid ice loss and manage haul-out sites in advance by limiting human access to those locations.

    Still, limitations remain. At four months of lead time, the system was about 91 percent accurate in predicting the location of September’s ice edge. IceNet, like other forecasting systems, struggles to produce accurate long-range forecasts for late summer due, in part, to what scientists call the “spring predictability barrier”: forecasting end-of-summer conditions requires knowing the condition of the sea ice at the start of the spring melting season.

    Another limit is “the fact that the weather is so variable,” says Mark Serreze, director of the National Snow and Ice Data Center in Boulder, Colo. Though sea ice seemed primed to set a new annual record low at the start of July, the speed of ice loss ultimately slowed due to cool atmospheric temperatures. “We know that sea ice responds very strongly to summer weather patterns, but we can’t get good weather predictions. Weather predictability is about 10 days in advance.”

  • Just by changing its shape, scientists show they can alter material properties

    By confining the transport of electrons and ions in a patterned thin film, scientists find a way to potentially enhance material properties for the design of next-generation electronics
    Electrons travel through materials as waves, like ripples in a pond, and when they collide and interact, they can give rise to new and interesting patterns.
    Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have seen a new kind of wave pattern emerge in a thin film of metal oxide known as titania when its shape is confined. Confinement, the act of restricting materials within a boundary, can alter the properties of a material and the movement of molecules through it.
    In the case of titania, it caused electrons to interfere with each other in a unique pattern, which increased the oxide’s conductivity, or the degree to which it conducts electricity. This all happened at the mesoscale, a scale where scientists can see both quantum effects and the movement of electrons and molecules.
    In all, this work offers scientists more insight about how atoms, electrons and other particles behave at the quantum level. Such information could aid in designing new materials that can process information and be useful in other electronic applications.
    “What really set this work apart was the size of the scale we investigated,” said lead author Frank Barrows, a Northwestern University graduate student in Argonne’s Materials Science Division (MSD). “Investigating at this unique length scale enabled us to see really interesting phenomena that indicate there is interference happening at the quantum level, and at the same time gain new information about how electrons and ions interact.”
    Altering geometry to change material properties

  • Do Alexa and Siri make kids bossier? New research suggests you might not need to worry

    Chatting with a robot is now part of many families’ daily lives, thanks to conversational agents such as Apple’s Siri or Amazon’s Alexa. Recent research has shown that children are often delighted to find that they can ask Alexa to play their favorite songs or call Grandma.
    But does hanging out with Alexa or Siri affect the way children communicate with their fellow humans? Probably not, according to a recent study led by the University of Washington that found that children are sensitive to context when it comes to these conversations.
    The team had a conversational agent teach 22 children between the ages of 5 and 10 to use the word “bungo” to ask it to speak more quickly. The children readily used the word when a robot slowed down its speech. While most children did use bungo in conversations with their parents, it became a source of play or an inside joke about acting like a robot. But when a researcher spoke slowly to the children, the kids rarely used bungo, and often patiently waited for the researcher to finish talking before responding.
    The researchers published their findings in June at the 2021 Interaction Design and Children conference.
    “We were curious to know whether kids were picking up conversational habits from their everyday interactions with Alexa and other agents,” said senior author Alexis Hiniker, a UW assistant professor in the Information School. “A lot of the existing research looks at agents designed to teach a particular skill, like math. That’s somewhat different from the habits a child might incidentally acquire by chatting with one of these things.”
    The researchers recruited 22 families from the Seattle area to participate in a five-part study. This project took place before the COVID-19 pandemic, so each child visited a lab with one parent and one researcher. For the first part of the study, children spoke to a simple animated robot or cactus on a tablet screen that also displayed the text of the conversation.

  • Researchers develop new tool for analyzing large superconducting circuits

    The next generation of computing and information processing lies in the intriguing world of quantum mechanics. Quantum computers are expected to be capable of solving large, extremely complex problems that are beyond the capacity of today’s most powerful supercomputers.
    New research tools are needed to advance the field and fully develop quantum computers. Now Northwestern University researchers have developed and tested a theoretical tool for analyzing large superconducting circuits. These circuits use superconducting quantum bits, or qubits, the smallest units of a quantum computer, to store information.
    Circuit size is important since protection from detrimental noise tends to come at the cost of increased circuit complexity. Currently there are few tools that tackle the modeling of large circuits, making the Northwestern method an important contribution to the research community.
    “Our framework is inspired by methods originally developed for the study of electrons in crystals and allows us to obtain quantitative predictions for circuits that were previously hard or impossible to access,” said Daniel Weiss, corresponding and first author of the paper. He is a fourth-year graduate student in the research group of Jens Koch, an expert in superconducting qubits.
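The crystal-electron methods Weiss alludes to are Bloch-theory techniques, in which a periodic problem reduces to a small matrix at each value of a momentum-like parameter. As a generic illustration of that idea (not the paper's circuit framework), the textbook 1D tight-binding chain gives the band E(k) = -2t cos(k):

```python
import numpy as np

# Generic Bloch-theory illustration: for a 1D chain of identical sites with
# nearest-neighbor hopping t, the periodic problem reduces, at each crystal
# momentum k, to the scalar Bloch "matrix" -2t*cos(k), giving one energy band.
t = 1.0
ks = np.linspace(-np.pi, np.pi, 201)
band = -2 * t * np.cos(ks)
print(f"band bottom {band.min():.2f} at k=0, band top {band.max():.2f} at k=±π")
```
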
    Koch, an associate professor of physics and astronomy in Weinberg College of Arts and Sciences, is a member of the Superconducting Quantum Materials and Systems Center (SQMS) and the Co-design Center for Quantum Advantage (C2QA). Both national centers were established last year by the U.S. Department of Energy (DOE). SQMS is focused on building and deploying a beyond-state-of-the-art quantum computer based on superconducting technologies. C2QA is building the fundamental tools necessary to create scalable, distributed and fault-tolerant quantum computer systems.
    “We are excited to contribute to the missions pursued by these two DOE centers and to add to Northwestern’s visibility in the field of quantum information science,” Koch said.
    In their study, the Northwestern researchers illustrate the use of their theoretical tool by extracting from a protected circuit quantitative information that was unobtainable using standard techniques.
    Details were published today (Sept. 13) in the open access journal Physical Review Research.
    The researchers specifically studied protected qubits. These qubits are protected from detrimental noise by design and could yield coherence times (how long quantum information is retained) that are much longer than those of current state-of-the-art qubits.
    These superconducting circuits are necessarily large, and the Northwestern tool is a means for quantifying the behavior of these circuits. There are some existing tools that can analyze large superconducting circuits, but each works well only when certain conditions are met. The Northwestern method is complementary and works well when these other tools may give suboptimal results.
    Story Source:
    Materials provided by Northwestern University. Original written by Megan Fellman. Note: Content may be edited for style and length.

  • Star attraction: Magnetism generated by star-like arrangement of molecules

    A 2D nanomaterial consisting of organic molecules linked to metal atoms in a specific atomic-scale geometry shows non-trivial electronic and magnetic properties due to strong interactions between its electrons.
    A new study, published today, shows the emergence of magnetism in a 2D organic material due to strong electron-electron interactions; these interactions are the direct consequence of the material’s unique, star-like atomic-scale structure.
    This is the first observation of local magnetic moments emerging from interactions between electrons in an atomically thin 2D organic material.
    The findings have potential for applications in next-generation electronics based on organic nanomaterials, where tuning of interactions between electrons can lead to a vast range of electronic and magnetic phases and properties.
    STRONG ELECTRON-ELECTRON INTERACTIONS IN A 2D ORGANIC KAGOME MATERIAL
    The Monash University study investigated a 2D metal-organic nanomaterial composed of organic molecules arranged in a kagome geometry, that is, following a ‘star-like’ pattern.

  • Quantum materials cut closer than ever

    DTU and Graphene Flagship researchers have taken the art of patterning nanomaterials to the next level. Precise patterning of 2D materials offers a route to computation and storage devices that can deliver better performance and much lower power consumption than today’s technology.
    Among the most significant recent discoveries in physics and materials technology are two-dimensional materials such as graphene. Graphene is stronger, smoother, lighter, and better at conducting heat and electricity than any other known material.
    Perhaps their most distinctive feature is their programmability. By creating delicate patterns in these materials, we can change their properties dramatically and possibly make precisely what we need.
    At DTU, scientists have worked for more than a decade on advancing the state of the art in patterning 2D materials, using sophisticated lithography machines in the 1500 m2 cleanroom facility. Their work is based in DTU’s Center for Nanostructured Graphene, supported by the Danish National Research Foundation and a part of The Graphene Flagship.
    The electron beam lithography system in DTU Nanolab can write details down to 10 nanometers. Computer calculations can predict exactly the shape and size of patterns in the graphene to create new types of electronics. They can exploit the charge of the electron and quantum properties such as spin or valley degrees of freedom, leading to high-speed calculations with far less power consumption. These calculations, however, ask for higher resolution than even the best lithography systems can deliver: atomic resolution.
    “If we really want to unlock the treasure chest for future quantum electronics, we need to go below 10 nanometers and approach the atomic scale,” says Peter Bøggild, professor and group leader at DTU Physics.

  • A universal system for decoding any type of data sent across a network

    Every piece of data that travels over the internet — from paragraphs in an email to 3D graphics in a virtual reality environment — can be altered by the noise it encounters along the way, such as electromagnetic interference from a microwave or Bluetooth device. The data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of that noise and retrieve the original data.
    Since the 1950s, most error-correcting codes and decoding algorithms have been designed together. Each code had a structure that corresponded with a particular, highly complex decoding algorithm, which often required the use of dedicated hardware.
    Researchers at MIT, Boston University, and Maynooth University in Ireland have now created the first silicon chip that is able to decode any code, regardless of its structure, with maximum accuracy, using a universal decoding algorithm called Guessing Random Additive Noise Decoding (GRAND). By eliminating the need for multiple, computationally complex decoders, GRAND enables increased efficiency that could have applications in augmented and virtual reality, gaming, 5G networks, and connected devices that rely on processing a high volume of data with minimal delay.
    The research at MIT is led by Muriel Médard, the Cecil H. and Ida Green Professor in the Department of Electrical Engineering and Computer Science, and was co-authored by Amit Solomon and Wei Ann, both graduate students at MIT; Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at Boston University; Arslan Riaz and Vaibhav Bansal, both graduate students at Boston University; Ken R. Duffy, director of the Hamilton Institute at the National University of Ireland at Maynooth; and Kevin Galligan, a Maynooth graduate student. The research will be presented at the European Solid-State Device Research and Circuits Conference next week.
    Focus on noise
    One way to think of these codes is as redundant hashes (in this case, a series of 1s and 0s) added to the end of the original data. The rules for the creation of that hash are stored in a specific codebook.
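GRAND's core loop can be sketched in a few lines: guess noise patterns from most likely to least likely, strip each guess from the received word, and stop at the first result found in the codebook. The toy below (my illustration, not the chip's implementation) uses a 3-bit repetition code and orders guesses by Hamming weight, which is the maximum-likelihood order on a binary symmetric channel:

```python
from itertools import combinations

def grand_decode(received, codebook, max_weight=3):
    """Guess noise patterns in order of increasing Hamming weight; the first
    guess whose removal yields a valid codeword is the decoded answer."""
    n = len(received)
    for weight in range(max_weight + 1):
        for flips in combinations(range(n), weight):
            candidate = list(received)
            for i in flips:
                candidate[i] ^= 1          # strip the guessed noise bit
            candidate = tuple(candidate)
            if candidate in codebook:      # structure-agnostic membership test
                return candidate
    return None

# Toy codebook: the 3-bit repetition code. GRAND never inspects the code's
# structure, only codebook membership, so any set of codewords would do.
codebook = {(0, 0, 0), (1, 1, 1)}
print(grand_decode((1, 0, 1), codebook))   # → (1, 1, 1): the flipped bit is undone
```

Because the decoder only asks "is this word in the codebook?", the same loop works for any code, which is the universality the chip exploits.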