More stories

  • Know your audience: Why data communication needs to pay attention to novice users

    Computer scientists at the University of Massachusetts Amherst recently found that data-visualization experts have no agreed-upon understanding of who makes up one of their largest audiences — novice users. The work, which won a coveted Best Paper Award at the Association for Computing Machinery’s conference on Human Factors in Computing Systems (ACM CHI), is an important first step toward more inclusive data visualizations that work for all users.
    Data visualization is the representation of data in a visual and easily understandable way using common graphics such as charts, plots, infographics and animations. Using visual elements provides an accessible way to see and understand trends, outliers and patterns in data. One of the most familiar data visualizations — the pie chart — is legible to nearly everyone and has been a method used to quickly convey information since its invention in the early nineteenth century.
    But, with the advent of the internet, the range, reach and complexity of such visualizations have grown exponentially. Think of the various online COVID trackers, graphics showing economic projections or the outcomes of national elections. “More and more, everyday people are relying on data visualizations to make decisions about their lives,” says Narges Mahyar, assistant professor in the Manning College of Information and Computer Sciences at UMass Amherst and the paper’s senior author. “Even many of our collective decisions rest on data visualizations.”
    Since a visualization’s usefulness depends on its intelligibility, one would think that data-visualization experts would have a clear and standard understanding of their audience, particularly their non-expert users. And yet, “despite many decades of data-visualization research, we had no clear notion of what makes someone a ‘novice,’” says Mahyar. This insight was important enough that ACM CHI, the premier international conference for human-computer interaction, bestowed the Best Paper Award on the research, an honor reserved for the top 1% of submitted papers.
    Mahyar, lead author Alyxander Burns, who completed the research as part of his graduate studies at UMass Amherst, and their co-authors combed through the past 30 years of visualization research and found 79 papers spread across seven academic journals that concerned themselves with identifying the audience for data visualizations. Within those 79 papers, they found that the definitions of a novice user ranged widely, from people who have difficulty “effectively utilizing GPU clusters” to those who lack knowledge of “ontological models.” Moreover, the team found that most researchers’ sample groups of users overwhelmingly skewed toward white, college-aged people living in the U.S.
    “How do we know that the visualizations we create could work for older people, for those without college degrees, for people living in one of the world’s many other countries?” asks Mahyar. “We need to be clear, as a field, what we mean when we say ‘novice,’ and the goal of this paper is to change the way that visualization researchers think about novices, address their needs and design tools that work for everyone.”

  • Researchers create highly conductive metallic gel for 3D printing

    Researchers have developed a metallic gel that is highly electrically conductive and can be used to print three-dimensional (3D) solid objects at room temperature.
    “3D printing has revolutionized manufacturing, but we’re not aware of previous technologies that allowed you to print 3D metal objects at room temperature in a single step,” says Michael Dickey, co-corresponding author of a paper on the work and the Camille & Henry Dreyfus Professor of Chemical and Biomolecular Engineering at North Carolina State University. “This opens the door to manufacturing a wide range of electronic components and devices.”
    To create the metallic gel, the researchers start with a solution of micron-scale copper particles suspended in water, then add a small amount of an indium-gallium alloy that is liquid metal at room temperature.
    As the mixture is stirred, the liquid metal and copper particles essentially stick to each other, forming a metallic gel “network” within the aqueous solution.
    “This gel-like consistency is important, because it means you have a fairly uniform distribution of copper particles throughout the material,” Dickey says. “This does two things. First, it means the network of particles connects to form electrical pathways. And second, it means that the copper particles aren’t settling out of solution and clogging the printer.”
    The resulting gel can be printed using a conventional 3D printing nozzle and retains its shape when printed. And, when allowed to dry at room temperature, the resulting 3D object becomes even more solid while retaining its shape.
    However, if users decide to apply heat to the printed object while it is drying, some interesting things can happen.
    The researchers found that the alignment of the particles influences how the material dries. For example, if you print a cylindrical object, the sides contract more than the top and bottom as it dries. If the object dries at room temperature, the process is slow enough that it doesn’t create structural change. However, if you apply heat — for example, by putting it under a heat lamp at 80 degrees Celsius — the rapid drying can cause structural deformation. Because this deformation is predictable, you can make a printed object change shape after it is printed by controlling the pattern of the printed object and the amount of heat it is exposed to while drying.
    “Ultimately, this sort of four-dimensional printing — the traditional three dimensions, plus time — is one more tool that can be used to create structures with the desired dimensions,” Dickey says. “But what we find most exciting about this material is its conductivity.
    “Because the printed objects end up being as much as 97.5% metal, they are highly conductive. It’s obviously not as conductive as conventional copper wire, but it’s impossible to 3D print copper wire at room temperature. And what we’ve developed is far more conductive than anything else that can be printed. We’re pretty excited about the applications here.
    “We’re open to working with industry partners to explore potential applications, and are always happy to talk with potential collaborators about future directions for research,” Dickey says.
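    For a rough sense of why the metal fraction matters, a generic parallel rule-of-mixtures bound is useful: a composite's conductivity can be no better than the volume-weighted sum of its components' conductivities. The sketch below is a back-of-envelope illustration only, not the paper's analysis; the copper/liquid-metal volume split is a hypothetical assumption, and real printed parts fall well below the bound because of contact resistance between particles.

```python
# Parallel rule-of-mixtures upper bound on composite conductivity:
# sigma_eff <= sum(phi_i * sigma_i). Purely illustrative; it ignores
# inter-particle contact resistance, so it overestimates real parts.
SIGMA_CU = 5.96e7     # conductivity of bulk copper, S/m
SIGMA_EGAIN = 3.4e6   # conductivity of gallium-indium liquid metal, S/m

def mixture_upper_bound(phi_cu: float, phi_egain: float) -> float:
    """Upper-bound conductivity for phases conducting in parallel."""
    assert 0.0 <= phi_cu + phi_egain <= 1.0
    return phi_cu * SIGMA_CU + phi_egain * SIGMA_EGAIN

# Hypothetical split of the quoted "97.5% metal" into 80% copper and
# 17.5% liquid metal by volume (the paper's composition may differ):
print(f"upper bound: {mixture_upper_bound(0.80, 0.175):.2e} S/m")
```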
    The work was done with support from the National Natural Science Foundation of China, under grant number 52203101; and from the China Scholarship Council, under grant number 201906250075.

  • Artificial cells demonstrate that ‘life finds a way’

    “Listen, if there’s one thing the history of evolution has taught us, it’s that life will not be contained. Life breaks free. It expands to new territories, and it crashes through barriers painfully, maybe even dangerously, but . . . life finds a way,” said Ian Malcolm, Jeff Goldblum’s character in Jurassic Park, the 1993 science-fiction film about a park with living dinosaurs.
    You won’t find any Velociraptors lurking around evolutionary biologist Jay T. Lennon’s lab; however, Lennon, a professor in the College of Arts and Sciences Department of Biology at Indiana University Bloomington, and his colleagues have found that life does indeed find a way. Lennon’s research team has been studying a synthetically constructed minimal cell that has been stripped of all but its essential genes. The team found that the streamlined cell can evolve just as fast as a normal cell — demonstrating the capacity for organisms to adapt, even with an unnatural genome that would seemingly provide little flexibility.
    “It appears there’s something about life that’s really robust,” says Lennon. “We can simplify it down to just the bare essentials, but that doesn’t stop evolution from going to work.”
    For their study, Lennon’s team used the synthetic organism, Mycoplasma mycoides JCVI-syn3B — a minimized version of the bacterium M. mycoides commonly found in the guts of goats and similar animals. Over millennia, the parasitic bacterium has naturally lost many of its genes as it evolved to depend on its host for nutrition. Researchers at the J. Craig Venter Institute in California took this one step further. In 2016, they eliminated 45 percent of the 901 genes from the natural M. mycoides genome — reducing it to the smallest set of genes required for autonomous cellular life. At 493 genes, the minimal genome of M. mycoides JCVI-syn3B is the smallest of any known free-living organism. In comparison, many animal and plant genomes contain more than 20,000 genes.
    In principle, the simplest organism would have no functional redundancies and possess only the minimum number of genes essential for life. Any mutation in such an organism could lethally disrupt one or more cellular functions, placing constraints on evolution. Organisms with streamlined genomes have fewer targets upon which positive selection can act, thus limiting opportunities for adaptation.
    Although M. mycoides JCVI-syn3B could grow and divide in laboratory conditions, Lennon and colleagues wanted to know how a minimal cell would respond to the forces of evolution over time, particularly given the limited raw materials upon which natural selection could operate as well as the uncharacterized input of new mutations.
    “Every single gene in its genome is essential,” says Lennon in reference to M. mycoides JCVI-syn3B. “One could hypothesize that there is no wiggle room for mutations, which could constrain its potential to evolve.”
    The researchers established that M. mycoides JCVI-syn3B, in fact, has an exceptionally high mutation rate. They then grew it in the lab, where it was allowed to evolve freely for 300 days, equivalent to 2,000 bacterial generations, or roughly 40,000 years of human evolution (assuming a human generation of about 20 years).
    The next step was to set up experiments to determine how the minimal cells that had evolved for 300 days performed in comparison to the original, non-minimal M. mycoides as well as to a strain of minimal cells that hadn’t evolved for 300 days. In the comparison tests, the researchers put equal amounts of the strains being assessed together in a test tube. The strain better suited to its environment became the more common strain.
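    Head-to-head competition assays like this are commonly summarized with a relative fitness index: the ratio of the two strains' realized log growth over the experiment, a convention popularized by long-term evolution experiments. Below is a minimal sketch of that standard calculation; the counts are invented, and this may not be the exact metric the team reported.

```python
import math

def relative_fitness(a_initial, a_final, b_initial, b_final):
    """Relative fitness of strain A versus strain B in co-culture:
    the ratio of their realized (log) growth over the competition,
    as in classic evolution experiments. W > 1 means A outcompetes B."""
    return math.log(a_final / a_initial) / math.log(b_final / b_initial)

# Hypothetical cell counts (cells/mL) from a 1:1 co-culture:
w = relative_fitness(a_initial=5e5, a_final=8e7, b_initial=5e5, b_final=2e7)
print(f"relative fitness of A vs. B: {w:.2f}")  # > 1: A won the competition
```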
    They found that the non-minimal version of the bacterium easily outcompeted the unevolved minimal version. The minimal bacterium that had evolved for 300 days, however, did much better, effectively recovering all of the fitness that it had lost due to genome streamlining. The researchers identified the genes that changed the most during evolution. Some of these genes were involved in constructing the surface of the cell, while the functions of several others remain unknown.
    Understanding how organisms with simplified genomes overcome evolutionary challenges has important implications for long-standing problems in biology — including the treatment of clinical pathogens, the persistence of host-associated endosymbionts, the refinement of engineered microorganisms, and the origin of life itself. The research done by Lennon and his team demonstrates the power of natural selection to rapidly optimize fitness in the simplest autonomous organism, with implications for the evolution of cellular complexity. In other words, it shows that life finds a way.

  • From atoms to materials: Algorithmic breakthrough unlocks path to sustainable technologies

    New research by the University of Liverpool could signal a step change in the quest to design the new materials that are needed to meet the challenge of net zero and a sustainable future.
    In a paper published in the journal Nature, the Liverpool researchers show that a mathematical algorithm can predict, with provable guarantees, the structure of any material based solely on knowledge of the atoms that make it up.
    Developed by an interdisciplinary team of researchers from the University of Liverpool’s Departments of Chemistry and Computer Science, the algorithm systematically evaluates entire sets of possible structures at once, rather than considering them one at a time, to accelerate identification of the correct solution.
    This breakthrough makes it possible to identify those materials that can be made and, in many cases, to predict their properties. The new method was also demonstrated on quantum computers, which have the potential to solve many problems faster than classical computers and could therefore speed up the calculations even further.
    Our way of life depends on materials — “everything is made of something.” New materials are needed to meet the challenge of net zero, from batteries and solar absorbers for clean power to providing low-energy computing and the catalysts that will make the clean polymers and chemicals for our sustainable future.
    This search is slow and difficult because there are so many ways that atoms could be combined to make materials, and in particular so many structures that could form. In addition, materials with transformative properties are likely to have structures that are different from those that are known today, and predicting a structure that nothing is known about is a tremendous scientific challenge.
    Professor Matt Rosseinsky, from the University’s Department of Chemistry and Materials Innovation Factory, said: “Having certainty in the prediction of crystal structures now offers the opportunity to identify from the whole of the space of chemistry exactly which materials can be synthesised and the structures that they will adopt, giving us for the first time the ability to define the platform for future technologies.
    “With this new tool, we will be able to define how to use those chemical elements that are widely available and begin to create materials to replace those based on scarce or toxic elements, as well as to find materials that outperform those we rely on today, meeting the future challenges of a sustainable society.”
    Professor Paul Spirakis, from the University’s Department of Computer Science, said: “We managed to provide a general algorithm for crystal structure prediction that can be applied to a diversity of structures. Coupling local minimization to integer programming allowed us to explore the unknown atomic positions in the continuous space using strong optimization methods in a discrete space.
    “Our aim is to explore and use more algorithmic ideas in the nice adventure of discovering new and useful materials. Joining efforts of chemists and computer scientists was the key to this success.”
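    The flavor of that coupling, a systematic sweep over a whole discrete set of candidate atomic arrangements scored by an energy function, can be caricatured in a few lines. The toy sketch below is emphatically not the Liverpool algorithm: the small cubic grid of sites, the Lennard-Jones scoring and the exhaustive enumeration are illustrative stand-ins for the paper's integer programming formulation and chemically realistic energy models.

```python
# Toy structure search: score EVERY placement of n atoms on a small set
# of candidate sites, the brute-force analogue of a systematic discrete
# (integer programming) search. Hypothetical stand-in, not the Nature paper.
import itertools
import math

# Candidate sites: the corners of a unit cube (a tiny discretized space).
SITES = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]

def pair_energy(r: float) -> float:
    """Lennard-Jones 12-6 potential as a stand-in energy model."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def structure_energy(positions) -> float:
    """Total energy: sum of pair energies over all atom pairs."""
    return sum(pair_energy(math.dist(a, b))
               for a, b in itertools.combinations(positions, 2))

def best_structure(n_atoms: int):
    """Exhaustively evaluate all placements; nothing is left unexamined,
    which is what makes the answer certain for this toy model."""
    return min(itertools.combinations(SITES, n_atoms), key=structure_energy)

best = best_structure(4)
print("lowest-energy arrangement:", best)
print("energy:", round(structure_energy(best), 3))
```

    In the real method, the discrete stage is an optimization with optimality guarantees and the continuous stage locally relaxes atomic positions; the toy keeps only the "evaluate whole sets, certify the best" spirit.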
    The research team includes researchers from the University of Liverpool’s Departments of Computer Science and Chemistry, the Materials Innovation Factory and the Leverhulme Research Centre for Functional Materials Design, which was established to develop new approaches to the design of functional materials at the atomic scale through interdisciplinary research.
    This project has received funding from the Leverhulme Trust and the Royal Society.

  • Deciphering the thermodynamic arrow of time in large-scale complex networks

    Life, from the perspective of thermodynamics, is a system out of equilibrium that resists the tendency toward increasing disorder. In such a state, the dynamics are irreversible over time. This link between the tendency toward disorder and irreversibility was described as the arrow of time by the English physicist Arthur Eddington in 1927.
    Now, an international team including researchers from Kyoto University, Hokkaido University, and the Basque Center for Applied Mathematics, has developed a solution for temporal asymmetry, furthering our understanding of the behavior of biological systems, machine learning, and AI tools.
    “The study offers, for the first time, an exact mathematical solution of the temporal asymmetry — also known as entropy production — of nonequilibrium disordered Ising networks,” says co-author Miguel Aguilera of the Basque Center for Applied Mathematics.
    The researchers focused on a prototype of large-scale complex networks called the Ising model, a tool used to study recurrently connected neurons. When connections between neurons are symmetric, the Ising model is in a state of equilibrium and presents complex disordered states called spin glasses. The mathematical solution of this state led to the award of the 2021 Nobel Prize in Physics to Giorgio Parisi.
    Unlike living systems, however, spin glasses are in equilibrium and their dynamics are time-reversible. The researchers instead worked on the time-irreversible Ising dynamics caused by asymmetric connections between neurons.
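    The quantity being solved for can be illustrated numerically. The sketch below simulates a small kinetic Ising network with asymmetric couplings and estimates its entropy production as the average log-ratio of forward to time-reversed transition probabilities, a standard measure of irreversibility for Markov dynamics. The network size, coupling statistics and Monte Carlo estimator are illustrative assumptions; the paper's contribution is an exact analytical solution, not a simulation like this.

```python
# Entropy production (time irreversibility) of an asymmetric kinetic
# Ising network, estimated by simulation. Illustrative assumptions
# throughout; the study derives this quantity exactly.
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 20_000

# Asymmetric couplings (J != J.T) push the dynamics out of equilibrium;
# a symmetric J would give an equilibrium spin glass with zero entropy production.
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

def step(s):
    """Parallel Glauber update: P(s_i' = +1 | s) = sigmoid(2 * h_i)."""
    h = J @ s
    return np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-2.0 * h)), 1.0, -1.0)

def log_prob(s_next, s):
    """log P(s_next | s) = sum_i [s_next_i * h_i - log(2 cosh h_i)]."""
    h = J @ s
    return np.sum(s_next * h - np.log(2.0 * np.cosh(h)))

s = rng.choice([-1.0, 1.0], size=N)
sigma = 0.0
for _ in range(T):
    s_next = step(s)
    # Log-ratio of the forward transition to its time reversal; its
    # average over a stationary trajectory estimates entropy production.
    sigma += log_prob(s_next, s) - log_prob(s, s_next)
    s = s_next

print(f"entropy production per step: {sigma / T:.3f}")  # > 0: irreversible
```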
    The exact solutions obtained serve as benchmarks for developing approximate methods for learning artificial neural networks. The development of learning methods used in multiple phases may advance machine learning studies.
    “The Ising model underpins recent advances in deep learning and generative artificial neural networks. So, understanding its behavior offers critical insights into both biological and artificial intelligence in general,” added Hideaki Shimazaki at KyotoU’s Graduate School of Informatics.
    “Our findings are the result of an exciting collaboration involving insights from physics, neuroscience and mathematical modeling,” remarked Aguilera. “The multidisciplinary approach has opened the door to novel ways to understand the organization of large-scale complex networks and perhaps decipher the thermodynamic arrow of time.”

  • Growing bio-inspired polymer brains for artificial neural networks

    A new method for connecting neurons in neuromorphic wetware has been developed by researchers from Osaka University and Hokkaido University. The wetware comprises conductive polymer wires grown in a three-dimensional configuration by applying a square-wave voltage to electrodes submerged in a precursor solution. The voltage can modify wire conductance, allowing the network to be trained. The fabricated network is able to perform unsupervised Hebbian learning and spike-based learning.
    The development of neural networks to create artificial intelligence in computers was originally inspired by how biological systems work. These ‘neuromorphic’ networks, however, run on hardware that looks nothing like a biological brain, which limits performance. Now, researchers from Osaka University and Hokkaido University plan to change this by creating neuromorphic ‘wetware’.
    While neural-network models have achieved remarkable success in applications such as image generation and cancer diagnosis, they still lag far behind the general processing abilities of the human brain. In part, this is because they are implemented in software using traditional computer hardware that is not optimized for the millions of parameters and connections that these models typically require.
    Neuromorphic wetware, based on memristive devices, could address this problem. A memristive device is one whose resistance is set by its history of applied voltage and current. In this approach, electropolymerization is used to link electrodes immersed in a precursor solution using wires made of conductive polymer. The resistance of each wire is then tuned using small voltage pulses, resulting in a memristive device.
    “The potential to create fast and energy-efficient networks has been shown using 1D or 2D structures,” says senior author Megumi Akai-Kasaya. “Our aim was to extend this approach to the construction of a 3D network.”
    The researchers were able to grow polymer wires from a common polymer mixture called ‘PEDOT:PSS’, which is highly conductive, transparent, flexible, and stable. A 3D structure of top and bottom electrodes was first immersed in a precursor solution. The PEDOT:PSS wires were then grown between selected electrodes by applying a square-wave voltage on these electrodes, mimicking the formation of synaptic connections through axon guidance in an immature brain.
    Once the wire was formed, the characteristics of the wire, especially the conductance, were controlled using small voltage pulses applied to one electrode, which changes the electrical properties of the film surrounding the wires.
    “The process is continuous and reversible,” explains lead author Naruki Hagiwara, “and this characteristic is what enables the network to be trained, just like software-based neural networks.”
    The fabricated network was used to demonstrate unsupervised Hebbian learning (i.e., when synapses that often fire together strengthen their shared connection over time). What’s more, the researchers were able to precisely control the conductance values of the wires so that the network could complete its tasks. Spike-based learning, another approach to neural networks that more closely mimics the processes of biological neural networks, was also demonstrated by controlling the diameter and conductivity of the wires.
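    For readers who know software neural networks, the rule the wetware implements physically can be written in a few lines. The sketch below is a minimal software analogue of Hebbian learning (co-active units strengthen their connection), with a weight matrix standing in for wire conductances; it is an illustration of the principle, not the authors' training protocol.

```python
# Minimal Hebbian learning: units that "fire together, wire together."
# The weight matrix plays the role the polymer wires' conductances play
# in the wetware; all numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_units, eta = 8, 0.1
w = np.zeros((n_units, n_units))  # connection strengths (conductances)

for _ in range(100):
    x = (rng.random(n_units) < 0.3).astype(float)  # binary activity pattern
    w += eta * np.outer(x, x)      # strengthen co-active pairs
    np.fill_diagonal(w, 0.0)       # no self-connections

print("above-average connections:\n", (w > w.mean()).astype(int))
```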
    Next, by fabricating a chip with a larger number of electrodes and using microfluidic channels to supply the precursor solution to each electrode, the researchers hope to build a larger and more powerful network. Overall, the approach developed in this study is a big step toward realizing neuromorphic wetware and closing the gap between the cognitive abilities of humans and computers.

  • Antarctic sea ice has been hitting record lows for most of this year

    Something strange is happening to the Antarctic’s sea ice. The areal expanse of floating ice fringing the continent is not only at a record low for this time of year — surpassing a record set just in 2022 — but has been hitting record lows throughout the year.

    “What’s happened here is unlike the Arctic sea ice expanse,” says Mark Serreze, a climate scientist and the director of the U.S. National Snow and Ice Data Center, or NSIDC, in Boulder, Colo. We’ve come to expect a dramatic decline in sea ice at Earth’s other pole, he says (SN: 9/25/19). “Not much has happened to Antarctica’s sea ice until the last few years. But it’s just plummeted.”

    NSIDC uses satellite data, collected daily, to track the spread of sea ice at both poles. Throughout most of 2023, the ring of sea ice around Antarctica has repeatedly set new record lows, staying well below the average extent from 1981 to 2010. On February 21 — the height of the Southern Hemisphere’s summer — the sea ice expanse hit 1.79 million square kilometers, an all-time low since record-keeping began in 1978. That’s 130,000 square kilometers — about the size of the state of New York — smaller than the previous recorded minimum, reached on February 25, 2022.

    [Chart: Subpar sea ice. The amount of ocean around Antarctica covered in sea ice in 2023 (red) has stayed well below the average from 1981 to 2010 (black). Sea ice expanse hit a record low in late February, surpassing a record set just in 2022 (blue). The sea ice expanse for every year from 1981 to 2021 is shown in gray.]

    Even as the Southern Hemisphere shifted into winter, Antarctic sea ice remained at record low levels. On June 27, the ice was dotted across about 11.7 million square kilometers of ocean. That’s about 2.6 million square kilometers below the 1981–2010 average, and about 1.2 million square kilometers below the previous lowest extent on record for June 27, set in 2022.
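    Comparisons like these are daily anomalies against a climatology: subtract the 1981-2010 average for a given calendar day from that day's observed extent. Below is a minimal sketch of that calculation with synthetic numbers; a real analysis would use NSIDC's daily sea ice index rather than the made-up values here.

```python
# Daily sea ice extent anomaly versus a 1981-2010 climatology.
# Synthetic data for illustration; real extents come from NSIDC.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1981, 2024)
# extent[year_index, day_of_year]: daily extent in million km^2 (synthetic)
extent = 12.0 + rng.normal(0.0, 0.5, size=(years.size, 365))

baseline = (years >= 1981) & (years <= 2010)
climatology = extent[baseline].mean(axis=0)  # per-day 1981-2010 mean

day = 177  # June 27 (0-indexed day of year in a non-leap year)
anomaly = extent[years == 2023, day][0] - climatology[day]
print(f"June 27 anomaly vs. 1981-2010 mean: {anomaly:+.2f} million km^2")
```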

    Unlike in the Arctic, where the dwindling of the ice is known to be closely tied to global warming, the reasons for changes in Antarctic sea ice extent have been harder to parse. That difficulty has made it unclear whether the changes are the result of natural variability or whether “something big has changed,” Serreze says.

    [Map: As of June 28, the sea ice surrounding Antarctica, as measured by satellite, covered a smaller area of ocean than the average extent from 1981 to 2010 for this time of year. Yellow lines and dots represent missing satellite data. U.S. National Snow and Ice Data Center]

    The last few years have given scientists pause (SN: 6/27/17). “We’re kind of dropping off an edge,” Serreze says. It’s not yet clear whether this year’s extent is part of a larger trend, he notes. But “the longer that persists, the more likely it is that something big is happening.”

    The Arctic and the Antarctic regions are polar opposites, so to speak, in their geographic setting. Ice in the Arctic Ocean is confined to a relatively small body of water ringed by land. The Antarctic, by contrast, is a landmass surrounded by ocean, which means the sea ice around the continent is much more mobile than up north, with a larger seasonal range as it expands in the Southern Hemisphere’s winter and shrinks in summer. Climate simulations have, accordingly, consistently predicted that the Arctic would show bigger sea ice losses as the planet warms, at least at first, while Antarctica would be slower to respond.

    As to why the Antarctic ice has tracked so low this year, there are a few possible culprits. Regional climate patterns — particularly an air pressure pattern known as the Southern Annular Mode that shifts the direction of winds blowing around the continent — can pack or diffuse the sea ice cover around Antarctica. And other regional patterns, such as the El Niño Southern Oscillation, can affect both ocean and air circulation in the southern high latitudes.


    Right now, scientists are concerned most with what lies beneath the ice (SN: 12/13/21). “There’s growing evidence that there has been some kind of change in ocean circulation that is bringing more heat” to the region, which affects the ice cover, Serreze says. “There are a bunch of people looking into this; we’re really blitzing to get the data. We need to understand what the heck is going on in the ocean.”

  • ‘Workplace AI revolution isn’t happening yet,’ survey shows

    The UK risks a growing divide between organisations that have invested in new, artificial intelligence-enabled digital technologies and those that haven’t, new research suggests.
    Only 36% of UK employers have invested in AI-enabled technologies like industrial robots, chat bots, smart assistants and cloud computing over the past five years, according to a nationally representative survey from the Digital Futures at Work Research Centre (Digit). The survey was carried out between November 2021 and June 2022, with a second wave now underway.
    Academics at the University of Leeds, with colleagues at the Universities of Sussex and Cambridge, led the research, finding that just 10% of employers who hadn’t already invested in AI-enabled technologies were planning to invest in the next two years.
    The new data also points to a growing skills problem. Less than 10% of employers anticipated a need to make an investment in digital skills training in the coming years, despite 75% finding it difficult to recruit people with the right skills. Almost 60% of employers reported that none of their employees had received formal digital skills training in the past year.
    Lead researcher Professor Mark Stuart, Pro Dean for Research and Innovation at Leeds University Business School, said: “A mix of hope, speculation, and hype is fuelling a runaway narrative that the adoption of new AI-enabled digital technologies will rapidly transform the UK’s labour market, boosting productivity and growth. These hopes are often accompanied by fears about the consequences for jobs and even of existential risk.
    “However, our findings suggest there is a need to focus on a different policy challenge. The workplace AI revolution is not happening quite yet. Policymakers will need to address both low employer investment in digital technologies and low investment in digital skills, if the UK economy is to realise the potential benefits of digital transformation.”
    Stijn Broecke, Senior Economist at the Organisation for Economic Co-operation and Development (OECD), said: “At a time when AI is shifting digitalisation into a higher gear, it is important to move beyond the hype and have a debate that is driven by evidence rather than fear and anecdote. This new report by the Digital Futures at Work Research Centre (Digit) does exactly this and provides a nuanced picture of the impact of digital technologies on the workplace, highlighting both the risks and the opportunities.”
    The main reasons for investing were to improve efficiency, productivity, and product and service quality, according to the survey. The key reasons for not investing were the perceived irrelevance of AI to the business’s activity, wider business risks, and the nature of the skills demanded.
    There was little evidence in this survey to suggest that investing in AI-enabled technology leads to job losses. In fact, digital adopters were more likely to have increased their employment in the five-year period before the survey.
    As policymakers race to keep up with new developments in technology, the researchers are now urging politicians to focus on the facts of AI in the workplace.
    The Employers’ Digital Practices at Work Survey is a key output of the Digital Futures at Work Research Centre, which is funded by the Economic and Social Research Council (ESRC) and co-led by the University of Sussex Business School and Leeds University Business School. The First Findings report will be available on the Digit website on Tuesday 4 July.