More stories

  • Dying Light 2 review: Avoiding zombies in a game with nods to covid-19

    In Dying Light 2, a variant of a virus has turned people into zombies (Image: Techland)
    Dying Light 2 Stay Human
    Techland
    PC, PlayStation 4 and 5, Xbox One and Series X/S
    IN APRIL 2020, soon after the UK entered its first lockdown, I reviewed the zombie-packed Resident Evil 3, describing it as noticeably “pre-pandemic fiction”. Two years on, the pandemic is still going, and I am still playing zombie games. This time, it is Dying Light 2 Stay Human, and it is interesting to look at the game as a work of post-pandemic (mid-pandemic?) fiction.
    It is a sequel to the 2015 game Dying Light, which saw a viral outbreak in the fictional Middle Eastern city of Harran turn people into zombies. The end of the game promised a cure to the disease, but as the introduction of Dying Light 2 explains – and stop me if you have heard this before – a new variant of the virus emerged in 2021 and spread rapidly. The zombies took over and civilisation collapsed. Cheery stuff.
    The game picks up the story in 2036, where you play as a survivor called Aiden Caldwell. After being bitten by a zombie, you enter one of the last remaining outposts of society, known only as the City. There, you discover that all of the other survivors are also infected, but use a variety of tools to avoid zombification – hence the “Stay Human” part of the game’s title.
    Full zombies can’t survive in sunlight, so City folk have set up ultraviolet lamps to hold back the infection. One of your early goals in the game is to acquire a wristband that provides an alert when you need a top-up of UV. Owning one of these wristbands is a condition of living in the City, perhaps a nod to the various covid passes that have been implemented around the world.
    “Aiden has expert parkour skills that allow him to scale buildings and dodge undesirable characters”
    With a wristband secured, the game settles into a rhythm. By day, you are more or less safe from zombies outside (though not from roving bandits), although it is risky to enter derelict buildings, where the undead tend to gather. Then, at night, the zombies hit the streets, so it is tricky to get around outside, but easier to explore within. Dodging zombies has its rewards: you get bonus experience points, which you can use to upgrade your abilities, handy for venturing out at night and for surviving a zombie chase.
    For reasons that are never properly explained, Aiden has expert parkour skills that allow him to scale buildings, jump across rooftops and generally dodge undesirable characters. In a strange game design decision, features that would usually be part of the basic move set in this kind of game (such as the ability to slide) require unlocking upgrades, so it takes a while to accumulate the full set of skills.
    That is a shame, because this freedom of movement is probably the best thing about the game. I had great fun racing through the city, but beyond the obvious covid-19 links, the meat of the game is nothing you haven’t seen before. Everything boils down to: go here, get this thing, kill these zombies, repeat.
    As you explore the city, you get the opportunity to claim various locations, such as a water tower, for one of three factions: the slightly fascist Peacekeepers, the anarchic Renegades or the ordinary survivors. You get to pick a side, and Techland, the game’s developer, goes big on the idea that which you choose matters to the (entirely forgettable) storyline. But two years into the pandemic, I was more inclined to stick with the ordinary survivors. It is hard not to sympathise with people who have lived through a world-altering disaster and are just trying their best to carry on existing.

  • How to grow strawberries and protect them from slugs

    Shop-bought strawberries can taste disappointing, but home grown ones are delicious. Here’s how to succeed in growing these delicate fruits, says Clare Wilson

    Humans

    2 March 2022

    By Clare Wilson
    (Image: GAP Photos/Julia Boulton)
    THERE are plenty of reasons people grow their own fruit and vegetables: it is a satisfying outdoor hobby, it gets you some exercise and the produce has low food miles, usually making it good for the planet too.
    Another reason is that many home-grown fruits and vegetables taste better than the ones on sale in shops. The difference is particularly noticeable for some types of produce, such as new potatoes, asparagus, tomatoes, strawberries, raspberries and blueberries.
    There can be several explanations. One is that the varieties grown by farmers are often different to those sold for home growing. Farmers use …

  • How to Stay Smart in a Smart World review: Why humans still trump AI

    Despite AI’s impressive feats at driving cars and playing games, a new book by psychologist Gerd Gigerenzer argues that our brains have plenty to offer that AI will never match

    Humans

    2 March 2022

    By Chen Ly

    IN THE 1950s, Herbert Simon – a political scientist and one of the founders of AI – declared that, once a computer could beat the best chess player in the world, machines would have reached the pinnacle of human intelligence. Just a few decades later, in 1997, the chess-playing computer Deep Blue beat world champion Garry Kasparov.
    It was an impressive feat, but according to Gerd Gigerenzer, a psychologist at the Max Planck Institute for Human Development in Berlin, human minds don’t need to worry just yet. In How to Stay Smart in a Smart World, he unpacks humanity’s complicated relationship with artificial intelligence and digital technology. In an age where self-driving cars have been let loose on the roads, smart homes can anticipate and cater for our every need and websites seem to know our preferences better than we do, people tend to “assume the near-omniscience of artificial intelligence”, says Gigerenzer. But, he argues, AIs aren’t as clever as you might think.
    A 2015 study, for example, showed that even the smartest object-recognition systems are easily fooled, classifying meaningless patterns as familiar objects with more than 99 per cent confidence. And at the 2017 UEFA Champions League final in Cardiff, UK, a face-recognition system matched the faces of 2470 football fans at the stadium and the city’s railway station to those of known criminals. That would have been useful, except that 92 per cent of the matches turned out to be false alarms – despite the system being designed to be both more efficient and more reliable than humans.
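    To put those percentages in absolute terms, here is a rough back-of-the-envelope sketch (the arithmetic below is illustrative only, not a calculation from the book):

    ```python
    # Back-of-the-envelope check of the Cardiff figures quoted above (illustrative
    # only, not a calculation from the book): 2470 flagged matches, of which
    # 92 per cent were reported to be false alarms.
    flagged_matches = 2470
    false_alarm_rate = 0.92

    false_alarms = round(flagged_matches * false_alarm_rate)  # people wrongly flagged
    true_matches = flagged_matches - false_alarms             # matches that were genuine

    print(false_alarms, true_matches)  # roughly 2272 false alarms against 198 real matches
    ```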
    There are good reasons why even the smartest systems fail, says Gigerenzer. Unlike chess, which has rules that are rigid and unchanging, the world of humans is squishy and inconsistent. In the face of real-world uncertainty, algorithms fall apart.
    Here, we get to the crux of Gigerenzer’s main argument: technology, at least as we know it today, could never replace humans because there is no algorithm for common sense. Knowing, but not truly understanding, leaves AI in the dark about what is really important.
    Obviously, technology can be, and often is, useful. The voice and face-recognition software on smartphones is largely convenient, and the fact that YouTube seems to know what I want to watch saves me the hassle of working it out for myself. Yet even if smart technology is mostly helpful, and is showing few signs of replacing us, Gigerenzer argues that we should still be aware of the dangers it can pose to our society.
    “Knowing, but not truly understanding, leaves artificial intelligence in the dark about what is really important”
    Digital technology has created an economy that trades on the exchange of personal data, which can be used against our best interests. Companies and political parties can purchase targeted adverts that subtly influence our online shopping choices and, even more nefariously, how we vote. “One might call this turn to an ad-based business model the ‘original sin’ of the internet,” writes Gigerenzer.
    So, what can be done? Gigerenzer says that more transparency from tech firms and advertisers is vital. But we, as technology users, also need to change our relationship with it. Rather than treating technology with unflinching awe or suspicion, we must cultivate a healthy dose of scepticism, he says. In an age where we seem to accept the rise of social media addiction, regular privacy breaches and the spread of misinformation as unavoidable downsides of internet use – even when they cause significant harm to society – it is perhaps time we took stock and reconsidered.
    Using personal anecdotes, cutting-edge research and cautionary real-world tales, Gigerenzer deftly explains the limits and dangers of technology and AI. Occasionally, he uses extreme examples for the sake of making a point, and in places he blurs the lines between digital technology, smart technology, algorithms and AI, which muddies the waters. Nevertheless, the overall message of Gigerenzer’s book still stands: in a world that increasingly relies on technology to make it function, human discernment is vital “to make the digital world a world we want to live in”.


  • Stonehenge may have been a giant calendar and now we know how it works

    The sarsen stones of the Stonehenge monument could have been designed as a calendar to track a solar year, with each of the stones in the large sarsen circle representing a day within a month

    Humans

    2 March 2022

    By Alison George
    Stonehenge – an ancient calendar? (Image: nagelestock.com/Alamy)
    Stonehenge has long been thought to be an ancient calendar due to its alignment with the summer and winter solstices, but exactly how the calendar system worked was a mystery. Now a new analysis shows that it could have functioned like the solar calendar used in ancient Egypt, based on a year of 365.25 days, with each of the stones of the large sarsen circle representing a day within a month.
    “It’s a perpetual calendar that recalibrates every winter solstice sunset,” says Tim Darvill of Bournemouth University, UK, who carried out the analysis. This would have enabled the ancient people who lived near the monument in what is now Wiltshire, UK, to keep track of days and months of the year.
    The key to unlocking this calendar system came from the discovery in 2020 that most of the sarsen stones were quarried from the same location 25 kilometres away, and were placed at Stonehenge at around the same time.
    “All except two of the sarsens at Stonehenge come from that single source, so the message to me was that they’ve got a unity to them,” says Darvill. This indicates that they were intended for a common purpose. To find out what, he looked for clues in the numbers.
    The sarsens were arranged in three different formations at Stonehenge around 2500 BC: 30 formed the large stone circle that dominates the monument, 4 “station stones” were placed in a rectangular formation outside this circle, and the rest were constructed into 5 trilithons – consisting of two vertical stones with a third stone laid horizontally across the top like a lintel – located inside the stone circle.
    “30, 5 and 4 are interesting numbers in a calendrical kind of sense,” says Darvill. “Those 30 uprights around the main sarsen ring at Stonehenge would fit very nicely as days of the month,” he says. “Multiply that by 12 and you get 360, add on another 5 from the central trilithons you get 365.” To keep the calendar in step with the solar year, an extra leap day must be added every four years, and Darvill thinks the four station stones may have been used to keep track of this. In this system, the summer and winter solstices would be framed every year by the same pair of stones.
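    Darvill’s description lends itself to a simple counting model. The sketch below is purely illustrative – the stone-to-day mapping and the winter-solstice starting point are assumptions drawn from his account, not details of the Antiquity paper:

    ```python
    # Purely illustrative sketch of Darvill's proposed counting scheme (the
    # stone-to-day mapping and winter-solstice start are assumptions, not details
    # from the paper): 30 sarsen-circle uprights as the days of a month, 12 such
    # months, 5 trilithons as extra mid-winter days, and the 4 station stones
    # tracking a leap day every 4 years.

    DAYS_PER_MONTH = 30    # uprights in the large sarsen circle
    MONTHS_PER_YEAR = 12
    EXTRA_DAYS = 5         # one per trilithon

    def year_length(year: int) -> int:
        """365 days, plus a leap day every fourth year (as the station stones might track)."""
        return DAYS_PER_MONTH * MONTHS_PER_YEAR + EXTRA_DAYS + (1 if year % 4 == 0 else 0)

    def marker_for(day_of_year: int) -> str:
        """Map a day, counted from the winter solstice, to a hypothetical stone marker."""
        ordinary_days = DAYS_PER_MONTH * MONTHS_PER_YEAR
        if day_of_year < ordinary_days:
            month, stone = divmod(day_of_year, DAYS_PER_MONTH)
            return f"month {month + 1}, sarsen-circle stone {stone + 1}"
        return f"intercalary day {day_of_year - ordinary_days + 1} (trilithon)"

    print(year_length(4))    # 366 in a leap year, otherwise 365
    print(marker_for(0))     # month 1, sarsen-circle stone 1
    print(marker_for(362))   # intercalary day 3 (trilithon)
    ```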
    This Stonehenge calendar system “makes a lot of sense,” says David Nash at the University of Brighton, UK. “I like the elegant simplicity of it.”
    Others are not so sure. “It’s certainly intriguing but ultimately it fails to convince,” says Mike Parker Pearson of University College London, UK. “The numbers don’t really add up – why should two uprights of a trilithon equal one upright of the sarsen circle to represent 1 day? There’s selective use of evidence to try to make the numbers fit.”

    Although a calendar with 30-day months and an extra “intercalary” month of five days might not be familiar to us today, such a system was used in ancient Egypt from around 2700 BC, and other solar calendars had been developed in the eastern Mediterranean region at around that time.
    In the Egyptian calendar these five extra days were “very significant, religiously speaking,” says Sacha Stern, an expert in ancient calendars at University College London. This has led Darvill to think that the five trilithon structures at Stonehenge might have marked a five-day mid-winter celebration, an idea bolstered by the fact that the tallest stone at the monument, part of one of the trilithons, points to the sunrise on the midwinter solstice.
    The similarity between the Stonehenge calendar and that used in ancient Egypt hints that the idea for the Stonehenge system may have come from afar. Recent archaeological finds support the idea of long-distance travel and trade around that time. Isotope analysis of the body of the Amesbury Archer, who was buried 5 kilometres from Stonehenge around 2300 BC, revealed that he was born in the Alps and came to Britain as a teenager, and a red glass bead found 2 kilometres from the monument appears to have been made in Egypt around 2000 BC.
    However, Stern is not convinced by the argument that the Stonehenge calendar system originated elsewhere. “I wonder if you need to invoke the Egyptians. Why can’t we just imagine that [the people who built Stonehenge] created the whole system by themselves? They certainly knew when the solstice was, and from that point onwards you just have to count the days, and it won’t take long to figure out how many days you need in the year.”
    Journal reference: Antiquity, DOI: 10.15184/aqy.2022.5


  • Would Vladimir Putin really use nuclear weapons in Ukraine?

    Russia’s invasion of Ukraine hasn’t gone to plan and has led to an economic backlash from the West. If Russian president Vladimir Putin feels backed into a corner, there is a real possibility he could use a nuclear weapon in an attempt to show strength, say analysts

    Humans | Analysis

    28 February 2022

    By Matthew Sparkes
    Vladimir Putin may feel increasingly isolated (Image: Russian Look Ltd./Alamy)
    Nuclear conflict is a distinct but remote possibility as global tensions are ratcheted up by Russia’s faltering invasion of Ukraine, warn analysts. Russian president Vladimir Putin is in a vulnerable and unpredictable position as he contends with a lacklustre economy, increasing dissent among his citizens and, now, the potential for military defeat.
    On 27 February, Putin raised Russia’s nuclear readiness system level by ordering his forces to take a “special regime of combat duty”. Patrick Bury at the University of Bath, UK, says this announcement was unusually vague, counter to the typical nuclear deterrence strategy of acting clearly and transparently as a warning to others. He and fellow academics and analysts assumed that the country would have been at level 2 of Russia’s four-level system already, given the situation in Ukraine.
    But Putin’s announcement is being widely interpreted as a move from level 1 (stood down) to level 2 (ready to accept an order to fire). Bury believes we are closer to nuclear conflict now than at any point since the cold war tension of the 1980s. “Putin has poked a sleeping giant,” he says. “The West has responded massively.”
    This response included Western nations sending weapons and aid to Ukraine, while stronger-than-expected economic sanctions from around the world are piling on the pressure against Putin. If Russia’s invasion now fails, he could be removed from power or even killed in a coup, which Bury warns is a situation that backs Putin into a corner.
    Bury puts the odds of a nuclear detonation as a result of this crisis at between 20 per cent and 30 per cent, but points out that it need not lead to all-out nuclear war. Instead, we could see a low-yield device used against the military in Ukraine, or even a large device detonated at sea simply as a show of force.
    David Galbreath at the University of Bath says that the conflict is about more than Ukraine: it is a flexing of Russian muscles against what Putin sees as the growing threat of cooperation in the European Union and the NATO military alliance.

    Galbreath says it was obvious in the build-up to the invasion that the types of personnel and weapons amassing at the border were the type one would deploy to quickly strike Kiev, the Ukrainian capital, oust Ukrainian president Volodymyr Zelenskyy and install a puppet leader – not those needed to occupy a country.
    If this was the plan, it has already failed, so we may now see Putin turn to the stronger military options available to him, such as electronic warfare that can cripple enemy surveillance and vehicles, and sophisticated anti-aircraft missiles that would prevent Ukraine from defending its airspace – currently it can still launch its aircraft, and dogfights with Russian aircraft continue. Nuclear weapons are also a possibility, but only as a last resort, says Galbreath.
    “In terms of military action, I think what we’ve seen so far is fairly limited. I think they’re going to get heavy next. And I think we need to prepare for far worse casualties,” says Kenton White at the University of Reading, UK.
    White points to Russia’s military tactic of maskirovka, or disinformation, which the country has already used during the invasion. In an extreme case, White says this could stretch to a false-flag operation, such as the detonation of a small nuclear bomb outside Ukraine’s border, which is blamed on NATO.
    “There’s a lot of talk about rationality of action when you’re discussing nuclear deterrence,” says White. “Well, President Putin has a rationality all of his own.”


  • Largest ever family tree of humanity reveals our species' history

    By Michael Marshall
    A visualisation of relationships between ancestors and descendants in the genealogy of modern and ancient genomes (Image: Wohns et al., 2022)
    Meet your relatives. A family tree of humanity has been constructed using genetic data from thousands of modern and prehistoric people. The tree gives a view of 2 million years of prehistory and evolution.
    “Humans are all ultimately related to each other,” says Gil McVean at the University of Oxford. “What I’ve long wanted to do is to be able to represent the totality of what we can learn about human history through this genealogy.”
    The new family tree suggests that our earliest roots were in north-east Africa. It also offers clues that people reached Papua New Guinea and the Americas tens of thousands of years earlier than the archaeological record implies, hinting at early migrations that haven’t yet been discovered. However, both these ideas would need to be confirmed by archaeologists.
    Geneticists have been reading people’s entire genomes for the past two decades. McVean and his colleagues compiled 3609 complete genomes, almost all of which belonged to our species, Homo sapiens, except for three Neanderthals and one from the Denisovan group, which may be a subspecies of H. sapiens or a separate species.
    Putting them together into a tree was challenging. “The different data sets have been produced over time, using different technologies, analysed in different ways,” says McVean.
    The team focused on bits of DNA that vary from person to person. They identified 6,412,717 variants and tried to figure out when and where each one arose. To do this, they also looked at an additional 3589 samples of ancient DNA that weren’t good enough to include in the tree, but did shed light on when the variants emerged.
    Variants that emerged before 72,000 years ago were most common in north-east Africa, and the oldest 100 variants were also from there, specifically in what is now Sudan. Those oldest variants are about 2 million years old, so they long predate our species, which emerged around 300,000 years ago. Instead, the variants date to the earliest members of our genus, Homo.
    The simplistic interpretation of this is that humanity first evolved in that region, but it is likely that subsequent migrations have interfered with the data. “I would definitely not take the naive and immediate answer,” says Jennifer Raff at the University of Kansas.
    The earliest H. sapiens fossils are from the north and east of Africa, but few have been discovered, so we don’t know our species’ early range with any certainty. The oldest known specimens are from Jebel Irhoud in Morocco, in north Africa, and are perhaps 315,000 years old. The next oldest are those from Omo-Kibish in Ethiopia, in the east. They were thought to be 197,000 years old, but a paper published in January presented evidence that they are more like 233,000 years old.
    Many anthropologists now think there were multiple populations spread across Africa, which were sometimes separated and sometimes interbred. If that is correct, humanity doesn’t have a central origin point. “Our findings are certainly perfectly compatible with that,” says McVean. “There’s a lot of very deep lineages within Africa, which are suggestive of that notion of there being multiple source populations, very deeply diverged, representing really ancient splits.”

    In line with this, a second study published this week obtained ancient DNA from six sub-Saharan African people who lived within the past 18,000 years. They carried DNA from three distinct lineages that originated in the distant past, from eastern, central and southern Africa. These groups began interbreeding more around 50,000 years ago, but by 20,000 years ago this largely stopped.
    The new genealogy also contains hints of early journeys. It suggests that people were living in Papua New Guinea 140,000 years ago, almost 100,000 years before the earliest documented inhabitants. Similarly, it indicates that people were in the Americas 56,000 years ago, despite many archaeologists having settled on 18,000 years ago as the earliest entrance.
    The idea of people in the Americas earlier than this is controversial because, prior to that, great ice sheets covered the northern regions, blocking migration. Nevertheless, a study from September 2021 described footprints from White Sands National Park in New Mexico, which suggest humans were in the Americas between 23,000 and 21,000 years ago. There is also disputed evidence of humans living in Chiquihuite cave in Mexico as much as 33,000 years ago. But 56,000 years ago is still a big reach.
    “I think there are three possible explanations,” says McVean. “One is, we’re wrong.” The second is that people really were in these places very early.
    The third option is a more complex scenario. The first people to live in the Americas came from eastern Asia, and it may be that the population from which they came has died out in Asia. This would mean the oldest American-looking genetic variants are actually from people who lived in Asia – but the only living people with those variants today are in America, throwing off the analysis. A similar story could have played out for Papua New Guinea.
    “It’s very common in our genetic data that there are ancient lineages which don’t persist throughout time,” says Raff. “That’s completely plausible.”
    Journal reference: Science, DOI: 10.1126/science.abi8264


  • The Man Who Tasted Words review: Inside the odd world of human senses

    A new book by neurologist Guy Leschziner looks at the astonishing ways some people’s brains interpret the world, offering insight into how we all experience reality

    Humans

    23 February 2022

    By Carissa Wong

    Tasting words is one possible outcome of crossed sensory wires in the brain (Image: Shutterstock/Brian Mueller)
    The Man Who Tasted Words: Inside the strange and startling world of our senses
    Guy Leschziner
    Simon & Schuster UK

    VALERIA was 14 years old when she realised that most people don’t see colours and feel textures when they listen to music. Now in her mid-20s, when she plays a piano, bright oranges, purples and yellows flow in and out of her sight, accompanied by fleeting feelings of warmth on her face, an ocean breeze or a sharp sensation around the spine.
    Valeria has synaesthesia, a phenomenon in which stimulation of one sense generates sensations of another. People with synaesthesia can’t control how their senses join up and many can’t imagine living with any other form of perception.
    In The Man Who Tasted Words, neurologist Guy Leschziner explores how the senses, and the neural circuits that underlie them, shape our view of the world. By introducing us to people with rare sensory capabilities such as Valeria, Leschziner highlights that there is no “normal” perception of reality because what we perceive as being “out there” in the world is entirely generated by activity in our brains.
    The book title is inspired by James, another synaesthete, who associates words with flavours. In James’s world, a trip on the London Underground is an uncontrollable buffet of flavours. Holborn station tastes of burnt matches and Liverpool Street of liver and onions.
    Leschziner alludes to the fact that synaesthesia tends to run in families, but stops short of a satisfying deep dive into the research on how synaesthesia is linked to genetically determined structural changes in the brain.
    As well as chronicling the experiences of people like Valeria and James, who have experienced the world in unusual ways since birth, Leschziner explores cases of sensory alteration that have affected people following illness or injury. Each case reads like a short detective story, with puzzling symptoms pieced together from Leschziner’s perspective as their neurologist, supported by quotes from the individuals themselves.
    We meet Alison, whose taste for trout while holidaying in Fiji led to a type of nervous system poisoning that reversed her sense of hot and cold. A sip of icy water now causes her lips to burn, while a warm shower feels freezing cold.
    We encounter Nina, who lost almost all her sight after a bout of flu as a toddler caused damaging inflammation in her eyeballs. Starved of visual inputs from her eyes, her brain now hallucinates colourful shapes, cartoons and sometimes zombie faces. This is a condition called Charles Bonnet syndrome, which Nina finds distracting and sometimes upsetting, but is, she says, preferable to darkness.
    “Each case of sensory alteration reads like a detective story, with puzzling symptoms pieced together”
    Leschziner also meets Paul, who can’t feel pain due to a genetic mutation that affects his sensory nerves. He has had a lifetime of injuries caused by a low aversion to danger because he can’t feel the painful consequences of risky behaviour. This has led to so many bone and joint problems that his movement is restricted. Leschziner explores the emotional toll of the condition on Paul and his parents, who lost a 13-month-old daughter with the same condition after her sepsis went undetected because she wasn’t showing signs of distress.
    This book is packed with examples of remarkable perception, but it doesn’t stop there. Leschziner also considers how our senses affect the way we all live our lives. Smell, for example, plays a role in our choice of partners in ways that have driven the evolution of our species. He also touches on more philosophical questions, such as how we know what the world is really like, given that we can’t say with any certainty that our experience of it is anything like that of other people.
    Throughout the book, Leschziner makes it clear that every person’s reality is as valid as the next. There are, however, moments where he seemingly assumes that the reader experiences all five senses – and in the “normal” way. At other points, there is unnecessary repetition, which detracts from the message he is trying to get across.
    Overall, though, Leschziner provides a thought-provoking journey through the fundamental role our senses play in our experience of life and punctures the illusion that our window on the world is the unflinching truth. The fact that it is anything but only makes it more magical.


  • Science needs to address its imagination problem – lives depend on it

    Almost 200 people died in the German floods of 2021 because experts couldn’t convince them of impending danger. We must rethink how to get through to the public, says hydrologist Hannah Cloke

    Humans | Comment

    23 February 2022

    By Hannah Cloke
    (Image: Simone Rotella)
    IMAGINATION is one of those powerful human traits that sets us apart from other animals. By reading the word “circus”, your brain automatically conjures up a rich tableau of images and ideas. But you don’t need to be daydreaming of clowns to know that imagination plays a vital role in science.
    The advancement of this domain intrinsically requires the birth of new ideas. Einstein famously claimed that imagination was more important than knowledge in the formulation of his theories. When researchers test ideas against reality, imagination is hardwired into the process: the point of science is that it allows you to see the future, to look round corners, to extend the capability of human insight. In that sense, imagination in science is alive and well.
    But in another sense, it has an imagination problem. I recently gave evidence to two state-level inquiries in Germany into the July 2021 floods in the west of the country. Both inquiries are exploring why almost 200 people died there in a deluge that was forecast accurately several days in advance. It is a complicated question that will probably yield many answers. I believe a lack of imagination may be partly behind this.
    The scientists couldn’t imagine that their forecasts, delivered in good time and with accuracy, could be ignored. Municipal authorities couldn’t imagine that such dire forecasts might be correct. And many of the people living in harm’s way just couldn’t imagine what a 9-metre wall of water would do, or how badly they would be affected.
    The best scientists use many of their human abilities – imagination and creativity, collaboration, communication and empathy – to make discoveries and reach new insights. Yet when it comes to telling people about them, we can turn into robots, unable to deliver important messages.
    All of the most compelling ideas are those conveyed to us in ways that we can see and picture and feel. The big bang is a conceptual theory that no one needs to grasp to stay alive, yet it fundamentally changed the understanding of our existence. If physicists were able to describe it only to other physicists, humanity would be all the poorer.
    Putting a human face on non-human phenomena can work too. There is good evidence that naming storms leads people to take action to protect themselves. In the UK, we have had plenty of exposure to this recently. The prospect of Corrie, Dudley or Eunice smashing into your home, as opposed to just seeing a generic warning of “gusts greater than 80mph”, engages your brain in a way that encourages a response.
    If naming storms works, then how about naming floods? Would people be more or less likely to respond to a warning and move to higher ground if a rising river was renamed Flood Dave? Such a label may be less precise, to a hydrologist, than saying that a 5-metre rise in river levels will lead to flooding with a return period of 20 years – but it is probably more useful to everybody else.
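    For readers unfamiliar with the jargon, a return period is just an annual probability stated another way, and a rough sketch shows why the framing matters (the figures below are illustrative, not from Cloke’s article):

    ```python
    # Illustrative only: translating a flood "return period" into the annual chance
    # of at least one such flood, which may be easier to picture. The 30-year
    # horizon below is an arbitrary example, not a figure from Cloke's article.
    return_period_years = 20                       # "a 1-in-20-year flood"
    annual_probability = 1 / return_period_years   # 0.05, i.e. 5 per cent in any given year

    years = 30                                     # e.g. the length of a mortgage
    prob_at_least_one = 1 - (1 - annual_probability) ** years

    print(f"{annual_probability:.0%} per year, {prob_at_least_one:.0%} over {years} years")
    # -> 5% per year, 79% over 30 years
    ```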
    As with the comet-spotting astronomers in the film Don’t Look Up, or the real-life climate scientists that it is based on, it is a tragedy to see danger ahead when no-one acts to avoid it. The most advanced supercomputers running complex simulations are useless if nobody understands the risks that they foretell.
    By ignoring imagination when we convey science, we are shirking our responsibility as scientists. If communicating our findings is important – and sometimes, lives depend on it – then we have a responsibility to undertake the task with as much flair, creativity and passion as we use when we do our research. Logic and reason is fine. But when we can’t move beyond the facts, people may die.

    Hannah Cloke is a hydrologist at the University of Reading in the UK (@hancloke)
