More stories

  • New study finds AI-generated empathy has its limits

    Conversational agents (CAs) such as Alexa and Siri are designed to answer questions, offer suggestions — and even display empathy. However, new research finds they do poorly compared to humans when interpreting and exploring a user’s experience.
    CAs are powered by large language models (LLMs) that ingest massive amounts of human-produced data, and thus can be prone to the same biases as the humans from whom the information comes.
    Researchers from Cornell University, Olin College and Stanford University tested this theory by prompting CAs to display empathy while conversing with or about 65 distinct human identities.
    The team found that CAs make value judgments about certain identities — such as gay and Muslim — and can be encouraging of identities related to harmful ideologies, including Nazism.
    “I think automated empathy could have tremendous impact and huge potential for positive things — for example, in education or the health care sector,” said lead author Andrea Cuadra, now a postdoctoral researcher at Stanford.
    “It’s extremely unlikely that it (automated empathy) won’t happen,” she said, “so it’s important that as it’s happening, we have critical perspectives so that we can be more intentional about mitigating the potential harms.”
    Cuadra will present “The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction” at CHI ’24, the Association for Computing Machinery conference on Human Factors in Computing Systems, May 11-18 in Honolulu. Research co-authors at Cornell University included Nicola Dell, associate professor; Deborah Estrin, professor of computer science; and Malte Jung, associate professor of information science.

    Researchers found that, in general, LLMs received high marks for emotional reactions, but scored low for interpretations and explorations. In other words, LLMs are able to respond to a query based on their training but are unable to dig deeper.
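    As a rough illustration of how one might score responses on those three dimensions, here is a minimal sketch of an LLM-as-judge rubric; the judge model, prompt wording and 0-2 scale are assumptions for illustration, not the study’s coding protocol.

```python
# Minimal sketch (assumed rubric, not the study's protocol): ask an LLM judge to
# rate a conversational agent's reply on the three empathy dimensions discussed
# above -- emotional reaction, interpretation, exploration -- each on a 0-2 scale.
import json
from openai import OpenAI  # assumes an API key is configured in the environment

client = OpenAI()

RUBRIC = (
    "Rate the REPLY to the USER message on three dimensions, each 0-2:\n"
    "  emotional_reaction: does it express warmth or concern?\n"
    "  interpretation: does it show understanding of the user's specific situation?\n"
    "  exploration: does it ask about or dig deeper into the experience?\n"
    'Answer with JSON like {"emotional_reaction": 2, "interpretation": 0, "exploration": 1}.'
)

def score_reply(user_msg: str, reply: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"USER: {user_msg}\nREPLY: {reply}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    print(score_reply(
        "I just lost my job and I don't know what to do.",
        "That sounds really stressful. I'm sorry you're going through this.",
    ))
```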
    Dell, Estrin and Jung said they were inspired to think about this work as Cuadra was studying the use of earlier-generation CAs by older adults.
    “She witnessed intriguing uses of the technology for transactional purposes such as frailty health assessments, as well as for open-ended reminiscence experiences,” Estrin said. “Along the way, she observed clear instances of the tension between compelling and disturbing ‘empathy.’”
    Funding for this research came from the National Science Foundation; a Cornell Tech Digital Life Initiative Doctoral Fellowship; a Stanford PRISM Baker Postdoctoral Fellowship; and the Stanford Institute for Human-Centered Artificial Intelligence. More

  • Researchers say future is bright for treating substance abuse through mobile health technologies

    Despite the high prevalence of substance abuse and its often devastating outcomes, especially among disadvantaged populations, few Americans receive treatment for substance use disorders. However, the rise of mobile health technologies can make treatments more accessible.
    Researchers at the University of Oklahoma are creating and studying health interventions delivered via smartphones to make effective, evidence-based treatments available to those who cannot or don’t want to enter traditional in-person treatment. Michael Businelle, Ph.D., co-director of the TSET Health Promotion Center, a program of OU Health Stephenson Cancer Center, recently published a paper in the Annual Review of Clinical Psychology that details the current landscape of mobile health technology for substance use disorders and suggests a roadmap for the future.
    The Health Promotion Research Center (HPRC) is at the forefront of mobile health technologies worldwide, having attracted $65 million in grants and supporting nearly 100 mobile health studies. Within HPRC, Businelle leads the mHealth Shared Resource, which launched the Insight™ mHealth Platform in 2015 to create and test technology-based interventions. A multitude of health apps are available commercially, but few have undergone the research necessary to determine if they are effective. Businelle sees the promise of rigorously tested smartphone apps to fill gaps in substance abuse treatment.
    “According to the Substance Abuse and Mental Health Services Administration, only 6% of people with substance use disorders receive any form of treatment,” Businelle said. “There are many reasons — we have a shortage of care providers, people may not have reliable transportation, may not be able to get away from work, or they may not be able to afford treatment. However, 90% of all U.S. adults own smartphones, and technology now allows us to create highly tailored interventions delivered at the time that people need them.”
    Businelle and his team have many mobile health studies underway for substance abuse, and the Insight™ mHealth Platform is used by other research institutions across the United States. The mobile health field is large and growing, not only for substance abuse but for mental health disorders like depression and anxiety. In his publication, Businelle makes several recommendations for research going forward.
    Re-randomize clinical trial participants
    Thus far, most clinical trials for mobile health interventions have mirrored traditional clinical trials studying new drugs, in which participants are randomly assigned to receive a new drug or a placebo and stay in those groups for the duration of the trial. But that approach doesn’t work well for substance abuse trials, Businelle said. For example, if people don’t quit smoking on their targeted quit date, they are unlikely to quit during the trial. Unlike traditional trials, mobile health apps can be programmed to re-randomize participants, or move them to a different intervention that might work better for them, he said.

    “Instead of being stuck receiving a treatment that we know isn’t likely to be effective for an individual, the app can easily re-randomize participants to different treatments,” he said. “Just because they weren’t successful with one type of intervention doesn’t mean that another one won’t work.”
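    As a sketch of what that re-randomization logic might look like inside a study app (the trigger rule, arm names and seven-day threshold are hypothetical, not those of an actual trial):

```python
# Minimal sketch of re-randomizing a non-responding participant to a different
# intervention arm. The trigger rule, arm names and threshold are hypothetical.
import random
from dataclasses import dataclass, field

ARMS = ["standard_messaging", "tailored_messaging", "phone_coaching_referral"]

@dataclass
class Participant:
    pid: str
    arm: str
    lapse_days: int = 0
    history: list = field(default_factory=list)

def daily_update(p: Participant, used_substance_today: bool) -> None:
    """Record today's self-report and re-randomize after 7 consecutive lapse days."""
    p.lapse_days = p.lapse_days + 1 if used_substance_today else 0
    if p.lapse_days >= 7:                      # hypothetical non-response trigger
        remaining = [a for a in ARMS if a != p.arm]
        p.history.append(p.arm)
        p.arm = random.choice(remaining)       # assign a different arm at random
        p.lapse_days = 0

if __name__ == "__main__":
    p = Participant(pid="P001", arm="standard_messaging")
    for day in range(10):
        daily_update(p, used_substance_today=True)
    print(p.arm, p.history)  # participant has been moved off the initial arm
```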
    Objectively verify self-reports
    Most substance abuse interventions have historically relied on people to report their own relapses. Unfortunately, because of stigma, people don’t always report their usage truthfully, Businelle said. However, technology can now be used to biochemically verify self-reported substance use. In six of his smoking cessation trials, Businelle verifies whether participants have smoked by asking them to blow into a small device connected to a smartphone that detects the presence of carbon monoxide. Facial recognition software confirms the participant is the one testing.
    “It is really important for the accuracy of our studies to objectively verify what people report,” he said. “We are also developing similar noninvasive technologies that can detect the use of other types of substances without collecting urine or blood samples.”
    What is a successful outcome?
    In mobile health substance abuse trials, success is often measured by whether a person is still using a substance at the end of the trial. But reality isn’t usually so straightforward. Businelle said people may stop and start using a substance several times during a six-month trial. Instead of emphasizing the end result, he recommends using technology to assess the effectiveness of an intervention at daily, weekly and monthly intervals. By understanding the number of days of abstinence or number of days until a relapse, for example, the intervention can be more accurately assessed and improved.
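    A minimal sketch of the kind of interval-based outcome summary described above, computed from daily self-reports (the input format is an assumption for illustration):

```python
# Sketch: summarize daily abstinence reports (True = abstinent) instead of only
# checking status at the end of the trial. Input format is assumed.
def summarize(daily_abstinent: list[bool]) -> dict:
    days_abstinent = sum(daily_abstinent)
    days_to_first_lapse = next(
        (i for i, ok in enumerate(daily_abstinent) if not ok),
        len(daily_abstinent),  # no lapse observed
    )
    # longest run of consecutive abstinent days
    longest, run = 0, 0
    for ok in daily_abstinent:
        run = run + 1 if ok else 0
        longest = max(longest, run)
    return {
        "days_abstinent": days_abstinent,
        "days_to_first_lapse": days_to_first_lapse,
        "longest_abstinent_streak": longest,
    }

if __name__ == "__main__":
    week = [True, True, False, True, True, True, False]
    print(summarize(week))
    # {'days_abstinent': 5, 'days_to_first_lapse': 2, 'longest_abstinent_streak': 3}
```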
    Mobile health technology has disadvantages: it can lack the therapeutic relationship that develops between patient and therapist, and some people may need more intensive treatment than mobile health can provide. However, mobile health is still in its infancy.
    “Mobile health interventions may reduce stigma because people do not have to attend treatment in person,” Businelle said. “Because there is a severe shortage of qualified therapists, always-available behavior change apps could become a first line of treatment for substance abuse, with traditional counseling being reserved for those who do not respond to mobile health interventions.” More

  • Stilling the quantum dance of atoms

    Researchers based at the University of Cambridge have discovered a way to stop the quantum dance of atoms ‘seen’ by electrons in carbon-based organic molecules. This development will help improve the performance of light emitting molecules used in displays and bio-medical imaging.
    Since the discovery of quantum mechanics more than a hundred years ago, it has been known that electrons in molecules can be coupled to the motion of the atoms that make up those molecules. In these molecular vibrations, as they are often called, the atoms act like tiny springs, undergoing periodic motion. For electrons in these systems, being joined at the hip to these vibrations means they are constantly in motion too, dancing to the tune of the atoms on timescales of a millionth of a billionth of a second. But all this dancing around leads to a loss of energy and limits the performance of organic molecules in applications such as organic light-emitting diodes (OLEDs), infrared sensors and the fluorescent biomarkers used to study cells and to tag diseases such as cancer.
    Now, researchers using laser-based spectroscopic techniques have discovered ‘new molecular design rules’ capable of halting this molecular dance. Their results, reported in Nature, revealed crucial design principles that can stop the coupling of electrons to atomic vibrations, in effect shutting down their hectic dancing and propelling the molecules to achieve unparalleled performance.
    “All organic molecules, such as those found in living cells or within the screen of your phone, consist of carbon atoms connected to each other via chemical bonds,” said Cavendish PhD student Pratyush Ghosh, first author of the study and member of St John’s College.
    “Those chemical bonds are like tiny vibrating springs, which are generally felt by electrons, impairing the performance of molecules and devices. However, we have now found that certain molecules can avoid these detrimental effects when we restrict the geometric and electronic structure of the molecule to some special configurations.”
    To demonstrate these design principles, the scientists designed a series of efficient near-infrared emitting (680-800 nm) molecules. In these molecules, energy losses resulting from vibrations — essentially, electrons dancing to the tune of atoms — were more than 100 times lower than in previous organic molecules.
    This understanding, and the development of new rules for designing light-emitting molecules, opens an extremely interesting trajectory for the future, in which these fundamental observations can be applied in industry.
    “These molecules also have a wide range of applications today. The task now is to translate our discovery to make better technologies, from enhanced displays to improved molecules for bio-medical imaging and disease detection,” concluded Professor Akshay Rao from Cavendish Laboratory, who led this research. More

  • Emergency department packed to the gills? Someday, AI may help

    UCSF-led study finds artificial intelligence is as good as a physician at prioritizing which patients need to be seen first.
    Emergency departments nationwide are overcrowded and overtaxed, but a new study suggests artificial intelligence (AI) could one day help prioritize which patients need treatment most urgently.
    Using anonymized records of 251,000 adult emergency department (ED) visits, researchers at UC San Francisco evaluated how well an AI model was able to extract symptoms from patients’ clinical notes to determine their need to be treated immediately. They then compared the AI analysis with the patients’ scores on the Emergency Severity Index, a 1-5 scale that ED nurses use when patients arrive to allocate care and resources by highest need, a process known as triage.
    The patients’ data were separated from their actual identities (de-identified) for the study, which publishes May 7, 2024, in JAMA Network Open. The researchers evaluated the data using the ChatGPT-4 large language model (LLM), accessing it via UCSF’s secure generative AI platform, which has broad privacy protections.
    The researchers tested the LLM’s performance on a sample of 10,000 matched pairs — 20,000 patients in total — in which each pair included one patient with a serious condition, such as stroke, and one with a less urgent condition, such as a broken wrist. Given only the patients’ symptoms, the AI identified which ED patient in the pair had the more serious condition 89% of the time.
    In a sub-sample of 500 pairs that were evaluated by a physician as well as the LLM, the AI was correct 88% of the time, compared to 86% for the physician.
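    As a rough sketch of the matched-pair comparison described above, one could give an LLM two de-identified symptom descriptions, ask which patient is more acute, and score agreement against the known triage labels. The prompt wording and model identifier below are assumptions, not the study’s protocol.

```python
# Sketch of a matched-pair acuity comparison with an LLM judge. Prompt wording,
# model identifier and data format are assumptions for illustration only; the
# study accessed a GPT-4 model through UCSF's secure generative AI platform.
import random
from openai import OpenAI  # assumes an API key is configured in the environment

client = OpenAI()

def more_acute(symptoms_a: str, symptoms_b: str) -> str:
    """Return 'A' or 'B' for whichever presentation the model judges more urgent."""
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {"role": "system",
             "content": "You are assisting with emergency department triage. "
                        "Answer with a single letter, A or B."},
            {"role": "user",
             "content": f"Which patient needs to be seen first?\nA: {symptoms_a}\nB: {symptoms_b}"},
        ],
    )
    return resp.choices[0].message.content.strip()[:1].upper()

def pairwise_accuracy(pairs) -> float:
    """pairs: list of (symptoms_of_more_acute_patient, symptoms_of_less_acute_patient)."""
    correct = 0
    for acute, less in pairs:
        if random.random() < 0.5:              # randomize presentation order
            correct += more_acute(acute, less) == "A"
        else:
            correct += more_acute(less, acute) == "B"
    return correct / len(pairs)
```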
    Having AI assist in the triage process could free up critical physician time to treat patients with the most serious conditions, while offering backup decision-making tools for clinicians who are juggling multiple urgent requests.

    “Imagine two patients who need to be transported to the hospital but there is only one ambulance. Or a physician is on call and there are three people paging her at the same time, and she has to determine who to respond to first,” said lead author Christopher Williams, MB, BChir, a UCSF postdoctoral scholar at the Bakar Computational Health Sciences Institute.
    Not quite ready for prime time
    The study is one of only a few to evaluate an LLM using real-world clinical data, rather than simulated scenarios, and is the first to use more than 1,000 clinical cases for this purpose. It’s also the first study to use data from visits to the emergency department, where there is a wide array of possible medical conditions.
    Despite its success within this study, Williams cautioned that AI is not ready to use responsibly in the ED without further validation and clinical trials.
    “It’s great to show that AI can do cool stuff, but it’s most important to consider who is being helped and who is being hindered by this technology,” said Williams. “Is just being able to do something the bar for using AI, or is it being able to do something well, for all types of patients?”
    One important issue to untangle is how to eliminate bias from the model. Previous research has shown these models may perpetuate racial and gender biases in health care, due to biases within the data used to train them. Williams said that before these models can be used, they will need to be modified to strip out that bias.
    “First we need to know if it works and understand how it works, and then be careful and deliberate in how it is applied,” Williams said. “Upcoming work will address how best to deploy this technology in a clinical setting.” More

  • New super-pure silicon chip opens path to powerful quantum computers

    Researchers at the Universities of Melbourne and Manchester have invented a breakthrough technique for manufacturing highly purified silicon that brings powerful quantum computers a big step closer.
    The new technique to engineer ultra-pure silicon makes it the perfect material to make quantum computers at scale and with high accuracy, the researchers say.
    Project co-supervisor Professor David Jamieson, from the University of Melbourne, said the innovation – published today in Communications Materials, a Nature journal – uses qubits made of phosphorus atoms implanted into crystals of pure, stable silicon and could overcome a critical barrier to quantum computing by extending the duration of notoriously fragile quantum coherence.
    “Fragile quantum coherence means computing errors build up rapidly. With robust coherence provided by our new technique, quantum computers could solve in hours or minutes some problems that would take conventional or ‘classical’ computers – even supercomputers – centuries,” Professor Jamieson said.
    Quantum bits or qubits* – the building blocks of quantum computers – are susceptible to tiny changes in their environment, including temperature fluctuations. Even when operated in tranquil refrigerators near absolute zero (minus 273 degrees Celsius), current quantum computers can maintain error-free coherence for only a tiny fraction of a second.  
    University of Manchester co-supervisor Professor Richard Curry said ultra-pure silicon allowed construction of high-performance qubit devices – a critical component required to pave the way towards scalable quantum computers.  
    “What we’ve been able to do is effectively create a critical ‘brick’ needed to construct a silicon-based quantum computer. It’s a crucial step to making a technology that has the potential to be transformative for humankind,” Professor Curry said. 
    Lead author Ravi Acharya, a joint University of Manchester/University of Melbourne Cookson Scholar, said the great advantage of silicon chip quantum computing is that it relies on the same essential fabrication techniques used to make the chips in today’s computers.

    “Electronic chips currently within an everyday computer consist of billions of transistors — these can also be used to create qubits for silicon-based quantum devices. The ability to create high quality silicon qubits has in part been limited to date by the purity of the silicon starting material used. The breakthrough purity we show here solves this problem.” 
    Professor Jamieson said the new highly purified silicon computer chips house and protect the qubits so they can sustain quantum coherence much longer, enabling complex calculations with greatly reduced need for error correction.
    “Our technique opens the path to reliable quantum computers that promise step changes across society, including in artificial intelligence, secure data and communications, vaccine and drug design, and energy use, logistics and manufacturing,” he said.
    Silicon – made from beach sand – is the key material for today’s information technology industry because it is an abundant and versatile semiconductor: it can act as a conductor or an insulator of electrical current, depending on which other chemical elements are added to it.
    “Others are experimenting with alternatives, but we believe silicon is the leading candidate for quantum computer chips that will enable the enduring coherence required for reliable quantum calculations,” Professor Jamieson said.
    “The problem is that while naturally occurring silicon is mostly the desirable isotope silicon-28, there’s also about 4.5 percent silicon-29. Silicon-29 has an extra neutron in each atom’s nucleus that acts like a tiny rogue magnet, destroying quantum coherence and creating computing errors,” he said.

    The researchers directed a focused, high-speed beam of pure silicon-28 at a silicon chip so the silicon-28 gradually replaced the silicon-29 atoms in the chip, reducing silicon-29 from 4.5 per cent to two parts per million (0.0002 per cent). 
    “The great news is to purify silicon to this level, we can now use a standard machine – an ion implanter – that you would find in any semiconductor fabrication lab, tuned to a specific configuration that we designed,” Professor Jamieson said.
    In previously published research with the ARC Centre of Excellence for Quantum Computation and Communication Technology, the University of Melbourne set – and still holds – the world record for single-qubit coherence of 30 seconds using less-purified silicon. Thirty seconds is plenty of time to complete error-free, complex quantum calculations.
    Professor Jamieson said the largest existing quantum computers had more than 1,000 qubits, but errors occurred within milliseconds due to lost coherence.
    “Now that we can produce extremely pure silicon-28, our next step will be to demonstrate that we can sustain quantum coherence for many qubits simultaneously. A reliable quantum computer with just 30 qubits would exceed the power of today’s supercomputers for some applications,” he said.
    This latest work was supported by research grants from the Australian and UK governments.  Professor Jamieson’s collaboration with the University of Manchester is supported by a Royal Society Wolfson Visiting Fellowship.
    A 2020 report from Australia’s CSIRO estimated that quantum computing in Australia has potential to create 10,000 jobs and $2.5 billion in annual revenue by 2040.
    “Our research takes us significantly closer to realising this potential,” Professor Jamieson said.
    *A qubit – such as an atomic nucleus, electron, or photon – is a quantum object when it is in a quantum superposition of multiple states. Coherence is lost when the qubit reverts to a single state and becomes a classical object like a conventional computer bit, which is only ever one or zero and never in superposition. More

  • Engineers develop innovative microbiome analysis software tools

    Since the first microbial genome was sequenced in 1995, scientists have reconstructed the genomic makeup of hundreds of thousands of microorganisms and have even devised methods to take a census of bacterial communities on the skin, in the gut, or in soil, water and elsewhere based on bulk samples, leading to the emergence of a relatively new field of study known as metagenomics.
    Parsing through metagenomic data can be a daunting task, much like trying to assemble several massive jigsaw puzzles with all of the pieces jumbled together. Taking on this unique computational challenge, Rice University graph-artificial intelligence (AI) expert Santiago Segarra and computational biologist Todd Treangen paired up to explore how AI-powered data analysis could help craft new tools to supercharge metagenomics research.
    The scientist duo zeroed in on two types of data that make metagenomic analysis particularly challenging — repeats and structural variants — and developed tools for handling these data types that outperform current methods.
    Repeats are identical DNA sequences occurring repeatedly both throughout the genome of single organisms and across multiple genomes in a community of organisms.
    “The DNA in a metagenomic sample from multiple organisms can be represented as a graph,” said Segarra, assistant professor of electrical and computer engineering. “Essentially, one of the tools we developed leverages the structure of this graph in order to determine which pieces of DNA appear repeatedly either across microbes or within the same microorganism.”
    Dubbed GraSSRep, the method combines self-supervised learning, a machine learning process where an AI model trains itself to distinguish between hidden and available input, and graph neural networks, systems that process data representing objects and their interconnections as graphs. The peer-reviewed paper was presented at the 28th edition of a leading annual international conference on research in computational molecular biology, RECOMB 2024. The project was led by Rice graduate student and research assistant Ali Azizpour. Advait Balaji, a Rice doctoral alumnus, is also an author on the study.
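    As a toy illustration of the graph view Segarra describes (not GraSSRep itself), assembled DNA fragments can be treated as nodes and sequence overlaps as edges; repeat-like fragments then stand out through simple graph properties such as unusually high degree and coverage, which a self-supervised model could use as noisy pseudo-labels.

```python
# Toy illustration of the assembly-graph view described above -- not GraSSRep.
# Contigs are nodes, sequence overlaps are edges; fragments that occur in many
# genomic contexts (repeats) tend to have unusually high degree and coverage.
import networkx as nx

def pseudo_label_repeats(overlaps, coverage, degree_factor=1.5, cov_factor=2.0):
    """overlaps: iterable of (contig_u, contig_v) edges; coverage: {contig: read depth}."""
    g = nx.Graph()
    g.add_nodes_from(coverage)
    g.add_edges_from(overlaps)
    mean_deg = sum(dict(g.degree()).values()) / g.number_of_nodes()
    mean_cov = sum(coverage.values()) / len(coverage)
    return {
        n: (g.degree(n) > degree_factor * mean_deg and coverage[n] > cov_factor * mean_cov)
        for n in g.nodes
    }

if __name__ == "__main__":
    edges = [("c1", "r"), ("c2", "r"), ("c3", "r"), ("c4", "r"), ("c1", "c2")]
    cov = {"c1": 30, "c2": 28, "c3": 31, "c4": 29, "r": 120}
    print(pseudo_label_repeats(edges, cov))  # 'r' is flagged as repeat-like
```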
    Repeats are of interest because they play a significant role in biological processes such as bacterial response to changes in their environment or microbiomes’ interaction with host organisms. A specific example of a phenomenon where repeats can play a role is antibiotic resistance. Generally speaking, tracking repeats’ history or dynamics in a bacterial genome can shed light on microorganisms’ strategies for adaptation or evolution. What’s more, repeats can sometimes actually be viruses in disguise, or bacteriophages. From the Greek word for “devour,” phages are sometimes used to kill bacteria.

    “These phages actually show up looking like repeats, so you can track bacteria-phage dynamics based off the repeats contained in the genomes,” said Treangen, associate professor of computer science. “This could provide clues on how to get rid of hard-to-kill bacteria, or paint a clearer picture of how these viruses are interacting with a bacterial community.”
    Previously when a graph-based approach was used to carry out repeat detection, researchers used predefined specifications for what to look for in the graph data. What sets GraSSRep apart from these prior approaches is the lack of any such predefined parameters or references informing how the data is processed.
    “Our method learns how to better use the graph structure in order to detect repeats as opposed to relying on initial input,” Segarra said. “Self-supervised learning allows this tool to train itself in the absence of any ground truth establishing what is a repeat and what is not a repeat. When you’re handling a metagenomic sample, you don’t need to know anything about what’s in there to analyze it.”
    The same is true in the case of another metagenomic analysis method co-developed by Segarra and Treangen — reference-free structural variant detection in microbiomes via long-read coassembly graphs, or rhea. Their peer-reviewed paper on rhea will be presented at the International Society for Computational Biology’s annual conference, which will take place July 12-16 in Montreal. The lead author on the paper is Rice computer science doctoral alumna Kristen Curry, who will be joining the lab of Rayan Chikhi — also a co-author on the paper — at the Institut Pasteur in Paris as a postdoctoral scientist.
    While GraSSRep is designed to deal with repeats, rhea handles structural variants, which are genomic alterations of 10 base pairs or more that are relevant to medicine and molecular biology due to their role in various diseases, gene expression regulation, evolutionary dynamics and promoting genetic diversity within populations and among species.
    “Identifying structural variants in isolated genomes is relatively straightforward, but it’s harder to do so in metagenomes where there’s no clear reference genome to help categorize the data,” Treangen said.

    Currently, one of the most widely used approaches to processing metagenomic data is to build metagenome-assembled genomes, or MAGs.
    “These de novo or reference-guided assemblers are pretty well-established tools that entail a whole operational pipeline with repeat detection or structural variants’ identification being just some of their functionalities,” Segarra said. “One thing that we’re looking into is replacing existing algorithms with ours and seeing how that can improve the performance of these very widely used metagenomic assemblers.”
    Rhea does not need reference genomes or MAGs to detect structural variants, and it outperformed methods relying on such prespecified parameters when tested against two mock metagenomes.
    “This was particularly noticeable because we got a much more granular read of the data than we did using reference genomes,” Segarra said. “The other thing that we’re currently looking into is applying the tool to real-world datasets and seeing how the results relate back to biological processes and what insights this might give us.”
    Treangen said GraSSRep and rhea combined — building on previous contributions in the area — have the potential “to unlock the underlying rules of life governing microbial evolution.”
    The projects are the result of a yearslong collaboration between the Segarra and Treangen labs. More

  • Researchers use foundation models to discover new cancer imaging biomarkers

    Researchers at Mass General Brigham have harnessed the technology behind foundation models, which power tools like ChatGPT, to discover new cancer imaging biomarkers that could transform how patterns are identified from radiological images. Improved identification of such patterns can greatly impact the early detection and treatment of cancer.
    The research team developed their foundation model using a comprehensive dataset consisting of 11,467 images of abnormal radiologic scans. Using these images, the model was able to identify patterns that predict anatomical site, malignancy, and prognosis across three different use cases in four cohorts. Compared to existing methods in the field, their approach remained powerful when applied to specialized tasks where only limited data are available. Results are published in Nature Machine Intelligence.
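    The general pattern at work, reusing a pretrained encoder when labeled data are scarce, can be sketched as follows; the encoder, preprocessing and data here are generic stand-ins, not the Mass General Brigham model or dataset.

```python
# Sketch of "linear probing": extract features from a frozen pretrained encoder
# and fit a lightweight classifier for a small labeled cohort. The encoder and
# data below are generic stand-ins, not the model or cohorts from the study.
import torch
import torchvision
from sklearn.linear_model import LogisticRegression

encoder = torchvision.models.resnet50(weights="IMAGENET1K_V2")
encoder.fc = torch.nn.Identity()          # keep the 2048-d feature vector
encoder.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) tensor, already preprocessed."""
    return encoder(images)

if __name__ == "__main__":
    # Tiny synthetic "cohort" just to show the shapes; real inputs would be
    # preprocessed slices from radiologic scans.
    images = torch.randn(40, 3, 224, 224)
    labels = torch.randint(0, 2, (40,)).numpy()   # e.g. malignant vs. benign
    feats = embed(images).numpy()
    clf = LogisticRegression(max_iter=1000).fit(feats, labels)
    print("training accuracy:", clf.score(feats, labels))
```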
    “Given that image biomarker studies are tailored to answer increasingly specific research questions, we believe that our work will enable more accurate and efficient investigations,” said first author Suraj Pai from the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham.
    Despite the improved efficacy of AI methods, a key question remains their reliability and explainability (the concept that an AI’s answers can be explained in a way that “makes sense” to humans). The researchers demonstrated that their methods remained stable across inter-reader variations and differences in acquisition. Patterns identified by the foundation model also demonstrated strong associations with underlying biology, mainly correlating with immune-related pathways.
    “Our findings demonstrate the efficacy of foundation models in medicine when only limited data might be available for training deep learning networks, especially when applied to identifying reliable imaging biomarkers for cancer-associated use cases,” said senior author Hugo Aerts, PhD, director of the AIM Program. More

  • Why getting in touch with our ‘gerbil brain’ could help machines listen better

    Macquarie University researchers have debunked a 75-year-old theory about how humans determine where sounds are coming from, and it could unlock the secret to creating a next generation of more adaptable and efficient hearing devices ranging from hearing aids to smartphones.
    In the 1940s, an engineering model was developed to explain how humans can locate a sound source based on differences of just a few tens of millionths of a second in when the sound reaches each ear.
    This model worked on the theory that we must have a set of specialised detectors whose only function was to determine where a sound was coming from, with location in space represented by a dedicated neuron.
    Its assumptions have been guiding and influencing research — and the design of audio technologies — ever since.
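    To make the underlying measurement concrete, here is a minimal sketch of estimating the interaural time difference from two ear signals by cross-correlation. This is a generic textbook-style calculation, not the Macquarie team's method or the 1940s model itself.

```python
# Sketch: estimate the interaural time difference (ITD) between left- and
# right-ear signals by cross-correlation. Generic illustration only.
import numpy as np

def estimate_itd(left: np.ndarray, right: np.ndarray, fs: float, max_itd=800e-6) -> float:
    """Return the delay (seconds) of the right signal relative to the left."""
    max_lag = int(max_itd * fs)                 # physiological ITDs are under ~1 ms
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
    return lags[int(np.argmax(corr))] / fs

if __name__ == "__main__":
    fs = 96_000                                  # sample rate in Hz
    t = np.arange(0, 0.05, 1 / fs)
    source = np.sin(2 * np.pi * 500 * t)         # 500 Hz tone
    delay_samples = 29                           # about 300 microseconds at 96 kHz
    left = source
    right = np.roll(source, delay_samples)       # sound reaches the right ear later
    print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e6:.0f} microseconds")
```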
    But a new research paper published in Current Biology by Macquarie University Hearing researchers has finally revealed that the idea of a neural network dedicated to spatial hearing does not hold.
    Lead author, Macquarie University Distinguished Professor of Hearing, David McAlpine, has spent the past 25 years proving that one animal after another was actually using a much sparser neural network, with neurons on both sides of the brain performing this function in addition to others.
    Showing this in action in humans was more difficult.

    Now through the combination of a specialised hearing test, advanced brain imaging, and comparisons with the brains of other mammals including rhesus monkeys, he and his team have shown for the first time that humans also use these simpler networks.
    “We like to think that our brains must be far more advanced than other animals in every way, but that is just hubris,” Professor McAlpine says.
    “We’ve been able to show that gerbils are like guinea pigs, guinea pigs are like rhesus monkeys, and rhesus monkeys are like humans in this regard.
    “A sparse, energy efficient form of neural circuitry performs this function — our gerbil brain, if you like.”
    The research team also proved that the same neural network separates speech from background sounds — a finding that is significant for the design of both hearing devices and the electronic assistants in our phones.
    All types of machine hearing struggle with the challenge of hearing in noise, known as the ‘cocktail party problem’. It makes it difficult for people with hearing devices to pick out one voice in a crowded space, and for our smart devices to understand when we talk to them.

    Professor McAlpine says his team’s latest findings suggest that rather than focusing on the large language models (LLMs) that are currently used, we should be taking a far simpler approach.
    “LLMs are brilliant at predicting the next word in a sentence, but they’re trying to do too much,” he says.
    “Being able to locate the source of a sound is the important thing here, and to do that, we don’t need a ‘deep mind’ language brain. Other animals can do it, and they don’t have language.
    “When we are listening, our brains don’t keep tracking sound the whole time, which the large language processors are trying to do.
    “Instead, we, and other animals, use our ‘shallow brain’ to pick out very small snippets of sound, including speech, and use these snippets to tag the location and maybe even the identity of the source.
    “We don’t have to reconstruct a high-fidelity signal to do this, but instead understand how our brain represents that signal neurally, well before it reaches a language centre in the cortex.
    “This shows us that a machine doesn’t have to be trained for language like a human brain to be able to listen effectively.
    “We only need that gerbil brain.”
    The next step for the team is to identify the minimum amount of information that can be conveyed in a sound but still get the maximum amount of spatial listening. More