More stories

  • How we play together

    Intense focus pervades the EEG laboratory at the University of Konstanz on this day of experimentation. In separate labs, two participants, connected by screens, engage in the computer game Pacman. The burning question: Can strangers, unable to communicate directly, synchronize their efforts to conquer the digital realm together?
    Doctoral candidate Karl-Philipp Flösch is leading today’s experiment. He states: “Our research revolves around cooperative behaviour and the adoption of social roles.” However, understanding brain processes underlying cooperative behaviour is still in its infancy, presenting a central challenge for cognitive neuroscience. How can cooperative behaviour be brought into a highly structured EEG laboratory environment without making it feel artificial or boring for study participants?
    Pacman as a scientific “playground”
    The research team, led by Harald Schupp, Professor of Biological Psychology at the University of Konstanz, envisioned using the well-known computer game Pacman as a natural medium to study cooperative behaviour in the EEG laboratory. Conducting the study as part of the Cluster of Excellence Centre for the Advanced Study of Collective Behaviour, they recently published their findings in Psychophysiology.
    “Pacman is a cultural icon. Many have navigated the voracious Pacman through mazes in their youth, aiming to devour fruits and outsmart hostile ghosts,” reminisces Karl-Philipp Flösch. Collaborating with colleagues, co-author Tobias Flaisch adapted the game. In the EEG version, two players instead of one must collaboratively guide Pacman to the goal. Flaisch explains: “Success hinges on cooperative behaviour, as players must seamlessly work together.”
    However, the researchers have built in a special hurdle: the labyrinth’s path is concealed. Only one of the two players can see where Pacman is going next. Flösch elaborates: “The active player can communicate the direction to the partner, but only indirectly using pre-agreed symbols, communicated solely through the computer screen.” A player who does not remember quickly enough that a crescent moon on the screen means Pacman should move right, and that only the banana key makes Pacman move to the right, will make a mistake. “From the perspective of classical psychological research, the game combines various skills inherent in natural social situations,” notes Harald Schupp.
    EEG measures event-related potentials
    During each game, the players’ brain reactions were measured using EEG. Calculating event-related potentials provides a detailed view of the effects elicited by different game roles with millisecond-level temporal precision. The team hypothesized that the game role significantly influences brain reactions. Therefore, they examined the P3 component, a well-studied brain reaction exhibiting a stronger deflection in the presence of significant and task-relevant stimuli. The results confirmed their assumption: “The P3 was increased not only when the symbol indicated the next move’s direction but also when observing whether the game partner selected the correct symbol,” says Flösch. The team concludes that the role we take on during cooperation determines the informational value of environmental stimuli situationally. EEG measurements allow the brain processes involved to be dynamically mapped.
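    The ERP analysis described here comes down to cutting the continuous EEG into short, stimulus-locked segments and averaging them separately for each game role. The sketch below illustrates that epoching-and-averaging step in Python; the array shapes, sampling rate and time window are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def event_related_potential(eeg, event_samples, sfreq, tmin=-0.2, tmax=0.8):
    """Average stimulus-locked EEG epochs into an event-related potential.

    eeg           : array of shape (n_channels, n_samples), continuous recording
    event_samples : sample indices at which the stimulus appeared
    sfreq         : sampling rate in Hz
    tmin, tmax    : epoch window around each event, in seconds
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for ev in event_samples:
        if ev + start < 0 or ev + stop > eeg.shape[1]:
            continue  # skip events too close to the edges of the recording
        epoch = eeg[:, ev + start:ev + stop]
        if start < 0:  # baseline-correct with the pre-stimulus interval
            epoch = epoch - epoch[:, :-start].mean(axis=1, keepdims=True)
        epochs.append(epoch)
    return np.mean(epochs, axis=0)  # shape (n_channels, n_times)

# Hypothetical usage: average the "symbol shows my direction" trials and the
# "I watch my partner choose a symbol" trials separately; comparing the two
# waveforms around 300-600 ms is the kind of P3 contrast the article describes.
```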
    “Cooperative role adoption structures our entire society,” summarizes Schupp, providing context for the study. “An individual achieves little alone, but collectively, humanity even reaches the moon. Our technological society hinges on cooperative behaviour,” says Flösch, adding that children take on individual roles early in life, thereby learning the art of complex cooperation. Consequently, this role adoption occurs nearly effortlessly and automatically for us every day. “Our brains are practically ‘built’ for it, as evidenced by the results of our study.”

  • Long in the Bluetooth: Scientists develop a more efficient way to transmit data between our devices

    University of Sussex researchers have developed a more energy-efficient alternative to Bluetooth for transmitting data between mobile phones and other tech devices. With more and more of us owning smartphones and wearable tech, the approach offers a more efficient way of connecting our devices and improving battery life. Applied to wearable devices, it could even see us unlocking doors by touch or exchanging phone numbers by shaking hands.
    Professor Robert Prance and Professor Daniel Roggen, of the University of Sussex, have developed the use of electric waves, rather than electromagnetic waves, for a low-power way to transmit data at close range, while maintaining the high throughput needed for multimedia applications.
    Bluetooth, Wi-Fi, and 5G currently rely on electromagnetic modulation, a form of wireless technology developed over 125 years ago, when the focus in the late 19th century was on transmitting data over long distances using electromagnetic waves. By contrast, electric field modulation uses short-range electric waves, which require much less power than Bluetooth.
    As we tend to be in close proximity to our devices, electric field modulation offers a proven, more efficient method of connecting our devices, enabling longer lasting battery life when streaming music to headphones, taking calls, using fitness trackers, or interacting with smart home tech.
    The development could advance how we use tech in our day to day lives and evolve a wide range of futuristic applications too. For example, a bracelet using this technology could enable phone numbers to be exchanged simply by shaking hands or a door could be unlocked just by touching the handle.
    Daniel Roggen, Professor of Engineering and Design at the University of Sussex, explains:
    “We no longer need to rely on electromagnetic modulation, which is inherently battery hungry. We can improve the battery life of wearable technology and home assistants, for example, by using electric field modulation instead of Bluetooth. This solution will not only make our lives much more efficient, but it also opens novel opportunities to interact with devices in smart homes.
    “The technology is also low cost, meaning it could be rolled out to society quickly and easily. If this were mass produced, the solution could be miniaturised to a single chip and cost just a few pence per device, meaning that it could be used in all devices in the not-too-distant future.”
    The University of Sussex researchers are now seeking industrial partnerships to help further miniaturise the technology for personal devices.

  • AI can ‘lie and BS’ like its maker, but still not intelligent like humans

    The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public. While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind.
    The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that the understanding of AI is muddled by linguistics: That while indeed intelligent, AI cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”
    According to our everyday use of the word, AI is definitely intelligent, but intelligent computers have existed for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour. To begin, the paper states that ChatGPT and other AI systems are large language models (LLMs), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who post the data.
    “LLMs generate impressive text, but often make things up whole cloth,” he states. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”
    The people who made LLMs call it “hallucinating” when the models make things up, although Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word — and they don’t know or care whether what they say is true.
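    Chemero’s description of how the text is produced (appending the statistically most likely next word, over and over) can be illustrated with a toy sketch. The bigram counts and greedy loop below are a deliberately simplified stand-in for a real LLM, which uses a neural network over subword tokens and usually samples rather than always taking the top word, but the generate-one-word-at-a-time loop is the same idea.

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts estimated from a tiny corpus.
corpus = "the cat sat on the mat . the cat saw the dog .".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt_word, max_words=8):
    """Greedy decoding: always append the statistically most likely next word."""
    out = [prompt_word]
    for _ in range(max_words):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
# The model neither knows nor cares whether the sentence it produces is true;
# it only follows the statistics of its training text.
```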
    And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”
    The intent of Chemero’s paper is to stress that the LLMs are not intelligent in the way humans are intelligent because humans are embodied: Living beings who are always surrounded by other humans and material and cultural environments.
    “This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.
    The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding “Things matter to us. We are committed to our survival. We care about the world we live in.”

  • Creativity in the age of generative AI: A new era of creative partnerships

    Recent advancements in generative artificial intelligence (AI) have showcased its potential in a wide range of creative activities, such as producing works of art, composing symphonies, and even drafting legal texts or slide presentations. These developments have raised concerns that AI will outperform humans in creativity tasks and make knowledge workers redundant. Such concerns were most recently underlined by a Fortune article entitled ‘Elon Musk says AI will create a future where “no job is needed”: “The AI will be able to do everything”’.
    In a new paper in a Nature Human Behaviour special issue on AI, researcher Janet Rafner from the Aarhus Institute of Advanced Studies and the Center for Hybrid Intelligence at Aarhus University and Prof. Jacob Sherson, Director of the Center for Hybrid Intelligence, together with international collaborators, discuss the research and societal implications of creativity and AI.
    The team of researchers argue that we should direct our attention to understanding and nurturing co-creativity, the interaction between humans and machines, towards what is termed ‘human-centered AI’ and ‘hybrid intelligence’. In this way, we will be able to develop interfaces that ensure both a high degree of automation through AI and human control, thereby supporting a relationship in which human and machine optimally empower each other.
    Rafner comments: To date, most studies on human-AI co-creativity come from the field of human-computer interaction and focus on the abilities of the AI, and the interaction design and dynamics. While these advances are key for understanding the dynamics between humans and algorithms and human attitudes towards the co-creative process and product, there is an urgent need to enrich these applications with the insights about creativity obtained over the past decades in the psychological sciences.
    “Right now, we need to move the conversation away from questions like Can AI be creative? One reason for this is that defining creativity is not cut and dry. When investigating human only, machine only, and human-AI co-creativity, we need to consider the type and level of creativity under question, from everyday creative activities (e.g. making new recipes, artwork or music) that are perhaps more amenable to machine automatization to paradigm-shifting contributions that may require higher-level human intervention. Additionally, it is much more meaningful to consider nuanced questions like, What are the similarities and differences in human cognition, behavior, motivation and self-efficacy between human-AI co-creativity and human creativity?” explains Rafner.
    Currently, we do not have sufficient knowledge of co-creativity between humans and machines, as the delineation between human and AI contributions (and processes) is not always clear. Looking ahead, researchers should balance predictive accuracy with theoretical understanding (i.e., explainability), towards the goal of developing intelligent systems that both measure and enhance human creativity. When designing co-creative systems such as virtual assistants, it will be essential to balance psychometric rigor with ecological validity. That is, co-creativity tasks should combine precise psychological measurement with state-of-the-art intuitive and engaging interface design.
    Interdisciplinary collaborations are needed
    The challenge of understanding and properly developing human-AI co-creative systems is not to be faced by a single discipline. Business and management scholars should be included to ensure that tasks sufficiently capture real-world professional challenges and to understand the implications of co-creativity for the future of work at macro and micro organizational scales, such as creativity in team dynamics within blended teams of humans and AI. Linguists and learning scientists are needed to help us understand the impact and nuances of prompt engineering in text-to-x systems. Developmental psychologists will have to study the impact on human learning processes.

    Ethical and meaningful developments
    Not only is it seen as more ethical to keep humans closely in the loop when working with and developing AI, but in most cases it is also the most efficient long-term choice, the team of researchers argue.
    Beyond this, ethics and legal scholars will have to consider the costs and benefits of co-creativity in terms of intellectual property rights, human sense of purpose, and environmental impact.

  • Study reveals bias in AI tools when diagnosing women’s health issue

    Machine learning algorithms designed to diagnose a common infection that affects women showed a diagnostic bias among ethnic groups, University of Florida researchers found.
    While artificial intelligence tools offer great potential for improving health care delivery, practitioners and scientists warn of their risk of perpetuating racial inequities. Published Friday in the Nature journal Digital Medicine, this is the first paper to evaluate the fairness of these tools in connection with a women’s health issue.
    “Machine learning can be a great tool in medical diagnostics, but we found it can show bias toward different ethnic groups,” said Ruogu Fang, an associate professor in the J. Crayton Pruitt Family Department of Biomedical Engineering and the study’s author. “This is alarming for women’s health as there already are existing disparities that vary by ethnicity.”
    The researchers evaluated the fairness of machine learning in diagnosing bacterial vaginosis, or BV, a common condition affecting women of reproductive age, which has clear diagnostic differences among ethnic groups.
    Fang and co-corresponding author Ivana Parker, both faculty members in the Herbert Wertheim College of Engineering, pulled data from 400 women, comprising 100 from each of the ethnic groups represented — white, Black, Asian, and Hispanic.
    In investigating the ability of four machine learning models to predict BV in women with no symptoms, the researchers found that accuracy varied among ethnicities. Hispanic women had the most false-positive diagnoses, and Asian women received the most false-negative diagnoses.
    “The models performed highest for white women and lowest for Asian women,” said Parker, an assistant professor of bioengineering. “This tells us machine learning methods are not treating ethnic groups equally well.”
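    An audit like the one described here comes down to computing error rates separately for each ethnic group and comparing them. The sketch below shows a minimal version of such a per-group false-positive/false-negative check; the column names and the data frame are placeholder assumptions, not the UF team’s actual dataset or models.

```python
import pandas as pd

def per_group_error_rates(df, group_col="ethnicity",
                          label_col="bv_positive", pred_col="predicted"):
    """Return false-positive and false-negative rates for each group."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]
        positives = sub[sub[label_col] == 1]
        fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives[pred_col] == 0).mean() if len(positives) else float("nan")
        rows.append({"group": group, "false_positive_rate": fpr,
                     "false_negative_rate": fnr, "n": len(sub)})
    return pd.DataFrame(rows)

# Hypothetical usage: `df` holds one row per participant with her ethnic group,
# the true BV status, and a model's prediction. Large gaps between the per-group
# rates are the kind of disparity the study reports.
```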
    Parker said that while they were interested in understanding how AI tools predict disease for specific ethnicities, their study also helps medical scientists understand the factors associated with bacteria in women of varying ethnic backgrounds, which can lead to improved treatments.

    BV, one of the most common vaginal infections, can cause discomfort and pain and happens when natural bacteria levels are out of balance. While there are symptoms associated with BV, many people have no symptoms, making it difficult to diagnose.
    It doesn’t often cause complications, but in some cases, BV can increase the risk of sexually transmitted infections, miscarriage, and premature births.
    The researchers said their findings demonstrate the need for improved methods for building the AI tools to mitigate health care bias.

  • Personalized cancer medicine: Humans make better treatment decisions than AI

    Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers at Charité — Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities unlocked by AI in patient care.
    If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor. The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes — genes with the potential to cause cancer — for example. Precision oncology, a specialized field of personalized medicine, leverages this knowledge by using specific treatments such as low-molecular weight inhibitors and antibodies to target and disable hyperactive oncogenes.
    The first step in identifying which genetic mutations are potential targets for treatment is to analyze the genetic makeup of the tumor tissue. The molecular variants of the tumor DNA that are necessary for precision diagnosis and treatment are determined. Then the doctors use this information to craft individual treatment recommendations. In especially complex cases, this requires knowledge from various fields of medicine. At Charité, this is when the “molecular tumor board” (MTB) meets: Experts from the fields of pathology, molecular pathology, oncology, human genetics, and bioinformatics work together to analyze which treatments seem most promising based on the latest studies. It is a very involved process, ultimately culminating in a personalized treatment recommendation.
    Can artificial intelligence help with treatment decisions?
    Dr. Damian Rieke, a doctor at Charité, Prof. Ulf Leser and Xing David Wang of Humboldt-Universität zu Berlin, and Dr. Manuela Benary, a bioinformatics specialist at Charité, wondered whether artificial intelligence might be able to help at this juncture. In a study just recently published in the journal JAMA Network Open, they worked with other researchers to examine the possibilities and limitations of large language models such as ChatGPT in automatically scanning scientific literature with an eye to selecting personalized treatments.
    “We prompted the models to identify personalized treatment options for fictitious cancer patients and then compared the results with the recommendations made by experts,” Rieke explains. His conclusion: “AI models were able to identify personalized treatment options in principle — but they weren’t even close to the abilities of human experts.”
    The team created ten molecular tumor profiles of fictitious patients for the experiment. A human physician specialist and four large language models were then tasked with identifying a personalized treatment option. These results were presented to the members of the MTB for assessment, without them knowing which recommendation came from which source.
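    A hedged sketch of the blinding step described above: the recommendations from the human expert and the four models are stripped of their source labels and shuffled before the tumor board scores them. The source names and example recommendations below are placeholders, not data from the study.

```python
import random

# Hypothetical recommendations for one fictitious tumor profile, one per source
# (a human expert plus several language models); the text is placeholder only.
recommendations = {
    "human_expert": "Drug A targeting mutation X",
    "model_1": "Drug B targeting mutation X",
    "model_2": "Drug A targeting mutation X",
    "model_3": "No actionable target identified",
    "model_4": "Drug C, off-label",
}

# Blind the review: strip the source labels and shuffle the order before the
# molecular tumor board rates each option.
items = list(recommendations.items())
random.shuffle(items)
blinded = [{"id": f"option_{i + 1}", "recommendation": text}
           for i, (_, text) in enumerate(items)]
key = {f"option_{i + 1}": source for i, (source, _) in enumerate(items)}

for option in blinded:
    print(option["id"], "->", option["recommendation"])
# The `key` mapping is revealed only after the board has scored every option.
```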

    Improved AI models hold promise for future uses
    “There were some surprisingly good treatment options identified by AI in isolated cases,” Benary reports. “But large language models perform much worse than human experts.” Beyond that, data protection, privacy, and reproducibility pose particular challenges in relation to the use of artificial intelligence with real-world patients, she notes.
    Still, Rieke is fundamentally optimistic about the potential uses of AI in medicine: “In the study, we also showed that the performance of AI models is continuing to improve as the models advance. This could mean that AI can provide more support for even complex diagnostic and treatment processes in the future — as long as humans are the ones to check the results generated by AI and have the final say about treatment.”
    AI projects at Charité aim to improve patient care
    Prof. Felix Balzer, Director of the Institute of Medical Informatics, is also certain medicine will benefit from AI. In his role as Chief Medical Information Officer (CMIO) within IT, he is responsible for the digital transformation of patient care at Charité. “One special area of focus when it comes to greater efficiency in patient care is digitalization, which also means the use of automation and artificial intelligence,” Balzer explains.
    His institute is working on AI models to help with fall prevention in long-term care, for example. Other areas at Charité are also conducting extensive research on AI: The Charité Lab for Artificial Intelligence in Medicine is working to develop tools for AI-based prognosis following strokes, and the TEF-Health project, led by Prof. Petra Ritter of the Berlin Institute of Health at Charité (BIH), is working to facilitate the validation and certification of AI and robotics in medical devices.

  • People watched other people shake boxes for science: Here’s why

    When researchers asked hundreds of people to watch other people shake boxes, it took just seconds for almost all of them to figure out what the shaking was for.
    The deceptively simple work by Johns Hopkins University perception researchers is the first to demonstrate that people can tell what others are trying to learn just by watching their actions. Published today in the journal Proceedings of the National Academy of Sciences, the study reveals a key yet neglected aspect of human cognition, and one with implications for artificial intelligence.
    “Just by looking at how someone’s body is moving, you can tell what they are trying to learn about their environment,” said author Chaz Firestone, an assistant professor of psychological and brain sciences who investigates how vision and thought interact. “We do this all the time, but there has been very little research on it.”
    Recognizing another person’s actions is something we do every day, whether it’s guessing which way someone is headed or figuring out what object they’re reaching for. These are known as “pragmatic actions.” Numerous studies have shown people can quickly and accurately identify these actions just by watching them. The new Johns Hopkins work investigates a different kind of behavior: “epistemic actions,” which are performed when someone is trying to learn something.
    For instance, someone might put their foot in a swimming pool because they’re going for a swim, or they might put their foot in a pool to test the water. Though the actions are similar, there are differences, and the Johns Hopkins team surmised that observers would be able to detect another person’s “epistemic goals” just by watching them.
    Across several experiments, researchers asked a total of 500 participants to watch two videos in which someone picks up a box full of objects and shakes it around. One shows someone shaking a box to figure out the number of objects inside it. The other shows someone shaking a box to figure out the shape of the objects inside. Almost every participant knew who was shaking for the number and who was shaking for shape.
    “What is surprising to me is how intuitive this is,” said lead author Sholei Croom, a Johns Hopkins graduate student. “People really can suss out what others are trying to figure out, which shows how we can make these judgments even though what we’re looking at is very noisy and changes from person to person.”
    Added Firestone, “When you think about all the mental calculations someone must make to understand what someone else is trying to learn, it’s a remarkably complicated process. But our findings show it’s something people do easily.”

    The findings could also inform the development of artificial intelligence systems designed to interact with humans. A commercial robot assistant, for example, could look at a customer and guess what they are looking for.
    “It’s one thing to know where someone is headed or what product they are reaching for,” Firestone said. “But it’s another thing to infer whether someone is lost or what kind of information they are seeking.”
    In the future, the team would like to explore whether people can distinguish someone’s epistemic intent from their pragmatic intent — what is someone up to when they dip their foot in the pool? They’re also interested in when these observational skills emerge in human development and whether it is possible to build computational models to detail exactly how observed physical actions reveal epistemic intent.
    The Johns Hopkins team also included Hanbei Zhou, a sophomore studying neuroscience.

  • AI system self-organizes to develop features of brains of complex organisms

    Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system — in much the same way that the human brain has to develop and operate within physical and biological constraints — allows it to develop features of the brains of complex organisms in order to solve tasks.
    As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.
    Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”
    Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”
    In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains.
    Instead of real neurons, the system used computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.
    To this system, however, the researchers applied a ‘physical’ constraint. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organised.

    The researchers gave the system a simple task to complete — in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point.
    One of the reasons the team chose this particular task is because to complete it, the system needs to maintain a number of elements — start location, end location and intermediate steps — and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.
    Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.
    With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
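    One way to picture this constraint is as an extra term in the training loss that charges every connection in proportion to its physical length, so long-range weights are costlier to grow and maintain. The NumPy sketch below is a loose illustration of that idea under assumed node counts and coordinates; it is not the actual architecture or loss function used in the Nature Machine Intelligence paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 50
positions = rng.uniform(0.0, 1.0, size=(n_nodes, 3))     # each node gets a 3-D location
distances = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

weights = rng.normal(0.0, 0.1, size=(n_nodes, n_nodes))  # recurrent connection strengths

def wiring_cost(W, D, lam=1e-2):
    """Penalty that grows with both connection strength and physical length."""
    return lam * np.sum(np.abs(W) * D)

def total_loss(task_loss, W, D):
    # Training minimises the task error *plus* the cost of long wires, which
    # pushes the network toward short connections and a few highly connected hubs.
    return task_loss + wiring_cost(W, D)

print("wiring cost of the random network:", wiring_cost(weights, distances))
# Because the gradient of this penalty shrinks long-range weights faster than
# short-range ones, it is one simple way to implement the "further apart,
# harder to connect" rule described in the article.
```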
    When the system was asked to perform the task under these constraints, it used some of the same tricks used by real human brains to solve the task. For example, to get around the constraints, the artificial systems started to develop hubs — highly connected nodes that act as conduits for passing information across the network.
    More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.

    Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint — it’s harder to wire nodes that are far apart — forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.”
    Understanding the human brain
    The team are hopeful that their AI system could begin to shed light on how these constraints shape differences between people’s brains and contribute to the differences seen in those who experience cognitive or mental health difficulties.
    Co-author Professor John Duncan from the MRC CBSU said: “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”
    Achterberg added: “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”
    Implications for designing future AI systems
    The findings are likely to be of interest to the AI community, too, where they could allow for the development of more efficient systems, particularly in situations where there are likely to be physical constraints.
    Dr Akarca said: “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”
    Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use.
    Achterberg said: “If you want to build an artificially-intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks to those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”
    This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours.
    Achterberg added: “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy, and so, to balance these energetic constraints with the amount of information they need to process, they will probably need a brain structure similar to ours.”
    The research was funded by the Medical Research Council, Gates Cambridge, the James S McDonnell Foundation, Templeton World Charity Foundation and Google DeepMind.