More stories

  • AI can ‘lie and BS’ like its maker, but still not intelligent like humans

    The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public. While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind.
    The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that our understanding of AI is muddled by linguistics: that while AI may indeed be intelligent, it cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”
    According to our everyday use of the word, AI is definitely intelligent, but there have been intelligent computers for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour. To begin, the paper states that ChatGPT and other AI systems are large language models (LLMs), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who post the data.
    “LLMs generate impressive text, but often make things up whole cloth,” he states. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”
    The people who made LLMs call it “hallucinating” when the models make things up, although Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word — and they don’t know or care whether what they say is true.
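    To make Chemero’s description concrete, here is a minimal sketch of that “most statistically likely next word” loop. It uses a toy, made-up table of word-continuation probabilities rather than a real trained model, and the names and numbers are illustrative, not taken from the paper.

      # Toy illustration of greedy next-word generation: repeatedly append
      # the most probable continuation. The probability table is invented;
      # a real LLM learns such statistics from huge internet corpora.
      next_word_probs = {
          "the":  {"cat": 0.4, "dog": 0.35, "moon": 0.25},
          "cat":  {"sat": 0.6, "ran": 0.4},
          "dog":  {"barked": 0.7, "slept": 0.3},
          "sat":  {"quietly": 0.5, "down": 0.5},
          "moon": {"rose": 0.8, "fell": 0.2},
      }

      def generate(start, max_words=5):
          words = [start]
          for _ in range(max_words):
              options = next_word_probs.get(words[-1])
              if not options:      # no known continuation: stop
                  break
              # Pick the statistically most likely next word. Nothing here
              # checks whether the resulting sentence is true.
              words.append(max(options, key=options.get))
          return " ".join(words)

      print(generate("the"))  # -> "the cat sat quietly"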
    And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”
    The intent of Chemero’s paper is to stress that LLMs are not intelligent in the way humans are intelligent, because humans are embodied: living beings who are always surrounded by other humans and by material and cultural environments.
    “This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.
    The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding “Things matter to us. We are committed to our survival. We care about the world we live in.” More

  • Creativity in the age of generative AI: A new era of creative partnerships

    Recent advancements in generative artificial intelligence (AI) have showcased its potential in a wide range of creative activities, such as producing works of art, composing symphonies, and even drafting legal texts or slide presentations. These developments have raised concerns that AI will outperform humans in creativity tasks and make knowledge workers redundant. These concerns were most recently underlined by a Fortune article entitled ‘Elon Musk says AI will create a future where ‘no job is needed’: ‘The AI will be able to do everything’’.
    In a new paper in a Nature Human Behaviour special issue on AI, researcher Janet Rafner from the Aarhus Institute of Advanced Studies and the Center for Hybrid Intelligence at Aarhus University and Prof. Jacob Sherson, Director of the Center for Hybrid Intelligence, together with international collaborators, discuss the research and societal implications of creativity and AI.
    The team of researchers argue that we should direct our attention to understanding and nurturing co-creativity, the interaction between humans and machines, towards what is termed ‘human-centered AI’ and ‘hybrid intelligence.’ In this way we will be able to develop interfaces that ensure both a high degree of automation through AI and human control, supporting a relationship in which human and machine optimally empower each other.
    Rafner comments: To date, most studies on human-AI co-creativity come from the field of human-computer interaction and focus on the abilities of the AI and on the interaction design and dynamics. While these advances are key for understanding the dynamics between humans and algorithms, and human attitudes towards the co-creative process and product, there is an urgent need to enrich these applications with the insights about creativity obtained over the past decades in the psychological sciences.
    “Right now, we need to move the conversation away from questions like Can AI be creative? One reason for this is that defining creativity is not cut and dry. When investigating human only, machine only, and human-AI co-creativity, we need to consider the type and level of creativity under question, from everyday creative activities (e.g. making new recipes, artwork or music) that are perhaps more amenable to machine automatization to paradigm-shifting contributions that may require higher-level human intervention. Additionally, it is much more meaningful to consider nuanced questions like, What are the similarities and differences in human cognition, behavior, motivation and self-efficacy between human-AI co-creativity and human creativity?” explains Rafner.
    Currently, we do not have sufficient knowledge of co-creativity between humans and machines, as the delineation between human and AI contributions (and processes) is not always clear. Looking ahead, researchers should balance predictive accuracy with theoretical understanding (i.e., explainability), towards the goal of developing intelligent systems to both measure and enhance human creativity. When designing co-creative systems such as virtual assistants, it will be essential to balance psychometric rigor with ecological validity. That is, co-creativity tasks should combine precise psychological measurement with state-of-the-art intuitive and engaging interface design.
    Interdisciplinary collaborations are needed
    The challenge of understanding and properly developing human-AI co-creative systems is not to be faced by a single discipline. Business and management scholars should be included to ensure that tasks sufficiently capture real-world professional challenges and to understand the implications of co-creativity for the future of work at macro and micro organizational scales, such as creativity in team dynamics with blended teams of humans and AI. Linguists and learning scientists are needed to help us understand the impact and nuances of prompt engineering in text-to-x systems. Developmental psychologists will have to study the impact on human learning processes.

    Ethical and meaningful developments
    It is not only more ethical to keep humans closely in the loop when working with and developing AI; in most cases it is also the most efficient long-term choice, the team of researchers argue.
    Beyond this, ethics and legal scholars will have to consider the costs and benefits of co-creativity in terms of intellectual property rights, human sense of purpose, and environmental impact. More

  • Study reveals bias in AI tools when diagnosing women’s health issue

    Machine learning algorithms designed to diagnose a common infection that affects women showed a diagnostic bias among ethnic groups, University of Florida researchers found.
    While artificial intelligence tools offer great potential for improving health care delivery, practitioners and scientists warn of their risk of perpetuating racial inequities. Published Friday in the Nature Portfolio journal npj Digital Medicine, this is the first paper to evaluate the fairness of these tools in connection with a women’s health issue.
    “Machine learning can be a great tool in medical diagnostics, but we found it can show bias toward different ethnic groups,” said Ruogu Fang, an associate professor in the J. Crayton Pruitt Family Department of Biomedical Engineering and the study’s author. “This is alarming for women’s health as there already are existing disparities that vary by ethnicity.”
    The researchers evaluated the fairness of machine learning in diagnosing bacterial vaginosis, or BV, a common condition affecting women of reproductive age, which has clear diagnostic differences among ethnic groups.
    Fang and co-corresponding author Ivana Parker, both faculty members in the Herbert Wertheim College of Engineering, pulled data from 400 women, comprising 100 from each of the ethnic groups represented — white, Black, Asian, and Hispanic.
    In investigating the ability of four machine learning models to predict BV in women with no symptoms, the researchers found that accuracy varied among ethnicities. Hispanic women had the most false-positive diagnoses, and Asian women received the most false-negative diagnoses.
    “The models performed highest for white women and lowest for Asian women,” said Parker, an assistant professor of bioengineering. “This tells us machine learning methods are not treating ethnic groups equally well.”
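    The kind of comparison described here can be made concrete by computing error rates separately for each group. The sketch below is a generic illustration of that breakdown, not the study’s actual analysis pipeline, and the example labels are made up.

      # Hedged sketch: per-group accuracy, false-positive rate and
      # false-negative rate for a binary classifier (1 = BV positive).
      import numpy as np

      def group_metrics(y_true, y_pred, groups):
          """Return accuracy, FPR and FNR for each group label."""
          y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
          results = {}
          for g in np.unique(groups):
              m = groups == g
              t, p = y_true[m], y_pred[m]
              tp = np.sum((t == 1) & (p == 1))
              tn = np.sum((t == 0) & (p == 0))
              fp = np.sum((t == 0) & (p == 1))
              fn = np.sum((t == 1) & (p == 0))
              results[str(g)] = {
                  "accuracy": (tp + tn) / len(t),
                  "false_positive_rate": fp / max(fp + tn, 1),
                  "false_negative_rate": fn / max(fn + tp, 1),
              }
          return results

      # Illustrative call with made-up labels:
      print(group_metrics(
          y_true=[1, 0, 1, 0, 1, 0, 1, 0],
          y_pred=[1, 0, 0, 1, 1, 0, 1, 0],
          groups=["white", "white", "Asian", "Hispanic",
                  "Black", "Asian", "Hispanic", "Black"],
      ))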
    Parker said that while they were interested in understanding how AI tools predict disease for specific ethnicities, their study also helps medical scientists understand the factors associated with bacteria in women of varying ethnic backgrounds, which can lead to improved treatments.

    BV, one of the most common vaginal infections, can cause discomfort and pain and happens when natural bacteria levels are out of balance. While there are symptoms associated with BV, many people have no symptoms, making it difficult to diagnose.
    It doesn’t often cause complications, but in some cases, BV can increase the risk of sexually transmitted infections, miscarriage, and premature births.
    The researchers said their findings demonstrate the need for improved methods of building AI tools in order to mitigate health care bias. More

  • Personalized cancer medicine: Humans make better treatment decisions than AI

    Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers at Charité — Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities unlocked by AI in patient care.
    If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor. The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes — genes with the potential to cause cancer — for example. Precision oncology, a specialized field of personalized medicine, leverages this knowledge by using specific treatments such as low-molecular weight inhibitors and antibodies to target and disable hyperactive oncogenes.
    The first step in identifying which genetic mutations are potential targets for treatment is to analyze the genetic makeup of the tumor tissue. The molecular variants of the tumor DNA that are necessary for precision diagnosis and treatment are determined. Then the doctors use this information to craft individual treatment recommendations. In especially complex cases, this requires knowledge from various fields of medicine. At Charité, this is when the “molecular tumor board” (MTB) meets: Experts from the fields of pathology, molecular pathology, oncology, human genetics, and bioinformatics work together to analyze which treatments seem most promising based on the latest studies. It is a very involved process, ultimately culminating in a personalized treatment recommendation.
    Can artificial intelligence help with treatment decisions?
    Dr. Damian Rieke, a doctor at Charité, Prof. Ulf Leser and Xing David Wang of Humboldt-Universität zu Berlin, and Dr. Manuela Benary, a bioinformatics specialist at Charité, wondered whether artificial intelligence might be able to help at this juncture. In a study just recently published in the journal JAMA Network Open, they worked with other researchers to examine the possibilities and limitations of large language models such as ChatGPT in automatically scanning scientific literature with an eye to selecting personalized treatments.
    “We prompted the models to identify personalized treatment options for fictitious cancer patients and then compared the results with the recommendations made by experts,” Rieke explains. His conclusion: “AI models were able to identify personalized treatment options in principle — but they weren’t even close to the abilities of human experts.”
    The team created ten molecular tumor profiles of fictitious patients for the experiment. A human physician specialist and four large language models were then tasked with identifying a personalized treatment option. These results were presented to the members of the MTB for assessment, without them knowing which recommendation came from which source.
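    A simple way to score such a comparison is the overlap between the options a model proposes and those the experts recommend. The sketch below assumes a hypothetical query_llm() helper standing in for a call to a language model; it is an illustration of the idea, not the study’s actual evaluation protocol, which also had the MTB experts rate the blinded recommendations.

      # Hedged sketch: compare model-suggested treatment options with an
      # expert reference list for one fictitious molecular tumor profile.

      def query_llm(profile):
          # Hypothetical placeholder: in a real experiment this would send a
          # prompt like "Identify personalized treatment options for ..." to
          # a large language model and parse the returned list.
          raise NotImplementedError

      def overlap_score(model_options, expert_options):
          """Fraction of expert-recommended options the model also found."""
          model = {o.lower() for o in model_options}
          expert = {o.lower() for o in expert_options}
          return len(model & expert) / len(expert) if expert else 0.0

      # Illustrative data, not taken from the study:
      expert_options = ["osimertinib", "erlotinib"]
      model_options = ["Erlotinib", "chemotherapy"]
      print(overlap_score(model_options, expert_options))  # 0.5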

    Improved AI models hold promise for future uses
    “There were some surprisingly good treatment options identified by AI in isolated cases,” Benary reports. “But large language models perform much worse than human experts.” Beyond that, data protection, privacy, and reproducibility pose particular challenges in relation to the use of artificial intelligence with real-world patients, she notes.
    Still, Rieke is fundamentally optimistic about the potential uses of AI in medicine: “In the study, we also showed that the performance of AI models is continuing to improve as the models advance. This could mean that AI can provide more support for even complex diagnostic and treatment processes in the future — as long as humans are the ones to check the results generated by AI and have the final say about treatment.”
    AI projects at Charité aim to improve patient care
    Prof. Felix Balzer, Director of the Institute of Medical Informatics, is also certain medicine will benefit from AI. In his role as Chief Medical Information Officer (CMIO) within IT, he is responsible for the digital transformation of patient care at Charité. “One special area of focus when it comes to greater efficiency in patient care is digitalization, which also means the use of automation and artificial intelligence,” Balzer explains.
    His institute is working on AI models to help with fall prevention in long-term care, for example. Other areas at Charité are also conducting extensive research on AI: The Charité Lab for Artificial Intelligence in Medicine is working to develop tools for AI-based prognosis following strokes, and the TEF-Health project, led by Prof. Petra Ritter of the Berlin Institute of Health at Charité (BIH), is working to facilitate the validation and certification of AI and robotics in medical devices. More

  • People watched other people shake boxes for science: Here’s why

    When researchers asked hundreds of people to watch other people shake boxes, it took just seconds for almost all of them to figure out what the shaking was for.
    The deceptively simple work by Johns Hopkins University perception researchers is the first to demonstrate that people can tell what others are trying to learn just by watching their actions. Published today in the journal Proceedings of the National Academy of Sciences, the study reveals a key yet neglected aspect of human cognition, and one with implications for artificial intelligence.
    “Just by looking at how someone’s body is moving, you can tell what they are trying to learn about their environment,” said author Chaz Firestone, an assistant professor of psychological and brain sciences who investigates how vision and thought interact. “We do this all the time, but there has been very little research on it.”
    Recognizing another person’s actions is something we do every day, whether it’s guessing which way someone is headed or figuring out what object they’re reaching for. These are known as “pragmatic actions.” Numerous studies have shown people can quickly and accurately identify these actions just by watching them. The new Johns Hopkins work investigates a different kind of behavior: “epistemic actions,” which are performed when someone is trying to learn something.
    For instance, someone might put their foot in a swimming pool because they’re going for a swim, or they might put their foot in a pool to test the water. Though the actions are similar, there are differences, and the Johns Hopkins team surmised that observers would be able to detect another person’s “epistemic goals” just by watching them.
    Across several experiments, researchers asked a total of 500 participants to watch two videos in which someone picks up a box full of objects and shakes it around. One shows someone shaking a box to figure out the number of objects inside it. The other shows someone shaking a box to figure out the shape of the objects inside. Almost every participant knew who was shaking for the number and who was shaking for shape.
    “What is surprising to me is how intuitive this is,” said lead author Sholei Croom, a Johns Hopkins graduate student. “People really can suss out what others are trying to figure out, which shows how we can make these judgments even though what we’re looking at is very noisy and changes from person to person.”
    Added Firestone, “When you think about all the mental calculations someone must make to understand what someone else is trying to learn, it’s a remarkably complicated process. But our findings show it’s something people do easily.”

    The findings could also inform the development of artificial intelligence systems designed to interact with humans, such as a commercial robot assistant that can look at a customer and guess what they’re looking for.
    “It’s one thing to know where someone is headed or what product they are reaching for,” Firestone said. “But it’s another thing to infer whether someone is lost or what kind of information they are seeking.”
    In the future the team would like to pursue whether people can observe someone’s epistemic intent versus their pragmatic intent — what are they up to when they dip their foot in the pool. They’re also interested in when these observational skills emerge in human development and if it’s possible to build computational models to detail exactly how observed physical actions reveal epistemic intent.
    The Johns Hopkins team also included Hanbei Zhou, a sophomore studying neuroscience. More

  • AI system self-organizes to develop features of brains of complex organisms

    Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system — in much the same way that the human brain has to develop and operate within physical and biological constraints — allows it to develop features of the brains of complex organisms in order to solve tasks.
    As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.
    Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”
    Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”
    In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains.
    Instead of real neurons, the system used computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.
    In their system, however, the researchers applied a ‘physical’ constraint on the system. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organised.

    The researchers gave the system a simple task to complete — in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point.
    One of the reasons the team chose this particular task is because to complete it, the system needs to maintain a number of elements — start location, end location and intermediate steps — and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.
    Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.
    With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
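    A minimal sketch of how such a constraint can be imposed is a wiring-cost penalty added to the training loss, so that strong connections between distant nodes are more expensive to form and maintain. The node coordinates, penalty form and numbers below are illustrative assumptions; the study’s actual model (a recurrent network of spatially embedded nodes) implements this idea with its own architecture and regularisation details.

      # Hedged sketch: a distance-weighted wiring cost on a recurrent
      # weight matrix. Long-range connections cost more, so an optimiser
      # trading this off against task performance favours local wiring
      # and a few hub-like shortcuts.
      import numpy as np

      rng = np.random.default_rng(0)
      n_nodes = 20

      # Each node gets a fixed position in a virtual 3-D space.
      positions = rng.uniform(0.0, 1.0, size=(n_nodes, 3))
      # Pairwise Euclidean distances between nodes.
      dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

      weights = rng.normal(0.0, 0.1, size=(n_nodes, n_nodes))

      def wiring_cost(w, d, strength=1e-2):
          """Penalty proportional to |weight| times the distance it spans."""
          return strength * float(np.sum(np.abs(w) * d))

      def total_loss(task_loss, w):
          # The network is trained to minimise task error plus wiring cost,
          # mirroring the trade-off described in the article.
          return task_loss + wiring_cost(w, dist)

      print(total_loss(task_loss=0.8, w=weights))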
    When the system was asked to perform the task under these constraints, it used some of the same tricks used by real human brains to solve the task. For example, to get around the constraints, the artificial systems started to develop hubs — highly connected nodes that act as conduits for passing information across the network.
    More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.

    Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint — it’s harder to wire nodes that are far apart — forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.”
    Understanding the human brain
    The team are hopeful that their AI system could begin to shed light on how these constraints shape differences between people’s brains and contribute to the differences seen in those who experience cognitive or mental health difficulties.
    Co-author Professor John Duncan from the MRC CBSU said: “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”
    Achterberg added: “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”
    Implications for designing future AI systems
    The findings are likely to be of interest to the AI community, too, where they could allow for the development of more efficient systems, particularly in situations where there are likely to be physical constraints.
    Dr Akarca said: “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”
    Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use.
    Achterberg said: “If you want to build an artificially-intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks to those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”
    This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours.
    Achterberg added: “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy and so, to balance these energetic constraints with the amount of information it needs to process, it will probably need a brain structure similar to ours.”
    The research was funded by the Medical Research Council, Gates Cambridge, the James S McDonnell Foundation, Templeton World Charity Foundation and Google DeepMind. More

  • Want better AI? Get input from a real (human) expert

    Can AI be trusted? The question pops up wherever AI is used or discussed — which, these days, is everywhere.
    It’s a question that even some AI systems ask themselves.
    Many machine-learning systems create what experts call a “confidence score,” a value that reflects how confident the system is in its decisions. A low score tells the human user that there is some uncertainty about the recommendation; a high score indicates to the human user that the system, at least, is quite sure of its decisions. Savvy humans know to check the confidence score when deciding whether to trust the recommendation of a machine-learning system.
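    For many machine-learning classifiers, a confidence score of this kind is simply the probability the model assigns to the class it predicts. A minimal sketch, assuming a scikit-learn-style classifier with a predict_proba method (the article does not say how the systems PNNL examined compute theirs):

      # Hedged sketch: deriving a per-decision confidence score from a
      # probabilistic classifier, namely the probability of the predicted class.
      import numpy as np

      def confidence_scores(model, X):
          """Return (predicted_class, confidence) for each row of X."""
          proba = model.predict_proba(X)          # shape (n_samples, n_classes)
          predictions = np.argmax(proba, axis=1)  # most likely class per sample
          confidence = np.max(proba, axis=1)      # its probability = confidence
          return predictions, confidence

      # A low confidence (say 0.55 in a two-class problem) signals that the
      # human user should scrutinise the recommendation before acting on it.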
    Scientists at the Department of Energy’s Pacific Northwest National Laboratory have put forth a new way to evaluate an AI system’s recommendations. They bring human experts into the loop to view how the ML performed on a set of data. The expert learns which types of data the machine-learning system typically classifies correctly, and which data types lead to confusion and system errors. Armed with this knowledge, the experts then offer their own confidence score on future system recommendations.
    The result of having a human look over the shoulder of the AI system? Humans predicted the AI system’s performance more accurately.
    Minimal human effort — just a few hours — evaluating some of the decisions made by the AI program allowed researchers to vastly improve on the AI program’s ability to assess its decisions. In some analyses by the team, the accuracy of the confidence score doubled when a human provided the score.
    The PNNL team presented its results at a recent meeting of the Human Factors and Ergonomics Society in Washington, D.C., part of a session on human-AI robot teaming.

    “If you didn’t develop the machine-learning algorithm in the first place, then it can seem like a black box,” said Corey Fallon, the lead author of the study and an expert in human-machine interaction. “In some cases, the decisions seem fine. In other cases, you might get a recommendation that is a real head-scratcher. You may not understand why it’s making the decisions it is.”
    The grid and AI
    It’s a dilemma that power engineers working with the electric grid face. Their decisions, based on reams of data that change every instant, keep the lights on and the nation running. But power engineers may be reluctant to turn over decision-making authority to machine-learning systems.
    “There are hundreds of research papers about the use of machine learning in power systems, but almost none of them are applied in the real world. Many operators simply don’t trust ML. They have domain experience — something that ML can’t learn,” said coauthor Tianzhixi “Tim” Yin.
    The researchers at PNNL, which has a world-class team modernizing the grid, took a closer look at one machine-learning algorithm applied to power systems. They trained the SVM (support-vector machine) algorithm on real data from the grid’s Eastern Interconnection in the U.S. The program looked at 124 events, deciding whether a generator was malfunctioning, or whether the data was showing other types of events that are less noteworthy.
    The algorithm was 85% reliable in its decisions. Many of its errors occurred when there were complex power bumps or frequency shifts. Confidence scores created with a human in the loop were a marked improvement over the system’s assessment of its own decisions. The human expert’s input predicted the algorithm’s decisions with much greater accuracy.

    More human, better machine learning
    Fallon and Yin call the new score an “Expert-Derived Confidence” score, or EDC score.
    They found that, on average, when humans weighed in on the data, their EDC scores predicted model behavior that the algorithm’s confidence scores couldn’t predict.
    “The human expert fills in gaps in the ML’s knowledge,” said Yin. “The human provides information that the ML did not have, and we show that that information is significant. The bottom line is that we’ve shown that if you add human expertise to the ML results, you get much better confidence.”
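    The summary does not give the formula behind the EDC score, but its logic can be sketched as follows: an expert spends a few hours reviewing the model’s past decisions, forms a judgment of how reliable the model is for each kind of event, and that judgment becomes the confidence attached to future recommendations of the same kind. The event categories and numbers below are illustrative assumptions, not values from the study.

      # Hedged sketch of an expert-derived confidence (EDC) lookup.
      # Categories and reliability values are invented for illustration.

      # After reviewing labelled examples, the expert records how reliable
      # the model seemed on each type of grid event.
      expert_confidence_by_event_type = {
          "generator_malfunction": 0.95,  # model usually got these right
          "frequency_shift": 0.55,        # complex cases the model often missed
          "power_bump": 0.60,
      }

      def edc_score(event_type, default=0.5):
          """Expert-derived confidence for a new recommendation of this type."""
          return expert_confidence_by_event_type.get(event_type, default)

      print(edc_score("frequency_shift"))  # 0.55: treat this output warily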
    The work by Fallon and Yin was funded by PNNL through an initiative known as MARS — Mathematics for Artificial Reasoning in Science. MARS is part of a broader effort in artificial intelligence at PNNL. The initiative brought together Fallon, an expert on human-machine teaming and human factors research, and Yin, a data scientist and expert on machine learning.
    “This is the type of research needed to prepare and equip an AI-ready workforce,” said Fallon. “If people don’t trust the tool, then you’ve wasted your time and money. You’ve got to know what will happen when you take a machine learning model out of the laboratory and put it to work in the real world.
    “I’m a big fan of human expertise and of human-machine teaming. Our EDC scores allow the human to better assess the situation and make the ultimate decision.” More

  • Gold now has a golden future in revolutionizing wearable devices

    Top Olympic achievers are awarded the gold medal, a symbol revered for wealth and honor both in the East and the West. This metal also serves as a key element in diverse fields due to its stability in air, exceptional electrical conductivity, and biocompatibility. It’s highly favored in medical and energy sectors as the ‘preferred catalyst’ and is increasingly finding application in cutting-edge wearable technologies.
    A research team led by Professor Sei Kwang Hahn and Dr. Tae Yeon Kim from the Department of Materials Science and Engineering at Pohang University of Science and Technology (POSTECH) developed an integrated wearable sensor device that effectively measures and processes two bio-signals simultaneously. Their research findings were featured in Advanced Materials, a leading international journal in the materials field.
    Wearable devices, available in various forms like attachments and patches, play a pivotal role in detecting physical, chemical, and electrophysiological signals for disease diagnosis and management. Recent strides in research focus on devising wearables capable of measuring multiple bio-signals concurrently. However, a major challenge has been the disparate materials needed for each signal measurement, leading to interface damage, complex fabrication, and reduced device stability. Additionally, analyzing these varied signals requires further signal-processing systems and algorithms.
    The team tackled this challenge using various shapes of gold (Au) nanowires. While silver (Ag) nanowires, known for their extreme thinness, lightness, and conductivity, are commonly used in wearable devices, the team fused them with gold. Initially, they developed bulk gold nanowires by coating the exterior of the silver nanowires, suppressing the galvanic phenomenon. Subsequently, they created hollow gold nanowires by selectively etching the silver from the gold-coated nanowires. The bulk gold nanowires responded sensitively to temperature variations, whereas the hollow gold nanowires showed high sensitivity to minute changes in strain.
    These nanowires were then patterned onto a substrate made of styrene-ethylene-butylene-styrene (SEBS) polymer, seamlessly integrated without separations. By leveraging two types of gold nanowires, each with distinct properties, they engineered an integrated sensor capable of measuring both temperature and strain. Additionally, they engineered a logic circuit for signal analysis, utilizing the negative gauge factor resulting from introducing micrometer-scale corrugations into the pattern. This approach led to the successful creation of an intelligent wearable device system that not only captures but also analyzes signals simultaneously, all using a single material of Au.
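    For reference, the strain sensitivity mentioned here is usually expressed as the gauge factor, the relative change in resistance per unit of applied strain; a negative gauge factor simply means the resistance falls as the material is stretched. In standard notation (not taken from the paper’s own equations):

      GF = \frac{\Delta R / R_0}{\varepsilon}, \qquad GF < 0 \;\Rightarrow\; \text{resistance decreases with increasing strain}

    where R_0 is the unstrained resistance, \Delta R the resistance change under strain, and \varepsilon the applied strain.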
    The team’s sensors exhibited remarkable performance in detecting subtle muscle tremors, identifying heartbeat patterns, recognizing speech through vocal cord tremors, and monitoring changes in body temperature. Notably, these sensors maintained high stability without causing damage to the material interfaces. Their flexibility and excellent stretchability enabled them to conform to curved skin seamlessly.
    Professor Sei Kwang Hahn stated, “This research underscores the potential for the development of a futuristic bioelectronics platform capable of analyzing a diverse range of bio-signals.” He added, “We envision new prospects across various industries including healthcare and integrated electronic systems.”
    The research was sponsored by the Basic Research Program and the Biomedical Technology Development Program of the National Research Foundation of Korea, and POSCO Holdings. More