More stories

  • Public have no difficulty getting to grips with an extra thumb, study finds

    Cambridge researchers have shown that members of the public have little trouble in learning very quickly how to use a third thumb — a controllable, prosthetic extra thumb — to pick up and manipulate objects.
    The team tested the robotic device on a diverse range of participants, which they say is essential for ensuring new technologies are inclusive and can work for everyone.
    An emerging area of future technology is motor augmentation — using motorised wearable devices such as exoskeletons or extra robotic body parts to advance our motor capabilities beyond current biological limitations.
    While such devices could improve the quality of life for healthy individuals who want to enhance their productivity, the same technologies can also provide people with disabilities new ways to interact with their environment.
    Professor Tamar Makin from the Medical Research Council (MRC) Cognition and Brain Sciences Unit at the University of Cambridge said: “Technology is changing our very definition of what it means to be human, with machines increasingly becoming a part of our everyday lives, and even our minds and bodies.
    “These technologies open up exciting new opportunities that can benefit society, but it’s vital that we consider how they can help all people equally, especially marginalised communities who are often excluded from innovation research and development. To ensure everyone will have the opportunity to participate and benefit from these exciting advances, we need to explicitly integrate and measure inclusivity during the earliest possible stages of the research and development process.”
    Dani Clode, a collaborator within Professor Makin’s lab, has developed the Third Thumb, an extra robotic thumb aimed at increasing the wearer’s range of movement, enhancing their grasping capability and expanding the carrying capacity of the hand. This allows the user to perform tasks that might be otherwise challenging or impossible to complete with one hand or to perform complex multi-handed tasks without having to coordinate with other people.

    The Third Thumb is worn on the opposite side of the palm to the biological thumb and controlled by a pressure sensor placed under each big toe or foot. Pressure from the right toe pulls the Thumb across the hand, while the pressure exerted with the left toe pulls the Thumb up toward the fingers. The extent of the Thumb’s movement is proportional to the pressure applied, and releasing pressure moves it back to its original position.
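    As a rough picture of this control scheme, the sketch below maps two toe-pressure readings proportionally onto the Thumb’s two movement axes. The sensor range, axis names and clamping behaviour are assumptions for illustration, not the device’s actual firmware.

```python
# Illustrative sketch of the two-toe control mapping described above.
# Sensor range, axis names and clamping behaviour are assumptions.

def thumb_command(right_toe_pressure: float, left_toe_pressure: float,
                  max_pressure: float = 1.0) -> dict:
    """Map toe pressures to actuator set-points in [0, 1].

    right toe -> movement across the hand
    left toe  -> movement up toward the fingers
    Releasing pressure returns each axis to 0, the resting position.
    """
    clamp = lambda p: min(max(p, 0.0), max_pressure) / max_pressure
    return {"across_hand": clamp(right_toe_pressure),
            "toward_fingers": clamp(left_toe_pressure)}

print(thumb_command(0.5, 0.0))  # half pressure on the right toe only
```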
    In 2022, the team had the opportunity to test the Third Thumb at the annual Royal Society Summer Science Exhibition, where members of the public of all ages were able to use the device during different tasks. The results are published today in Science Robotics.
    Over the course of five days, the team tested 596 participants, ranging in age from three to 96 years old and from a wide range of demographic backgrounds. Of these, only four were unable to use the Third Thumb, either because it did not fit their hand securely, or because they were unable to control it with their feet (the pressure sensors developed specifically for the exhibition were not suitable for very lightweight children).
    Participants were given up to a minute to familiarise themselves with the device, during which time the team explained how to perform one of two tasks.
    The first task involved picking up pegs from a pegboard one at a time with just the Third Thumb and placing them in a basket. Participants were asked to move as many pegs as possible in 60 seconds. 333 participants completed this task.
    The second task involved using the Third Thumb together with the wearer’s biological hand to manipulate and move five or six different foam objects. The objects came in various shapes, each requiring a different manipulation, which increased the dexterity demanded by the task. Again, participants were asked to move as many objects as they could into the basket within a maximum of 60 seconds. 246 participants completed this task.

    Almost everyone was able to use the device straightaway. 98% of participants were able to successfully manipulate objects using the Third Thumb during the first minute of use, with only 13 participants unable to perform the task.
    Ability levels between participants were varied, but there were no differences in performance between genders, nor did handedness change performance — despite the Thumb always being worn on the right hand. There was no definitive evidence that people who might be considered ‘good with their hands’ — for example, they were learning to play a musical instrument, or their jobs involved manual dexterity — were any better at the tasks.
    Older and younger adults had a similar level of ability when using the new technology, though further investigation just within the older adults age bracket revealed a decline in performance with increasing age. The researchers say this effect could be due to the general degradation in sensorimotor and cognitive abilities that are associated with ageing and may also reflect a generational relationship to technology.
    Performance was generally poorer among younger children. Six of the 13 participants who could not complete the task were under 10 years old, and among those who did complete the task, the youngest children tended to perform worse than older children. But even older children (aged 12-16 years) struggled more than young adults.
    Dani said: “Augmentation is about designing a new relationship with technology — creating something that extends beyond being merely a tool to becoming an extension of the body itself. Given the diversity of bodies, it’s crucial that the design stage of wearable technology is as inclusive as possible. It’s equally important that these devices are accessible and functional for a wide range of users. Additionally, they should be easy for people to learn and use quickly.”
    Co-author Lucy Dowdall, also from the MRC Cognition and Brain Sciences Unit, added: “If motor augmentation — and even broader human-machine interactions — are to be successful, they’ll need to integrate seamlessly with the user’s motor and cognitive abilities. We’ll need to factor in different ages, genders, weight, lifestyles, disabilities — as well as people’s cultural, financial backgrounds, and even likes or dislikes of technology. Physical testing of large and diverse groups of individuals is essential to achieve this goal.”
    There are countless examples where a lack of inclusive design considerations has led to technological failure: automated speech recognition systems that convert spoken language to text have been shown to perform better on white voices than on Black voices. Some augmented reality technologies have been found to be less effective for users with darker skin tones. Women face a higher health risk from car accidents, because car seats and seatbelts have been designed primarily around ‘average’ male-sized dummies during crash testing. Hazardous power and industrial tools designed for right-hand-dominant use or grip have resulted in more accidents when operated by left-handers forced to use their non-dominant hand.
    This research was funded by the European Research Council, Wellcome, the Medical Research Council and the Engineering and Physical Sciences Research Council.

  • Tracking animals without markers in the wild

    Researchers from the Cluster of Excellence Collective Behaviour developed a computer vision framework for posture estimation and identity tracking which they can use in indoor environments as well as in the wild. They have thus taken an important step towards markerless tracking of animals in the wild using computer vision and machine learning.
    Two pigeons are pecking grains in a park in Konstanz. A third pigeon flies in. There are four cameras in the immediate vicinity. Doctoral students Alex Chan and Urs Waldmann from the Cluster of Excellence Collective Behaviour at the University of Konstanz are filming the scene. After an hour, they return with the footage to their office to analyze it with a computer vision framework for posture estimation and identity tracking. The framework detects and draws a box around all pigeons. It records central body parts and determines their posture, their position, and their interaction with the other pigeons around them. All of this happens without any markers being attached to the pigeons and without a human having to be called in to help. This would not have been possible just a few years ago.
    3D-MuPPET, a framework to estimate and track 3D poses of up to 10 pigeons
    Markerless methods for animal posture tracking have been rapidly developed recently, but frameworks and benchmarks for tracking large animal groups in 3D are still lacking. To overcome this gap, researchers from the Cluster of Excellence Collective Behaviour at the University of Konstanz and the Max Planck Institute of Animal Behavior present 3D-MuPPET, a framework to estimate and track 3D poses of up to 10 pigeons at interactive speed using multiple camera views. The related publication was recently published in the International Journal of Computer Vision (IJCV).
    Important milestone in animal posture tracking and automatic behavioural analysis
    Urs Waldmann and Alex Chan recently finalized a new method, called 3D-MuPPET, which stands for 3D Multi-Pigeon Pose Estimation and Tracking. 3D-MuPPET is a computer vision framework for posture estimation and identity tracking for up to 10 individual pigeons from 4 camera views, based on data collected both in captive environments and even in the wild. “We trained a 2D keypoint detector and triangulated points into 3D, and also show that models trained on single pigeon data work well with multi-pigeon data,” explains Urs Waldmann. This is a first example of 3D animal posture tracking for an entire group of up to 10 individuals. Thus, the new framework provides a concrete method for biologists to create experiments and measure animal posture for automatic behavioural analysis. “This framework is an important milestone in animal posture tracking and automatic behavioural analysis,” as Alex Chan and Urs Waldmann say.
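    To illustrate the 2D-to-3D step Waldmann describes, the sketch below performs standard linear (DLT) triangulation of a single keypoint seen by several calibrated cameras. The projection matrices and pixel coordinates are placeholders; the real 3D-MuPPET pipeline also handles detection, identity tracking and multi-animal association.

```python
# Minimal sketch of multi-view triangulation (direct linear transform).
# Camera matrices and pixel coordinates are illustrative placeholders.
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """proj_mats: list of 3x4 camera projection matrices, one per view.
    points_2d: list of (u, v) pixel coordinates of the same keypoint.
    Returns the 3D point minimising the algebraic reprojection error."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])   # two linear constraints per view
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean coordinates

# Synthetic example with two simple cameras observing the point (2, 1, 4):
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_point([P0, P1], [(0.5, 0.25), (0.25, 0.25)]))  # ~[2, 1, 4]
```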
    Framework can be used in the wild
    In addition to tracking pigeons indoors, the framework has also been extended to pigeons in the wild. “Using a model that can identify the outline of any object in an image, called the Segment Anything Model, we further trained a 2D keypoint detector with a masked pigeon from the captive data, then applied the model to pigeon videos outdoors without any extra model finetuning,” states Alex Chan. 3D-MuPPET presents one of the first case studies on how to transition from tracking animals in captivity towards tracking animals in the wild, allowing fine-scaled behaviours of animals to be measured in their natural habitats. The developed methods could potentially be applied to other species in future work, with potential applications in large-scale collective behaviour research and non-invasive species monitoring.
    3D-MuPPET showcases a powerful and flexible framework for researchers who would like to use 3D posture reconstruction for multiple individuals to study collective behaviour in any environment or species. As long as a multi-camera setup and a 2D posture estimator are available, the framework can be applied to track the 3D postures of any animal.

  • Research finds improving AI large language models helps better align with human brain activity

    With generative artificial intelligence (GenAI) transforming the social interaction landscape in recent years, large language models (LLMs), which use deep-learning algorithms to train GenAI platforms to process language, have been put in the spotlight. A recent study by The Hong Kong Polytechnic University (PolyU) found that LLMs perform more like the human brain when they are trained in ways that more closely resemble how humans process language, offering important insights for brain studies and the development of AI models.
    Current large language models (LLMs) mostly rely on a single type of pretraining — contextual word prediction. This simple learning strategy has achieved surprising success when combined with massive training data and model parameters, as shown by popular LLMs such as ChatGPT. Recent studies also suggest that word prediction in LLMs can serve as a plausible model for how humans process language. However, humans do not simply predict the next word but also integrate high-level information in natural language comprehension.
    A research team led by Prof. LI Ping, Dean of the Faculty of Humanities and Sin Wai Kin Foundation Professor in Humanities and Technology at PolyU, has incorporated the next sentence prediction (NSP) task into model pretraining. NSP simulates a central process of discourse-level comprehension in the human brain: evaluating whether a pair of sentences is coherent. The team then examined the correlation between the model’s data and brain activation. The study has been recently published in the academic journal Science Advances.
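    To make the NSP objective concrete, the snippet below scores the coherence of a sentence pair with an off-the-shelf BERT model, assuming the open-source Hugging Face transformers library and a pretrained checkpoint are available; it only illustrates the task, not the models trained in the PolyU study.

```python
# Illustration only: scoring sentence-pair coherence with BERT's NSP head.
# This is not the model trained in the PolyU study.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

first = "The storm knocked out power across the city."
second = "Residents lit candles and waited for repairs."   # a coherent follow-up

inputs = tokenizer(first, second, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape (1, 2)

# Index 0 = "the second sentence follows the first", index 1 = "it does not".
prob_coherent = torch.softmax(logits, dim=-1)[0, 0].item()
print(f"P(coherent continuation) = {prob_coherent:.2f}")
```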
    The research team trained two models, one with NSP enhancement and the other without, both of which also learned word prediction. Functional magnetic resonance imaging (fMRI) data were collected from people reading connected or disconnected sentences. The team then examined how closely the patterns from each model matched up with the brain patterns in the fMRI data.
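    The comparison step can be pictured as correlating a model-derived signal with measured fMRI responses, voxel by voxel. The sketch below uses random stand-in data and a simple Pearson correlation as an assumed proxy for the study’s actual analysis.

```python
# Toy model-brain alignment: correlate a model signal with each voxel's response.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 5000
model_signal = rng.standard_normal(n_timepoints)      # e.g. a feature from the NSP-trained model
fmri = rng.standard_normal((n_timepoints, n_voxels))   # measured BOLD responses

def pearson_per_voxel(x, Y):
    x = (x - x.mean()) / x.std()
    Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)
    return (x @ Y) / len(x)                             # one correlation per voxel

r = pearson_per_voxel(model_signal, fmri)
print("best-matching voxel correlation:", r.max())
```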
    It was clear that training with NSP provided benefits. The model with NSP matched human brain activity in multiple areas much better than the model trained only on word prediction. Its mechanism also nicely maps onto established neural models of human discourse comprehension. The results gave new insights into how our brains process full discourse such as conversations. For example, parts of the right side of the brain, not just the left, helped understand longer discourse. The model trained with NSP could also better predict how fast someone read — showing that simulating discourse comprehension through NSP helped AI understand humans better.
    Recent LLMs, including ChatGPT, have relied on vastly increasing the training data and model size to achieve better performance. Prof. Li Ping said, “There are limitations in just relying on such scaling. Advances should also be aimed at making the models more efficient, relying on less rather than more data. Our findings suggest that diverse learning tasks such as NSP can improve LLMs to be more human-like and potentially closer to human intelligence.”
    He added, “More importantly, the findings show how neurocognitive researchers can leverage LLMs to study higher-level language mechanisms of our brain. They also promote interaction and collaboration between researchers in the fields of AI and neurocognition, which will lead to future studies on AI-informed brain studies as well as brain-inspired AI.”

  • Close to 1 in 2 surveyed say they would use air taxis in the future

    A study by researchers from Nanyang Technological University, Singapore (NTU Singapore) has found that Singaporeans are open to riding air taxis, which are small autonomous aircraft that carry passengers over short distances. Through a study of 1,002 participants, the NTU Singapore team found that almost half (45.7 per cent) say they intend to use this mode of transport when it becomes available, with over one-third (36.2 per cent) planning to do so regularly.
    According to the findings published online in the journal Technology in Society in April, the intention to take autonomous air taxis is associated with factors such as trust in the AI technology deployed in air taxis, hedonic motivation (the fun or pleasure derived from using technology), performance expectancy (the degree to which users expect that using the system will benefit them), and news media attention (the amount of attention paid to news about air taxis).
    Air taxis and autonomous drone services are close to becoming a reality: China’s aviation authority issued its first safety approval certification last year to a Chinese drone maker for trial operations, and in Europe, authorities are working to certify air taxis safe to serve passengers at the Paris Olympics this year.
    For Singapore, which is looking to become a base for air taxi companies[1], the study findings could help the sector achieve lift-off, said the research team from NTU’s Wee Kim Wee School of Communication and Information (WKWSCI) led by Professor Shirley Ho. Professor Ho, who is also NTU’s Associate Vice President for Humanities, Social Sciences & Research Communication, said: “Even though air taxis have yet to be deployed in Singapore, close to half of those surveyed said they would be keen to take one. This signifies a positive step forward for a nascent technology. Our study represents a significant step forward in understanding the factors that influence one’s intention to take air taxis. Insights into the public perception of air taxis will enable policymakers and tech developers to design targeted interventions that encourage air taxi use as they look to build up an air taxi industry in Singapore.”
    The study aligns with NTU’s goal of pursuing research aligned with national priorities and with the potential for significant intellectual and societal impact, as articulated in the NTU 2025 five-year strategic plan.
    How the study was conducted
    To gauge the public perception of air taxis, the NTU WKWSCI team surveyed 1,002 Singaporeans and permanent residents, drawing on a validated model[2] that measures technology acceptance and use and the factors driving this behaviour.

    Participants were asked to score on a five-point scale in response to various statements about factors such as their trust in the AI system used in air taxis, their attention to news reports on air taxis, their perceived ease of use and usefulness of air taxis, as well as their attitudes and intention to take air taxis in the future.
    The scores for each participant were then tabulated and used in statistical analyses to find out how these factors related to the participant’s intention to take air taxis.
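    As a toy illustration of this step, the sketch below regresses a simulated intention score on simulated five-point scale factor scores with ordinary least squares; the factor names, data and method are placeholders rather than the NTU team’s actual statistical model.

```python
# Toy version of the scoring-and-regression step with simulated Likert data.
import numpy as np

rng = np.random.default_rng(1)
n = 1002
factors = ["trust_in_AI", "hedonic_motivation", "performance_expectancy", "news_attention"]
X = rng.integers(1, 6, size=(n, len(factors))).astype(float)     # 1-5 scale scores
intention = X @ np.array([0.30, 0.25, 0.20, 0.15]) + rng.normal(0, 0.5, n)

A = np.column_stack([np.ones(n), X])                              # add an intercept column
coef, *_ = np.linalg.lstsq(A, intention, rcond=None)              # ordinary least squares
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:>22}: {b:+.3f}")
```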
    “Generally positive” sentiment about air taxis
    Upon tabulating the scores, the researchers found that sentiments around air taxis are generally positive among the participants. Almost half (45.7 per cent) said they intend to use this mode of transport when it becomes available. Close to four in 10 (36.2 per cent) said they plan to do so regularly.
    Close to six in 10 (57 per cent) thought taking air taxis would be fun, and 53 per cent said they were excited about taking air taxis.
    Six in 10 (60.9 per cent) agreed that taking air taxis would help to get things done more quickly, and 61.2 per cent believed that it would increase productivity.

    Half the participants also trusted the competency of the AI technology used in air taxis, and the AI engineers building the technology. Five in 10 (52.9 per cent) agreed that the AI system in air taxis would be competent and effective at helping to transport people.
    Factors that predict air taxi use
    Upon conducting statistical analyses on the survey data, the researchers found that the following factors directly impacted participants’ intention to take air taxis: news media attention; trust in the AI system used in air taxis; attitude towards air taxis; performance expectancy; hedonic motivation; price value; social influence; habit (the perception that taking air taxis could become a habit).
    These findings suggest that when Singaporeans consider whether they would use autonomous air taxis, not only do they value the practical aspects of the technology, but also how much they can trust the AI system, said NTU WKWSCI’s PhD student Justin Cheung, a co-author of the study.
    Surprisingly, habit was the most robust predictor of people’s intention to use air taxis, despite the relatively smaller number of participants who agreed that taking the vehicles would become a habit for them, he said. This suggests that while the user base for autonomous passenger drones may be small, it could be a loyal one, he added.
    Another robust predictor of use intention was attention to news media. In addition, the researchers found that news media attention could shape intentions to use air taxis and attitudes towards them by influencing trust in the AI systems, as well as the engineers who develop the AI systems behind air taxis.
    Prof Ho said: “When technologies are yet to be deployed in the public sphere, news media offers the main and, in many instances, the only source of information for members of the public. Our findings suggest that policymakers could leverage positive news media reporting when introducing air taxis to shape public perceptions and thereby use intention.”
    Credibility affects trust in media reports on AI technology
    These findings build on a study authored by Prof Ho and WKWSCI research fellow Goh Tong Jee. Published online in the journal Science Communication in May, the study identified considerations that could affect the public’s trust in media organisations, policymakers and tech developers that introduce AI in autonomous vehicles (AVs).
    Through six focus group discussions with 56 drivers and non-drivers, the researchers found that media credibility is a foundation upon which the public would evaluate the trustworthiness of media organisations.
    The focus group discussion participants said they would consider qualities such as balance, comprehensiveness, persuasiveness and objectivity of media organisations when assessing their ability to create quality content.
    The researchers also found that non-drivers raised more qualities than drivers regarding trust in media organisations. The researchers attributed this observation to the enthusiasm non-drivers could have over the prospective use of AVs, which drove the non-drivers’ tendency to seek information.
    Some qualities raised only by non-drivers during the focus group discussions include a media organisation’s ability to spur discussions on whether AV is a need or a want. Another consideration is a media organisation’s ability to create varied content.
    Non-drivers also shared their expectations that media organisations should be transparent and reveal “unflattering” information in the public’s interest during crises, even if it means affecting the reputation of policymakers or tech developers.
    The findings from these two studies reaffirm the need for accurate and balanced reporting on AVs such as air taxis, due to the role news media can play in shaping public perception, and the public’s expectations of media organisations, said Prof Ho.
    Prof Ho added: “The two studies highlight the importance for media organisations to translate emerging scientific evidence accurately to facilitate informed decision-making. Given the speed at which innovative technologies emerge in the age of digitalisation, accurate science communication has never been more crucial.”
    Notes:
    [1] Singapore aims to become base for air taxi firms; CAAS working with regional counterparts on guidelines, CNA
    [2] This model, called the Unified Theory of Acceptance and Use of Technology 2, is a validated technology acceptance model that aims to explain user intentions to use an information system and subsequent usage behaviour.

  • Charge your laptop in a minute or your EV in 10? Supercapacitors can help

    Imagine if your dead laptop or phone could charge in a minute or if an electric car could be fully powered in 10 minutes.
    While not possible yet, new research by a team of CU Boulder scientists could potentially lead to such advances.
    In a study published today in the Proceedings of the National Academy of Sciences, researchers in Ankur Gupta’s lab discovered how tiny charged particles, called ions, move within a complex network of minuscule pores. The breakthrough could lead to the development of more efficient energy storage devices, such as supercapacitors, said Gupta, an assistant professor of chemical and biological engineering.
    “Given the critical role of energy in the future of the planet, I felt inspired to apply my chemical engineering knowledge to advancing energy storage devices,” Gupta said. “It felt like the topic was somewhat underexplored and as such, the perfect opportunity.”
    Gupta explained that several chemical engineering techniques are used to study flow in porous materials such as oil reservoirs and water filtration, but they have not been fully utilized in some energy storage systems.
    The discovery is significant not only for storing energy in vehicles and electronic devices but also for power grids, where fluctuating energy demand requires efficient storage to avoid waste during periods of low demand and to ensure rapid supply during high demand.
    Supercapacitors, energy storage devices that rely on ion accumulation in their pores, have rapid charging times and longer life spans compared to batteries.

    “The primary appeal of supercapacitors lies in their speed,” Gupta said. “So how can we make their charging and release of energy faster? By the more efficient movement of ions.”
    Their findings modify Kirchhoff’s law, which has governed current flow in electrical circuits since 1845 and is a staple in high school students’ science classes. Unlike electrons, ions move due to both electric fields and diffusion, and the researchers determined that their movements at pore intersections are different from what was described in Kirchhoff’s law.
    Prior to this study, ion movement was described in the literature only for a single straight pore. With this research, ion movement through a complex network of thousands of interconnected pores can be simulated and predicted in a few minutes.
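    One way to picture the difference is with a drift-diffusion (Nernst-Planck-style) flux: the ionic flux in each pore has both an electric-field term and a diffusion term, and it is the total of these fluxes that must balance at a pore intersection. The sketch below is a schematic illustration under that assumption, not the model published in the paper.

```python
# Schematic drift-diffusion flux in a single pore, plus a flux balance at a
# pore junction. All numbers are arbitrary illustrative values.

def ionic_flux(D, z, c, dc_dx, dphi_dx, thermal_voltage=0.0259):
    """D: diffusivity, z: valence, c: local concentration,
    dc_dx: concentration gradient, dphi_dx: electric potential gradient."""
    drift = -(D * z / thermal_voltage) * c * dphi_dx   # migration in the electric field
    diffusion = -D * dc_dx                             # Fickian diffusion
    return drift + diffusion

# At a junction of three pores, the incoming ionic fluxes must balance:
fluxes_into_node = [ionic_flux(1e-9, +1, c, g_c, g_phi)
                    for c, g_c, g_phi in [(1.0, -0.20, 0.01),
                                          (0.8,  0.10, -0.02),
                                          (1.2,  0.05, 0.00)]]
print("net flux into the junction:", sum(fluxes_into_node))
```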
    “That’s the leap of the work,” Gupta said. “We found the missing link.”

  • AI headphones let wearer listen to a single person in a crowd, by looking at them just once

    Noise-canceling headphones have gotten very good at creating an auditory blank slate. But allowing certain sounds from a wearer’s environment through the erasure still challenges researchers. The latest edition of Apple’s AirPods Pro, for instance, automatically adjusts sound levels for wearers — sensing when they’re in conversation, for instance — but the user has little control over whom to listen to or when this happens.
    A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.
    The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. The code for the proof-of-concept device is available for others to build on. The system is not commercially available.
    “We tend to think of AI now as web-based chatbots that answer questions,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking.”
    To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice then should reach the microphones on both sides of the headset simultaneously; there’s a 16-degree margin of error. The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.
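    The directional ‘enrollment’ step can be pictured as a time-difference-of-arrival check between the two earpiece microphones, sketched below with illustrative values for the sample rate, microphone spacing and angular threshold; the actual Target Speech Hearing system relies on learned neural models rather than this simple heuristic.

```python
# Hedged sketch: accept enrollment only if the source is roughly head-on,
# judged from the time-difference of arrival between the two microphones.
import numpy as np

def tdoa_samples(left: np.ndarray, right: np.ndarray) -> int:
    """Lag (in samples) at which the right channel best aligns with the left."""
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

def is_facing_speaker(left, right, fs=16000, mic_spacing_m=0.18, max_angle_deg=16):
    speed_of_sound = 343.0
    max_lag = (mic_spacing_m / speed_of_sound) * np.sin(np.radians(max_angle_deg)) * fs
    return abs(tdoa_samples(left, right)) <= max_lag

# Identical signals on both channels imply a source straight ahead:
sig = np.random.default_rng(2).standard_normal(16000)
print(is_facing_speaker(sig, sig))   # True
```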
    The team tested its system on 21 subjects, who rated the clarity of the enrolled speaker’s voice nearly twice as high as the unfiltered audio on average.
    This work builds on the team’s previous “semantic hearing” research, which allowed users to select specific sound classes — such as birds or voices — that they wanted to hear and canceled other sounds in the environment.
    Currently the TSH system can enroll only one speaker at a time, and it’s only able to enroll a speaker when there is not another loud voice coming from the same direction as the target speaker’s voice. If a user isn’t happy with the sound quality, they can run another enrollment on the speaker to improve the clarity.
    The team is working to expand the system to earbuds and hearing aids in the future.
    Additional co-authors on the paper were Bandhav Veluri, Malek Itani and Tuochao Chen, UW doctoral students in the Allen School, and Takuya Yoshioka, director of research at AssemblyAI. This research was funded by a Moore Inventor Fellow award, a Thomas J. Cabel Endowed Professorship and a UW CoMotion Innovation Gap Fund.

  • More than spins: Exploring uncharted territory in quantum devices

    Many of today’s quantum devices rely on collections of qubits, also called spins. These quantum bits have only two energy levels, the ‘0’ and the ‘1’. However, spins in real devices also interact with light and vibrations known as bosons, greatly complicating calculations. In a new publication in Physical Review Letters, researchers in Amsterdam demonstrate a way to describe spin-boson systems and use this to efficiently configure quantum devices in a desired state.
    Quantum devices use the quirky behaviour of quantum particles to perform tasks that go beyond what ‘classical’ machines can do, including quantum computing, simulation, sensing, communication and metrology. These devices can take many forms, such as a collection of superconducting circuits, or a lattice of atoms or ions held in place by lasers or electric fields.
    Regardless of their physical realisation, quantum devices are typically described in simplified terms as a collection of interacting two-level quantum bits or spins. However, these spins also interact with other things in their surroundings, such as light in superconducting circuits or oscillations in the lattice of atoms or ions. Particles of light (photons) and vibrational modes of a lattice (phonons) are examples of bosons.
    Unlike spins, which have only two possible energy levels (‘0’ or ‘1’), the number of levels for each boson is infinite. Consequently, there are very few computational tools for describing spins coupled to bosons. In their new work, physicists Liam Bond, Arghavan Safavi-Naini and Jiří Minář of the University of Amsterdam, QuSoft and Centrum Wiskunde & Informatica work around this limitation by describing systems composed of spins and bosons using so-called non-Gaussian states. Each non-Gaussian state is a combination (a superposition) of much simpler Gaussian states.
    Each blue-red pattern in the image above represents a possible quantum state of the spin-boson system. “A Gaussian state would look like a plain red circle, without any interesting blue-red patterns,” explains PhD candidate Liam Bond. An example of a Gaussian state is laser light, in which all light-waves are perfectly in sync. “If we take many of these Gaussian states and start overlapping them (so that they’re in a superposition), these beautifully intricate patterns emerge. We were particularly excited because these non-Gaussian states allow us to retain a lot of the powerful mathematical machinery that exists for Gaussian states, whilst enabling us to describe a far more diverse set of quantum states.”
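    A minimal way to reproduce such a pattern on a computer, assuming the open-source QuTiP library is available, is to superpose two coherent states (simple Gaussian states of a single bosonic mode) and compute the Wigner function of the result; its negative regions mark the non-Gaussian interference fringes. The parameters below are purely illustrative.

```python
# Sketch using QuTiP: superposing two Gaussian (coherent) states of one
# bosonic mode gives a non-Gaussian "cat" state with interference fringes.
import numpy as np
import qutip

N = 40                                    # Fock-space cutoff for the bosonic mode
alpha = 2.0
cat = (qutip.coherent(N, alpha) + qutip.coherent(N, -alpha)).unit()

xvec = np.linspace(-5, 5, 200)
W = qutip.wigner(cat, xvec, xvec)         # negative values signal non-Gaussianity
print("most negative Wigner value:", W.min())
```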
    Bond continues: “There are so many possible patterns that classical computers often struggle to compute and process them. Instead, in this publication we use a method that identifies the most important of these patterns and ignores the others. This lets us study these quantum systems, and design new ways of preparing interesting quantum states.”
    The new approach can be exploited to efficiently prepare quantum states in a way that outperforms other traditionally used protocols. “Fast quantum state preparation might be useful for a wide range of applications, such as quantum simulation or even quantum error correction,” notes Bond. The researchers also demonstrate that they can use non-Gaussian states to prepare ‘critical’ quantum states which correspond to a system undergoing a phase transition. In addition to fundamental interest, such states can greatly enhance the sensitivity of quantum sensors.
    While these results are very encouraging, they are only a first step towards more ambitious goals. So far, the method has been demonstrated for a single spin. A natural, but challenging extension is to include many spins and many bosonic modes at the same time. A parallel direction is to account for the effects of the environment disturbing the spin-boson systems. Both of these approaches are under active development.

  • Imperceptible sensors made from ‘electronic spider silk’ can be printed directly on human skin

    Researchers have developed a method to make adaptive and eco-friendly sensors that can be directly and imperceptibly printed onto a wide range of biological surfaces, whether that’s a finger or a flower petal.
    The method, developed by researchers from the University of Cambridge, takes its inspiration from spider silk, which can conform and stick to a range of surfaces. These ‘spider silks’ also incorporate bioelectronics, so that different sensing capabilities can be added to the ‘web’.
    The fibres, at least 50 times smaller than a human hair, are so lightweight that the researchers printed them directly onto the fluffy seedhead of a dandelion without collapsing its structure. When printed on human skin, the fibre sensors conform to the skin and expose the sweat pores, so the wearer doesn’t detect their presence. Tests of the fibres printed onto a human finger suggest they could be used as continuous health monitors.
    This low-waste and low-emission method for augmenting living structures could be used in a range of fields, from healthcare and virtual reality, to electronic textiles and environmental monitoring. The results are reported in the journal Nature Electronics.
    Although human skin is remarkably sensitive, augmenting it with electronic sensors could fundamentally change how we interact with the world around us. For example, sensors printed directly onto the skin could be used for continuous health monitoring, for understanding skin sensations, or could improve the sensation of ‘reality’ in gaming or virtual reality applications.
    While wearable technologies with embedded sensors, such as smartwatches, are widely available, these devices can be uncomfortable, obtrusive and can inhibit the skin’s intrinsic sensations.
    “If you want to accurately sense anything on a biological surface like skin or a leaf, the interface between the device and the surface is vital,” said Professor Yan Yan Shery Huang from Cambridge’s Department of Engineering, who led the research. “We also want bioelectronics that are completely imperceptible to the user, so they don’t in any way interfere with how the user interacts with the world, and we want them to be sustainable and low waste.”
    There are multiple methods for making wearable sensors, but these all have drawbacks. Flexible electronics, for example, are normally printed on plastic films that don’t allow gas or moisture to pass through, so it would be like wrapping your skin in cling film. Other researchers have recently developed flexible electronics that are gas-permeable, like artificial skins, but these still interfere with normal sensation, and rely on energy- and waste-intensive manufacturing techniques.

    3D printing is another potential route for bioelectronics since it is less wasteful than other production methods, but leads to thicker devices that can interfere with normal behaviour. Spinning electronic fibres results in devices that are imperceptible to the user, but without a high degree of sensitivity or sophistication, and they’re difficult to transfer onto the object in question.
    Now, the Cambridge-led team has developed a new way of making high-performance bioelectronics that can be customised to a wide range of biological surfaces, from a fingertip to the fluffy seedhead of a dandelion, by printing them directly onto that surface. Their technique takes its inspiration in part from spiders, who create sophisticated and strong web structures adapted to their environment, using minimal material.
    The researchers spun their bioelectronic ‘spider silk’ from PEDOT:PSS (a biocompatible conducting polymer), hyaluronic acid and polyethylene oxide. The high-performance fibres were produced from water-based solution at room temperature, which enabled the researchers to control the ‘spinnability’ of the fibres. The researchers then designed an orbital spinning approach to allow the fibres to morph to living surfaces, even down to microstructures such as fingerprints.
    Tests of the bioelectronic fibres, on surfaces including human fingers and dandelion seedheads, showed that they provided high-quality sensor performance while remaining imperceptible to the host.
    “Our spinning approach allows the bioelectronic fibres to follow the anatomy of different shapes, at both the micro and macro scale, without the need for any image recognition,” said Andy Wang, the first author of the paper. “It opens up a whole different angle in terms of how sustainable electronics and sensors can be made. It’s a much easier way to produce large area sensors.”
    Most high-resolution sensors are made in an industrial cleanroom and require toxic chemicals in a multi-step and energy-intensive fabrication process. The Cambridge-developed sensors can be made anywhere and use a tiny fraction of the energy that regular sensors require.

    The bioelectronic fibres, which are repairable, can be simply washed away when they have reached the end of their useful lifetime, and generate less than a single milligram of waste: by comparison, a typical single load of laundry produces between 600 and 1500 milligrams of fibre waste.
    “Using our simple fabrication technique, we can put sensors almost anywhere and repair them where and when they need it, without needing a big printing machine or a centralised manufacturing facility,” said Huang. “These sensors can be made on-demand, right where they’re needed, and produce minimal waste and emissions.”
    The researchers say their devices could be used in applications from health monitoring and virtual reality, to precision agriculture and environmental monitoring. In future, other functional materials could be incorporated into this fibre printing method, to build integrated fibre sensors for augmenting the living systems with display, computation, and energy conversion functions. The research is being commercialised with the support of Cambridge Enterprise, the University’s commercialisation arm.
    The research was supported in part by the European Research Council, Wellcome, the Royal Society, and the Biotechnology and Biological Sciences Research Council (BBSRC), part of UK Research and Innovation (UKRI).