More stories

  • Google’s quantum computer reached an error-correcting milestone

    To shrink error rates in quantum computers, sometimes more is better. More qubits, that is.

    The quantum bits, or qubits, that make up a quantum computer are prone to mistakes that could render a calculation useless if not corrected. To reduce that error rate, scientists aim to build a computer that can correct its own errors. Such a machine would combine the powers of multiple fallible qubits into one improved qubit, called a “logical qubit,” that can be used to make calculations (SN: 6/22/20).  

    Scientists have now demonstrated a key milestone in quantum error correction: scaling up the number of qubits in a logical qubit can make it less error-prone, researchers at Google report February 22 in Nature.

    Future quantum computers could solve problems impossible for even the most powerful traditional computers (SN: 6/29/17). To build those mighty quantum machines, researchers agree that they’ll need to use error correction to dramatically shrink error rates. While scientists have previously demonstrated that they can detect and correct simple errors in small-scale quantum computers, error correction is still in its early stages (SN: 10/4/21).

    The new advance doesn’t mean researchers are ready to build a fully error-corrected quantum computer, “however, it does demonstrate that it is indeed possible, that error correction fundamentally works,” physicist Julian Kelly of Google Quantum AI said in a news briefing February 21.

    Quantum computers like Google’s require a dilution refrigerator (pictured) that can cool the quantum processor, installed at the bottom of the refrigerator, to frigid temperatures. Credit: Google Quantum AI

    Logical qubits store information redundantly in multiple physical qubits. That redundancy allows a quantum computer to check if any mistakes have cropped up and fix them on the fly. Ideally, the larger the logical qubit, the smaller the error rate should be. But if the original qubits are too faulty, adding in more of them will cause more problems than it solves.

    Using Google’s Sycamore quantum chip, the researchers studied two different sizes of logical qubits, one consisting of 17 qubits and the other of 49 qubits. After making steady improvements to the performance of the original physical qubits that make up the device, the researchers tallied up the errors that still slipped through. The larger logical qubit had a lower error rate, about 2.9 percent per round of error correction, compared to the smaller logical qubit’s rate of about 3.0 percent, the researchers found.

    That small improvement suggests scientists are finally tiptoeing into the regime where error correction can begin to squelch errors by scaling up. “It’s a major goal to achieve,” says physicist Andreas Wallraff of ETH Zurich, who was not involved with the research.

    However, the result is only on the cusp of showing that error correction improves as scientists scale up. A computer simulation of the quantum computer’s performance suggests that, if the logical qubit’s size were increased even more, its error rate would actually get worse. Additional improvement to the original faulty qubits will be needed to enable scientists to really capitalize on the benefits of error correction.
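
    This threshold behavior can be illustrated with a toy model. Below is a minimal sketch, not Google’s surface code: a Monte Carlo of a simple repetition code with majority voting, in which adding qubits suppresses logical errors when the physical error rate is low. Real surface codes behave analogously but require physical error rates below a threshold of roughly 1 percent; above it, adding qubits makes things worse.

```python
# Toy repetition code: estimate how often a majority vote over n noisy
# qubits fails. An illustration of the scaling idea only, not the surface
# code used on Google's Sycamore chip.
import random

def logical_error_rate(n_qubits: int, p_physical: float, trials: int = 100_000) -> float:
    """Fraction of trials in which more than half the qubits flip."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(n_qubits))
        if flips > n_qubits // 2:  # majority corrupted -> logical error
            failures += 1
    return failures / trials

for p in (0.01, 0.2):  # a low and a high physical error rate
    rates = [logical_error_rate(n, p) for n in (3, 5, 7)]
    print(f"p = {p}: logical error rates for n = 3, 5, 7 -> {rates}")
```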

    Still, milestones in quantum computation are so difficult to achieve that they’re treated like pole vaulting, Wallraff says. You just aim to barely clear the bar.

  • Improving the performance of satellites in low Earth orbit

    A database updated in 2022 reported around 4,852 active satellites orbiting Earth. These satellites serve many purposes in space, from GPS and weather tracking to military reconnaissance and early-warning systems. Given this wide array of uses, especially in low Earth orbit (LEO), researchers are constantly trying to develop better satellites. In this regard, small satellites have a lot of potential. They can reduce launch costs and increase the number of satellites in orbit, providing a better network with wider coverage. However, because of their smaller size, these satellites carry less radiation shielding. They also have a deployable membrane attached to the main body to support a large phased-array transceiver, which leads to non-uniform radiation degradation across the transceiver. This degrades the performance of the satellite’s radio through variation in the signal strength it can sense, known as gain variation. Thus, radiation degradation must be mitigated to make small satellites more viable.
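
    To make the gain problem concrete, here is a hypothetical sketch; the element count and degradation profile are invented, not taken from the research. It shows how non-uniform amplitude degradation across a phased array lowers the array’s peak gain.

```python
# Illustrative only: peak (broadside) gain of an in-phase linear array is
# proportional to |sum of element amplitudes|^2, so uneven radiation damage
# to the elements shows up directly as a gain change.
import numpy as np

n_elements = 16
healthy = np.ones(n_elements)                    # nominal element amplitudes
degradation = np.linspace(1.0, 0.6, n_elements)  # assumed: worse near one edge
degraded = healthy * degradation

def broadside_gain(amplitudes: np.ndarray) -> float:
    """Relative power gain at broadside for an in-phase array."""
    return float(np.abs(amplitudes.sum()) ** 2)

loss_db = 10 * np.log10(broadside_gain(degraded) / broadside_gain(healthy))
print(f"Peak gain change from non-uniform degradation: {loss_db:.2f} dB")
```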

  • Robot helps students with learning disabilities stay focused

    Engineering researchers at the University of Waterloo are successfully using a robot to help keep children with learning disabilities focused on their work.
    This was one of the key results in a new study that also found both the youngsters and their instructors valued the positive classroom contributions made by the robot.
    “There is definitely a great potential for using robots in the public education system,” said Dr. Kerstin Dautenhahn, a professor of electrical and computer engineering. “Overall, the findings imply that the robot has a positive effect on students.”
    Dautenhahn has been working on robotics in the context of disability for many years and incorporates principles of equity, inclusion and diversity in research projects.
    Students with learning disabilities may benefit from additional learning support, such as one-on-one instruction and the use of smartphones and tablets.
    Educators have in recent years explored the use of social robots to help students learn, but most of that research has focused on children with autism spectrum disorder. As a result, little work has been done on the use of socially assistive robots for students with learning disabilities.

    Along with two other Waterloo engineering researchers and three experts from the Learning Disabilities Society in Vancouver, Dautenhahn decided to change this, conducting a series of tests with a small humanoid robot called QT.
    Dautenhahn, the Canada 150 Research Chair in Intelligent Robotics, said the robot’s ability to perform gestures using its head and hands, accompanied by its speech and facial features, makes it very suitable for use with children with learning disabilities.
    Building on promising earlier research, the researchers divided 16 students with learning disabilities into two groups. In one group, students worked one-on-one with an instructor only. In the other group, the students worked one-on-one with an instructor and a QT robot. In the latter group, the instructor used a tablet to direct the robot, which then autonomously performed various activities using its speech and gestures.
    While the instructor controlled the sessions, the robot took over at certain times, triggered by the instructor, to lead the student.
    Besides introducing the session, the robot set goals and provided self-regulating strategies, if necessary. If the learning process was getting off-track, the robot used strategies such as games, riddles, jokes, breathing exercises and physical movements to redirect the student back to the task.
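    The session logic described above might look something like the sketch below. Everything here is hypothetical; the study does not publish its control software, and these names are invented for illustration.
```python
# Hypothetical sketch of the instructor-triggered robot behaviour: the
# robot leads only when triggered, and picks a redirection strategy when
# the student drifts off-task.
import random

STRATEGIES = ["game", "riddle", "joke", "breathing exercise", "physical movement"]

def robot_step(instructor_trigger: bool, student_on_task: bool) -> str:
    if not instructor_trigger:
        return "instructor leads; robot stays idle"
    if student_on_task:
        return "robot restates the session goal and encourages the student"
    return f"robot redirects the student with a {random.choice(STRATEGIES)}"

print(robot_step(instructor_trigger=True, student_on_task=False))
```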
    Students who worked with the robot, Dautenhahn said, “were generally more engaged with their tasks and could complete their tasks at a higher rate compared” to the students who weren’t assisted by a robot. Further studies using the robot are planned.
    A paper on the study, “User Evaluation of Social Robots as a Tool in One-to-one Instructional Settings for Students with Learning Disabilities,” was recently presented at the International Conference on Social Robotics in Florence, Italy.

  • The switch made from a single molecule

    For the first time, an international team of researchers, including those from the University of Tokyo’s Institute for Solid State Physics, has demonstrated a switch, analogous to a transistor, made from a single fullerene molecule. By using a carefully tuned laser pulse, the researchers can use fullerene to switch the path of an incoming electron in a predictable way. This switching process can be three to six orders of magnitude faster than the switching in microchips, depending on the laser pulses used. Fullerene switches in a network could produce a computer beyond what is possible with electronic transistors, and they could also lead to unprecedented levels of resolution in microscopic imaging devices.
    Over 70 years ago, physicists discovered that molecules emit electrons in the presence of electric fields and, as was discovered later, under certain wavelengths of light. The electron emissions created patterns that enticed curiosity but eluded explanation. But this has changed thanks to a new theoretical analysis, the ramifications of which could not only lead to new high-tech applications but also improve our ability to scrutinize the physical world itself. Project Researcher Hirofumi Yanagisawa and his team theorized how the emission of electrons from excited fullerene molecules should behave when exposed to specific kinds of laser light, and when testing their predictions, found they were correct.
    “What we’ve managed to do here is control the way a molecule directs the path of an incoming electron using a very short pulse of red laser light,” said Yanagisawa. “Depending on the pulse of light, the electron can either remain on its default course or be redirected in a predictable way. So, it’s a little like the switching points on a train track, or an electronic transistor, only much faster. We think we can achieve a switching speed 1 million times faster than a classical transistor. And this could translate to real world performance in computing. But equally important is that if we can tune the laser to coax the fullerene molecule to switch in multiple ways at the same time, it could be like having multiple microscopic transistors in a single molecule. That could increase the complexity of a system without increasing its physical size.”
    The fullerene molecule underlying the switch is related to the perhaps slightly more famous carbon nanotube, though instead of a tube, fullerene is a sphere of carbon atoms. When placed on a metal point — essentially the end of a pin — the fullerenes orient themselves in a certain way so that they direct electrons predictably. Fast laser pulses on the scale of femtoseconds, quadrillionths of a second, or even attoseconds, quintillionths of a second, are focused on the fullerene molecules to trigger the emission of electrons. This is the first time laser light has been used to control the emission of electrons from a molecule in this way.
    “This technique is similar to the way a photoelectron emission microscope produces images,” said Yanagisawa. “However, those can achieve resolutions at best around 10 nanometers, or 10 billionths of a meter. Our fullerene switch enhances this and allows for resolutions of around 300 picometers, or 300 trillionths of a meter.”
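    The quoted figures can be sanity-checked with quick arithmetic. The classical switching time below is an assumed nanosecond-scale value for comparison; the femtosecond and resolution numbers come from the article.
```python
# Back-of-the-envelope check of the speed-up and resolution claims.
classical_switch_s = 1e-9    # assumed: nanosecond-scale classical transistor
laser_pulse_s = 1e-15        # femtosecond-scale pulses, per the article

speedup = classical_switch_s / laser_pulse_s
print(f"switching speed-up ~ {speedup:.0e}")   # ~1e6, i.e. "1 million times faster"

resolution_gain = 10e-9 / 300e-12              # 10 nm vs 300 pm
print(f"imaging resolution gain ~ {resolution_gain:.0f}x")  # ~33x finer
```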
    In principle, as multiple ultrafast electron switches can be combined into a single molecule, it would only take a small network of fullerene switches to perform computational tasks potentially much faster than conventional microchips. But there are several hurdles to overcome, such as how to miniaturize the laser component, which would be essential to create this new kind of integrated circuit. So, it may still be many years before we see a fullerene switch-based smartphone.

  • How can the metaverse improve public health?

    The “metaverse” has captured the public imagination as a world of limitless possibilities that can influence all aspects of life. Discussions about the utility of completely immersive virtual environments were initially limited to a small number of tech and sci-fi circles until the rebranding of Facebook as “Meta” in 2021. The concept of the metaverse has gained a lot of attention since then, and researchers are now starting to explore ways in which virtual environments can be used to improve scientific and health research.
    What are the key opportunities and uncertainties in the metaverse that can help us better manage non-communicable diseases? This is the subject of a paper recently published in the Journal of Medical Internet Research, authored by Associate Professor Javad Koohsari from the School of Knowledge Science at Japan Advanced Institute of Science and Technology (JAIST), who is also an adjunct researcher at the Faculty of Sport Sciences at Waseda University, along with Professor Yukari Nagai from JAIST; Professor Tomoki Nakaya from Tohoku University; Professor Akitomo Yasunaga from Bunka Gakuen University; Associate Professor Gavin R. McCormack from the University of Calgary; Associate Professor Daniel Fuller from the University of Saskatchewan; and Professor Koichiro Oka from Waseda University. The team lists three ways in which the metaverse might be used for large-scale health interventions targeting non-communicable diseases.
    Non-communicable diseases like diabetes, heart disease, strokes, chronic respiratory disease, cancers, and mental illness are greatly influenced by the “built environment,” i.e., the human-made surroundings we constantly interact with. Built environments can affect health directly through acute effects like pollution or indirectly, by influencing physical activity, sedentary behaviour, diet and sleep. Therefore, health interventions that modify built environments can be used to reduce the health burden of non-communicable diseases.
    This is where the metaverse can be of assistance. Experiments conducted in virtual settings within the metaverse can be used to investigate the effectiveness of large-scale interventions before they are implemented, saving time and money. “Within a metaverse, study participants could be randomised to experience different built environment exposures such as high and low density, high and low walkability, or different levels of nature or urban environments,” explains Prof. Koohsari, the lead author of the paper, who was ranked among the top 2% of the most influential researchers worldwide across all scientific disciplines in 2021. He further adds, “This article will be of particular interest to experts in public health, urban design, epidemiology, medicine, and environmental sciences, especially those considering using the metaverse for research and intervention purposes.”
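    As a minimal sketch of the randomisation the quote describes, the snippet below assigns participants evenly to two hypothetical exposure arms. The arm names and participant IDs are invented for illustration.
```python
# Balanced random assignment of participants to two virtual
# built-environment exposures (illustrative arm names).
import random

ARMS = ("high walkability", "low walkability")

def randomise(participant_ids, seed=42):
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ARMS[0] if i < half else ARMS[1] for i, pid in enumerate(ids)}

for pid, arm in sorted(randomise(f"P{i:02d}" for i in range(1, 9)).items()):
    print(pid, "->", arm)
```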
    Secondly, the article notes that the metaverse itself can be used to implement health interventions. For instance, the metaverse can give people exposure to natural “green” environments even when they have little or no access to these environments in the real world. In this way, the metaverse may reduce the negative mental health effects associated with crowded, stress-inducing environments.
    Virtual living spaces and offices within the metaverse can be endlessly customised. Moreover, changes to environments within the metaverse can be implemented with the click of a button. Hence, third, the metaverse may also offer a virtual space to test new office and built-environment designs in real time. Prof. Koohsari adds, “A metaverse could allow stakeholders to experience, build, and collaboratively modify the proposed changes to the built environment before these interventions are implemented in the physical world.”
    Although it lists several ways in which the metaverse can transform public health interventions by modifying built environments, the article notes key limitations of the metaverse in simulating the real world. In particular, the current state of the metaverse does not allow for the testing of many human behaviours or their interaction with built environments. In addition, the population of the metaverse may not be representative, as people from economically lower strata have limited access to virtual reality technology.
    The article also explores ways in which the metaverse can negatively affect population health. For example, excessive immersion in virtual environments may lead to social isolation, anti-social behaviours, and negative health effects associated with physical inactivity or increased screen time. Finally, the article notes that excessive reliance on artificial intelligence may lead to the replication of real-world biases and social inequalities in the virtual world. In conclusion, Prof. Koohsari remarks, “It is best, sooner rather than later, to face the prospects and challenges that the metaverse can offer to different scientific fields, and in our case, to public health.”

  • Infants outperform AI in 'commonsense psychology'

    Infants outperform artificial intelligence in detecting what motivates other people’s actions, finds a new study by a team of psychology and data science researchers. Its results, which highlight fundamental differences between cognition and computation, point to shortcomings in today’s technologies and where improvements are needed for AI to more fully replicate human behavior.
    “Adults and even infants can easily make reliable inferences about what drives other people’s actions,” explains Moira Dillon, an assistant professor in New York University’s Department of Psychology and the senior author of the paper, which appears in the journal Cognition. “Current AI finds these inferences challenging to make.”
    “The novel idea of putting infants and AI head-to-head on the same tasks is allowing researchers to better describe infants’ intuitive knowledge about other people and suggest ways of integrating that knowledge into AI,” she adds.
    “If AI aims to build flexible, commonsense thinkers like human adults become, then machines should draw upon the same core abilities infants possess in detecting goals and preferences,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the paper’s authors.
    It’s been well-established that infants are fascinated by other people — as evidenced by how long they look at others to observe their actions and to engage with them socially. In addition, previous studies focused on infants’ “commonsense psychology” — their understanding of the intentions, goals, preferences, and rationality underlying others’ actions — have indicated that infants are able to attribute goals to others and expect others to pursue goals rationally and efficiently. The ability to make these predictions is foundational to human social intelligence.
    Conversely, “commonsense AI” — driven by machine-learning algorithms — predicts actions directly. This is why, for example, an ad touting San Francisco as a travel destination pops up on your computer screen after you read a news story on a newly elected city official. However, what AI lacks is flexibility in recognizing different contexts and situations that guide human behavior.
    To develop a foundational understanding of the differences between humans’ and AI’s abilities, the researchers conducted a series of experiments with 11-month-old infants and compared their responses to those yielded by state-of-the-art learning-driven neural-network models.
    To do so, they deployed the previously established “Baby Intuitions Benchmark” (BIB) — six tasks probing commonsense psychology. BIB was designed to allow for testing both infant and machine intelligence, allowing for a comparison of performance between infants and machines and, significantly, providing an empirical foundation for building human-like AI.
    Specifically, infants on Zoom watched a series of videos of simple animated shapes moving around the screen — similar to a video game. The shapes’ actions simulated human behavior and decision-making through the retrieval of objects on the screen and other movements. Similarly, the researchers built and trained learning-driven neural-network models — AI tools that help computers recognize patterns and simulate human intelligence — and tested the models’ responses to the exact same videos.
    Their results showed that infants recognize human-like motivations even in the simplified actions of animated shapes. Infants predict that these actions are driven by hidden but consistent goals — for example, the on-screen retrieval of the same object no matter what location it’s in and the movement of that shape efficiently even when the surrounding environment changes. Infants demonstrate such predictions through their longer looking at events that violate their predictions — a common and decades-old measurement for gauging the nature of infants’ knowledge.
    Adopting this “surprise paradigm” to study machine intelligence allows for direct comparisons between an algorithm’s quantitative measure of surprise and a well-established human psychological measure of surprise — infants’ looking time.
    The models showed no such evidence of understanding the motivations underlying such actions, revealing that they are missing key foundational principles of commonsense psychology that infants possess.
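    A conceptual sketch of how a model’s surprise can be quantified: information-theoretic surprise is the negative log-probability a model assigns to an observed outcome. The probabilities below are invented for illustration; the study’s models and stimuli are more complex.
```python
# Surprise as negative log-probability: rarer (less expected) outcomes
# carry more surprise, analogous to infants looking longer at them.
import math

def surprise_bits(p_outcome: float) -> float:
    """Information-theoretic surprise, in bits."""
    return -math.log2(p_outcome)

# A model that expects the agent to fetch its usual goal object:
print(f"expected event:   {surprise_bits(0.9):.2f} bits")
print(f"unexpected event: {surprise_bits(0.1):.2f} bits")  # much more surprising
```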
    “A human infant’s foundational knowledge is limited, abstract, and reflects our evolutionary inheritance, yet it can accommodate any context or culture in which that infant might live and learn,” observes Dillon.
    The research was supported by grants from the National Science Foundation (DRL1845924) and the Defense Advanced Research Projects Agency (HR001119S0005).

  • Solid-state thermal transistor demonstrated

    An effective, stable solid-state electrochemical thermal transistor has been developed, heralding a new era in thermal management technology.
    In modern electronics, a large amount of heat is produced as waste during usage — this is why devices such as laptops and mobile phones become warm during use, and require cooling solutions. In the last decade, the concept of managing this heat using electricity has been tested, leading to the development of electrochemical thermal transistors — devices that can be used to control heat flow with electrical signals. Currently, liquid-state thermal transistors are in use, but have critical limitations: chiefly, any leakage causes the device to stop working.
    A research team at Hokkaido University led by Professor Hiromichi Ohta at the Research Institute for Electronic Science has developed the first solid-state electrochemical thermal transistor. Their invention, described in the journal Advanced Functional Materials, is much more stable than and just as effective as current liquid-state thermal transistors.
    “A thermal transistor consists broadly of two materials, the active material and the switching material,” explains Ohta. “The active material has changeable thermal conductivity, and the switching material is used to control the thermal conductivity of the active material.”
    The team constructed their thermal transistor on a yttrium oxide-stabilized zirconium oxide base, which also functioned as the switching material, and used strontium cobalt oxide as the active material. Platinum electrodes were used to supply the power required to control the transistor.
    The thermal conductivity of the active material in the “on” state was comparable to some liquid-state thermal transistors. In general, thermal conductivity of the active material was four times higher in the “on” state compared to the “off” state. Further, the transistor was stable over 10 use cycles, better than some current liquid-state thermal transistors. This behavior was tested across more than 20 separately fabricated thermal transistors, ensuring the results were reproducible. The only drawback was the operating temperature of around 300°C.
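    A rough sketch of what the reported fourfold on/off ratio means for heat flow, using Fourier’s law. Only the 4x ratio comes from the article; the conductivity, temperature difference and film thickness below are placeholder values.
```python
# Fourier's law: heat flux q = k * dT / d, so a 4x change in thermal
# conductivity k gives a 4x change in heat flow for the same geometry.
def heat_flux(k: float, dT: float, thickness: float) -> float:
    """Heat flux in W/m^2 through a slab of the given thickness."""
    return k * dT / thickness

k_off = 1.0          # W/(m*K), assumed "off"-state conductivity
k_on = 4.0 * k_off   # "on" state, per the reported 4x ratio
dT, d = 10.0, 1e-6   # 10 K across a 1-micrometer film (illustrative)

print(f"off: {heat_flux(k_off, dT, d):.2e} W/m^2")
print(f"on:  {heat_flux(k_on, dT, d):.2e} W/m^2")
```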
    “Our findings show that solid-state electrochemical thermal transistors have the potential to be just as effective as liquid-state electrochemical thermal transistors, with none of their limitations,” concludes Ohta. “The main hurdle to developing practical thermal transistors is the high resistance of the switching material, and hence a high operating temperature. This will be the focus of our future research.” More

  • First wearable device for vocal fatigue senses when your voice needs a break

    Northwestern University researchers have developed the first smart wearable device to continuously track how much people use their voices, alerting them to overuse before vocal fatigue and potential injury set in.
    The first-of-its-kind, battery-powered, wireless device and accompanying algorithms could be a game-changer for professional singers, teachers, politicians, call-center workers, coaches and anyone who relies on their voice to communicate effectively and make a living. It also could help clinicians remotely and continuously monitor patients with voice disorders throughout their treatment.
    Developed by an interdisciplinary team of materials scientists, biomedical engineers, opera singers and a speech-language pathologist, the research behind the new technology will be published during the week of Feb. 20 in the Proceedings of the National Academy of Sciences.
    The soft, flexible, postage-stamp-sized device comfortably adheres to the upper chest to sense the subtle vibrations associated with talking and singing. From there, the captured data is instantaneously streamed via Bluetooth to the user’s smartphone or tablet, so they can monitor their vocal activities in real time throughout the day and measure cumulative total vocal usage. Custom machine-learning algorithms distinguish between speaking and singing, enabling singers to track each activity separately.
    With the app, users can set their personalized vocal thresholds. When they near that threshold, their smartphone, smartwatch or an accompanying device located on the wrist provides real-time haptic feedback as an alert. Then, they can rest their voices before pushing them too far.
    “The device precisely measures the amplitude and frequency for speaking and singing,” said Northwestern’s John A. Rogers, a bioelectronics pioneer who led the device’s development. “Those two parameters are most important in determining the overall load that’s occurring on the vocal folds. Being aware of those parameters, both at a given instant and cumulatively over time, is essential for managing healthy patterns of vocalization.”
    “It’s easy for people to forget how much they use their voice,” said Northwestern’s Theresa Brancaccio, a voice expert who co-led the study. “Seasoned classical singers tend to be more aware of their vocal usage because they have lived and learned. But some people — especially singers with less training or people, like teachers, politicians and sports coaches, who must speak a lot for their jobs — often don’t realize how much they are pushing it. We want to give them greater awareness to help prevent injury.”

    Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery in the McCormick School of Engineering and Northwestern University Feinberg School of Medicine. He also is director of the Querrey Simpson Institute for Bioelectronics. A distinguished operatic mezzo-soprano, Brancaccio is a senior lecturer at Northwestern’s Bienen School of Music, where she teaches voice and vocal pedagogy.
    Unaware of overuse
    For the millions of people in the U.S. who make their livings by speaking or singing, vocal fatigue is a constant, looming threat. The common condition occurs when overused vocal folds swell, making the voice sound raspy and lose endurance. Vocal fatigue negatively affects singers in particular, impairing their ability to sing clearly or hit the same notes their healthy voice can. At best, one short period of vocal fatigue can briefly interrupt a singer’s plans. At worst, it can lead to enough damage to derail a career.
    Lack of awareness is the underlying problem. People rarely make the connection between vocal activities and how those activities affect their voices. Although one in 13 U.S. adults has experienced vocal fatigue, most people don’t notice they are overusing their voices until hoarseness already has set in.
    “What leads people into trouble is when events stack up,” Brancaccio said. “They might have rehearsals, teach lessons, talk during class discussions and then go to a loud party, where they have to yell over the background noise. Then, throw a cold or illness into the mix. People have no idea how much they are coughing or clearing their throats. When these events stack up for days, that can put major stress on the voice.”
    Cross-disciplinary connection

    As an advocate for vocal health, Brancaccio has spent decades exploring ways to keep her students mindful of how much they use their voices. In 2009, she challenged her students to keep a paper budget — physically writing down every time they spoke, sang and drank water, among other things. About 10 years later, she converted the system into Singer Savvy, an app that offers a personalized vocal budget for each user and helps users stay within that budget.
    Separately, Rogers, in collaboration with researchers at the Shirley Ryan AbilityLab, had developed a wireless wearable device to track swallowing and speech in stroke patients. The bandage-like sensor measures swallowing abilities and speech patterns to monitor stroke patients’ recovery processes. In the early weeks of the COVID-19 pandemic, Rogers’ team modified the technology to monitor coughing, as a key symptom of the illness.
    “I wanted to gather more data and make our tracking system more precise and more accurate,” Brancaccio said. “So, I reached out to John to see if his sensors could help us gather more information.”
    “I thought it was a great opportunity for us to extend our technologies beyond our very important, but narrowly targeted, uses in health care to something that might capture a broader population of users,” Rogers said. “Anyone who uses their voice extensively could benefit.”
    The pair also partnered with speech pathologist and voice expert Aaron M. Johnson to explore how the devices could be used to evaluate and monitor treatment for patients with vocal disorders. Johnson, who co-directs NYU Langone’s Voice Center, said the small, wireless device could help track patients’ voices in the real world — outside of a clinical setting.
    “A key part of voice therapy is helping people change how — and how much — they use their voice,” said Johnson, study co-author and associate professor in the department of otolaryngology at NYU Grossman School of Medicine. “This device will enable patients and their clinicians to understand voice use patterns and make adjustments in vocal demand to reduce vocal fatigue and speed recovery from voice disorders. Generalizing vocal techniques and exercises from therapy sessions into daily life is one of the most challenging aspects of voice therapy, and this device could greatly enhance that process.”
    Singer-trained algorithms
    The team modified Rogers’ existing devices to precisely measure vocal load over time. That includes frequency, volume, amplitude, duration and time of day. Like Rogers’ previous devices for COVID-19 and stroke patients, the new device also senses vibrations rather than recording audio. This enables it to detect vocal activity precisely from the wearer, rather than picking up the ambient noise around them.
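    As a minimal sketch of the bookkeeping such a device might do, the snippet below accumulates two standard vocal-dose measures, phonation time and cycle dose, from per-second windows of frequency and voicing data. The sample windows are invented; the article does not publish the device’s actual calculations.
```python
# Cumulative vocal load from one-second analysis windows.
# Each record: (duration_s, mean_f0_hz, is_voiced)
windows = [
    (1.0, 220.0, True),   # speaking
    (1.0, 0.0, False),    # silence
    (1.0, 330.0, True),   # singing a higher pitch
]

time_dose_s = sum(d for d, f0, voiced in windows if voiced)
# Cycle dose: total vocal-fold oscillations = integral of f0 over voiced time.
cycle_dose = sum(d * f0 for d, f0, voiced in windows if voiced)

print(f"phonation time: {time_dose_s:.1f} s")
print(f"cycle dose: {cycle_dose:.0f} vocal-fold oscillations")
```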
    The biggest challenge was to develop algorithms capable of distinguishing speaking from singing. To overcome this challenge, Brancaccio recruited voice and opera students to undertake a variety of singing exercises to train the machine-learning algorithms. A team of classical singers with different vocal ranges — varying from bass to soprano — wore the devices while humming, singing staccato scales and songs, reading and more. Each singer generated 2,500 one-second-long windows of singing and 2,500 one-second-long windows of speaking.
    The resulting algorithm can separate singing from speaking with more than 95% accuracy. And, when used in a choir setting, the device captures only data from the wearer and not noise from nearby singers.
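    A hypothetical sketch of that classification setup: a classifier trained on features extracted from one-second vibration windows labelled as speaking or singing. The synthetic features and model choice below are stand-ins; the paper’s actual pipeline is not described in this article.
```python
# Train/test a simple speaking-vs-singing classifier on fake features
# standing in for one-second vibration windows (2,500 per class, as in
# the article's data collection).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
speaking = rng.normal(0.0, 1.0, size=(2500, 8))  # 8 invented features/window
singing = rng.normal(0.7, 1.0, size=(2500, 8))   # offset so classes differ
X = np.vstack([speaking, singing])
y = np.array([0] * 2500 + [1] * 2500)            # 0 = speaking, 1 = singing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.1%}")
```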
    “Prolonged talking is one of the most fatiguing activities for people who are training to become professional singers,” Brancaccio said. “By separating singing and speaking, it can help people develop more awareness around how much they are speaking. There is evidence that even brief 15- to 20-minute periods of total silence interspersed throughout the day can help vocal fold tissues recover and repair.”
    How to use it
    To use the device, the wearer simply adheres it to the sternum, below the neck, and syncs the device with the accompanying app. Rogers’ team currently is working on a method to personalize vocal budgets for each user. Here, users will press a button in the app if they experience vocal discomfort at any point during the day, effectively capturing the instantaneous and cumulative vocal load at the time. These data can serve as a personalized threshold for vocal fatigue. When the user nears or exceeds their personalized threshold, a haptic device will vibrate as an alert.
    Similar in size and form to a wristwatch, this haptic device includes multiple motors that can activate in different patterns and with varying levels of intensity to convey different messages. Users also can monitor a graphical display within the app, which splits information into speaking and singing categories.
    “It uses Bluetooth, so it can talk to any device that has a haptic motor embedded,” Rogers said. “So, you don’t have to use our wristband. You could just leverage a standard smart watch for haptic feedback.”
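    The personalized-threshold alert loop described above might look something like this sketch. The thresholds, load units and messages are illustrative only, not taken from the device’s software.
```python
# Hypothetical threshold check: warn when the wearer nears their vocal-load
# threshold, and alert when they exceed it.
def check_vocal_load(cumulative_load: float, threshold: float) -> str:
    if cumulative_load >= threshold:
        return "haptic alert: threshold exceeded, rest your voice"
    if cumulative_load >= 0.8 * threshold:
        return "haptic warning: approaching your vocal threshold"
    return "no alert"

print(check_vocal_load(cumulative_load=85.0, threshold=100.0))
```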
    Although other vocal-monitoring devices do exist, those use big collars, tethering wires and bulky equipment. Some also use embedded microphones to capture audible vocal data, leading to privacy concerns.
    “Those don’t work for continuous monitoring in a real environment,” Brancaccio said. “Instead of wearing cumbersome, wired equipment, I can stick on this soft, wearable device. Once it’s on, I don’t even notice it. It’s super light and easy.”
    What’s next
    Because Rogers’ previous devices capture body temperature, heart rates and respiratory activity, the researchers included those capabilities in the vocal-monitoring device. They believe these extra data will help to explore fundamental research questions concerning vocal fatigue.
    “This is more speculative, but it might be interesting to see how physical activity affects vocal fatigue,” Rogers said. “If someone is dancing while singing, is that more stressful on the vocal folds compared to someone who is not physically exerting themselves? Those are the kinds of questions we can ask and quantitatively answer.”
    In the meantime, Brancaccio is excited for her students to have a tool that can help prevent injury. She hopes others — including non-singers — will see the benefit of keeping their vocal cords healthy.
    “Your voice is part of your identity — whether you are a singer or not,” she said. “It’s integral to daily life, and it’s worth protecting.”
    The study, “Closed-loop network of skin-interfaced wireless devices for quantifying vocal fatigue and providing user feedback,” was supported by the Querrey Simpson Institute for Bioelectronics at Northwestern University.