More stories

  • Engineers place molecule-scale devices in precise orientation

    Engineers have developed a technique that allows them to precisely place microscopic devices formed from folded DNA molecules in not only a specific location but also in a specific orientation.
    As a proof-of-concept, they arranged more than 3,000 glowing moon-shaped nanoscale molecular devices into a flower-shaped instrument for indicating the polarization of light. Each of 12 petals pointed in a different direction around the center of the flower, and within each petal about 250 moons were aligned to the direction of the petal. Because each moon only glows when struck by polarized light matching its orientation, the end result is a flower whose petals light up in sequence as the polarization of the light shone on it is rotated. The flower, which spans a distance smaller than the width of a human hair, demonstrates that thousands of molecules can be reliably oriented on the surface of a chip.
    This method for precisely placing and orienting DNA-based molecular devices may make it possible to use these molecular devices to power new kinds of chips that integrate molecular biosensors with optics and electronics for applications such as DNA sequencing or measuring the concentrations of thousands of proteins at once.
    The research, published on February 19 by the journal Science, builds on more than 15 years of work by Caltech’s Paul Rothemund (BS ’94), research professor of bioengineering, computing and mathematical sciences, and computation and neural systems, and his colleagues. In 2006, Rothemund showed that DNA could be directed to fold itself into precise shapes through a technique dubbed DNA origami. In 2009, Rothemund and colleagues at IBM Research Almaden described a technique for positioning DNA origami at precise locations on surfaces. To do so, they used an electron-beam printing process to create “sticky” patches with the same size and shape as the origami. In particular, they showed that origami triangles bound precisely at the locations of triangular sticky patches.
    Next, Rothemund and Ashwin Gopinath, formerly a Caltech senior postdoctoral scholar and now an assistant professor at MIT, refined and extended this technique to demonstrate that molecular devices constructed from DNA origami could be reliably integrated into larger optical devices. “The technological barrier has been how to reproducibly organize vast numbers of molecular devices into the right patterns on the kinds of materials used for chips,” says Rothemund.
    In 2016, Rothemund and Gopinath showed that triangular origami carrying fluorescent molecules could be used to reproduce a 65,000-pixel version of Vincent van Gogh’s The Starry Night. In that work, triangular DNA origami were used to position fluorescent molecules within bacterium-sized optical resonators; precise placement of the fluorescent molecules was critical since a move of just 100 nanometers to the left or right would dim or brighten the pixel by more than five times.
    But the technique had an Achilles’ heel: “Because the triangles were equilateral and were free to rotate and flip upside-down, they could stick flat onto the triangular sticky patch on the surface in any of six different ways. This meant we couldn’t use any devices that required a particular orientation to function. We were stuck with devices that would work equally well when pointed up, down, or in any direction,” says Gopinath. Molecular devices intended for DNA sequencing or measuring proteins absolutely have to land right side up, so the team’s older techniques would ruin 50 percent of the devices. For devices also requiring a unique rotational orientation, such as transistors, only one of the six possible placements is correct, so just 16 percent would function.
    The first problem to solve, then, was to get the DNA origami to reliably land with the correct side facing up. “It’s a bit like guaranteeing toast always magically lands butter side up when thrown on the floor,” says Rothemund. To the researchers’ surprise, coating origami with a carpet of flexible DNA strands on one side enabled more than 95 percent of them to land face up. But the problem of controlling rotation remained. Right triangles with three different edge lengths were the researchers’ first attempt at a shape that might land in the preferred rotation.
    However, after wrestling to get just 40 percent of right triangles to point in the correct orientation, Gopinath recruited computer scientists Chris Thachuk of the University of Washington, a co-author of the Science paper and a former Caltech postdoc, and David Kirkpatrick of the University of British Columbia, also a co-author of the Science paper. Their job was to find a shape that would get stuck only in the intended orientation, no matter what orientation it landed in. The computer scientists’ solution was a disk with an off-center hole, which the researchers termed a “small moon.” Mathematical proofs suggested that, unlike a right triangle, small moons could smoothly rotate to find the best alignment with their sticky patch without getting stuck. Lab experiments verified that over 98 percent of the small moons found the correct orientation on their sticky patches.
    The team then added special fluorescent molecules that jam themselves tightly into the DNA helices of the small moons, perpendicular to the axis of the helices. This ensured that the fluorescent molecules within a moon were all oriented in the same direction and would glow most brightly when stimulated with light of a particular polarization. “It’s as if every molecule carries a little antenna, which can accept energy from light most efficiently only when the polarization of light matches the orientation of the antenna,” says Gopinath. This simple effect is what enabled the construction of the polarization-sensitive flower.
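    For intuition, the antenna effect described here follows the standard dipole-absorption rule: a petal's brightness scales with the squared cosine of the angle between the light's polarization and its moons' shared antenna orientation. The toy calculation below is my own illustration of that effect, not code from the study (the 15-degree petal spacing is an assumption).

```python
import numpy as np

# 12 petals whose moons span the 180 degrees of unique dipole
# orientations, i.e. one petal every 15 degrees (an assumption).
petal_angles = np.deg2rad(np.arange(12) * 15)

def petal_brightness(polarization_deg):
    """Malus's-law-like response: each petal glows as cos^2 of the
    angle between the polarization and its antenna orientation."""
    theta = np.deg2rad(polarization_deg) - petal_angles
    return np.cos(theta) ** 2

# Rotating the polarization moves the brightest petal around the flower.
for pol in (0, 15, 30, 45):
    print(pol, "brightest petal:", petal_brightness(pol).argmax())
```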
    With robust methods for controlling the up-down and rotational orientation of DNA origami, a wide range of molecular devices may now be cheaply integrated into computer chips in high yield for a variety of potential applications. For example, Rothemund and Gopinath have founded a company, Palamedrix, to commercialize the technology for building semiconductor chips that enable simultaneous study of all the proteins relevant to human health. Caltech has filed patent applications for the work.

  • Smartphone study points to new ways to measure food consumption

    A team of researchers has devised a method that uses smartphones to measure food consumption — an approach that also offers new ways to predict physical well-being.
    “We’ve harnessed the expanding presence of mobile and smartphones around the globe to measure food consumption over time with precision and with the potential to capture seasonal shifts in diet and food consumption patterns,” explains Andrew Reid Bell, an assistant professor in New York University’s Department of Environmental Studies and an author of the paper, which appears in the journal Environmental Research Letters.
    Food consumption has traditionally been measured by questionnaires that require respondents to recall what they ate over the previous 24 hours, to keep detailed consumption records over a three-to-four-day period, or to indicate their typical consumption patterns over one-week to one-month periods. Because these methods ask participants to report behaviors over extended periods of time, they raise concerns about the accuracy of such documentation.
    Moreover, these forms of data collection don’t capture “real-time” food consumption, preventing analyses that directly link nutrition with physical activity and other measures of well-being — a notable shortcoming given the estimated two billion people in the world who are affected by moderate to severe food insecurity.
    Finally, while food consumption as well as food production have a significant impact on the environment, “we do not yet have the tools to analyze food consumption in the same ways as we do for environmental variables and food production,” write the study’s authors, who also include Mary Killilea, a clinical professor in NYU’s Department of Environmental Studies, and Mari Roberts, an NYU graduate student. “This is a critical gap, as it hampers our understanding of how environmental shocks carry through to become consumption shocks to households, communities, or regions and how responses to these shocks feed back into further environmental stress.”
    The team, which also included researchers from the University of Minnesota, Imperial College London, the Palli Karma-Sahayak Foundation, and Duke Kunshan University, turned to smartphones as an alternative means to track food consumption and its relationship to physical activity.
    “Access to mobile devices is changing how we gather information in many ways, all the way down to the possibility of reaching respondents on their own time, on their own devices, and in their own spaces,” explains Bell.
    Participants included nearly 200 adults in Bangladesh who reported which among a set of general food types (e.g., nuts and seeds, oils, vegetables, leafy vegetables, fruits, meat and eggs, fish, etc.) their household had consumed in the immediately preceding 24 hours, which specific food items within the more general food types they had consumed (e.g., rice, wheat, barley, maize, etc.), and how much they ate. Finally, participants reported the age, gender, literacy, education level, occupation, height, and weight of each member of their household, as well as the following measures of their own physical well-being: whether they could stand up on their own after sitting down, whether they could walk for 5 kilometers (3.1 miles), and whether they could carry 20 liters (5.3 gallons) of water for 20 meters (65.6 feet). All of the information was entered by the participants on their phones using a data-collection app, with response rates as high as 90 percent.
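    As a rough sketch of how such a self-reported record might be structured (the field names below are my own guesses, not the study's actual schema), each daily response could be modeled like this:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class HouseholdMember:
    age: int
    gender: str
    literate: bool
    education_level: str
    occupation: str
    height_cm: float
    weight_kg: float

@dataclass
class DailySurveyResponse:
    # General food types (e.g. "vegetables", "fish") and specific items
    # (e.g. "rice", "maize") consumed in the preceding 24 hours.
    food_types: List[str]
    food_items: Dict[str, float]          # item -> amount consumed
    members: List[HouseholdMember]
    # Self-reported physical well-being measures from the study:
    can_stand_after_sitting: bool
    can_walk_5_km: bool
    can_carry_20_l_for_20_m: bool
```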
    “Food stress is dynamic, and people’s needs — particularly for expectant mothers and young children — can change quickly,” explains Bell. “Reaching respondents in real time allows us to map those changes in a way conventional approaches don’t capture.”
    “Mainstreaming data collection by respondents themselves, through their own devices, would be transformative for understanding food security and for empirical social science in general,” he adds. “It would mean their voices being counted through participation on their own time and terms, and not only by giving up a half-day or longer of work. For researchers, it would mean having connections to rural communities and a picture of their well-being all the time, not just when resources flow to a place in response to crisis, potentially unearthing an understanding of resilience in the face of stressors that has never before been possible.”
    The authors recognize concerns about smartphone availability in both rural and impoverished communities. However, they point to recent studies that show how digital technologies, such as mobile phones and satellites, have offered new ways for rural populations in developing countries to access savings, credit, and insurance.
    “We now see mobile phone penetration almost everywhere in the world, with smartphone and mobile broadband subscriptions following the same trend,” says Bell.

  • Quantum computing: When ignorance is wanted

    Quantum computers promise not only to outperform classical machines in certain important tasks, but also to maintain the privacy of data processing. The secure delegation of computations has become an increasingly important issue with the rise of cloud computing and cloud networks. Of particular interest is the ability to exploit quantum technology that allows for unconditional security, meaning that no assumptions about the computational power of a potential adversary need to be made.
    Different quantum protocols have been proposed, all of which make trade-offs between computational performance, security, and resources. Classical protocols, for example, are either limited to trivial computations or are restricted in their security. In contrast, homomorphic quantum encryption is one of the most promising schemes for secure delegated computation. Here, the client’s data is encrypted in such a way that the server can process it even though it cannot decrypt it. Moreover, as opposed to other protocols, the client and server do not need to communicate during the computation, which dramatically boosts the protocol’s performance and practicality.
    In an international collaboration led by Prof. Philip Walther of the University of Vienna, scientists from Austria, Singapore, and Italy teamed up to implement a new quantum computation protocol in which the client has the option of encrypting their input data so that the computer cannot learn anything about it, yet can still perform the calculation. After the computation, the client can then decrypt the output data to read out the result of the calculation. For the experimental demonstration, the team used quantum light, which consists of individual photons, to implement this so-called homomorphic quantum encryption in a quantum walk process. Quantum walks are interesting special-purpose examples of quantum computation because they are hard for classical computers but feasible for single photons.
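    For intuition about the underlying computation, the sketch below is a minimal classical simulation of a one-dimensional discrete-time quantum walk using numpy. It only illustrates the mathematics of a quantum walk; it is not the paper's photonic implementation, and it carries no encryption.

```python
import numpy as np

n_steps = 20
n_pos = 2 * n_steps + 1            # positions -n_steps .. +n_steps
origin = n_steps

# state[position, coin]: coin 0 will move left, coin 1 will move right.
state = np.zeros((n_pos, 2), dtype=complex)
state[origin, 0] = 1 / np.sqrt(2)  # balanced, symmetric starting coin
state[origin, 1] = 1j / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin" operator

for _ in range(n_steps):
    state = state @ H.T              # flip the coin at every position
    shifted = np.zeros_like(state)
    shifted[:-1, 0] = state[1:, 0]   # left-movers shift one site left
    shifted[1:, 1] = state[:-1, 1]   # right-movers shift one site right
    state = shifted

prob = (np.abs(state) ** 2).sum(axis=1)
pos = np.arange(n_pos) - origin
print("total probability:", prob.sum().round(6))             # unitary: stays 1
print("std dev of position:", (prob * pos**2).sum() ** 0.5)  # spreads ~n, not ~sqrt(n)
```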
    By combining an integrated photonic platform built at the Polytechnic University of Milan with a novel theoretical proposal developed at the Singapore University of Technology and Design, scientists from the University of Vienna demonstrated the security of the encrypted data and investigated how the protocol behaves as the complexity of the computations increases.
    The team was able to show that the security of the encrypted data improves as the dimension of the quantum walk calculation grows. Furthermore, recent theoretical work indicates that future experiments taking advantage of various photonic degrees of freedom would also improve data security, so further optimizations can be anticipated. “Our results indicate that the level of security improves even further when increasing the number of photons that carry the data,” says Philip Walther, who concludes: “This is exciting, and we anticipate further developments of secure quantum computing in the future.”

    Story Source:
    Materials provided by University of Vienna. Note: Content may be edited for style and length.

  • Blueprint for fault-tolerant qubits

    Building a universal quantum computer is a challenging task because of the fragility of quantum bits, or qubits for short. To deal with this problem, various types of error correction have been developed. Conventional methods do this by active correction techniques. In contrast, researchers led by Prof. David DiVincenzo from Forschungszentrum Jülich and RWTH Aachen University, together with partners from the University of Basel and QuTech Delft, have now proposed a design for a circuit with passive error correction. Such a circuit would already be inherently fault protected and could significantly accelerate the construction of a quantum computer with a large number of qubits.
    In order to encode quantum information in a reliable way, usually, several imperfect qubits are combined to form a so-called logical qubit. Quantum error correction codes, or QEC codes for short, thus make it possible to detect errors and subsequently correct them, so that the quantum information is preserved over a longer period of time.
    In principle, the techniques work in a similar way to active noise cancellation in headphones: In a first step, any fault is detected. Then, a corrective operation is performed to remove the error and restore the information to its original pure form.
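    As a loose classical analogy for this detect-then-correct cycle (my own illustration; real quantum codes must also handle phase errors and cannot simply copy quantum states), consider a three-bit repetition code:

```python
import random

def encode(bit):
    # Repetition code: copy the logical bit onto three physical bits.
    return [bit, bit, bit]

def noisy_channel(bits, p_flip=0.1):
    # Each physical bit flips independently with probability p_flip.
    return [b ^ (random.random() < p_flip) for b in bits]

def detect_and_correct(bits):
    # "Detection" locates disagreements among the three copies;
    # "correction" restores the most likely codeword by majority vote.
    majority = int(sum(bits) >= 2)
    return [majority] * 3, majority

random.seed(0)
logical = 1
received = noisy_channel(encode(logical))
corrected, decoded = detect_and_correct(received)
print(received, "->", corrected, "decoded:", decoded)
```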
    However, applying such active error correction in a quantum computer is very complex and requires extensive hardware. Typically, complex error-correcting electronics are required for each qubit, making it difficult to build circuits with many qubits, as required for a universal quantum computer.
    The proposed design for a superconducting circuit, on the other hand, has a kind of built-in error correction. The circuit is designed in such a way that it is already inherently protected against environmental noise while still controllable. The concept thus bypasses the need for active stabilization in a highly hardware-efficient manner, and would therefore be a promising candidate for a future large-scale quantum processor that has a large number of qubits.
    “By implementing a gyrator — a two-port device that couples current on one port to voltage on the other — between two superconducting devices (so-called Josephson junctions), we could waive the demand for active error detection and stabilization: when cooled down, the qubit is inherently protected against common types of noise,” said Martin Rymarz, a PhD student in the group of David DiVincenzo and first author of the paper, published in Physical Review X.
    “I hope that our work will inspire efforts in the lab; I recognize that this, like many of our proposals, may be a bit ahead of its time”, said David DiVincenzo, Founding Director of the JARA-Institute for Quantum Information at RWTH Aachen University and Director of the Institute of Theoretical Nanoelectronics (PGI-2) at Forschungszentrum Jülich. “Nevertheless, given the professional expertise available, we recognize the possibility to test our proposal in the lab in the foreseeable future”.
    David DiVincenzo is considered a pioneer in the development of quantum computers. Among other things, his name is associated with the criteria that a quantum computer must fulfil, the so-called “DiVincenzo criteria”.

    Story Source:
    Materials provided by Forschungszentrum Juelich. Note: Content may be edited for style and length.

  • Identifying 'ugly ducklings' to catch skin cancer earlier

    Melanoma is by far the deadliest form of skin cancer, killing more than 7,000 people in the United States in 2019 alone. Early detection of the disease dramatically reduces the risk of death and the costs of treatment, but widespread melanoma screening is not currently feasible. There are about 12,000 practicing dermatologists in the US, and they would each need to see 27,416 patients per year to screen the entire population for suspicious pigmented lesions (SPLs) that can indicate cancer.
    Computer-aided diagnosis (CAD) systems have been developed in recent years to try to solve this problem by analyzing images of skin lesions and automatically identifying SPLs, but so far have failed to meaningfully impact melanoma diagnosis. These CAD algorithms are trained to evaluate each skin lesion individually for suspicious features, but dermatologists compare multiple lesions from an individual patient to determine whether they are cancerous — a method commonly called the “ugly duckling” criteria. No CAD systems in dermatology, to date, have been designed to replicate this diagnosis process.
    Now, that oversight has been corrected thanks to a new CAD system for skin lesions based on convolutional deep neural networks (CDNNs) developed by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology (MIT). The new system successfully distinguished SPLs from non-suspicious lesions in photos of patients’ skin with ~90% accuracy, and for the first time established an “ugly duckling” metric capable of matching the consensus of three dermatologists 88% of the time.
    “We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination,” said the study’s first author Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute who is also a Venture Builder at MIT. “This innovation allows photos of patients’ skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level.”
    The technology is described in Science Translational Medicine, and the CDNN’s source code is openly available on GitHub (https://github.com/lrsoenksen/SPL_UD_DL).
    Bringing ugly ducklings into focus
    Melanoma is personal for Soenksen, who has watched several close friends and family members suffer from the disease. “It amazed me that people can die from melanoma simply because primary care doctors and patients currently don’t have the tools to find the ‘odd’ ones efficiently. I decided to take on that problem by leveraging many of the techniques I learned from my work in artificial intelligence at the Wyss and MIT,” he said.
    Soenksen and his collaborators discovered that all the existing CAD systems created for identifying SPLs only analyzed lesions individually, completely omitting the ugly duckling criteria that dermatologists use to compare several of a patient’s moles during an exam. So they decided to build their own.
    To ensure that their system could be used by people without specialized dermatology training, the team created a database of more than 33,000 “wide field” images of patients’ skin that included backgrounds and other non-skin objects, so that the CDNN would be able to use photos taken from consumer-grade cameras for diagnosis. The images contained both SPLs and non-suspicious skin lesions that were labeled and confirmed by a consensus of three board-certified dermatologists. After training on the database and subsequent refinement and testing, the system was able to distinguish suspicious from non-suspicious lesions with 90.3% sensitivity and 89.9% specificity, improving upon previously published systems.
    But this baseline system was still analyzing the features of individual lesions, rather than features across multiple lesions as dermatologists do. To add the ugly duckling criteria into their model, the team used the extracted features in a secondary stage to create a 3D “map” of all of the lesions in a given image, and calculated how far away from “typical” each lesion’s features were. The more “odd” a given lesion was compared to the others in an image, the further away it was from the center of the 3D space. This distance is the first quantifiable definition of the ugly duckling criteria, and serves as a gateway to leveraging deep learning networks to overcome the challenging and time-consuming task of identifying and scrutinizing the differences between all the pigmented lesions in a single patient.
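    Conceptually, the metric reduces to a distance-from-centroid computation in feature space. The sketch below illustrates that idea with made-up feature vectors; the study's actual features are learned CNN embeddings, so the function shown here is only a stand-in:

```python
import numpy as np

def oddness_scores(features):
    """Ugly-duckling-style metric: distance of each lesion's feature
    vector from the centroid of all lesions in the same image.
    `features` is an (n_lesions, n_features) array."""
    centroid = features.mean(axis=0)
    return np.linalg.norm(features - centroid, axis=1)

# Toy example: five lesions embedded in a 3-D feature space;
# the last one sits far from the cluster and gets the highest score.
feats = np.array([
    [0.10, 0.20, 0.10],
    [0.20, 0.10, 0.20],
    [0.15, 0.15, 0.10],
    [0.10, 0.10, 0.15],
    [0.90, 0.80, 0.90],   # the "ugly duckling"
])
print(oddness_scores(feats).round(2))
```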
    Deep learning vs. dermatologists
    Their CDNN still had to pass one final test: performing as well as living, breathing dermatologists at the task of identifying SPLs from images of patients’ skin. Three dermatologists examined 135 wide-field photos from 68 patients, and assigned each lesion an “oddness” score that indicated how concerning it looked. The same images were analyzed and scored by the algorithm. When the assessments were compared, the researchers found that the algorithm agreed with the dermatologists’ consensus 88% of the time, and with the individual dermatologists 86% of the time.
    “This high level of consensus between artificial intelligence and human clinicians is an important advance in this field, because dermatologists’ agreement with each other is typically very high, around 90%,” said co-author Jim Collins, Ph.D., a Core Faculty member of the Wyss Institute and co-leader of its Predictive Bioanalytics Initiative who is also the Termeer Professor of Medical Engineering and Science at MIT. “Essentially, we’ve been able to achieve dermatologist-level accuracy in diagnosing potential skin cancer lesions from images that can be taken by anybody with a smartphone, which opens up huge potential for finding and treating melanoma earlier.”
    Recognizing that such a technology should be made available to as many people as possible for maximum benefit, the team has made their algorithm open-source on GitHub. They hope to partner with medical centers to launch clinical trials further demonstrating their system’s efficacy, and with industry to turn it into a product that could be used by primary care providers around the world. They also recognize that in order to be universally helpful, their algorithm needs to be able to function equally well across the full spectrum of human skin tones, which they plan to incorporate into future development.
    “Allowing our scientists to pursue their passions and visions is key to the success of the Wyss Institute, and it’s wonderful to see this advance that can impact all of us in such a meaningful way emerge from a collaboration with our newly formed Predictive Bioanalytics Initiative,” said Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.
    Additional authors of the paper include Regina Barzilay, Martha L. Gray, Timothy Kassis, Susan T. Conover, Berta Marti-Fuster, Judith S. Birkenfeld, Jason Tucker-Schwartz, and Asif Naseem from MIT, Robert R. Stavert from the Beth Israel Deaconess Medical Center, Caroline C. Kim from Tufts Medical Center, Maryanne M. Senna from Massachusetts General Hospital, and José Avilés-Izquierdo from Hospital General Universitario Gregorio Marañón.
    This research was supported by the Abdul Latif Jameel Clinic for Machine Learning in Health, the Consejería de Educación, Juventud y Deportes de la Comunidad de Madrid through the Madrid-MIT M+Visión Consortium and the People Programme of the European Union’s Seventh Framework Programme, the Mexico CONACyT grant 342369/40897, and the US DOE training grant DE-SC0008430.

  • This robot doesn't need any electronics

    Engineers at the University of California San Diego have created a four-legged soft robot that doesn’t need any electronics to work. The robot only needs a constant source of pressurized air for all its functions, including its controls and locomotion systems.
    The team, led by Michael T. Tolley, a professor of mechanical engineering at the Jacobs School of Engineering at UC San Diego, details its findings in the Feb. 17, 2021 issue of the journal Science Robotics.
    “This work represents a fundamental yet significant step towards fully-autonomous, electronics-free walking robots,” said Dylan Drotman, a Ph.D. student in Tolley’s research group and the paper’s first author.
    Applications include low-cost robotics for entertainment, such as toys, and robots that can operate in environments where electronics cannot function, such as MRI machines or mine shafts. Soft robots are of particular interest because they easily adapt to their environment and operate safely near humans.
    Most soft robots are powered by pressurized air and are controlled by electronic circuits. But this approach requires complex components like circuit boards, valves and pumps — often outside the robot’s body. These components, which constitute the robot’s brains and nervous system, are typically bulky and expensive. By contrast, the UC San Diego robot is controlled by a lightweight, low-cost system of pneumatic circuits, made up of tubes and soft valves, onboard the robot itself. The robot can walk on command or in response to signals it senses from the environment.
    “With our approach, you could make a very complex robotic brain,” said Tolley, the study’s senior author. “Our focus here was to make the simplest air-powered nervous system needed to control walking.”
    The robot’s computations roughly mimic mammalian reflexes, which are driven by neural responses from the spine rather than the brain. The team was inspired by neural circuits found in animals, called central pattern generators, made of very simple elements that can generate rhythmic patterns to control motions like walking and running.
    To mimic the generator’s functions, engineers built a system of valves that act as oscillators, controlling the order in which pressurized air enters air-powered muscles in the robot’s four limbs. Researchers built an innovative component that coordinates the robot’s gait by delaying the injection of air into the robot’s legs. The robot’s gait was inspired by sideneck turtles.
    The robot is also equipped with simple mechanical sensors — little soft bubbles filled with fluid placed at the end of booms protruding from the robot’s body. When the bubbles are depressed, the fluid flips a valve in the robot that causes it to reverse direction.
    The Science Robotics paper builds on previous work by other research groups that developed oscillators and sensors based on pneumatic valves, and adds the components necessary to achieve high-level functions like walking.
    How it works
    The robot is equipped with three valves acting as inverters that cause a high-pressure state to spread around the air-powered circuit, with a delay at each inverter.
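    Three inverters connected in a ring with delays behave like a ring oscillator. The toy simulation below is a digital abstraction of that idea (my own sketch, not the paper's pneumatic model): each node repeatedly takes the negation of the previous node's delayed output, and the network settles into the kind of rhythmic pattern that can drive a gait.

```python
from collections import deque

DELAY = 3                       # time steps a valve takes to respond
nodes = [0, 1, 0]               # pressure states (0 = low, 1 = high)
pipes = [deque([n] * DELAY) for n in nodes]   # delayed signals in transit

for t in range(24):
    # Each inverter reads the delayed output of the previous node in the
    # ring and drives its own node to the opposite state.
    inputs = [pipes[i - 1].popleft() for i in range(3)]
    nodes = [1 - x for x in inputs]
    for i in range(3):
        pipes[i].append(nodes[i])
    print(t, nodes)             # the pattern cycles rhythmically
```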
    Each of the robot’s four legs has three degrees of freedom powered by three muscles. The legs are angled downward at 45 degrees and composed of three parallel, connected pneumatic cylindrical chambers with bellows. When a chamber is pressurized, the limb bends in the opposite direction. As a result, the three chambers of each limb provide multi-axis bending required for walking. Researchers paired chambers from each leg diagonally across from one another, simplifying the control problem.
    A soft valve switches the direction of rotation of the limbs between counterclockwise and clockwise. That valve acts as what’s known as a latching double pole, double throw switch — a switch with two inputs and four outputs, so each input has two corresponding outputs it’s connected to. That mechanism is a little like taking two nerves and swapping their connections in the brain.
    Next steps
    In the future, the researchers want to improve the robot’s gait so it can walk on natural and uneven terrain, allowing it to navigate over a variety of obstacles. This would require a more sophisticated network of sensors and, as a result, a more complex pneumatic system.
    The team will also look at how the technology could be used to create robots that are partly controlled by pneumatic circuits for functions such as walking, while traditional electronic circuits handle higher-level functions.
    This work is supported by the Office of Naval Research, grant numbers N00014-17-1-2062 and N00014-18-1-2277.
    Video: https://www.youtube.com/watch?v=X5caSAb4kz0&feature=emb_logo

  • Do sweat it! Wearable microfluidic sensor to measure lactate concentration in real time

    With the seemingly unstoppable advancement in the fields of miniaturization and materials science, all sorts of electronic devices have emerged to help us lead easier and healthier lives. Wearable sensors fall in this category, and they have received much attention lately as useful tools to monitor a person’s health in real time. Many such sensors operate by quantifying biomarkers, that is, measurable indicators that reflect one’s health condition. Widely used biomarkers are heart rate and body temperature, which can be monitored continuously with relative ease. By contrast, chemical biomarkers in bodily fluids, such as blood, saliva, and sweat, are more challenging to quantify with wearable sensors.
    For instance, lactate, which is produced during the breakdown of glucose in the absence of oxygen in tissues, is an important biomarker present in both blood and sweat that reflects the intensity of physical exercise done as well as the oxygenation of muscles. During exercise, muscles requiring energy can rapidly run out of oxygen and fall back to a different metabolic pathway that provides energy at the ‘cost’ of accumulating lactate, which causes pain and fatigue. Lactate is then released into the bloodstream and part of it is eliminated through sweat. This means that a wearable chemical sensor could measure the concentration of lactate in sweat to give a real-time picture of the intensity of exercise or the condition of muscles.
    Although lactate-measuring wearable sensors have already been proposed, most of them are composed of materials that can cause irritation of the skin. To address this problem, a team of scientists in Japan recently carried out a study to bring us a more comfortable and practical sensor. Their work, which was published in Electrochimica Acta, was led by Associate Professor Isao Shitanda, Mr. Masaya Mitsumoto, and Dr. Noya Loew from the Department of Pure and Applied Chemistry at the Tokyo University of Science, Japan.
    The team first focused on the sensing mechanism that they would employ in the sensor. Most lactate biosensors are made by immobilizing lactate oxidase (an enzyme) and an appropriate mediator on an electrode. A chemical reaction involving lactate oxidase, the mediator, and free lactate results in the generation of a measurable current between electrodes — a current that is roughly proportional to the concentration of lactate.
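    In practice, such a roughly linear current response is read out with a calibration curve. The sketch below (with invented numbers, not data from the paper) shows the idea of fitting that line and inverting it to estimate an unknown concentration:

```python
import numpy as np

# Hypothetical calibration data for an amperometric lactate sensor:
# measured current (microamps) at known lactate concentrations (mM).
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
current = np.array([0.1, 2.1, 4.0, 6.2, 8.1])

# The text says current is roughly proportional to concentration,
# so fit a line and invert it to read out unknown samples.
slope, intercept = np.polyfit(conc, current, 1)

def lactate_from_current(i_measured):
    return (i_measured - intercept) / slope

print(round(lactate_from_current(5.0), 1), "mM")
```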
    A tricky aspect here is how to immobilize the enzyme and mediator on an electrode. To do this, the scientists employed a method called “electron beam-induced graft polymerization,” by which functional molecules were bonded to a carbon-based material that can spontaneously bind to the enzyme. The researchers then turned the material into a liquid ink that can be used to print electrodes. This last part turns out to be an important aspect for the future commercialization of the sensor, as Dr. Shitanda explains, “The fabrication of our sensor is compatible with screen printing, an excellent method for fabricating lightweight, flexible electrodes that can be scaled up for mass production.”
    With the sensing mechanism complete, the team then designed an appropriate system for collecting sweat and delivering it to the sensor. They achieved this with a microfluidic sweat collection system made out of polydimethylsiloxane (PDMS); it comprised multiple small inlets, an outlet, and a chamber for the sensor in between. “We decided to use PDMS because it is a soft, nonirritating material suitable for our microfluidic sweat collection system, which is to be in direct contact with the skin,” comments Mr. Mitsumoto.
    The detection limits of the sensor and its operating range for lactate concentrations were confirmed to be suitable for investigating the “lactate threshold” — the point at which aerobic (with oxygen) metabolism turns into anaerobic (without oxygen) metabolism during exercise. Real-time monitoring of this bodily phenomenon is important for several applications, as Dr. Loew remarks, “Monitoring the lactate threshold will help optimize the training of athletes and the exercise routines of rehabilitation patients and the elderly, as well as control the exertion of high-performance workers such as firefighters.”
    The team is already testing the implementation of this sensor in practical scenarios. With any luck, the progress made in this study will help develop the field of wearable chemical sensors, helping us to keep better track of our bodily processes and maintain better health.

    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • You snooze, you lose – with some sleep trackers

    Wearable sleep tracking devices — from Fitbit to Apple Watch to never-heard-of brands stashed away in the electronics clearance bin — have infiltrated the market at a rapid pace in recent years.
    And like any consumer products, not all sleep trackers are created equal, according to West Virginia University neuroscientists.
    Prompted by a lack of independent, third-party evaluations of these devices, a research team led by Joshua Hagen, director of the Human Performance Innovation Center at the WVU Rockefeller Neuroscience Institute, tested the efficacy of eight commercial sleep trackers.
    Fitbit and Oura came out on top in measuring total sleep time, total wake time and sleep efficiency, the results indicate. All other devices, however, either overestimated or underestimated at least one of those sleep metrics, and none of the eight could quantify sleep stages (REM, non-REM) with sufficient accuracy to be useful when compared to an electroencephalogram, or EEG, which records electrical activity in the brain.
    The study is published in the Nature and Science of Sleep.
    “The biggest takeaway is that not all consumer devices are created equal, and for the end user to take care in selecting the technology to suit their application based on the data,” Hagen said. “Some devices are currently performing well for total sleep time and sleep efficiency, but the community at large seems to still struggle with sleep staging (deep, REM, light). This is not surprising, since typically brain waves are needed to properly measure this. However, when thinking about what you generally have control over with your sleep — time to bed, time in bed, choices before bed that impact sleep efficiency — these can be accurately measured in some devices.”
    Researchers observed five healthy adults — two males, ages 26 and 41, and three females, ages 22, 23 and 27 — who participated by wearing the sleep trackers for a combined total of 98 nights.
    The commercial sleep technologies displayed lower error and bias values when quantifying sleep/wake states as compared to sleep staging durations. Still, these findings revealed that there is a remarkably high degree of variability in the accuracy of commercial sleep technologies, the researchers stated.
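    Error and bias here boil down to simple comparisons against the EEG reference. The sketch below (with invented numbers, not the study's data) shows how per-night bias and mean absolute error would be computed for one device's total-sleep-time estimates:

```python
import numpy as np

# Hypothetical per-night total sleep time (minutes): EEG reference vs.
# one wearable's estimate over five nights.
eeg = np.array([420, 385, 450, 400, 410])
device = np.array([435, 380, 470, 395, 430])

diff = device - eeg
print("bias (min):", diff.mean())              # systematic over/underestimate
print("mean abs error (min):", np.abs(diff).mean())
```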
    “While technology, both hardware and software, continually advances, it is critical to evaluate the accuracy of these devices in an ongoing fashion,” Hagen said. “Updates to hardware, firmware and algorithms happen continuously, and we must understand how this affects accuracy.”
    Research in this area will evolve with the technology, added Hagen, who himself utilizes four to five sleep devices to keep monitoring his ZZZs.
    “I’m a big believer in living the research,” he said. “I need to understand what the consumer sees in the smartphone apps, what the usability of the devices is, etc. Without that objective sleep data, you can only rely on how you feel when you wake up — and while that is important, that doesn’t tell the whole story. If your alarm goes off and you happen to be in a deep sleep stage, you will wake up very groggy, and could feel as though that sleep was not restorative, when in fact it could have been. It’s just not subjectively noticeable right at that moment.”
    At the end of the day, however, it’s up to the user’s needs as to which product may be most suited for that person, Hagen added.
    “After accuracy, it comes down to logistics. Do you prefer a watch with a display? A ring? A mattress sensor? What is the price of each? Which smartphone app is most appealing? But again, that is if all accuracies are close to equal. If the price is right and the form factor is ideal, but the data accuracy is extremely poor, then those factors don’t matter.”
    The Human Performance Innovation Center works with members of the US military along with collegiate and professional athletes to better understand and optimize human performance, resiliency, and recovery, applying these findings to solutions for the general and clinical populations.
    Joining Hagen in the study from WVU were Jason Stone, Lauren Rentz, Jillian Forsey, Jad Ramadan, Victor Finomore, Scott Galster and Ali Rezai.
    Citation: Evaluations of Commercial Sleep Technologies for Objective Monitoring During Routine Sleeping Conditions