More stories

  • Personal health trackers may include smart face mask, other wearables

    For years, automotive companies have developed intelligent sensors to provide real-time monitoring of a vehicle’s health, including engine oil pressure, tire pressure and air-fuel mixture. Together, these sensors serve as an early warning system, helping a driver identify a potential problem before it becomes serious enough to require a repair.
    Now, in a similar vein for human health, Zheng Yan, an assistant professor in the College of Engineering at the University of Missouri, has recently published two studies demonstrating different ways to improve wearable bioelectronic devices and materials to provide better real-time monitoring of a person’s health, including vital signs.
    Developing a ‘smart’ face mask
    The onset of the COVID-19 pandemic has brought the idea of mask-wearing to the forefront of many people’s minds. In response, one focus of Yan’s lab has been to develop breathable soft bioelectronics. He said it was natural for him and his team to come up with the idea for integrating bioelectronics in a breathable face mask, which can monitor someone’s physiological status based on the nature of the person’s cough. Their findings were recently published in ACS Nano, a journal of the American Chemical Society.
    “Different respiratory problems lead to different cough frequencies and degrees,” Yan said. “Taking chronic obstructive pulmonary disease (COPD) as an example, the frequency of cough in the early morning is higher than that in the daytime and night. Our smart face mask can effectively monitor cough frequencies, which may assist physicians with knowing disease development and providing timely, customized interventions.”
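The time-of-day pattern Yan describes is easy to picture in code. As a minimal sketch (not the team’s software), the snippet below bins hypothetical cough events detected by such a mask into coarse periods of the day, the kind of summary that could reveal a COPD-like early-morning spike:

```python
from collections import Counter
from datetime import datetime

def cough_frequency_by_period(timestamps):
    """Bin cough events (ISO timestamps) into coarse periods of the day.

    A higher early-morning count relative to daytime is the kind of
    pattern the article says may flag conditions such as COPD.
    """
    def period(hour):
        if 4 <= hour < 8:
            return "early_morning"
        if 8 <= hour < 20:
            return "daytime"
        return "night"

    counts = Counter(period(datetime.fromisoformat(t).hour) for t in timestamps)
    return {p: counts.get(p, 0) for p in ("early_morning", "daytime", "night")}

# Hypothetical sensor events: five coughs before 8 a.m., two later in the day.
events = ["2022-06-01T05:10:00", "2022-06-01T05:12:30", "2022-06-01T06:01:00",
          "2022-06-01T06:45:00", "2022-06-01T07:30:00",
          "2022-06-01T13:00:00", "2022-06-01T21:15:00"]
print(cough_frequency_by_period(events))
# → {'early_morning': 5, 'daytime': 1, 'night': 1}
```

The period boundaries here are arbitrary; a clinical tool would choose bins informed by the disease being monitored.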
    In addition to monitoring someone’s physiological status, the mask can also help identify proper mask wearing in public places using a bioelectronic sensor, Yan said. At this time, the mask cannot provide automatic reminders, but the team would like to develop that function in the future.

  • Are babies the key to the next generation of artificial intelligence?

    Babies can help unlock the next generation of artificial intelligence (AI), according to Trinity College neuroscientists and colleagues who have just published new guiding principles for improving AI.
    The research, published today in the journal Nature Machine Intelligence, examines the neuroscience and psychology of infant learning and distills three principles to guide the next generation of AI, helping to overcome the most pressing limitations of machine learning.
    Dr Lorijn Zaadnoordijk, Marie Skłodowska-Curie Research Fellow at Trinity College, explained:
    “Artificial intelligence (AI) has made tremendous progress in the last decade, giving us smart speakers, autopilots in cars, ever-smarter apps, and enhanced medical diagnosis. These exciting developments in AI have been achieved thanks to machine learning which uses enormous datasets to train artificial neural network models. However, progress is stalling in many areas because the datasets that machines learn from must be painstakingly curated by humans. But we know that learning can be done much more efficiently, because infants don’t learn this way! They learn by experiencing the world around them, sometimes by even seeing something just once.”
    In their article “Lessons from infant learning for unsupervised machine learning,” Dr Lorijn Zaadnoordijk and Professor Rhodri Cusack, from the Trinity College Institute of Neuroscience, and Dr Tarek R. Besold from TU Eindhoven, the Netherlands, argue that better ways to learn from unstructured data are needed. For the first time, they make concrete proposals about what particular insights from infant learning can be fruitfully applied in machine learning and how exactly to apply these learnings.
    Machines, they say, will need in-built preferences to shape their learning from the beginning. They will need to learn from richer datasets that capture how the world is looking, sounding, smelling, tasting and feeling. And, like infants, they will need to have a developmental trajectory, where experiences and networks change as they “grow up.”
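The third of those principles, a developmental trajectory, can be illustrated with a toy curriculum in which the training data grows richer stage by stage, much as an infant’s experience does. This is a speculative sketch, not the authors’ proposal in code; the stage names, modalities, and training step are all invented for illustration:

```python
def developmental_curriculum(stages, train_step):
    """Run training stage by stage; each stage unlocks richer data,
    and experience accumulates rather than being replaced."""
    history = []
    unlocked = []
    for stage_name, new_data in stages:
        unlocked.extend(new_data)          # experience accumulates
        loss = train_step(stage_name, list(unlocked))
        history.append((stage_name, len(unlocked), loss))
    return history

# Hypothetical modalities standing in for looking, hearing, and feeling.
stages = [
    ("newborn", ["low_res_vision"]),
    ("infant",  ["high_res_vision", "audio"]),
    ("toddler", ["touch", "proprioception"]),
]

def fake_train_step(stage, data):
    # Stand-in for a real training loop: loss shrinks as data gets richer.
    return round(1.0 / len(data), 2)

for stage, n_modalities, loss in developmental_curriculum(stages, fake_train_step):
    print(stage, n_modalities, loss)
```

In a real system, `train_step` would be a full machine-learning training loop and the stages would gate dataset richness and network capacity together.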
    Dr. Tarek R. Besold, Researcher, Philosophy & Ethics group at TU Eindhoven, said:
    “As AI researchers we often draw metaphorical parallels between our systems and the mental development of human babies and children. It is high time to take these analogies more seriously and look at the rich knowledge of infant development from psychology and neuroscience, which may help us overcome the most pressing limitations of machine learning.”
    Professor Rhodri Cusack, The Thomas Mitchell Professor of Cognitive Neuroscience, Director of Trinity College Institute of Neuroscience, added:
    “Artificial neural networks were in parts inspired by the brain. Similar to infants, they rely on learning, but current implementations are very different from human (and animal) learning. Through interdisciplinary research, babies can help unlock the next generation of AI.”
    Story Source:
    Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

  • Modeling a devastating childhood disease on a chip

    Millions of children in low- and middle-income nations suffer from environmental enteric dysfunction (EED), a chronic inflammatory disease of the intestine that is the second leading cause of death in children under five years old. EED is a devastating condition that is associated with malnutrition, stunted growth, and poor cognitive development, permanently impacting patients’ quality of life. In addition, oral vaccines are less effective in children with EED, leaving them vulnerable to otherwise preventable diseases. While some cases of EED are treatable by simply improving a patient’s diet, better nutrition doesn’t help all children. A lack of adequate nutrients and exposure to contaminated water and food contribute to EED, but the underlying mechanism of the disease remains unknown.
    Now, a team of researchers at the Wyss Institute at Harvard University has created an in vitro human model of EED in a microengineered Intestine Chip device, providing a window into the complex interplay between malnutrition and genetic factors driving the disease. Their EED Chips recapitulate several features of EED found in biopsies from human patients, including inflammation, intestinal barrier dysfunction, reduced nutrient absorption, and atrophy of the villi (tiny hair-like projections) on intestinal cells.
    They also found that depriving healthy Intestine Chips of two crucial nutrients — niacinamide (a vitamin) and tryptophan (an essential amino acid) — caused morphological, functional, and genetic changes similar to those found in EED patients, suggesting that their model could be used to identify and test the effects of potential treatments.
    “Functionally, there is something very wrong with these kids’ digestive system and its ability to absorb nutrients and fight infections, which you can’t cure simply by giving them the nutrients that are missing from their diet. Our EED model allowed us to decipher what has happened to the intestine, both physically and genetically, that so dramatically affects its normal function in patients with EED,” said co-first author Amir Bein, R.D., Ph.D., a former Senior Postdoctoral Research Fellow at the Wyss Institute who is now the VP of Biology at Quris Technologies.
    The research is published today in Nature Biomedical Engineering.
    Modeling a complex disease on-a-chip
    The EED Chip project grew out of conversations between the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., and the Bill and Melinda Gates Foundation, which has an established interest in supporting research to understand and treat enteric diseases. Recognizing that there had been no in vitro studies of EED to probe its molecular mechanisms, a Wyss team of more than 20 people set about creating a model of EED using the Institute’s Human Organ Chip technology, developed in Ingber’s lab.

  • Where once were black boxes, new LANTERN illuminates

    Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.
    The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built on deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
    Described in a new paper published in the Proceedings of the National Academy of Sciences, LANTERN shows the ability to predict the genetic edits needed to create useful differences in three different proteins. One is the spike-shaped protein from the surface of the SARS-CoV-2 virus that causes COVID-19; understanding how changes in the DNA can alter this spike protein might help epidemiologists predict the future of the pandemic. The other two are well-known lab workhorses: the LacI protein from the E. coli bacterium and the green fluorescent protein (GFP) used as a marker in biology experiments. Selecting these three subjects allowed the NIST team to show not only that their tool works, but also that its results are interpretable — an important characteristic for industry, which needs predictive methods that help with understanding of the underlying system.
    “We have an approach that is fully interpretable and that also has no loss in predictive power,” said Peter Tonner, a statistician and computational biologist at NIST and LANTERN’s main developer. “There’s a widespread assumption that if you want one of those things you can’t have the other. We’ve shown that sometimes, you can have both.”
    The problem the NIST team is tackling might be imagined as interacting with a complex machine that sports a vast control panel filled with thousands of unlabeled switches: The device is a gene, a strand of DNA that encodes a protein; the switches are base pairs on the strand. The switches all affect the device’s output somehow. If your job is to make the machine work differently in a specific way, which switches should you flip?
    Because the answer might require changes to multiple base pairs, scientists have to flip some combination of them, measure the result, then choose a new combination and measure again. The number of permutations is daunting.
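A quick back-of-the-envelope calculation shows why. Assuming a hypothetical gene of 1,000 base pairs and allowing at most three substitutions (each position can change to one of three other bases), the number of distinct variants already runs into the billions:

```python
from math import comb

def variant_count(length, k_max, alphabet=4):
    """Number of distinct DNA variants with up to k_max substitutions
    across `length` base pairs; each changed position can take any of
    alphabet - 1 other bases."""
    return sum(comb(length, k) * (alphabet - 1) ** k for k in range(k_max + 1))

# A short 1,000-base-pair gene with at most 3 substitutions:
print(f"{variant_count(1000, 3):,}")
# → 4,491,007,501
```

The gene length and substitution budget here are illustrative numbers, not figures from the NIST study; real proteins and edit budgets make the space far larger still.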

  • Organic bipolar transistor developed

    Prof. Karl Leo had been thinking about realizing this component for more than 20 years; now it has become reality. His research group at the Institute for Applied Physics at TU Dresden has presented the first highly efficient organic bipolar transistor. This opens up completely new perspectives for organic electronics, both in data processing and transmission and in medical technology applications. The results of the research have now been published in the leading journal Nature.
    The invention of the transistor in 1947 by Shockley, Bardeen and Brattain at Bell Laboratories ushered in the age of microelectronics and revolutionized our lives. Bipolar transistors, in which both negative and positive charge carriers contribute to current transport, were invented first; unipolar field-effect transistors followed only later. The performance gains from scaling silicon electronics into the nanometer range have immensely accelerated data processing. However, this rigid technology is less suitable for new types of flexible electronic components, such as rollable TV displays, or for medical applications on or even in the body.
    For such applications, transistors made of organic, i.e., carbon-based, semiconductors have come into focus in recent years. Organic field-effect transistors were introduced as early as 1986, but their performance still lags far behind that of silicon components.
    A research group led by Prof. Karl Leo and Dr. Hans Kleemann at TU Dresden has now succeeded for the first time in demonstrating a highly efficient organic bipolar transistor. Crucial to this was the use of highly ordered thin organic layers. The new technology is many times faster than previous organic transistors, and for the first time such components have reached operating frequencies in the gigahertz range, i.e., more than a billion switching operations per second. Dr. Shu-Jen Wang, who co-led the project with Dr. Michael Sawatzki, explains: “The first realization of the organic bipolar transistor was a great challenge, since we had to create layers of very high quality and new structures. However, the excellent parameters of the component reward these efforts!” Prof. Karl Leo adds: “We have been thinking about this device for 20 years and I am thrilled that we have now been able to demonstrate it with the novel highly ordered layers. The organic bipolar transistor and its potential open up completely new perspectives for organic electronics, since they also make demanding tasks in data processing and transmission possible.” Conceivable future applications include intelligent patches equipped with sensors that process the sensor data locally and communicate wirelessly with the outside world.
    Story Source:
    Materials provided by Technische Universität Dresden. Note: Content may be edited for style and length.

  • Can robotics help us achieve sustainable development?

    An international team of scientists, led by the University of Leeds, has assessed how robotics and autonomous systems might facilitate or impede the delivery of the UN Sustainable Development Goals (SDGs).
    Their findings identify key opportunities and key threats that need to be considered while developing, deploying and governing robotics and autonomous systems.
    The key opportunities robotics and autonomous systems present are through autonomous task completion, supporting human activities, fostering innovation, enhancing remote access and improving monitoring.
    Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions, and reducing freedom and privacy through inadequate governance.
    Technological advancements have already profoundly altered how economies operate and how people, society and environments inter-relate. Robotics and autonomous systems are reshaping the world, changing healthcare, food production and biodiversity management.
    However, the potential positive and negative effects of their involvement in the SDGs had not been considered systematically. Now, an international team of researchers has conducted a horizon scan to evaluate the impact this cutting-edge technology could have on SDG delivery. It involved more than 102 experts from around the world, including 44 experts from low- and middle-income countries.

  • Engineers devise a recipe for improving any autonomous robotic system

    Autonomous robots have come a long way since the fastidious Roomba. In recent years, artificially intelligent systems have been deployed in self-driving cars, last-mile food delivery, restaurant service, patient screening, hospital cleaning, meal prep, building security, and warehouse packing.
    Each of these robotic systems is a product of an ad hoc design process specific to that particular system. In designing an autonomous robot, engineers must run countless trial-and-error simulations, often informed by intuition. These simulations are tailored to a particular robot’s components and tasks, in order to tune and optimize its performance. In some respects, designing an autonomous robot today is like baking a cake from scratch, with no recipe or prepared mix to ensure a successful outcome.
    Now, MIT engineers have developed a general design tool for roboticists to use as a sort of automated recipe for success. The team has devised an optimization code that can be applied to simulations of virtually any autonomous robotic system and can be used to automatically identify how and where to tweak a system to improve a robot’s performance.
    The team showed that the tool was able to quickly improve the performance of two very different autonomous systems: one in which a robot navigated a path between two obstacles, and another in which a pair of robots worked together to move a heavy box.
    The researchers hope the new general-purpose optimizer can help to speed up the development of a wide range of autonomous systems, from walking robots and self-driving vehicles, to soft and dexterous robots, and teams of collaborative robots.
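As a rough illustration of the kind of design loop involved (not MIT’s actual optimizer, which the article describes only as an optimization code applied to system simulations), the sketch below tunes the parameters of a made-up “simulator” by simple random local search, keeping any tweak that lowers the cost:

```python
import random

def tune(simulate, params, iters=200, step=0.1, seed=0):
    """Toy parameter tuner: random local search over a simulated system's
    design parameters, accepting any perturbation that lowers the cost.
    A stand-in for the question 'how and where should the system be tweaked?'"""
    rng = random.Random(seed)
    best, best_cost = list(params), simulate(params)
    for _ in range(iters):
        trial = [p + rng.uniform(-step, step) for p in best]
        cost = simulate(trial)
        if cost < best_cost:
            best, best_cost = trial, cost
    return best, best_cost

# Hypothetical "simulator": cost is the distance from an ideal pair of
# controller gains (kp, kd); a real one would run a full robot simulation.
def simulate(params):
    kp, kd = params
    return (kp - 1.5) ** 2 + (kd - 0.4) ** 2

tuned_params, tuned_cost = tune(simulate, [0.0, 0.0])
print(tuned_params, tuned_cost)
```

The MIT tool reportedly works on simulations of virtually any autonomous system; this toy keeps only the black-box "simulate, tweak, re-simulate" shape of that idea.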
    The team, composed of Charles Dawson, an MIT graduate student, and ChuChu Fan, assistant professor in MIT’s Department of Aeronautics and Astronautics, will present its findings later this month at the annual Robotics: Science and Systems conference in New York.

  • Optical microphone sees sound like never before

    A camera system developed by Carnegie Mellon University researchers can see sound vibrations with such precision and detail that it can reconstruct the music of a single instrument in a band or orchestra.
    Even the most high-powered and directional microphones can’t eliminate nearby sounds, ambient noise and the effects of room acoustics when they capture audio. The novel system developed in the School of Computer Science’s Robotics Institute (RI) uses two cameras and a laser to sense high-speed, low-amplitude surface vibrations. These vibrations can be used to reconstruct sound, capturing isolated audio without interference or a microphone.
    “We’ve invented a new way to see sound,” said Mark Sheinin, a post-doctoral research associate at the Illumination and Imaging Laboratory (ILIM) in the RI. “It’s a new type of camera system, a new imaging device, that is able to see something invisible to the naked eye.”
    The team completed several successful demos of the system’s effectiveness in sensing vibrations and of the quality of its sound reconstruction. They captured isolated audio from separate guitars playing at the same time and from individual speakers playing different music simultaneously. They analyzed the vibrations of a tuning fork, and used the vibrations of a bag of Doritos placed near a speaker to capture the sound coming from it. The demo pays tribute to prior work by MIT researchers, who developed one of the first visual microphones in 2014.
    The CMU system dramatically improves upon past attempts to capture sound using computer vision. The team’s work uses ordinary cameras that cost a fraction of the high-speed versions employed in past research while producing a higher quality recording. The dual-camera system can capture vibrations from objects in motion, such as the movements of a guitar while a musician plays it, and simultaneously sense individual sounds from multiple points.
    “We’ve made the optical microphone much more practical and usable,” said Srinivasa Narasimhan, a professor in the RI and head of the ILIM. “We’ve made the quality better while bringing the cost down.”
    The system works by analyzing the differences in speckle patterns from images captured with a rolling shutter and a global shutter. An algorithm computes the difference in the speckle patterns from the two video streams and converts those differences into vibrations to reconstruct the sound.
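To make the idea concrete, here is a toy one-dimensional stand-in for that analysis (not the CMU algorithm): it recovers the displacement between a reference speckle profile and later frames by brute-force cross-correlation, and tracking that displacement over frames yields a vibration waveform:

```python
def shift_between(ref, frame):
    """Estimate the integer displacement between two 1-D 'speckle'
    profiles by brute-force cross-correlation: the lag with the
    highest overlap score wins."""
    n = len(ref)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n + 1, n):
        score = sum(ref[i] * frame[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A spiky synthetic speckle profile, and frames displaced by a small
# hypothetical surface vibration (circular shift keeps the toy simple).
ref = [0, 1, 0, 3, 0, 0, 2, 0, 1, 0, 0, 4, 0, 0, 1, 0]

def shifted(sig, d):
    n = len(sig)
    return [sig[(i - d) % n] for i in range(n)]

vibration = [0, 1, 2, 1, 0, -1, -2, -1]
recovered = [shift_between(ref, shifted(ref, d)) for d in vibration]
print(recovered)  # → [0, 1, 2, 1, 0, -1, -2, -1]
```

The real system works in two dimensions on laser speckle images and must additionally reconcile the rolling-shutter and global-shutter streams; this sketch shows only the correlation-to-displacement step.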