More stories

  • Modeling a devastating childhood disease on a chip

    Millions of children in low- and middle-income nations suffer from environmental enteric dysfunction (EED), a chronic inflammatory disease of the intestine that is the second leading cause of death in children under five years old. EED is a devastating condition that is associated with malnutrition, stunted growth, and poor cognitive development, permanently impacting patients’ quality of life. In addition, oral vaccines are less effective in children with EED, leaving them vulnerable to otherwise preventable diseases. While some cases of EED are treatable by simply improving a patient’s diet, better nutrition doesn’t help all children. A lack of adequate nutrients and exposure to contaminated water and food contribute to EED, but the underlying mechanism of the disease remains unknown.
    Now, a team of researchers at the Wyss Institute at Harvard University has created an in vitro human model of EED in a microengineered Intestine Chip device, providing a window into the complex interplay between malnutrition and genetic factors driving the disease. Their EED Chips recapitulate several features of EED found in biopsies from human patients, including inflammation, intestinal barrier dysfunction, reduced nutrient absorption, and atrophy of the villi (tiny hair-like projections) on intestinal cells.
    They also found that depriving healthy Intestine Chips of two crucial nutrients — niacinamide (a vitamin) and tryptophan (an essential amino acid) — caused morphological, functional, and genetic changes similar to those found in EED patients, suggesting that their model could be used to identify and test the effects of potential treatments.
    “Functionally, there is something very wrong with these kids’ digestive system and its ability to absorb nutrients and fight infections, which you can’t cure simply by giving them the nutrients that are missing from their diet. Our EED model allowed us to decipher what has happened to the intestine, both physically and genetically, that so dramatically affects its normal function in patients with EED,” said co-first author Amir Bein, R.D., Ph.D., a former Senior Postdoctoral Research Fellow at the Wyss Institute who is now the VP of Biology at Quris Technologies.
    The research is published today in Nature Biomedical Engineering.
    Modeling a complex disease on a chip
    The EED Chip project grew out of conversations between the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., and the Bill & Melinda Gates Foundation, which has an established interest in supporting research to understand and treat enteric diseases. Recognizing that no in vitro model of EED existed in which to study its molecular mechanisms, a Wyss team of more than 20 people set about creating one using the Human Organ Chip technology developed in Ingber’s lab.

  • Where once were black boxes, new LANTERN illuminates

    Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.
    The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built on deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
    Described in a new paper published in the Proceedings of the National Academy of Sciences, LANTERN shows the ability to predict the genetic edits needed to create useful differences in three different proteins. One is the spike-shaped protein from the surface of the SARS-CoV-2 virus that causes COVID-19; understanding how changes in the DNA can alter this spike protein might help epidemiologists predict the future of the pandemic. The other two are well-known lab workhorses: the LacI protein from the E. coli bacterium and the green fluorescent protein (GFP) used as a marker in biology experiments. Selecting these three subjects allowed the NIST team to show not only that their tool works, but also that its results are interpretable — an important characteristic for industry, which needs predictive methods that help with understanding of the underlying system.
    “We have an approach that is fully interpretable and that also has no loss in predictive power,” said Peter Tonner, a statistician and computational biologist at NIST and LANTERN’s main developer. “There’s a widespread assumption that if you want one of those things you can’t have the other. We’ve shown that sometimes, you can have both.”
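    To see why that claim is plausible, it helps to look at the model’s shape. As the paper describes it, LANTERN assigns each mutation a vector in a low-dimensional latent space; a variant’s vectors simply add, and a smooth learned surface maps the summed position to a predicted phenotype, so every parameter stays readable. Below is a minimal sketch of that structure only; the mutation names, vectors and surface are invented for illustration, not LANTERN’s learned values.

        # Illustrative sketch of an additive-latent, interpretable model.
        # All numbers are made up; LANTERN learns its embeddings and
        # surface from experimental mutation data.
        import numpy as np

        # Hypothetical learned quantities: a 2-D latent vector per mutation.
        mutation_effects = {
            "A12T": np.array([0.8, -0.1]),
            "G45R": np.array([-0.3, 0.5]),
            "K67E": np.array([0.2, 0.9]),
        }

        def surface(z):
            # Smooth latent-to-phenotype map (a fixed sigmoid here).
            return 1.0 / (1.0 + np.exp(-(1.5 * z[0] + 0.5 * z[1])))

        def predict(mutations):
            # Additivity in latent space is the interpretability hook:
            # each mutation contributes one readable vector.
            z = sum((mutation_effects[m] for m in mutations), np.zeros(2))
            return surface(z)

        print(predict(["A12T"]))          # effect of a single mutation
        print(predict(["A12T", "K67E"]))  # combined variant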
    The problem the NIST team is tackling might be imagined as interacting with a complex machine that sports a vast control panel filled with thousands of unlabeled switches: The device is a gene, a strand of DNA that encodes a protein; the switches are base pairs on the strand. The switches all affect the device’s output somehow. If your job is to make the machine work differently in a specific way, which switches should you flip?
    Because the answer might require changes to multiple base pairs, scientists have to flip some combination of them, measure the result, then choose a new combination and measure again. The number of permutations is daunting.
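    To make “daunting” concrete, consider a hypothetical gene of 1,000 base pairs in which each position can be changed to any of three alternative bases (both numbers are assumptions chosen purely for illustration):

        # Count the variants reachable by changing exactly k positions of a
        # 1,000-base-pair gene, with 3 alternative bases per position.
        from math import comb

        LENGTH = 1_000
        ALTERNATIVES = 3

        for k in range(1, 6):
            variants = comb(LENGTH, k) * ALTERNATIVES ** k
            print(f"{k} simultaneous changes: {variants:,} possible variants")

    Five simultaneous changes already allow roughly two quadrillion variants, far beyond what any lab could measure exhaustively.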

  • Organic bipolar transistor developed

    Prof. Karl Leo had been thinking about how to realize this component for more than 20 years; now it has become reality. His research group at the Institute for Applied Physics at TU Dresden has presented the first highly efficient organic bipolar transistor. This opens up completely new perspectives for organic electronics, both in data processing and transmission and in medical technology applications. The results have now been published in the journal Nature.
    The invention of the transistor in 1947 by Shockley, Bardeen and Brattain at Bell Laboratories ushered in the age of microelectronics and revolutionized our lives. Bipolar transistors, in which both negative and positive charge carriers contribute to current transport, were invented first; unipolar field-effect transistors followed only later. The scaling of silicon electronics into the nanometer range has immensely accelerated data processing, but this very rigid technology is less suitable for new types of flexible electronic components, such as rollable TV displays, or for medical applications on or even in the body.
    For such applications, transistors made of organic, i.e. carbon-based, semiconductors have come into focus in recent years. Organic field-effect transistors were introduced as early as 1986, but their performance still lags far behind that of silicon components.
    A research group led by Prof. Karl Leo and Dr. Hans Kleemann at TU Dresden has now succeeded for the first time in demonstrating an organic, highly efficient bipolar transistor. Crucial to this was the use of highly ordered thin organic layers. The new technology is many times faster than previous organic transistors, and for the first time such components have reached operating frequencies in the gigahertz range, i.e. more than a billion switching operations per second. Dr. Shu-Jen Wang, who co-led the project with Dr. Michael Sawatzki, explains: “The first realization of the organic bipolar transistor was a great challenge, since we had to create layers of very high quality and new structures. However, the excellent parameters of the component reward these efforts!” Prof. Karl Leo adds: “We have been thinking about this device for 20 years and I am thrilled that we have now been able to demonstrate it with the novel highly ordered layers. The organic bipolar transistor and its potential open up completely new perspectives for organic electronics, since they also make demanding tasks in data processing and transmission possible.” Conceivable future applications include smart patches equipped with sensors that process the sensor data locally and transmit it wirelessly to the outside world.
    Story Source:
    Materials provided by Technische Universität Dresden.

  • Can robotics help us achieve sustainable development?

    An international team of scientists, led by the University of Leeds, has assessed how robotics and autonomous systems might facilitate or impede the delivery of the UN Sustainable Development Goals (SDGs).
    Their findings identify key opportunities and key threats that need to be considered while developing, deploying and governing robotics and autonomous systems.
    The key opportunities robotics and autonomous systems present lie in autonomous task completion, supporting human activities, fostering innovation, enhancing remote access and improving monitoring.
    Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions, and reducing freedom and privacy through inadequate governance.
    Technological advancements have already profoundly altered how economies operate and how people, society and environments inter-relate. Robotics and autonomous systems are reshaping the world, changing healthcare, food production and biodiversity management.
    However, the potential positive and negative effects of these technologies on the delivery of the SDGs had not previously been considered systematically. Now, the researchers have conducted a horizon scan to evaluate the impact this cutting-edge technology could have on SDG delivery. It involved more than 100 experts from around the world, including 44 from low- and middle-income countries.

  • Engineers devise a recipe for improving any autonomous robotic system

    Autonomous robots have come a long way since the fastidious Roomba. In recent years, artificially intelligent systems have been deployed in self-driving cars, last-mile food delivery, restaurant service, patient screening, hospital cleaning, meal prep, building security, and warehouse packing.
    Each of these robotic systems is a product of an ad hoc design process specific to that particular system. In designing an autonomous robot, engineers must run countless trial-and-error simulations, often informed by intuition. These simulations are tailored to a particular robot’s components and tasks, in order to tune and optimize its performance. In some respects, designing an autonomous robot today is like baking a cake from scratch, with no recipe or prepared mix to ensure a successful outcome.
    Now, MIT engineers have developed a general design tool for roboticists to use as a sort of automated recipe for success. The team has devised an optimization code that can be applied to simulations of virtually any autonomous robotic system and can be used to automatically identify how and where to tweak a system to improve a robot’s performance.
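    The article does not spell out the optimizer’s internals, so the following is only a generic sketch of the underlying recipe it gestures at: treat the design parameters as inputs to a simulated cost, then improve them automatically. The toy obstacle-course “simulation,” the two parameters and the finite-difference gradient are all illustrative assumptions, not the MIT team’s code.

        # Toy simulation-in-the-loop design optimization: tune a point
        # robot's controller gains so it reaches a goal while staying away
        # from two obstacles. Lower cost is better.
        import numpy as np

        OBSTACLES = [np.array([1.0, 0.6]), np.array([1.0, -0.6])]
        GOAL = np.array([2.0, 0.0])

        def simulate_cost(params):
            gain, avoid = params
            pos = np.zeros(2)
            penalty = 0.0
            for _ in range(60):
                vel = gain * (GOAL - pos)                # steer to the goal
                for obs in OBSTACLES:
                    d = pos - obs
                    vel += avoid * d / (d @ d + 1e-3)    # repel from obstacles
                pos = pos + 0.05 * vel
                for obs in OBSTACLES:
                    penalty += max(0.0, 0.3 - np.linalg.norm(pos - obs))
            return np.linalg.norm(pos - GOAL) + penalty

        def finite_diff_grad(f, x, eps=1e-4):
            g = np.zeros_like(x)
            for i in range(len(x)):
                step = np.zeros_like(x)
                step[i] = eps
                g[i] = (f(x + step) - f(x - step)) / (2 * eps)
            return g

        params = np.array([0.5, 0.05])   # initial design guess
        for _ in range(100):
            params -= 0.05 * finite_diff_grad(simulate_cost, params)

        print("tuned parameters:", params, "final cost:", simulate_cost(params))

    A real tool would differentiate through a far richer simulation, but the loop is the same: simulate, score, adjust, repeat.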
    The team showed that the tool was able to quickly improve the performance of two very different autonomous systems: one in which a robot navigated a path between two obstacles, and another in which a pair of robots worked together to move a heavy box.
    The researchers hope the new general-purpose optimizer can help to speed up the development of a wide range of autonomous systems, from walking robots and self-driving vehicles to soft and dexterous robots and teams of collaborative robots.
    The team, composed of Charles Dawson, an MIT graduate student, and Chuchu Fan, assistant professor in MIT’s Department of Aeronautics and Astronautics, will present its findings later this month at the annual Robotics: Science and Systems conference in New York.

  • Optical microphone sees sound like never before

    A camera system developed by Carnegie Mellon University researchers can see sound vibrations with such precision and detail that it can reconstruct the music of a single instrument in a band or orchestra.
    Even the most high-powered and directed microphones can’t eliminate nearby sounds, ambient noise and the effect of acoustics when they capture audio. The novel system developed in the School of Computer Science’s Robotics Institute (RI) uses two cameras and a laser to sense high-speed, low-amplitude surface vibrations. These vibrations can be used to reconstruct sound, capturing isolated audio without interference or a microphone.
    “We’ve invented a new way to see sound,” said Mark Sheinin, a post-doctoral research associate at the Illumination and Imaging Laboratory (ILIM) in the RI. “It’s a new type of camera system, a new imaging device, that is able to see something invisible to the naked eye.”
    The team completed several successful demos of their system’s effectiveness in sensing vibrations and the quality of its sound reconstruction. They captured isolated audio from separate guitars playing at the same time and from individual speakers playing different music simultaneously. They analyzed the vibrations of a tuning fork, and used the vibrations of a bag of Doritos placed near a speaker to capture the sound coming from it. This demo pays tribute to prior work by MIT researchers who developed one of the first visual microphones in 2014.
    The CMU system dramatically improves upon past attempts to capture sound using computer vision. The team’s work uses ordinary cameras that cost a fraction of the high-speed versions employed in past research while producing a higher quality recording. The dual-camera system can capture vibrations from objects in motion, such as the movements of a guitar while a musician plays it, and simultaneously sense individual sounds from multiple points.
    “We’ve made the optical microphone much more practical and usable,” said Srinivasa Narasimhan, a professor in the RI and head of the ILIM. “We’ve made the quality better while bringing the cost down.”
    The system works by analyzing differences in the speckle patterns recorded by the two cameras: one with a rolling shutter, which exposes an image row by row and therefore samples the scene at a very high rate, and one with a global shutter, which captures the whole frame at once. An algorithm computes the difference between the speckle patterns in the two video streams and converts those differences into vibrations to reconstruct the sound.
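    The paper’s actual algorithm is considerably more sophisticated, but the core signal chain, a shifting speckle pattern sampled over time and turned back into a waveform, can be imitated in one dimension. Everything below (the random pattern, the row rate, the 440 Hz tone, the Taylor-series shift estimator) is a toy stand-in, not the CMU pipeline.

        # Toy 1-D speckle vibrometry: a vibrating surface shifts its speckle
        # pattern; estimating that shift over time recovers the sound.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 512
        reference = rng.standard_normal(N)   # "global shutter" snapshot
        dref = np.gradient(reference)

        def shifted(pattern, s):
            # Displace the pattern by a fractional shift s via the FFT.
            f = np.fft.rfftfreq(pattern.size)
            spec = np.fft.rfft(pattern) * np.exp(-2j * np.pi * f * s)
            return np.fft.irfft(spec, n=pattern.size)

        def estimate_shift(frame):
            # First-order Taylor: frame ~ reference - s * dref; solve for s.
            return np.dot(reference - frame, dref) / np.dot(dref, dref)

        ROW_RATE = 60_000                    # rows/s: frame rate x rows/frame
        t = np.arange(2048) / ROW_RATE
        vibration = 0.2 * np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone

        recovered = np.array(
            [estimate_shift(shifted(reference, s)) for s in vibration])
        print("max reconstruction error:", np.max(np.abs(recovered - vibration)))

    Each “row” observes the speckle at a slightly different instant, which is what lets an ordinary rolling-shutter camera sample vibrations at audio rates.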

  • Topological superconductors: Fertile ground for elusive Majorana ('angel') particle

    A new, multi-node FLEET review investigates the search for Majorana fermions in iron-based superconductors.
    The elusive Majorana fermion, or ‘angel particle’, proposed by Ettore Majorana in 1937 behaves simultaneously like a particle and an antiparticle, and surprisingly remains stable rather than annihilating itself.
    Majorana fermions promise information and communications technology with zero resistance, addressing the rising energy consumption of modern electronics (already 8% of global electricity consumption) and pointing toward a sustainable future for computing.
    Additionally, it is the presence of Majorana zero-energy modes in topological superconductors that has made these exotic quantum materials the main candidates for realizing topological quantum computing.
    The existence of Majorana fermions in condensed-matter systems will help in FLEET’s search for future low-energy electronic technologies.
    The angel particle: both matter and antimatter
    Particles such as electrons, protons, neutrons, quarks and neutrinos (called fermions) each have their distinct antiparticles. An antiparticle has the same mass as its ordinary partner, but opposite electric charge and magnetic moment.

  • Nanostructured surfaces for future quantum computer chips

    Quantum computers are one of the key future technologies of the 21st century. Researchers at Paderborn University, working under Professor Thomas Zentgraf and in cooperation with colleagues from the Australian National University and Singapore University of Technology and Design, have developed a new technology for manipulating light that can be used as a basis for future optical quantum computers. The results have now been published in the journal Nature Photonics.
    New optical elements for manipulating light will allow for more advanced applications in modern information technology, particularly in quantum computers. However, a major remaining challenge is achieving non-reciprocal light propagation through nanostructured surfaces, i.e. surfaces patterned at a tiny scale. Professor Thomas Zentgraf, head of the working group for ultrafast nanophotonics at Paderborn University, explains, “In reciprocal propagation, light can take the same path forward and backward through a structure; however, non-reciprocal propagation is comparable to a one-way street where it can only spread out in one direction.” Non-reciprocity is a special characteristic in optics that causes light to experience different material properties when its direction of travel is reversed. One example would be a window made of glass that is transparent from one side and lets light through, but acts as a mirror on the other side and reflects the light. This is known as duality. “In the field of photonics, such a duality can be very helpful in developing innovative optical elements for manipulating light,” says Zentgraf.
    In a current collaboration between his working group at Paderborn University and researchers at the Australian National University and Singapore University of Technology and Design, non-reciprocal light propagation was combined with a frequency conversion of laser light, in other words a change in the frequency and thus also the colour of the light. “We used the frequency conversion in the specially designed structures, with dimensions in the range of a few hundred nanometres, to convert infrared light — which is invisible to the human eye — into visible light,” explains Dr. Sergey Kruk, Marie Curie Fellow in Zentgraf’s group. The experiments show that this conversion process takes place only in one illumination direction for the nanostructured surface, while it is completely suppressed in the opposite illumination direction. This duality in the frequency conversion characteristics was used to code images into an otherwise transparent surface. “We arranged the various nanostructures in such a way that they produce a different image depending on whether the sample surface is illuminated from the front or the back,” says Zentgraf, adding, “The images only became visible when we used infrared laser light for the illumination.”
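    For a rough sense of the numbers involved: an n-th harmonic process multiplies the light’s frequency by n and therefore divides its wavelength by n. Assuming, purely for illustration, a 1,500 nm infrared input (not necessarily the wavelength used in the paper):

        # Wavelength of the n-th harmonic of an illustrative infrared input.
        WAVELENGTH_IN_NM = 1500.0   # infrared, invisible to the eye

        for n in (2, 3):
            out = WAVELENGTH_IN_NM / n
            print(f"harmonic order {n}: {WAVELENGTH_IN_NM:.0f} nm -> {out:.0f} nm")
        # order 2 gives 750 nm (deep red); order 3 gives 500 nm (green)

    Either process lands the converted light squarely in the range the eye can see, which is why the encoded images become visible only under infrared illumination.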
    In their first experiments, the intensity of the frequency-converted light within the visible range was still very small. The next step, therefore, is to further improve the efficiency so that less infrared light is needed for the frequency conversion. In future optically integrated circuits, this directional control of frequency conversion could be used to switch light directly with other light, or to produce specific photon states for quantum-optical computations directly on a small chip. “Maybe we will see an application in future optical quantum computers where the directed production of individual photons using frequency conversion plays an important role,” says Zentgraf.
    Story Source:
    Materials provided by Universität Paderborn.