More stories

  •

    Made-to-order diagnostic tests may be on the horizon

    McGill University researchers have made a breakthrough in diagnostic technology, inventing a ‘lab on a chip’ that can be 3D-printed in just 30 minutes. The chip has the potential to make on-the-spot testing widely accessible.
    As part of a recent study, the results of which were published in the journal Advanced Materials, the McGill team developed capillaric chips that act as miniature laboratories. Unlike conventional computer chips, these are single-use and require no external power source; a simple paper strip suffices. They function through capillary action, the same phenomenon by which a spilled liquid on the kitchen table spontaneously wicks into the paper towel used to wipe it up.
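The wicking that powers these chips can be estimated from first principles. As a rough illustration (not taken from the study), Jurin's law gives the equilibrium height a liquid climbs in a narrow channel; the channel radius and fluid properties below are illustrative assumptions:

```python
import math

def capillary_rise(surface_tension, contact_angle_deg, density, radius):
    """Jurin's law: equilibrium rise height h = 2*gamma*cos(theta) / (rho*g*r).
    Illustrates why narrow channels wick liquid with no pump or power source."""
    g = 9.81  # gravitational acceleration, m/s^2
    theta = math.radians(contact_angle_deg)
    return 2 * surface_tension * math.cos(theta) / (density * g * radius)

# Water in a 0.1 mm-radius channel (gamma = 0.0728 N/m, rho = 1000 kg/m^3,
# fully wetting surface): the liquid climbs roughly 15 cm unaided.
print(f"{capillary_rise(0.0728, 0.0, 1000.0, 1e-4) * 100:.1f} cm")
```

The smaller the channel, the stronger the effect, which is why microscale features can move samples through a chip entirely passively.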
    “Traditional diagnostics require peripherals, while ours can circumvent them. Our diagnostics are a bit what the cell phone was to traditional desktop computers that required a separate monitor, keyboard and power supply to operate,” explains Prof. David Juncker, Chair of the Department of Biomedical Engineering at McGill and senior author on the study.
    At-home testing became crucial during the COVID-19 pandemic. But rapid tests have limited availability and can only drive one liquid across the strip, meaning most diagnostics are still done in central labs. Notably, the capillaric chips can be 3D-printed for various tests, including COVID-19 antibody quantification.
    The study brings 3D-printed home diagnostics one step closer to reality, though some challenges remain, such as regulatory approvals and securing necessary test materials. The team is actively working to make their technology more accessible, adapting it for use with affordable 3D printers. The innovation aims to speed up diagnoses, enhance patient care, and usher in a new era of accessible testing.
    “This advancement has the capacity to empower individuals, researchers, and industries to explore new possibilities and applications in a more cost-effective and user-friendly manner,” says Prof. Juncker. “This innovation also holds the potential to eventually empower health professionals with the ability to rapidly create tailored solutions for specific needs right at the point-of-care.”

  •

    New conductive, cotton-based fiber developed for smart textiles

    A single strand of fiber developed at Washington State University has the flexibility of cotton and the electrical conductivity of a polymer called polyaniline.
    The newly developed material showed good potential for wearable e-textiles. The WSU researchers tested the fibers with a system that powered an LED light and another that sensed ammonia gas, detailing their findings in the journal Carbohydrate Polymers.
    “We have one fiber in two sections: one section is the conventional cotton: flexible and strong enough for everyday use, and the other side is the conductive material,” said Hang Liu, WSU textile researcher and the study’s corresponding author. “The cotton can support the conductive material which can provide the needed function.”
    While more development is needed, the idea is to integrate fibers like these into apparel as sensor patches with flexible circuits. These patches could be part of uniforms for firefighters, soldiers or workers who handle chemicals, to detect hazardous exposures. Other applications include health monitoring or exercise shirts that can do more than current fitness monitors.
    “We have some smart wearables, like smart watches, that can track your movement and human vital signs, but we hope that in the future your everyday clothing can do these functions as well,” said Liu. “Fashion is not just color and style, as a lot of people think about it: fashion is science.”
    In this study, the WSU team worked to overcome the challenges of mixing the conductive polymer with cotton cellulose. Polymers are substances with very large molecules that have repeating patterns. In this case, the researchers used polyaniline, also known as PANI, a synthetic polymer with conductive properties already used in applications such as printed circuit board manufacturing.
    While intrinsically conductive, polyaniline is brittle and by itself, cannot be made into a fiber for textiles. To solve this, the WSU researchers dissolved cotton cellulose from recycled t-shirts into a solution and the conductive polymer into another separate solution. These two solutions were then merged together side-by-side, and the material was extruded to make one fiber.

    The result showed good interfacial bonding, meaning the molecules from the different materials stay together through stretching and bending.
    Achieving the right mixture at the interface of cotton cellulose and polyaniline was a delicate balance, Liu said.
    “We wanted these two solutions to work so that when the cotton and the conductive polymer contact each other they mix to a certain degree to kind of glue together, but we didn’t want them to mix too much, otherwise the conductivity would be reduced,” she said.
    Additional WSU authors on this study included first author Wangcheng Liu as well as Zihui Zhao, Dan Liang, Wei-Hong Zhong and Jinwen Zhang. This research received support from the National Science Foundation and the Walmart Foundation Project.

  •

    AI chatbot shows potential as diagnostic partner

    Physician-investigators at Beth Israel Deaconess Medical Center (BIDMC) compared a chatbot’s probabilistic reasoning to that of human clinicians. The findings, published in JAMA Network Open, suggest that artificial intelligence could serve as a useful clinical decision-support tool for physicians.
    “Humans struggle with probabilistic reasoning, the practice of making decisions based on calculating odds,” said the study’s corresponding author Adam Rodman, MD, an internal medicine physician and investigator in the department of Medicine at BIDMC. “Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies. We chose to evaluate probabilistic reasoning in isolation because it is a well-known area where humans could use support.”
    Basing their study on a previously published national survey of more than 550 practitioners performing probabilistic reasoning on five medical cases, Rodman and colleagues fed the same series of cases to the publicly available large language model (LLM) ChatGPT-4 and ran an identical prompt 100 times to generate a range of responses.
    The chatbot, just like the practitioners before it, was tasked with estimating the likelihood of a given diagnosis based on patients’ presentation. Then, given test results such as chest radiography for pneumonia, mammography for breast cancer, stress testing for coronary artery disease and urine culture for urinary tract infection, the chatbot program updated its estimates.
    When test results were positive, it was something of a draw; the chatbot was more accurate in making diagnoses than the humans in two cases, similarly accurate in two cases and less accurate in one case. But when tests came back negative, the chatbot shone, demonstrating more accuracy in making diagnoses than humans in all five cases.
    “Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to overtreatment, more tests and too many medications,” said Rodman.
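The updating task the study probes can be written down directly with Bayes' theorem: a pretest probability of disease is revised up or down by a test's sensitivity and specificity. The sketch below is illustrative only; the numeric values are assumptions, not figures from the study:

```python
def posttest_probability(pretest: float, sensitivity: float,
                         specificity: float, positive: bool) -> float:
    """Return P(disease | test result) via Bayes' theorem,
    given a pretest probability and the test's accuracy."""
    if positive:
        true_pos = sensitivity * pretest
        false_pos = (1 - specificity) * (1 - pretest)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * pretest
    true_neg = specificity * (1 - pretest)
    return false_neg / (false_neg + true_neg)

# Example: 30% pretest suspicion of pneumonia, and a chest radiograph
# assumed (hypothetically) to be 85% sensitive and 90% specific.
print(round(posttest_probability(0.30, 0.85, 0.90, positive=True), 3))   # 0.785
print(round(posttest_probability(0.30, 0.85, 0.90, positive=False), 3))  # 0.067
```

The negative-test case is exactly where the study found humans overshoot: intuition often lands well above the roughly 7% that the arithmetic supports here.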
    But Rodman is less interested in how chatbots and humans perform head-to-head than in how the performance of highly skilled physicians might change when these new supportive technologies are available to them in the clinic. He and colleagues are investigating that question.
    “LLMs can’t access the outside world — they aren’t calculating probabilities the way that epidemiologists, or even poker players, do. What they’re doing has a lot more in common with how humans make spot probabilistic decisions,” he said. “But that’s what is exciting. Even if imperfect, their ease of use and ability to be integrated into clinical workflows could theoretically make humans make better decisions,” he said. “Future research into collective human and artificial intelligence is sorely needed.”
    Co-authors included Thomas A. Buckley, University of Massachusetts Amherst; Arun K. Manrai, PhD, Harvard Medical School; Daniel J. Morgan, MD, MS, University of Maryland School of Medicine.
    Rodman reported receiving grants from the Gordon and Betty Moore Foundation. Morgan reported receiving grants from the Department of Veterans Affairs, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and the National Institutes of Health, and receiving travel reimbursement from the Infectious Diseases Society of America, the Society for Healthcare Epidemiology of America, the American College of Physicians, and the World Heart Health Organization outside the submitted work.

  •

    Battle of the AIs in medical research: ChatGPT vs Elicit

    Generative AI may make it possible to collect vast amounts of medical information efficiently during literature searches, provided users are well aware that its performance is still in its infancy and that not all information it presents is reliable. It is advisable to use different generative AIs depending on the type of information needed.
    Can AI save us from the arduous and time-consuming task of academic research collection? An international team of researchers investigated the credibility and efficiency of generative AI as an information-gathering tool in the medical field.
    The research team, led by Professor Masaru Enomoto of the Graduate School of Medicine at Osaka Metropolitan University, fed identical clinical questions and literature selection criteria to two generative AIs: ChatGPT and Elicit. The results showed that while ChatGPT suggested fictitious articles, Elicit was efficient, suggesting multiple references within a few minutes at the same level of accuracy as the researchers.
    “This research was conceived out of our experience with managing vast amounts of medical literature over long periods of time. Access to information using generative AI is still in its infancy, so we need to exercise caution as the current information is not accurate or up-to-date,” said Dr. Enomoto. “However, ChatGPT and other generative AIs are constantly evolving and are expected to revolutionize the field of medical research in the future.”
    Their findings were published in Hepatology Communications.

  •

    Researchers safely integrate fragile 2D materials into devices

    Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-generation electronic devices.
    But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching.
    To overcome this challenge, researchers from MIT and elsewhere have developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials and the resulting interfaces pristine and free from defects.
    Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties.
    They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.
    Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces.
    However, such van der Waals integration of materials into fully functional devices is not always easy, says Farnaz Niroui, assistant professor of electrical engineering and computer science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work.

    “Van der Waals integration has a fundamental limit,” she explains. “Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities.”
    Niroui wrote the paper with lead author Peter Satterthwaite, an electrical engineering and computer science graduate student; Jing Kong, professor of EECS and a member of RLE; and others at MIT, Boston University, National Tsing Hua University in Taiwan, the National Science and Technology Council of Taiwan, and National Cheng Kung University in Taiwan. The research will be published in Nature Electronics.
    Advantageous attraction
    Making complex systems such as a computer chip with conventional fabrication techniques can get messy. Typically, a rigid material like silicon is chiseled down to the nanoscale, then interfaced with other components like metal electrodes and insulating layers to form an active device. Such processing can cause damage to the materials.
    Recently, researchers have focused on building devices and systems from the bottom up, using 2D materials and a process that requires sequential physical stacking. In this approach, rather than using chemical glues or high temperatures to bond a fragile 2D material to a conventional surface like silicon, researchers leverage van der Waals forces to physically integrate a layer of 2D material onto a device.
    Van der Waals forces are natural forces of attraction that exist between all matter. For example, a gecko’s feet can stick to the wall temporarily due to van der Waals forces. Though all materials exhibit a van der Waals interaction, depending on the material, the forces are not always strong enough to hold them together. For instance, a popular semiconducting 2D material known as molybdenum disulfide will stick to gold, a metal, but won’t directly transfer to insulators like silicon dioxide by just coming into physical contact with that surface.

    However, heterostructures made by integrating semiconductor and insulating layers are key building blocks of an electronic device. Previously, this integration has been enabled by bonding the 2D material to an intermediate layer like gold, then using this intermediate layer to transfer the 2D material onto the insulator, before removing the intermediate layer using chemicals or high temperatures.
    Instead of using this sacrificial layer, the MIT researchers embed the low-adhesion insulator in a high-adhesion matrix. This adhesive matrix is what makes the 2D material stick to the embedded low-adhesion surface, providing the forces needed to create a van der Waals interface between the 2D material and the insulator.
    Making the matrix
    To make electronic devices, they form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device.
    This smoothness is important, since gaps between the surface and 2D material can hamper van der Waals interactions. Then, the researchers prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack.
    “Once the hybrid surface is brought into contact with the 2D layer, without needing any high-temperatures, solvents, or sacrificial layers, it can pick up the 2D layer and integrate it with the surface. This way, we are allowing a van der Waals integration that would be traditionally forbidden, but now is possible and allows formation of fully functioning devices in a single step,” Satterthwaite explains.
    This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.
    And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors improve on those reported in previous studies, and can provide a platform toward studying and achieving the performance needed for practical electronics.
    Their approach can be done at scale to make larger arrays of devices. The adhesive matrix technique can also be used with a range of materials, and even with other forces to enhance the versatility of this platform. For instance, the researchers integrated graphene onto a device, forming the desired van der Waals interfaces using a matrix made with a polymer. In this case, adhesion relies on chemical interactions rather than van der Waals forces alone.
    In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials to study their intrinsic properties without the influence of processing damage, and develop new device platforms that leverage these superior functionalities.
    This research is funded, in part, by the U.S. National Science Foundation, the U.S. Department of Energy, the BUnano Cross-Disciplinary Fellowship at Boston University, and the U.S. Army Research Office. The fabrication and characterization procedures were carried out, largely, in the MIT.nano shared facilities.

  •

    Immersive VR goggles for mice unlock new potential for brain science

    Northwestern University researchers have developed new virtual reality (VR) goggles for mice.
    Besides just being cute, these miniature goggles provide more immersive experiences for mice living in laboratory settings. By more faithfully simulating natural environments, the researchers can more accurately and precisely study the neural circuitry that underlies behavior.
    Compared with current state-of-the-art systems, which simply surround mice with computer or projection screens, the new goggles are a significant advance. With existing systems, mice can still see the lab environment peeking out from behind the screens, and the screens’ flat nature cannot convey three-dimensional (3D) depth. Another drawback is that researchers have been unable to easily mount screens above mice’s heads to simulate overhead threats, such as looming birds of prey.
    The new VR goggles bypass all those issues. And, as VR grows in popularity, the goggles also could help researchers glean new insights into how the human brain adapts and reacts to repeated VR exposure — an area that is currently little understood.
    The research will be published on Friday (Dec. 8) in the journal Neuron. It marks the first time researchers have used a VR system to simulate an overhead threat.
    “For the past 15 years, we have been using VR systems for mice,” said Northwestern’s Daniel Dombeck, the study’s senior author. “So far, labs have been using big computer or projection screens to surround an animal. For humans, this is like watching a TV in your living room. You still see your couch and your walls. There are cues around you, telling you that you aren’t inside the scene. Now think about putting on VR goggles, like Oculus Rift, that take up your full vision. You don’t see anything but the projected scene, and a different scene is projected into each eye to create depth information. That’s been missing for mice.”
    Dombeck is a professor of neurobiology at Northwestern’s Weinberg College of Arts and Sciences. His laboratory is a leader in developing VR-based systems and high-resolution, laser-based imaging systems for animal research.

    The value of VR
    Although researchers can observe animals in nature, it is incredibly difficult to image patterns of real-time brain activity while animals engage with the real world. To overcome this challenge, researchers have integrated VR into laboratory settings. In these experimental setups, an animal uses a treadmill to navigate scenes, such as a virtual maze, projected onto surrounding screens.
    By keeping the mouse in place on the treadmill — rather than allowing it to run through a natural environment or physical maze — neurobiologists can use tools to view and map the brain as the mouse traverses a virtual space. Ultimately, this helps researchers grasp general principles of how activated neural circuits encode information during various behaviors.
    “VR basically reproduces real environments,” Dombeck said. “We’ve had a lot of success with this VR system, but it’s possible the animals aren’t as immersed as they would be in a real environment. It takes a lot of training just to get the mice to pay attention to the screens and ignore the lab around them.”
    Introducing iMRSIV
    With recent advances in hardware miniaturization, Dombeck and his team wondered if they could develop VR goggles to more faithfully replicate a real environment. Using custom-designed lenses and miniature organic light-emitting diode (OLED) displays, they created compact goggles.

    Called Miniature Rodent Stereo Illumination VR (iMRSIV), the system comprises two lenses and two screens — one for each side of the head to separately illuminate each eye for 3D vision. This provides each eye with a 180-degree field-of-view that fully immerses the mouse and excludes the surrounding environment.
    Unlike VR goggles for a human, the iMRSIV (pronounced “immersive”) system does not wrap around the mouse’s head. Instead, the goggles are attached to the experimental setup and perch directly in front of the mouse’s face. Because the mouse runs in place on a treadmill, the goggles still cover its full field of view.
    “We designed and built a custom holder for the goggles,” said John Issa, a postdoctoral fellow in Dombeck’s laboratory and study co-first author. “The whole optical display — the screens and the lenses — go all the way around the mouse.”
    Reduced training times
    By mapping the mice’s brains, Dombeck and his team found that the brains of goggle-wearing mice were activated in ways very similar to those of freely moving animals. And, in side-by-side comparisons, the researchers noticed that goggle-wearing mice engaged with the scene much more quickly than mice using traditional VR systems.
    “We went through the same kind of training paradigms that we have done in the past, but mice with the goggles learned more quickly,” Dombeck said. “After the first session, they could already complete the task. They knew where to run and looked to the right places for rewards. We think they actually might not need as much training because they can engage with the environment in a more natural way.”
    Simulating overhead threats for the first time
    Next, the researchers used the goggles to simulate an overhead threat, something that had not been possible with existing systems. Because hardware for imaging technology already sits above the mouse, there is nowhere to mount a computer screen. The sky above a mouse, however, is an area where animals often look for vital, sometimes life-or-death, information.
    “The top of a mouse’s field of view is very sensitive to detect predators from above, like a bird,” said co-first author Dom Pinke, a research specialist in Dombeck’s lab. “It’s not a learned behavior; it’s an imprinted behavior. It’s wired inside the mouse’s brain.”
    To create a looming threat, the researchers projected a dark, expanding disk into the top of the goggles — and the top of the mice’s fields of view. In experiments, mice — upon noticing the disk — either ran faster or froze. Both behaviors are common responses to overhead threats. Researchers were able to record neural activity to study these reactions in detail.
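As a rough illustration of the kind of stimulus described (a sketch, not the authors' code), an expanding dark disk against a bright background can be rendered as a sequence of grayscale frames:

```python
import numpy as np

def looming_disk_frames(size=128, n_frames=30, max_radius_frac=0.45):
    """Generate grayscale frames of a dark disk expanding from the center of
    a bright field, a crude stand-in for an overhead looming stimulus."""
    yy, xx = np.mgrid[0:size, 0:size]
    center = (size - 1) / 2.0
    dist = np.sqrt((yy - center) ** 2 + (xx - center) ** 2)
    frames = []
    for i in range(n_frames):
        # Radius grows linearly each frame, mimicking an approaching object
        radius = max_radius_frac * size * (i + 1) / n_frames
        frame = np.full((size, size), 255, dtype=np.uint8)  # bright "sky"
        frame[dist <= radius] = 0                            # dark disk
        frames.append(frame)
    return frames

frames = looming_disk_frames()
```

In practice such frames would be streamed to the display at a rate chosen to mimic a predator's approach speed; the frame size and growth profile here are arbitrary assumptions.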
    “In the future, we’d like to look at situations where the mouse isn’t prey but is the predator,” Issa said. “We could watch brain activity while it chases a fly, for example. That activity involves a lot of depth perception and estimating distances. Those are things that we can start to capture.”
    Making neurobiology accessible
    In addition to opening the door for more research, Dombeck hopes the goggles open the door to new researchers. Because the goggles are relatively inexpensive and require less intensive laboratory setups, he thinks they could make neurobiology research more accessible.
    “Traditional VR systems are pretty complicated,” Dombeck said. “They’re expensive, and they’re big. They require a big lab with a lot of space. And, on top of that, if it takes a long time to train a mouse to do a task, that limits how many experiments you can do. We’re still working on improvements, but our goggles are small, relatively cheap and pretty user friendly as well. This could make VR technology more available to other labs.”
    The study, “Full field-of-view virtual reality goggles for mice,” was supported by the National Institutes of Health (award number R01-MH101297), the National Science Foundation (award number ECCS-1835389), the Hartwell Foundation and the Brain and Behavior Research Foundation.

  •

    World’s first logical quantum processor

    A Harvard team has realized a key milestone in the quest for stable, scalable quantum computing. For the first time, the team has created a programmable, logical quantum processor, capable of encoding up to 48 logical qubits, and executing hundreds of logical gate operations. Their system is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer, heralding the advent of early fault-tolerant, or reliably uninterrupted, quantum computation.
    In quantum computing, a quantum bit or “qubit” is one unit of information, just like a binary bit in classical computing. For more than two decades, physicists and engineers have shown the world that quantum computing is, in principle, possible by manipulating quantum particles, be they atoms, ions or photons, to create physical qubits.
    But successfully exploiting the weirdness of quantum mechanics for computation is more complicated than simply amassing a large-enough number of physical qubits, which are inherently unstable and prone to collapse out of their quantum states.
    The real coins of the realm in useful quantum computing are so-called logical qubits: bundles of redundant, error-corrected physical qubits, which can store information for use in a quantum algorithm. Creating logical qubits as controllable units — like classical bits — has been a fundamental obstacle for the field, and it’s generally accepted that until quantum computers can run reliably on logical qubits, technologies can’t really take off. To date, the best computing systems have demonstrated one or two logical qubits, and one quantum gate operation — akin to just one unit of code — between them.
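The redundancy idea behind logical qubits can be illustrated with the simplest classical analogy, a three-bit repetition code. Real quantum error correction, including the codes used on neutral-atom hardware, is far more involved, since quantum states cannot simply be copied, but the majority-vote logic below captures why bundling unreliable physical units into one logical unit suppresses errors:

```python
import random

def encode(bit, n=3):
    """Classical repetition code: store one logical bit as n physical copies."""
    return [bit] * n

def noisy(codeword, flip_prob):
    """Independently flip each physical bit with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in codeword]

def decode(codeword):
    """Majority vote: corrects any single bit-flip in the codeword."""
    return int(sum(codeword) > len(codeword) / 2)

random.seed(0)
trials, p = 10_000, 0.05
raw_errors = sum(noisy([1], p)[0] != 1 for _ in range(trials))
enc_errors = sum(decode(noisy(encode(1), p)) != 1 for _ in range(trials))
print(raw_errors / trials, enc_errors / trials)
```

With a 5% physical error rate, the encoded bit fails only when two or more copies flip (probability about 0.7%), so the logical error rate drops well below the physical one, the same leverage, in spirit, that error-corrected logical qubits aim for.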
    The team was led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative.
    Published in Nature, the work was performed in collaboration with Markus Greiner, the George Vasmer Leverett Professor of Physics; colleagues from MIT; and Boston-based QuEra Computing, a company founded on technology from Harvard labs. Harvard’s Office of Technology Development recently entered into a licensing agreement with QuEra for a patent portfolio based on innovations developed in Lukin’s group.
    Lukin described the achievement as a possible inflection point akin to the early days in the field of artificial intelligence: the ideas of quantum error correction and fault tolerance, long theorized, are starting to bear fruit.

    “I think this is one of the moments in which it is clear that something very special is coming,” Lukin said. “Although there are still challenges ahead, we expect that this new advance will greatly accelerate the progress towards large-scale, useful quantum computers.”
    The breakthrough builds on several years of work on a quantum computing architecture known as a neutral atom array, pioneered in Lukin’s lab and now being commercialized by QuEra. The key component of the system is a block of ultra-cold, suspended rubidium atoms, in which the atoms, the system’s physical qubits, can move about and be connected into pairs, or “entangled,” mid-computation. Entangled pairs of atoms form gates, which are units of computing power. Previously, the team had demonstrated low error rates in their entangling operations, proving the reliability of their neutral atom array system.
    “This breakthrough is a tour de force of quantum engineering and design,” said Denise Caldwell, acting assistant director of the National Science Foundation’s Mathematical and Physical Sciences Directorate, which supported the research through NSF’s Physics Frontiers Centers and Quantum Leap Challenge Institutes programs. “The team has not only accelerated the development of quantum information processing by using neutral atoms, but opened a new door to explorations of large-scale logical qubit devices which could enable transformative benefits for science and society as a whole.”
    With their logical quantum processor, the researchers now demonstrate parallel, multiplexed control of an entire patch of logical qubits, using lasers. This result is more efficient and scalable than having to control individual physical qubits.
    “We are trying to mark a transition in the field, toward starting to test algorithms with error-corrected qubits instead of physical ones, and enabling a path toward larger devices,” said paper first author Dolev Bluvstein, a Griffin Graduate School of Arts and Sciences Ph.D. student in Lukin’s lab.
    The team will continue to work toward demonstrating more types of operations on their 48 logical qubits, and to configure their system to run continuously, as opposed to the manual cycling it requires now.
    The work was supported by the Defense Advanced Research Projects Agency through the Optimization with Noisy Intermediate-Scale Quantum devices program; the Center for Ultracold Atoms, a National Science Foundation Physics Frontiers Center; the Army Research Office; and QuEra Computing.

  •

    Engineers design a robotic replica of the heart’s right chamber

    MIT engineers have developed a robotic replica of the heart’s right ventricle, which mimics the beating and blood-pumping action of live hearts.
    The robo-ventricle combines real heart tissue with synthetic, balloon-like artificial muscles that enable scientists to control the ventricle’s contractions while observing how its natural valves and other intricate structures function.
    The artificial ventricle can be tuned to mimic healthy and diseased states. The team manipulated the model to simulate conditions of right ventricular dysfunction, including pulmonary hypertension and myocardial infarction. They also used the model to test cardiac devices. For instance, the team implanted a mechanical valve to repair a natural malfunctioning valve, then observed how the ventricle’s pumping changed in response.
    They say the new robotic right ventricle, or RRV, can be used as a realistic platform to study right ventricle disorders and test devices and therapies aimed at treating those disorders.
    “The right ventricle is particularly susceptible to dysfunction in intensive care unit settings, especially in patients on mechanical ventilation,” says Manisha Singh, a postdoc at MIT’s Institute for Medical Engineering and Science (IMES). “The RRV simulator can be used in the future to study the effects of mechanical ventilation on the right ventricle and to develop strategies to prevent right heart failure in these vulnerable patients.”
    Singh and her colleagues report details of the new design in a paper appearing today in Nature Cardiovascular Research. Her co-authors include Associate Professor Ellen Roche, who is a core member of IMES and the associate head for research in the Department of Mechanical Engineering at MIT, along with Jean Bonnemain, Caglar Ozturk, Clara Park, Diego Quevedo-Moreno, Meagan Rowlett, and Yiling Fan of MIT, Brian Ayers of Massachusetts General Hospital, Christopher Nguyen of Cleveland Clinic, and Mossab Saeed of Boston Children’s Hospital.
    A ballet of beats
    The right ventricle is one of the heart’s four chambers, along with the left ventricle and the left and right atria. Of the four chambers, the left ventricle is the heavy lifter, as its thick, cone-shaped musculature is built for pumping blood through the entire body. The right ventricle, Roche says, is a “ballerina” in comparison, as it handles a lighter though no-less-crucial load.

    “The right ventricle pumps deoxygenated blood to the lungs, so it doesn’t have to pump as hard,” Roche notes. “It’s a thinner muscle, with more complex architecture and motion.”
    This anatomical complexity has made it difficult for clinicians to accurately observe and assess right ventricle function in patients with heart disease.
    “Conventional tools often fail to capture the intricate mechanics and dynamics of the right ventricle, leading to potential misdiagnoses and inadequate treatment strategies,” Singh says.
    To improve understanding of the lesser-known chamber and speed the development of cardiac devices to treat its dysfunction, the team designed a realistic, functional model of the right ventricle that both captures its anatomical intricacies and reproduces its pumping function.
    The model includes real heart tissue, which the team chose to incorporate because it retains natural structures that are too complex to reproduce synthetically.
    “There are thin, tiny chordae and valve leaflets with different material properties, all moving in concert with the ventricle’s muscle. Trying to cast or print these very delicate structures is quite challenging,” Roche explains.

    A heart’s shelf-life
    In the new study, the team reports explanting a pig’s right ventricle, which they treated to carefully preserve its internal structures. They then fit a silicone wrapping around it, which acted as a soft, synthetic myocardium, or muscular lining. Within this lining, the team embedded several long, balloon-like tubes, which encircled the real heart tissue, in positions that the team determined through computational modeling to be optimal for reproducing the ventricle’s contractions. The researchers connected each tube to a control system, which they then set to inflate and deflate each tube at rates that mimicked the heart’s real rhythm and motion.
    To test its pumping ability, the team infused the model with a liquid similar in viscosity to blood. This liquid was also transparent, allowing the engineers to use an internal camera to observe how the valves and other structures responded as the ventricle pumped liquid through.
    They found that the artificial ventricle’s pumping power and the function of its internal structures were similar to what they previously observed in live, healthy animals, demonstrating that the model can realistically simulate the right ventricle’s action and anatomy. The researchers could also tune the frequency and power of the pumping tubes to mimic various cardiac conditions, such as irregular heartbeats, muscle weakening, and hypertension.
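    The control scheme described above — each balloon-like tube inflated and deflated on a set rhythm, with adjustable rate and power — can be illustrated with a minimal sketch. This is not the MIT team’s actual control software; the names, presets, and the half-rectified-sine pressure model are all illustrative assumptions, showing only how a tunable actuation profile could mimic healthy versus weakened pumping.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class ActuationProfile:
        """Hypothetical settings for one balloon-like actuator (illustrative only)."""
        rate_hz: float    # inflate/deflate cycles per second (beats per minute / 60)
        amplitude: float  # relative peak inflation pressure, scaled 0..1

    def pressure_at(profile: ActuationProfile, t: float) -> float:
        """Instantaneous actuator pressure at time t (seconds).

        Modeled as a half-rectified sine: pressure rises during the
        'systolic' half of each cycle and rests at zero during 'diastole'.
        """
        phase = math.sin(2 * math.pi * profile.rate_hz * t)
        return profile.amplitude * max(phase, 0.0)

    # Illustrative presets: a healthy rhythm vs. a faster, weaker-contracting state.
    HEALTHY = ActuationProfile(rate_hz=1.0, amplitude=1.0)    # ~60 beats/min, full power
    WEAKENED = ActuationProfile(rate_hz=1.5, amplitude=0.4)   # ~90 beats/min, reduced power

    if __name__ == "__main__":
        for label, p in [("healthy", HEALTHY), ("weakened", WEAKENED)]:
            peak = max(pressure_at(p, t / 1000.0) for t in range(1000))
            print(f"{label}: peak relative pressure {peak:.2f}")
    ```

    In a real rig, a routine like `pressure_at` would drive pump valves in a timed loop; here it only demonstrates how two parameters, rate and amplitude, are enough to sweep between healthy and diseased pumping regimes.
    
    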
    “We’re reanimating the heart, in some sense, and in a way that we can study and potentially treat its dysfunction,” Roche says.
    To show that the artificial ventricle can be used to test cardiac devices, the team surgically implanted ring-like medical devices of various sizes to repair the chamber’s tricuspid valve — a leafy, one-way valve that lets blood into the right ventricle. When this valve is leaky or physically compromised, it can cause right heart failure or atrial fibrillation and lead to symptoms such as reduced exercise capacity, swelling of the legs and abdomen, and liver enlargement.
    The researchers surgically manipulated the robo-ventricle’s valve to simulate this condition, then either replaced it by implanting a mechanical valve or repaired it using ring-like devices of different sizes. They observed which device improved the ventricle’s fluid flow as it continued to pump.
    “With its ability to accurately replicate tricuspid valve dysfunction, the RRV serves as an ideal training ground for surgeons and interventional cardiologists,” Singh says. “They can practice new surgical techniques for repairing or replacing the tricuspid valve on our model before performing them on actual patients.”
    Currently, the RRV can simulate realistic function over a few months. The team is working to extend that performance and enable the model to run continuously for longer stretches. They are also working with designers of implantable devices to test their prototypes on the artificial ventricle and possibly speed their path to patients. And looking far in the future, Roche plans to pair the RRV with a similar artificial, functional model of the left ventricle, which the group is currently fine-tuning.
    “We envision pairing this with the left ventricle to make a fully tunable, artificial heart, that could potentially function in people,” Roche says. “We’re quite a while off, but that’s the overarching vision.”
    This research was supported in part by the National Science Foundation.