More stories

  • Can evolution be predicted?

    Scientists created a framework to test the predictions of biological optimality theories, including evolution.
    Evolution adapts and optimizes organisms to their ecological niche. This could be used to predict how an organism evolves, but how can such predictions be rigorously tested? The Biophysics and Computational Neuroscience group led by Professor Gašper Tkačik at the Institute of Science and Technology Austria (IST Austria) has now created a mathematical framework to do exactly that.
    Evolutionary adaptation often finds clever solutions to challenges posed by different environments, from how to survive in the dark depths of the oceans to creating intricate organs such as an eye or an ear. But can we mathematically predict these outcomes?
    This is the key question that motivates the Tkačik research group. Working at the intersection of biology, physics, and mathematics, they apply theoretical concepts to complex biological systems, or as Tkačik puts it: “We simply want to show that it is sometimes possible to predict change in biological systems, even when dealing with such a complex beast as evolution.”
    Climbing mountains in many dimensions
    In joint work by postdoctoral fellow Wiktor Młynarski and PhD student Michal Hledík, assisted by group alumnus Thomas Sokolowski, who is now working at the Frankfurt Institute for Advanced Studies, the scientists spearheaded an essential advance towards their goal. They developed a statistical framework that uses experimental data from complex biological systems to rigorously test and quantify how well such a system is adapted to its environment. Examples of such adaptations are the design of the eye’s retina, which optimally collects light to form a sharp image, and the wiring diagram of a worm’s nervous system, which connects all the muscles and sensors efficiently, using the least amount of neural wiring.
    The established model on which the scientists base their results represents adaptation as movement across a landscape of mountains and valleys. The features of an organism determine where it is located on this landscape. As evolution progresses and the organism adapts to its ecological niche, it climbs towards the peak of one of the mountains. Better adaptation results in better performance in the environment — for example, producing more offspring — which in turn is reflected in a higher elevation on this landscape. A falcon with its sharp eyesight therefore sits at a higher point than its ancestor, whose vision in the same environment was poorer.
    The new framework by Młynarski, Hledík, and colleagues allows the researchers to quantify how well organisms are adapted to their niche. On a two-dimensional landscape with mountains and valleys, calculating the elevation appears trivial, but real biological systems are far more complex: many more factors influence an organism’s performance, resulting in landscapes with many more dimensions. Here, intuition breaks down, and rigorous statistical tools are needed to quantify adaptation and test its predictions against experimental data. This is what the new framework delivers.
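    For intuition, a minimal sketch of this kind of quantification is shown below. It is not the authors’ actual statistical framework: the Gaussian fitness landscape, the dimensionality, and the null ensemble of random trait combinations are all illustrative assumptions. The idea is simply to score an observed system by the fraction of random alternatives it outperforms.
```python
# Toy illustration (not the published framework): score how well adapted an observed
# trait vector is by the fraction of random alternatives that perform worse on the
# same (hypothetical) fitness landscape.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x, optimum):
    """Hypothetical smooth fitness landscape: a single Gaussian peak in many dimensions."""
    return np.exp(-0.5 * np.sum((x - optimum) ** 2, axis=-1))

dim = 20                                          # many traits -> high-dimensional landscape
optimum = rng.normal(size=dim)                    # location of the fitness peak
observed = optimum + 0.3 * rng.normal(size=dim)   # an organism near, but not exactly at, the peak

# Null ensemble: random trait combinations the organism could in principle have had.
null = rng.normal(size=(100_000, dim))

degree_of_adaptation = np.mean(fitness(null, optimum) < fitness(observed, optimum))
print(f"Observed system outperforms {degree_of_adaptation:.1%} of random alternatives")
```
    A percentile of this kind is easy to read off in two dimensions; the point made above is that rigorous statistics are needed once the landscape has many dimensions.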
    Building bridges in science
    IST Austria provides a fertile ground for interdisciplinary collaborations. Wiktor Młynarski, who originally came from computer science, is interested in applying mathematical concepts to biological systems. “This paper is a synthesis of many of my scientific interests, bringing together different biological systems and conceptual approaches,” he says of the most recent study. In his interdisciplinary research, Michal Hledík works with both the Tkačik group and the research group led by Nicholas Barton in the field of evolutionary genetics at IST Austria. Gašper Tkačik himself was inspired to study complex biological systems through the lens of physics by his PhD advisor William Bialek at Princeton University. “There, I learned that the living world is not always messy, complex, and unapproachable by physical theories. On the contrary, it can drive completely new developments in applied and fundamental physics,” he explains.
    “Our legacy should be the ability to point a finger at selected biological systems and predict, from first principles, why these systems are as they are, rather than being limited to describing how they work,” Tkačik says of his motivation. Prediction should be possible in a controlled environment, such as with the relatively simple E. coli bacterium growing under optimal conditions. Another avenue for prediction is systems that operate under hard physical limits, which strongly constrain evolution. One example is our eyes, which need to convey high-resolution images to the brain while using the minimal amount of energy. Tkačik summarizes, “Theoretically deriving even a bit of an organism’s complexity would be the ultimate answer to the ‘Why?’ question that humans have grappled with throughout the ages. Our recent work creates a tool to approach this question, by building a bridge between mathematics and biology.”

  • Mathematical modeling to identify factors that determine adaptive therapy success

    One of the most challenging issues in cancer therapy is the development of drug resistance and subsequent disease progression. In a new article featured on this month’s cover of Cancer Research, Moffitt Cancer Center researchers, in collaboration with Oxford University, report results from their study using mathematical modeling to show that cell turnover impacts drug resistance and is an important factor that governs the success of adaptive therapy.
    Cancer treatment options have increased substantially over the past few decades; however, many patients eventually develop drug resistance. Physicians strive to overcome resistance by either trying to target cancer cells through an alternative approach or targeting the resistance mechanism itself, but success with these approaches is often limited, as additional resistance mechanisms can arise.
    Researchers in Moffitt’s Integrated Mathematical Oncology Department and Center of Excellence for Evolutionary Therapy believe that resistance may partly develop because of the high doses of drugs that are commonly used during treatment. Patients are typically administered a maximum tolerated dose of therapy that kills as many cancer cells as possible with the fewest side effects. However, according to evolutionary theories, this maximum tolerated dose approach could lead to drug resistance because of the existence of drug resistant cells before treatment even begins. Once sensitive cells are killed by anti-cancer therapies, these drug resistant cells are given free rein to divide and multiply. Moffitt researchers believe an alternative treatment strategy called adaptive therapy may be a better approach to kill cancer cells and minimize the development of drug resistance.
    “Adaptive therapy aims not to eradicate the tumor, but to control it. Therapy is applied to reduce tumor burden to a tolerable level but is subsequently modulated or withdrawn to maintain a pool of drug-sensitive cancer cells,” said Alexander Anderson, Ph.D., chair of the Integrated Mathematical Oncology Department and founding director of the Center of Excellence for Evolutionary Therapy.
    Previous laboratory studies have shown that adaptive therapy can prolong the time to cancer progression for several different tumor types, including ovarian cancer, breast cancer and melanoma. Additionally, a clinical trial in prostate cancer patients at Moffitt has shown that compared to standard treatment, adaptive therapy increased the time to cancer progression by approximately 10 months and reduced the cumulative drug usage by 53%.
    Despite these encouraging results, it is unclear which tumor types will respond best to adaptive therapy in the clinic. Recent studies have shown that the success of adaptive therapy is dependent on different factors, including levels of spatial constraint, the fitness of the resistant cell population, the initial number of resistant cells and the mechanisms of resistance. However, it is unclear how the cost of resistance factors into a tumor’s response to adaptive therapy.

    The cost of resistance refers to the idea that cells that become resistant have a fitness advantage over non-resistant cells when a drug is present, but this may come at a cost, such as a slower growth rate. However, drug resistance is not always associated with a cost and it is unclear whether a cost of resistance is necessary for the success of adaptive therapy.
    The research team at Moffitt used mathematical modeling to determine how the cost of resistance is associated with adaptive therapy. They modeled the growth of drug sensitive and resistant cell populations under both continuous therapy and adaptive therapy conditions and compared their time to disease progression in the presence and absence of a cost of resistance.
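    A minimal sketch of this kind of model is shown below, assuming logistic growth of the two populations, a drug that kills only sensitive cells, and a simple on/off dosing rule with a 50% burden threshold. The equations, parameter values, and thresholds are illustrative assumptions, not those used in the study.
```python
# Illustrative toy model (not the study's actual equations or parameters): sensitive (S)
# and resistant (R) cells compete for a shared carrying capacity; the drug kills only S.

def time_to_progression(adaptive, r_s=0.03, r_r=0.02, K=1.0, drug_kill=0.06,
                        S0=0.45, R0=0.01, dt=0.1, t_max=3000.0):
    """Euler-integrate the two populations and return the time at which the total
    burden exceeds 120% of its starting value (a stand-in for 'progression')."""
    S, R, t = S0, R0, 0.0
    burden0 = S0 + R0
    on = True                              # treatment starts switched on
    while t < t_max:
        burden = S + R
        if burden > 1.2 * burden0:
            return t
        if adaptive:                       # hypothetical rule: pause at 50% of baseline,
            if burden < 0.5 * burden0:     # resume once the burden is back at baseline
                on = False
            elif burden > burden0:
                on = True
        dS = r_s * S * (1 - burden / K) - (drug_kill * S if on else 0.0)
        dR = r_r * R * (1 - burden / K)    # r_r < r_s encodes a "cost of resistance"
        S, R, t = max(S + dS * dt, 0.0), max(R + dR * dt, 0.0), t + dt
    return t_max

print("Time to progression, continuous therapy:", round(time_to_progression(False), 1))
print("Time to progression, adaptive therapy:  ", round(time_to_progression(True), 1))
```
    With these toy parameters, the adaptive schedule delays progression relative to continuous dosing because the retained sensitive cells keep competing with the resistant population for the shared carrying capacity.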
    The researchers showed that tumors with higher cell density and those with smaller levels of pre-existing resistance did better under adaptive therapy conditions. They also showed that cell turnover is a key factor that impacts the cost of resistance and outcomes to adaptive therapy by increasing competition between sensitive and resistant cells. To analyze these dynamics, they made use of phase plane techniques, which provide a visual way to dissect the dynamics of mathematical models.
    “I’m a very visual person and find that phase planes make it easy to gain an intuition for a model. You don’t need to manipulate equations, which makes them great for communicating with experimental and clinical collaborators. We are honored that Cancer Research selected our collage of phase planes for their cover and hope this will help make mathematical oncology accessible to more people,” said Maximilian Strobl, lead study author and doctoral candidate at the University of Oxford.
    To confirm their models, the researchers analyzed data from 67 prostate cancer patients undergoing intermittent therapy treatment, a predecessor of adaptive therapy.
    “We find that even though our model was constructed as a conceptual tool, it can recapitulate individual patient dynamics for a majority of patients, and that it can describe patients who continuously respond, as well as those who eventually relapse,” said Anderson.
    While more studies are needed to understand how adaptive therapies may benefit patients, researchers are hopeful their data will lead to better indicators of which tumors will respond to adaptive therapy.
    “With better understanding of tumor growth, resistance costs, and turnover rates, adaptive therapy can be more carefully tailored to patients who stand to benefit from it the most and, more importantly, highlight which patients may benefit from multi-drug approaches,” said Anderson.

  • Switching to firm contracts may prevent natural gas fuel shortages at US power plants

    Between January 2012 and March 2018, there were an average of 1,000 failures each year at large North American gas power plants due to unscheduled fuel shortages and fuel conservation interruptions. This is a problem because the power grid depends on reliable generation from these plants, which in turn requires dependable natural gas delivery. More than a third of all U.S. electricity is generated from natural gas. New research now indicates that these fuel shortages are not due to failures of pipelines, and that in certain areas of the country a change in how gas is purchased can significantly reduce generator outages.
    The paper, “What Causes Natural Gas Fuel Shortages at U.S. Power Plants?” by researchers at Carnegie Mellon University and the North American Electric Reliability Corporation, was published in Energy Policy.
    Gas shortages at generators have caused simultaneous failures of several power plants. Physical failures and disruptions of the natural gas pipeline network are rare; the authors found that they account for no more than 5% of the power plant generation lost to fuel shortages over the six years examined. The vast majority of the natural gas generator outages due to fuel unavailability were due to curtailment of gas when supplies were tight. In the Midwest and Mid-Atlantic states, natural gas was available but power plants that did not purchase firm contracts were out-prioritized by commercial and industrial customers.
    “While it is unsurprising that plants using the spot market or interruptible pipeline contracts for their fuel were somewhat more likely to experience fuel shortages than those with firm contracts, these contracts can still make a big difference in reliability in certain regions,” says Jay Apt, a Professor and the Co-Director of Carnegie Mellon’s Electricity Industry Center, who co-authored the paper. “Still, firm contracts are not a solution for areas such as New England that have few gas pipelines and further discussion on other mitigation strategies should be explored.”
    Natural gas is increasingly used to generate power in the U.S., and the North American Electric Reliability Corporation (NERC) projects that natural gas generating capacity will expand by a further 12 GW over the next decade, about a 5% increase. Fuel shortages have been a problem both at power plants that are used exclusively at times of peak demand, such as during extreme cold or hot weather, and at more heavily used gas power plants. This indicates that fuel shortages affect the power grid’s ability to operate whether it is responding to an emergency or merely serving load during normal operation.
    Previous research has focused on technical reports from reliability organizations or regional transmission organizations. For the first time, researchers for this paper used historical data collected by NERC to examine fuel shortages between 2012 and 2018 at natural gas power plants in North America to determine their cause. The researchers’ primary goal was to identify how many of these fuel shortage failures were caused by physical interruptions of gas flow as opposed to operational procedures on the pipeline network, such as gas service curtailment priority. They also sought to respond to policy questions regarding whether generators could mitigate fuel shortage failures by switching to firm pipeline contracts.
    Along with analyzing the NERC data from 2012 to 2018, the researchers developed a systematic approach to match the NERC failure data to U.S. Energy Information Administration generator characteristic data in order to evaluate how gas pipeline system characteristics have historically affected natural gas fuel shortage failures. They calculated a time series of unscheduled, unavailable capacity due to fuel shortages and time-matched the beginning times of fuel shortage power plant failure events with time windows of pipeline failures to determine if pipeline failures could have caused fuel shortage outages at power plants. They then completed a similar process of spatial matching of power plants to gas trading hubs in order to assess the historical availability of natural gas for transactions by power plants.
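    The time-matching step can be pictured with a short sketch like the one below, which checks whether each fuel-shortage outage began inside any known pipeline-disruption window. The column names, example records, and matching rule are hypothetical, not the paper’s actual data or code.
```python
# Illustrative sketch with hypothetical data: flag outages whose start time falls
# inside a known pipeline-disruption window.
import pandas as pd

outages = pd.DataFrame({
    "plant_id": ["A", "B", "C"],
    "start": pd.to_datetime(["2014-01-06 08:00", "2014-01-07 14:00", "2015-02-20 03:00"]),
})
pipeline_events = pd.DataFrame({
    "pipeline_id": ["P1", "P2"],
    "window_start": pd.to_datetime(["2014-01-06 00:00", "2015-06-01 00:00"]),
    "window_end": pd.to_datetime(["2014-01-07 00:00", "2015-06-03 00:00"]),
})

def overlaps_pipeline_event(t, events):
    """True if an outage start time falls inside any pipeline-failure window."""
    return bool(((events["window_start"] <= t) & (t <= events["window_end"])).any())

outages["possible_pipeline_cause"] = outages["start"].apply(
    lambda t: overlaps_pipeline_event(t, pipeline_events)
)
print(outages)
```
    Outages that never overlap a physical pipeline failure become candidates for operational causes such as curtailment priority, which is the pattern the authors report dominating their data.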
    Ultimately, the researchers observed that both plants with firm contracts and plants without firm contracts experienced fuel shortages and conservation interruptions, but that non-firm plants were overrepresented in the fuel shortage failure data. This suggests that curtailment priority on pipeline networks is the likely reason for most correlated failures. However, the data also suggests that firm contracts will not solve everything and other strategies should be explored, especially in areas such as New England where the pipeline network has historically been constrained.

    Story Source:
    Materials provided by Carnegie Mellon University.

  • Supercomputer turns back cosmic clock

    Astronomers have tested a method for reconstructing the state of the early Universe by applying it to 4000 simulated universes using the ATERUI II supercomputer at the National Astronomical Observatory of Japan (NAOJ). They found that together with new observations the method can set better constraints on inflation, one of the most enigmatic events in the history of the Universe. The method can shorten the observation time required to distinguish between various inflation theories.
    Just after the Universe came into existence 13.8 billion years ago, it suddenly increased more than a trillion, trillion times in size, in less than a trillionth of a trillionth of a microsecond, but no one knows how or why. This sudden “inflation” is one of the most important mysteries in modern astronomy. Inflation should have created primordial density fluctuations which would have affected the distribution of where galaxies developed. Thus, mapping the distribution of galaxies can rule out models for inflation which don’t match the observed data.
    However, processes other than inflation also impact galaxy distribution, making it difficult to derive information about inflation directly from observations of the large-scale structure of the Universe, the cosmic web comprised of countless galaxies. In particular, the gravitationally driven growth of groups of galaxies can obscure the primordial density fluctuations.
    A research team led by Masato Shirasaki, an assistant professor at NAOJ and the Institute of Statistical Mathematics, decided to apply a “reconstruction method” to turn back the clock and remove the gravitational effects from the large-scale structure. They used ATERUI II, the world’s fastest supercomputer dedicated to astronomy simulations, to create 4000 simulated universes and evolve them through gravitationally driven growth. They then applied the reconstruction method to see how well it recovered the starting state of the simulations. The team found that their method can correct for the gravitational effects and improve the constraints on primordial density fluctuations.
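    The sketch below gives a rough idea of what such a reconstruction method does, using the first-order (Zel’dovich) approximation: estimate a displacement field from the observed density field and use it to move tracers back toward their earlier positions. It is a simplified stand-in for illustration only, not the team’s actual pipeline; the grid size and box size are arbitrary.
```python
# Toy reconstruction sketch (illustrative only): in the Zel'dovich approximation the
# displacement field psi satisfies div(psi) = -delta, i.e. psi(k) = i k delta(k) / k^2.
import numpy as np

def zeldovich_displacement(delta, box_size):
    """Return the displacement field (shape n x n x n x 3) for a periodic density grid."""
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid dividing by zero for the mean mode
    delta_k = np.fft.fftn(delta)
    psi = []
    for ki in (kx, ky, kz):
        psi_k = 1j * ki * delta_k / k2
        psi_k[0, 0, 0] = 0.0
        psi.append(np.real(np.fft.ifftn(psi_k)))
    return np.stack(psi, axis=-1)

rng = np.random.default_rng(1)
delta = rng.normal(scale=0.1, size=(32, 32, 32))      # stand-in for a smoothed density field
psi = zeldovich_displacement(delta, box_size=500.0)   # box size in arbitrary length units
print(psi.shape)   # (32, 32, 32, 3): one displacement vector per grid cell
# Shifting tracers by -psi (interpolated to their positions) approximately undoes the
# gravitationally driven motion, sharpening the imprint of the primordial fluctuations.
```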
    “We found that this method is very effective,” says Shirasaki. “Using this method, we can verify the inflation theories with roughly one tenth the amount of data. This method can shorten the required observing time in upcoming galaxy survey missions such as SuMIRe by NAOJ’s Subaru Telescope.”

    Story Source:
    Materials provided by National Institutes of Natural Sciences.

  • Graphene 'nano-origami' creates tiniest microchips yet

    The tiniest microchips yet can be made from graphene and other 2D-materials, using a form of ‘nano-origami’, physicists at the University of Sussex have found.
    This is the first time researchers have achieved this, and the work is reported in a paper published in the journal ACS Nano.
    By creating kinks in the structure of graphene, researchers at the University of Sussex have made the nanomaterial behave like a transistor, and have shown that when a strip of graphene is crinkled in this way, it can function as a microchip around 100 times smaller than conventional microchips.
    Prof Alan Dalton, in the School of Mathematical and Physical Sciences at the University of Sussex, said:
    “We’re mechanically creating kinks in a layer of graphene. It’s a bit like nano-origami.
    “Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future.
    “This kind of technology — “straintronics” using nanomaterials as opposed to electronics — allows space for more chips inside any device. Everything we want to do with computers — to speed them up — can be done by crinkling graphene like this.”
    Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said:
    “Instead of having to add foreign materials into a device, we’ve shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate.”
    The development is also a greener, more sustainable technology: because no additional materials need to be added, and because the process works at room temperature rather than at high temperatures, it requires less energy to produce.

    Story Source:
    Materials provided by University of Sussex.

  • Kagome graphene promises exciting properties

    For the first time, physicists from the University of Basel have produced a graphene compound consisting of carbon atoms and a small number of nitrogen atoms in a regular grid of hexagons and triangles. This honeycomb-structured “kagome lattice” behaves as a semiconductor and may also have unusual electrical properties. In the future, it could potentially be used in electronic sensors or quantum computers.
    Researchers around the world are searching for new synthetic materials with special properties such as superconductivity — that is, the conduction of electric current without resistance. These new substances are an important step in the development of highly energy-efficient electronics. The starting material is often a single-layer honeycomb structure of carbon atoms (graphene).
    Theoretical calculations predict that the compound known as “kagome graphene” should have completely different properties to graphene. Kagome graphene consists of a regular pattern of hexagons and equilateral triangles that surround one another. The name “kagome” comes from Japanese and refers to the old Japanese art of kagome weaving, in which baskets were woven in the aforementioned pattern.
    Kagome lattice with new properties
    Researchers from the Department of Physics and the Swiss Nanoscience Institute at the University of Basel, working in collaboration with the University of Bern, have now produced and studied kagome graphene for the first time, as they report in the journal Angewandte Chemie. The researchers’ measurements have delivered promising results that point to unusual electrical or magnetic properties.
    To produce the kagome graphene, the team applied a precursor to a silver substrate by vapor deposition and then heated it to form an organometallic intermediate on the metal surface. Further heating produced kagome graphene, which is made up exclusively of carbon and nitrogen atoms and features the same regular pattern of hexagons and triangles.

    Strong interactions between electrons
    “We used scanning tunneling and atomic force microscopes to study the structural and electronic properties of the kagome lattice,” reports Dr. Rémy Pawlak, first author of the study. With microscopes of this kind, researchers can probe the structural and electrical properties of materials using a tiny tip — in this case, the tip was terminated with individual carbon monoxide molecules.
    In doing so, the researchers observed that electrons of a defined energy, which is selected by applying an electrical voltage, are “trapped” between the triangles that appear in the crystal lattice of kagome graphene. This behavior clearly distinguishes the material from conventional graphene, where electrons are distributed across various energy states in the lattice — in other words, they are delocalized.
    “The localization observed in kagome graphene is desirable and precisely what we were looking for,” explains Professor Ernst Meyer, who leads the group in which the projects were carried out. “It causes strong interactions between the electrons — and, in turn, these interactions provide the basis for unusual phenomena, such as conduction without resistance.”
    Further investigations planned
    The analyses also revealed that kagome graphene features semiconducting properties — in other words, its conducting properties can be switched on or off, as with a transistor. In this way, kagome graphene differs significantly from graphene, whose conductivity cannot be switched on and off as easily.
    In subsequent investigations, the team will detach the kagome lattice from its metallic substrate and study its electronic properties further. “The flat band structure identified in the experiments supports the theoretical calculations, which predict that exciting electronic and magnetic phenomena could occur in kagome lattices. In the future, kagome graphene could act as a key building block in sustainable and efficient electronic components,” says Ernst Meyer.

  • Researchers develop algorithm to find possible misdiagnosis

    It does not happen often, but on rare occasions physicians make mistakes and arrive at a wrong diagnosis. Patients may have several diseases at once, making it difficult to distinguish the symptoms of one illness from another, or symptoms may be lacking altogether.
    Errors in diagnosis may lead to incorrect treatment or a lack of treatment. Therefore, everyone in the healthcare system tries to minimise errors as much as possible.
    Now, researchers at the University of Copenhagen have developed an algorithm that can help with just that.
    ‘Our new algorithm can find the patients who have such an unusual disease trajectory that they may indeed not suffer from the disease they were diagnosed with. It can hopefully end up being a support tool for physicians’, says Isabella Friis Jørgensen, Postdoc at the Novo Nordisk Foundation Center for Protein Research.
    The algorithm revealed possible lung cancer
    The researchers developed the algorithm using disease trajectories for 284,000 patients with chronic obstructive pulmonary disease (COPD) recorded between 1994 and 2015. From these data, they derived approximately 69,000 typical disease trajectories.

    ‘In the National Patient Registry, we have been able to map what you could call typical disease trajectory. And if a patient shows up with a very unusual disease trajectory, then it might be that the patient is simply suffering from a different disease. Our tool can help to detect this’, explains Søren Brunak, Professor at the Novo Nordisk Foundation Center for Protein Research.
    For example, the researchers found a small group of 2,185 COPD patients who died very shortly after being diagnosed with COPD. According to the researchers, it was a sign that something else might have been wrong, maybe something even more serious.
    ‘When we studied the laboratory values from these patients more closely, we saw that they deviated from normal values for COPD patients. Instead, the values resembled something that is seen in lung cancer patients. Only 10 per cent of these patients were diagnosed with lung cancer, but we are reasonably convinced that most, if not all of these patients actually had lung cancer’, explains Søren Brunak.
    Data that will provide an immediate benefit
    Although the algorithm was validated using data from COPD patients, it may be applied to many other diseases. The principle is the same: the algorithm uses registry data to map the typical disease trajectories and can detect whether a patient’s disease trajectory stands out so much that something may be wrong.
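    As a toy illustration of that principle (not the Copenhagen group’s actual algorithm), the sketch below flags a patient whose sequence of diagnosis codes is a poor match to every “typical” trajectory; the codes, trajectories, and similarity threshold are hypothetical.
```python
# Illustrative sketch only: flag a patient whose diagnosis-code sequence matches none
# of the "typical" trajectories well. Codes and trajectories below are hypothetical.
from difflib import SequenceMatcher

typical_trajectories = [
    ["J44", "J96", "I50"],   # e.g. COPD -> respiratory failure -> heart failure
    ["J44", "J18", "J96"],   # e.g. COPD -> pneumonia -> respiratory failure
]

def best_similarity(patient, typical):
    """Highest sequence similarity between a patient's trajectory and any typical one."""
    return max(SequenceMatcher(None, patient, t).ratio() for t in typical)

def flag_unusual(patient, typical, threshold=0.5):
    return best_similarity(patient, typical) < threshold

patient = ["J44", "C34", "R59"]   # COPD, then lung cancer, then enlarged lymph nodes
print(flag_unusual(patient, typical_trajectories))   # True -> trajectory looks atypical
```
    Even with tens of thousands of registry-derived typical trajectories, this matching step stays cheap, which is consistent with the “10 seconds per patient” figure quoted below.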
    ‘Naturally, our most important goal is for the patients to get their money’s worth with respect to their health care. And we believe that in the future, this algorithm may end up becoming a support tool for physicians. Once the algorithm has mapped the typical disease trajectories, it only takes 10 seconds to match a single patient against everyone else’, says Søren Brunak.
    He emphasises that the algorithm must be further validated and tested in clinical trials before it can be implemented in Danish hospitals. But he hopes it is something that can be started soon.
    ‘In Denmark, we often praise our good health registries because they contain valuable data for researchers. We use them in our research because it may benefit other people in the future in the form of better treatment. But this is actually an example of how your own health data can benefit yourself right away’, says Søren Brunak.

  • New surgery may enable better control of prosthetic limbs

    MIT researchers have invented a new type of amputation surgery that can help amputees to better control their residual muscles and sense where their “phantom limb” is in space. This restored sense of proprioception should translate to better control of prosthetic limbs, as well as a reduction of limb pain, the researchers say.
    In most amputations, muscle pairs that control the affected joints, such as elbows or ankles, are severed. However, the MIT team has found that reconnecting these muscle pairs, allowing them to retain their normal push-pull relationship, offers people much better sensory feedback.
    “Both our study and previous studies show that the better patients can dynamically move their muscles, the more control they’re going to have. The better a person can actuate muscles that move their phantom ankle, for example, the better they’re actually able to use their prostheses,” says Shriya Srinivasan, an MIT postdoc and lead author of the study.
    In a study that will appear this week in the Proceedings of the National Academy of Sciences, 15 patients who received this new type of surgery, known as agonist-antagonist myoneural interface (AMI), could control their muscles more precisely than patients with traditional amputations. The AMI patients also reported feeling more freedom of movement and less pain in their affected limb.
    “Through surgical and regenerative techniques that restore natural agonist-antagonist muscle movements, our study shows that persons with an AMI amputation experience a greater phantom joint range of motion, a reduced level of pain, and an increased fidelity of prosthetic limb controllability,” says Hugh Herr, a professor of media arts and sciences, head of the Biomechatronics group in the Media Lab, and the senior author of the paper.
    Other authors of the paper include Samantha Gutierrez-Arango and Erica Israel, senior research support associates at the Media Lab; Ashley Chia-En Teng, an MIT undergraduate; Hyungeun Song, a graduate student in the Harvard-MIT Program in Health Sciences and Technology; Zachary Bailey, a former visiting researcher at the Media Lab; Matthew Carty, a visiting scientist at the Media Lab; and Lisa Freed, a Media Lab research scientist.

    Restoring sensation
    Most muscles that control limb movement occur in pairs that alternately stretch and contract. One example of these agonist-antagonist pairs is the biceps and triceps. When you bend your elbow, the biceps muscle contracts, causing the triceps to stretch, and that stretch sends sensory information back to the brain.
    During a conventional limb amputation, these muscle movements are restricted, cutting off this sensory feedback and making it much harder for amputees to feel where their prosthetic limbs are in space or to sense forces applied to those limbs.
    “When one muscle contracts, the other one doesn’t have its antagonist activity, so the brain gets confusing signals,” says Srinivasan, a former member of the Biomechatronics group now working at MIT’s Koch Institute for Integrative Cancer Research. “Even with state-of-the-art prostheses, people are constantly visually following the prosthesis to try to calibrate their brains to where the device is moving.”
    A few years ago, the MIT Biomechatronics group invented and scientifically developed in preclinical studies a new amputation technique that maintains the relationships between those muscle pairs. Instead of severing each muscle, they connect the two ends of the muscles so that they still dynamically communicate with each other within the residual limb. In a 2017 study of rats, they showed that when the animals contracted one muscle of the pair, the other muscle would stretch and send sensory information back to the brain.

    Since these preclinical studies, about 25 people have undergone the AMI surgery at Brigham and Women’s Hospital, performed by Carty, who is also a plastic surgeon there. In the new PNAS study, the researchers measured the precision of muscle movements in the ankle and subtalar joints of 15 patients who had AMI amputations performed below the knee. These patients had two sets of muscles reconnected during their amputation: the muscles that control the ankle, and those that control the subtalar joint, which allows the sole of the foot to tilt inward or outward. The study compared these patients to seven people who had traditional amputations below the knee.
    Each patient was evaluated while lying down with their legs propped on a foam pillow, allowing their feet to extend into the air. Patients did not wear prosthetic limbs during the study. The researchers asked them to flex their ankle joints — both the intact one and the “phantom” one — by 25, 50, 75, or 100 percent of their full range of motion. Electrodes attached to each leg allowed the researchers to measure the activity of specific muscles as each movement was performed repeatedly.
    The researchers compared the electrical signals coming from the muscles in the amputated limb with those from the intact limb and found that for AMI patients, they were very similar. They also found that patients with the AMI amputation were able to control the muscles of their amputated limb much more precisely than the patients with traditional amputations. Patients with traditional amputations were more likely to perform the same movement over and over in their amputated limb, regardless of how far they were asked to flex their ankle.
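    A hypothetical version of this kind of comparison is sketched below: it scores how tightly a limb’s normalized EMG amplitude tracks the commanded flexion level, using simulated numbers. It is illustrative only and does not reproduce the study’s analysis or data.
```python
# Hypothetical analysis sketch (simulated data, not the study's pipeline): a limb with
# finely graded control shows EMG amplitudes that track the commanded flexion level.
import numpy as np

commanded = np.array([25, 50, 75, 100] * 5)   # percent of full range, repeated trials

def grading_score(emg_amplitude, commanded):
    """Correlation between commanded level and EMG output; values near 1 mean graded control."""
    return np.corrcoef(commanded, emg_amplitude)[0, 1]

rng = np.random.default_rng(2)
intact = commanded / 100 + 0.05 * rng.normal(size=commanded.size)     # tracks targets closely
residual = commanded / 100 + 0.15 * rng.normal(size=commanded.size)   # noisier control

print("intact limb grading:  ", round(grading_score(intact, commanded), 2))
print("residual limb grading:", round(grading_score(residual, commanded), 2))
```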
    “The AMI patients’ ability to control these muscles was a lot more intuitive than those with typical amputations, which largely had to do with the way their brain was processing how the phantom limb was moving,” Srinivasan says.
    In a paper that recently appeared in Science Translational Medicine, the researchers reported that brain scans of the AMI amputees showed that they were getting more sensory feedback from their residual muscles than patients with traditional amputations. In work that is now ongoing, the researchers are measuring whether this ability translates to better control of a prosthetic leg while walking.
    Freedom of movement
    The researchers also discovered an effect they did not anticipate: AMI patients reported much less pain and a greater sensation of freedom of movement in their amputated limbs.
    “Our study wasn’t specifically designed to achieve this, but it was a sentiment our subjects expressed over and over again. They had a much greater sensation of what their foot actually felt like and how it was moving in space,” Srinivasan says. “It became increasingly apparent that restoring the muscles to their normal physiology had benefits not only for prosthetic control, but also for their day-to-day mental well-being.”
    The research team has also developed a modified version of the surgery that can be performed on people who have already had a traditional amputation. This process, which they call “regenerative AMI,” involves grafting small muscle segments to serve as the agonist and antagonist muscles for an amputated joint. They are also working on developing the AMI procedure for other types of amputations, including above the knee and above and below the elbow.
    “We’re learning that this technique of rewiring the limb, and using spare parts to reconstruct that limb, is working, and it’s applicable to various parts of the body,” Herr says.
    The research was funded by the MIT Media Lab Consortia, the National Institute of Child Health and Human Development, the National Center for Medical Rehabilitation Research, and the Congressionally Directed Medical Research Programs of the U.S. Department of Defense.