More stories

  • Can the bias in algorithms help us see our own?

    Algorithms were supposed to make our lives easier and fairer: help us find the best job applicants, help judges impartially assess the risks of bail and bond decisions, and ensure that healthcare is delivered to the patients with the greatest need. By now, though, we know that algorithms can be just as biased as the human decision-makers they inform and replace.
    What if that weren’t a bad thing?
    New research by Carey Morewedge, a Boston University Questrom School of Business professor of marketing and Everett W. Lord Distinguished Faculty Scholar, found that people recognize more of their biases in algorithms’ decisions than they do in their own — even when those decisions are the same. The research, published in the Proceedings of the National Academy of Sciences, suggests ways that this awareness might help human decision-makers recognize and correct for their biases.
    “A social problem is that algorithms learn and, at scale, roll out biases in the human decisions on which they were trained,” says Morewedge, who also chairs Questrom’s marketing department. For example: In 2015, Amazon tested (and soon scrapped) an algorithm to help its hiring managers filter through job applicants. They found that the program boosted résumés it perceived to come from male applicants, and downgraded those from female applicants, a clear case of gender bias.
    But that same year, just 39 percent of Amazon’s workforce were women. If the algorithm had been trained on Amazon’s existing hiring data, it’s no wonder it prioritized male applicants — Amazon already was. If its algorithm had a gender bias, “it’s because Amazon’s managers were biased in their hiring decisions,” Morewedge says.
    “Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society,” he says. “Many biases cannot be observed at an individual level. It’s hard to prove bias, for instance, in a single hiring decision. But when we add up decisions within and across persons, as we do when building algorithms, it can reveal structural biases in our systems and organizations.”
    Morewedge and his collaborators — Begüm Çeliktutan and Romain Cadario, both at Erasmus University in the Netherlands — devised a series of experiments designed to tease out people’s social biases (including racism, sexism, and ageism). The team then compared research participants’ recognition of how those biases colored their own decisions versus decisions made by an algorithm. In the experiments, participants sometimes saw the decisions of real algorithms. But there was a catch: other times, the decisions attributed to algorithms were actually the participants’ choices, in disguise.

    Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions. Participants also saw as much bias in the decisions of algorithms as they did in the decisions of other people. (People generally better recognize bias in others than in themselves, a phenomenon called the bias blind spot.) Participants were also more likely to correct for bias in those decisions after the fact, a crucial step for minimizing bias in the future.
    Algorithms Remove the Bias Blind Spot
    The researchers ran sets of participants, more than 6,000 in total, through nine experiments. In the first, participants rated a set of Airbnb listings, which included a few pieces of information about each listing: its average star rating (on a scale of 1 to 5) and the host’s name. The researchers assigned these fictional listings to hosts with names that were “distinctively African American or white,” based on previous research identifying racial bias, according to the paper. The participants rated how likely they were to rent each listing.
    In the second half of the experiment, participants were told about a research finding that explained how the host’s race might bias the ratings. Then, the researchers showed participants a set of ratings and asked them to assess (on a scale of 1 to 7) how likely it was that bias had influenced the ratings.
    Participants saw either their own rating reflected back to them, their own rating under the guise of an algorithm’s, their own rating under the guise of someone else’s, or an actual algorithm rating based on their preferences.
    The researchers repeated this setup several times, testing for race, gender, age, and attractiveness bias in the profiles of Lyft drivers and Airbnb hosts. Each time, the results were consistent. Participants who thought they saw an algorithm’s ratings or someone else’s ratings (whether or not they actually were) were more likely to perceive bias in the results.
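    The logic of the design can be sketched as a small simulation: identical decisions are shown back under different attribution labels, and only the label shifts how much bias participants report. The attribution conditions below follow the study's setup, but the scores and effect sizes are hypothetical, not the paper's data:

    ```python
    import random
    import statistics

    random.seed(42)
    ATTRIBUTIONS = ["own", "algorithm", "other_person"]

    # Hypothetical mean perceived-bias scores on the study's 1-to-7 scale;
    # the decisions being judged are identical, only the label differs.
    BASE = {"own": 2.5, "algorithm": 4.0, "other_person": 4.0}

    def perceived_bias(attribution):
        score = BASE[attribution] + random.gauss(0, 1)
        return min(7.0, max(1.0, score))  # clamp to the 1-7 rating scale

    scores = {a: [perceived_bias(a) for _ in range(200)] for a in ATTRIBUTIONS}
    gap = statistics.mean(scores["algorithm"]) - statistics.mean(scores["own"])
    # gap > 0: the same decisions look more biased when labeled an algorithm's
    ```

    The point of the disguise manipulation is exactly this comparison: because the "own" and "algorithm" conditions contain the same underlying choices, any difference in perceived bias must come from the attribution, not the decisions.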

    Morewedge attributes this to the different evidence we use to assess bias in others and in ourselves. Since we have insight into our own thought process, he says, we’re likely to trace back through our thinking and conclude that it wasn’t biased, that some other factor drove our decisions. When analyzing the decisions of other people, however, all we have to judge is the outcome.
    “Let’s say you’re organizing a panel of speakers for an event,” Morewedge says. “If all those speakers are men, you might say that the outcome wasn’t the result of gender bias because you weren’t even thinking about gender when you invited these speakers. But if you were attending this event and saw a panel of all-male speakers, you’re more likely to conclude that there was gender bias in the selection.”
    Indeed, in one of their experiments, the researchers found that participants who were more prone to this bias blind spot were also more likely to see bias in decisions attributed to algorithms or others than in their own decisions. In another experiment, they discovered that people more easily saw their own decisions influenced by factors that were fairly neutral or reasonable, such as an Airbnb host’s star rating, compared to a prejudicial bias, such as race — perhaps because admitting to preferring a five-star rental isn’t as threatening to one’s sense of self or how others might view us, Morewedge suggests.
    Algorithms as Mirrors: Seeing and Correcting Human Bias
    In the researchers’ final experiment, they gave participants a chance to correct bias in either their ratings or the ratings of an algorithm (real or not). People were more likely to correct the algorithm’s decisions, which reduced the actual bias in its ratings.
    This is the crucial step for Morewedge and his colleagues, he says. For anyone motivated to reduce bias, being able to see it is the first step. Their research presents evidence that algorithms can be used as mirrors — a way to identify bias even when people can’t see it in themselves.
    “Right now, I think the literature on algorithmic bias is bleak,” Morewedge says. “A lot of it says that we need to develop statistical methods to reduce prejudice in algorithms. But part of the problem is that prejudice comes from people. We should work to make algorithms better, but we should also work to make ourselves less biased.
    “What’s exciting about this work is that it shows that algorithms can codify or amplify human bias, but algorithms can also be tools to help people better see their own biases and correct them,” he says. “Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help us better ourselves.”

  • Could new technique for ‘curving’ light be the secret to improved wireless communication?

    While cellular networks and Wi-Fi systems are more advanced than ever, they are also quickly reaching their bandwidth limits. Scientists know that in the near future they’ll need to transition to much higher communication frequencies than what current systems rely on, but before that can happen there are a number of — quite literal — obstacles standing in the way.
    Researchers from Brown University and Rice University say they’ve moved one step closer to getting around these solid obstacles, like walls, furniture and even people — and they do it by curving light.
    In a new study published in Communications Engineering, the researchers describe how they are helping address one of the biggest logjams emerging in wireless communication. Current systems rely on microwave radiation to carry data, but it’s become clear that the future standard for transmitting data will make use of terahertz waves, which have as much as 100 times the data-carrying capacity of microwaves. One longstanding issue has been that, unlike microwaves, terahertz signals can be blocked by most solid objects, making a direct line of sight between transmitter and receiver a logistical requirement.
    “Most people probably use a Wi-Fi base station that fills the room with wireless signals,” said Daniel Mittleman, a professor in Brown’s School of Engineering and senior author of the study. “No matter where they move, they maintain the link. At the higher frequencies that we’re talking about here, you won’t be able to do that anymore. Instead, it’s going to be a directional beam. If you move around, that beam is going to have to follow you in order to maintain the link, and if you move outside of the beam or something blocks that link, then you’re not getting any signal.”
    The researchers circumvented this by creating a terahertz signal that follows a curved trajectory around an obstacle, instead of being blocked by it. The novel method unveiled in the study could help revolutionize wireless communication and highlights the future feasibility of wireless data networks that run on terahertz frequencies, according to the researchers.
    “We want more data per second,” Mittleman said. “If you want to do that, you need more bandwidth, and that bandwidth simply doesn’t exist using conventional frequency bands.”
    In the study, Mittleman and his colleagues introduce the concept of self-accelerating beams: special configurations of electromagnetic waves that naturally bend or curve to one side as they move through space. Such beams have been studied at optical frequencies, but are only now being explored for terahertz communication.
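    The best-known self-accelerating solution is the Airy beam, whose main intensity lobe drifts parabolically sideways as it propagates. A minimal numerical sketch in dimensionless units (an illustration of the general beam physics, not the authors' terahertz implementation):

    ```python
    import numpy as np
    from scipy.special import airy  # airy(s) returns (Ai, Ai', Bi, Bi')

    x = np.linspace(-10, 10, 4001)  # dimensionless transverse coordinate

    def main_lobe_position(z):
        # Ideal Airy beam: the transverse intensity at propagation distance z
        # is |Ai(x - z**2 / 4)|**2, so the brightest lobe shifts quadratically
        # ("curves") even though the medium is uniform.
        intensity = airy(x - z**2 / 4.0)[0] ** 2
        return x[np.argmax(intensity)]

    positions = [main_lobe_position(z) for z in (0.0, 1.0, 2.0, 3.0)]
    # Equal steps in z give growing steps in x: a parabolic trajectory
    ```

    Because the sideways displacement grows with distance, a receiver placed off the direct line of sight can still sit on the beam's bright lobe, which is what lets the signal pass around a partial blockage.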

    The researchers used this idea as a jumping-off point. They engineered transmitters with carefully designed patterns so that the system can manipulate the strength, intensity and timing of the electromagnetic waves that are produced. With this ability to manipulate the light, the researchers make the waves work together more effectively to maintain the signal when a solid object blocks a portion of the beam. Essentially, the light beam adjusts to the blockage by shuffling data along the patterns the researchers engineered into the transmitter. When one pattern is blocked, the data transfers to the next one, and then the next one if that is blocked. This keeps the signal link fully intact. Without this level of control, when the beam is blocked, the system can’t make any adjustments, so no signal gets through.
    This effectively makes the signal bend around objects as long as the transmitter is not completely blocked. If it is completely blocked, another way of getting the data to the receiver will be needed.
    “Curving a beam doesn’t solve all possible blockage problems, but what it does is solve some of them and it solves them in a way that’s better than what others have tried,” said Hichem Guerboukha, who led the study as a postdoctoral researcher at Brown and is now an assistant professor at the University of Missouri–Kansas City.
    The researchers validated their findings through extensive simulations and experiments, showing that curved terahertz links can navigate around obstacles while maintaining high reliability and signal integrity. The work builds on a previous study from the team that showed terahertz data links can be bounced off walls in a room without dropping too much data.
    By using these curved beams, the researchers hope to one day make wireless networks more reliable, even in crowded or obstructed environments. This could lead to faster and more stable internet connections in places like offices or cities where obstacles are common. Before getting to that point, however, there’s much more basic research to be done and plenty of challenges to overcome as terahertz communication technology is still in its infancy.
    “One of the key questions that everybody asks us is how much can you curve and how far away,” Mittleman said. “We’ve done rough estimations of these things, but we haven’t really quantified it yet, so we hope to map it out.”

  • New technique lets scientists create resistance-free electron channels

    An international research team led by Lawrence Berkeley National Laboratory (Berkeley Lab) has taken the first atomic-resolution images and demonstrated electrical control of a chiral interface state — an exotic quantum phenomenon that could help researchers advance quantum computing and energy-efficient electronics.
    The chiral interface state is a conducting channel that allows electrons to travel in only one direction, preventing them from being scattered backwards and causing energy-wasting electrical resistance. Researchers are working to better understand the properties of chiral interface states in real materials, but visualizing their spatial characteristics has proved exceptionally difficult.
    But now, for the first time, atomic-resolution images captured by a research team at Berkeley Lab and UC Berkeley have directly visualized a chiral interface state. The researchers also demonstrated on-demand creation of these resistance-free conducting channels in a 2D insulator.
    Their work, which was reported in the journal Nature Physics, is part of Berkeley Lab’s broader push to advance quantum computing and other quantum information system applications, including the design and synthesis of quantum materials to address pressing technological needs.
    “Previous experiments have demonstrated that chiral interface states exist, but no one has ever visualized them with such high resolution. Our work shows for the first time what these 1D states look like at the atomic scale, including how we can alter them — and even create them,” said first author Canxun Zhang, a former graduate student researcher in Berkeley Lab’s Materials Sciences Division and the Department of Physics at UC Berkeley. He is now a postdoctoral researcher at UC Santa Barbara.
    Chiral interface states can occur in certain types of 2D materials known as quantum anomalous Hall (QAH) insulators that are insulators in bulk but conduct electrons without resistance at one-dimensional “edges” — the physical boundaries of the material and interfaces with other materials.
    To prepare chiral interface states, the team worked at Berkeley Lab’s Molecular Foundry to fabricate a device called twisted monolayer-bilayer graphene, which is a stack of two atomically thin layers of graphene rotated precisely relative to one another, creating a moiré superlattice that exhibits the QAH effect.

    In subsequent experiments at the UC Berkeley Department of Physics, the researchers used a scanning tunneling microscope (STM) to detect different electronic states in the sample, allowing them to visualize the wavefunction of the chiral interface state. Other experiments showed that the chiral interface state can be moved across the sample by modulating the voltage on a gate electrode placed underneath the graphene layers. In a final demonstration of control, the researchers showed that a voltage pulse from the tip of an STM probe can “write” a chiral interface state into the sample, erase it, and even rewrite a new one where electrons flow in the opposite direction.
    The findings may help researchers build tunable networks of electron channels with promise for energy-efficient microelectronics and low-power magnetic memory devices in the future, and for quantum computation making use of the exotic electron behaviors in QAH insulators.
    The researchers intend to use their technique to study more exotic physics in related materials, such as anyons, a new type of quasiparticle that could enable a route to quantum computation.
    “Our results provide information that wasn’t possible before. There is still a long way to go, but this is a good first step,” Zhang said.
    The work was led by Michael Crommie, a senior faculty scientist in Berkeley Lab’s Materials Sciences Division and physics professor at UC Berkeley.
    Tiancong Zhu, a former postdoctoral researcher in the Crommie group at Berkeley Lab and UC Berkeley, contributed as co-corresponding author and is now a physics professor at Purdue University.
    The Molecular Foundry is a DOE Office of Science user facility at Berkeley Lab.
    This work was supported by the DOE Office of Science. Additional funding was provided by the National Science Foundation.

  • Will the convergence of light and matter in Janus particles transcend performance limitations in the optical display industry?

    A research team consisting of Professor Kyoung-Duck Park and Hyeongwoo Lee, an integrated PhD student, from the Department of Physics at Pohang University of Science and Technology (POSTECH) has pioneered an innovative technique in ultra-high-resolution spectroscopy. Their breakthrough marks the world’s first instance of electrically controlling polaritons — hybridized light-matter particles — at room temperature.
    Polaritons are “half-light half-matter” hybrid particles, having both the characteristics of photons — particles of light — and those of solid matter. Their unique characteristics exhibit properties distinct from both traditional photons and solid matter, unlocking the potential for next-generation materials, particularly in surpassing performance limitations of optical displays. Until now, the inability to electrically control polaritons at room temperature on a single particle level has hindered their commercial viability.
    The research team has devised a novel method called “electric-field tip-enhanced strong coupling spectroscopy,” enabling ultra-high-resolution electrically controlled spectroscopy. This new technique empowers the active manipulation of individual polariton particles at room temperature.
    This technique introduces a novel approach to measurement, integrating super-resolution microscopy previously invented by Prof. Kyoung-Duck Park’s team with ultra-precise electrical control. The resulting instrument not only facilitates stable generation of polaritons in a distinctive physical state called strong coupling at room temperature, but also allows the color and brightness of the light emitted by the polariton particles to be manipulated with an electric field. Using polariton particles instead of quantum dots, the key materials of QLED televisions, offers a notable advantage: a single polariton particle can emit light in all colors with significantly enhanced brightness, eliminating the need for three distinct types of quantum dots to produce red, green, and blue light separately. Moreover, this property can be electrically controlled, similar to conventional electronics. In terms of academic significance, the team has successfully established and experimentally validated the quantum-confined Stark effect in the strong coupling regime, shedding light on a longstanding mystery in polariton research.
    The team’s accomplishment holds profound significance as it marks a scientific breakthrough paving the way for the next generation of research aimed at creating diverse optoelectronic devices and optical components based on polariton technology. The advance is poised to make a substantial contribution to industrial progress, particularly in providing key source technology for the development of groundbreaking products within the optical display industry, including ultra-bright and compact outdoor displays. Hyeongwoo Lee, the lead author of the paper, emphasized the research’s importance, stating that it represents “a significant discovery with the potential to drive advancements across numerous fields including next-generation optical sensors, optical communications, and quantum photonic devices.”
    The research utilized quantum dots fabricated by Professor Sohee Jeong’s team and Professor Jaehoon Lim’s team from Sungkyunkwan University. The theoretical model was crafted by Professor Alexander Efros of the Naval Research Laboratory while data analysis was conducted by Professor Markus Raschke’s team from the University of Colorado and Professor Matthew Pelton’s team from the University of Maryland. Yeonjeong Koo, Jinhyuk Bae, Mingu Kang, Taeyoung Moon, and Huitae Joo from POSTECH’s Physics Department carried out the measurement work.
    This research has been recently published in Physical Review Letters, an international physics journal, and was conducted with support from the Samsung Future Technology Incubation Program.

  • How climate change will impact food production and financial institutions

    Researchers at the University of California San Diego School of Global Policy and Strategy have developed a new method to predict the financial impacts climate change will have on agriculture, which can help support food security and financial stability for countries increasingly prone to climate catastrophes.
    The study, published today in the Proceedings of the National Academy of Sciences, uses climate and agricultural data from Brazil. It finds that climate change has a cascading effect on farming, leading to increased loan defaults for one of the nation’s largest public sector banks. Over the next three decades, climate-driven loan defaults could increase by up to 7%, according to the study.
    The projections in the paper revealed that although temperatures are rising everywhere, there is substantial variation in what that looks like from region to region, which underscores the need to build distinct types of physical and financial resilience.
    For example, parts of northern Brazil are predicted to have more dramatic seasonal swings around 2050, with heavier rainfall in winter and drier summers, so policymakers should be thinking about the need for water storage by building dams and reservoirs as well as increasing groundwater storage capacity. Conversely, central Brazil may have fairly steady weather, but will have higher overall temperatures, pointing to a need for heat-resistant crops.
    The authors of the paper used a statistical approach pairing past climate data in Brazil with information on crop productivity, farm revenue and agricultural loan performance. They combined this data with climate simulations to predict future weather conditions, their impacts on farming, and how those changes will affect financial institutions.
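    In spirit, the first stage of such an approach is a regression of loan outcomes on weather anomalies, after which climate-model projections of those anomalies are fed through the fitted relationship. A toy version with synthetic data and assumed sensitivities (not the study's Brazilian dataset or its actual model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    temp_anom = rng.normal(0.0, 1.0, n)  # temperature anomaly (standardized)
    rain_anom = rng.normal(0.0, 1.0, n)  # rainfall anomaly (standardized)

    # Synthetic default rate: hotter-than-normal years raise defaults,
    # wetter-than-normal years lower them (assumed coefficients).
    default_rate = (0.05 + 0.02 * temp_anom - 0.01 * rain_anom
                    + rng.normal(0.0, 0.005, n))

    # Ordinary least squares recovers the assumed climate sensitivities
    X = np.column_stack([np.ones(n), temp_anom, rain_anom])
    beta, *_ = np.linalg.lstsq(X, default_rate, rcond=None)
    ```

    With coefficients in hand, plugging in projected 2050 anomalies for each region would yield region-by-region default projections, which is how spatially varying climate futures translate into spatially varying financial risk.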
    “A difficulty in studying climate impacts on agriculture is that there are all sorts of adaptations happening all the time that aren’t easily observed, but are really important for understanding vulnerability and how risk is changing,” said coauthor Jennifer Burney, professor of environmental science at UC San Diego’s School of Global Policy and Strategy and Scripps Institution of Oceanography. “We were able to distinguish signals from different types of climate impacts and which ones led to this larger financial risk.”
    Systematic thinking about building resilience against climate change around the globe
    A key objective of the research is to support resilient food security under a changing climate, which requires understanding of when small climate shifts might have outsized impacts, spilling across regions or into other sectors through institutions like trade and banking.

    Understanding the systemic risk posed by climate change is especially helpful for policymakers and disaster relief agencies, as climate change has increasingly become a national security threat. To that end, the statistical approach developed in the study could be applied around the globe.
    “The technique we developed will help populations identify where they are most vulnerable, how climate change will hurt them the most economically and what institutions they should focus on to build resilience,” said study coauthor Craig McIntosh, professor of economics at the School of Global Policy and Strategy.
    For example, some governments in the Western Pacific region buy extra food on the global market in emerging El Niño years, when their own crop productivity suffers. The statistical approach used in the study could help governments around the world understand their own climate conditions and whether local, regional or international institutions will be best placed to address them.
    The research could be especially helpful with the development of the loss and damage fund established by the United Nations in 2022. The fund is designed to help compensate developing nations that have contributed the least to the climate crisis but have been facing the brunt of its devastating floods, drought and sea-level rise.
    “Our technique could help countries think about where the resilience returns would be highest for the money spent,” said Krislert Samphantharak, professor of economics at the School of Global Policy and Strategy. “This technique also helps to identify where international reinsurance might be needed.”
    The “Empirical Modeling of Agricultural Climate Risk” study was also coauthored by Bruno Lopez-Videla, who earned a Ph.D. in economics from UC San Diego in 2021, and Alexandre Gori Maia of the Universidade Estadual de Campinas in Brazil.

  • A pulse of innovation: AI at the service of heart research

    Understanding heart function and disease, as well as testing new drugs for heart conditions, has long been a complex and time-consuming task. A promising way to study disease and test new drugs is to use cellular and engineered tissue models in a dish, but existing methods to study heart cell contraction and calcium handling require a good deal of manual work, are prone to errors, and need expensive specialized equipment. There clearly is a critical medical need for a more efficient, accurate, and accessible way to study heart function, using a methodology based on artificial intelligence (AI) and machine learning.
    BeatProfiler, new tool to rapidly analyze heart cell function
    Researchers at Columbia Engineering unveiled a groundbreaking new tool today that addresses these challenges head-on. BeatProfiler is comprehensive software that automates the analysis of heart cell function from video data and is the first system to integrate the analysis of different heart function indicators, such as contractility, calcium handling, and force output, into one tool, speeding up the process significantly and reducing the chance for errors. BeatProfiler enabled the researchers not only to distinguish between different diseases and levels of their severity, but also to rapidly and objectively test drugs that affect heart function. The study was published on April 8 in IEEE Open Journal of Engineering in Medicine and Biology.
    “This is truly a transformative tool,” said project leader Gordana Vunjak-Novakovic, University Professor and the Mikati Foundation Professor of Biomedical Engineering, Medical Sciences, and Dental Medicine at Columbia. “It’s fast, comprehensive, automated, and compatible with a broad range of computer platforms so it is easily accessible to investigators and clinicians.”
    Software is open-source
    The team, which included Barry Fine, assistant professor of medicine (in Cardiology) at Columbia University Irving Medical Center, elected not to file a patent application, and instead are offering the AI software as open source, so it can be directly used — for free — by any lab. They believe that this is important for disseminating the results of their research, as well as for getting feedback from users in academic, clinical, and commercial labs that can help the team to further refine the software.
    The need to diagnose heart disease quickly and accurately
    This project was driven, like much of Vunjak-Novakovic’s research, by a clinical need to diagnose heart diseases more quickly and accurately. The project was several years in the making, with the team adding features piece by piece. The overarching goal was to develop a tool that could better capture the function of the cardiac models the team was building to study cardiac diseases and assess the efficacy of potential therapeutics; more immediately, the researchers needed a way to quickly and accurately assess the function of those models in real time.

    As the lab was making more and more cardiac tissues through innovations such as milliPillar and multiorgan tissue models, the increased capabilities of the tissues required the researchers to develop a method to more rapidly quantify the function of cardiomyocytes (heart muscle cells) and tissues to enable studies exploring genetic cardiomyopathies, cosmic radiation, immune-mediated inflammation, and drug discovery.
    Collaborators in software development, machine learning, and more
    In the last year and a half, lead author Youngbin Kim and his coauthors developed a graphical user interface (GUI) on top of the code so that biomedical researchers with no coding expertise could easily analyze the data with just a few clicks. This brought together experts in software development (for the GUI development), machine learning (for developing computer vision technology and disease/drug classifiers), signal processing (for processing contractile and calcium signals), engineering (translating pillar deflection on the cardiac platform to mechanical force), and user experience by lab members (to give feedback for improvements in the interface).
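    The contractile-signal stage of such a pipeline ultimately reduces to peak detection on a motion or fluorescence trace extracted from video. A minimal sketch on a synthetic 1 Hz beating signal (BeatProfiler's actual video-processing and classification steps are far more involved):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                          # assumed sampling rate, Hz
    t = np.arange(0.0, 10.0, 1.0 / fs)
    rng = np.random.default_rng(1)

    # Synthetic contraction trace: sharp 1 Hz beats plus measurement noise
    trace = np.maximum(np.sin(2 * np.pi * 1.0 * t), 0.0) ** 4
    trace += 0.02 * rng.normal(size=t.size)

    # One peak per beat: require a minimum height and 0.5 s refractory spacing
    peaks, _ = find_peaks(trace, height=0.5, distance=int(0.5 * fs))
    beat_rate_hz = len(peaks) / (t[-1] - t[0])
    ```

    Beat rate, peak amplitude, and rise/decay times computed from such peaks are the kinds of per-beat features that can then feed a disease or drug classifier.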
    The results
    The study showed that BeatProfiler could accurately analyze cardiomyocyte function, outperforming existing tools by being faster — up to 50 times in some cases — and more reliable. It detected subtle changes in engineered heart tissue force response that other tools might miss.
    “This level of analysis speed and versatility is unprecedented in cardiac research,” said Kim, a PhD candidate in Vunjak-Novakovic’s lab at Columbia Engineering. “Using machine learning, the functional measurements analyzed by BeatProfiler helped us to distinguish between diseased and healthy heart cells with high accuracy and even to classify different cardiac drugs based on how they affect the heart.”
    What’s next
    The team is working to expand BeatProfiler’s capabilities for new applications in heart research, including a full spectrum of diseases that affect the pumping of the heart, and drug development. To ensure that BeatProfiler can be applied to a wide variety of research questions, they are testing and validating its performance across additional in vitro cardiac models, including different engineered heart tissue models. They are also refining their machine-learning algorithm to extend and generalize its use to a variety of heart diseases and drug effect classification. The long-term goal is to adapt BeatProfiler to pharmaceutical settings to speed up the testing of hundreds of thousands of candidate drugs at once.

  • Engineers design soft and flexible ‘skeletons’ for muscle-powered robots

    Our muscles are nature’s perfect actuators — devices that turn energy into motion. For their size, muscle fibers are more powerful and precise than most synthetic actuators. They can even heal from damage and grow stronger with exercise.
    For these reasons, engineers are exploring ways to power robots with natural muscles. They’ve demonstrated a handful of “biohybrid” robots that use muscle-based actuators to power artificial skeletons that walk, swim, pump, and grip. But for every bot, there’s a very different build, and no general blueprint for how to get the most out of muscles for any given robot design.
    Now, MIT engineers have developed a spring-like device that could be used as a basic skeleton-like module for almost any muscle-bound bot. The new spring, or “flexure,” is designed to get the most work out of any attached muscle tissues. Like a leg press that’s fit with just the right amount of weight, the device maximizes the amount of movement that a muscle can naturally produce.
    The researchers found that when they fit a ring of muscle tissue onto the device, much like a rubber band stretched around two posts, the muscle pulled on the spring reliably and repeatedly, stretching it five times farther than in previous device designs.
    The team sees the flexure design as a new building block that can be combined with other flexures to build any configuration of artificial skeletons. Engineers can then fit the skeletons with muscle tissues to power their movements.
    “These flexures are like a skeleton that people can now use to turn muscle actuation into multiple degrees of freedom of motion in a very predictable way,” says Ritu Raman, the Brit and Alex d’Arbeloff Career Development Professor in Engineering Design at MIT. “We are giving roboticists a new set of rules to make powerful and precise muscle-powered robots that do interesting things.”
    Raman and her colleagues report the details of the new flexure design in a paper appearing in the journal Advanced Intelligent Systems. The study’s MIT co-authors include Naomi Lynch ’12, SM ’23; undergraduate Tara Sheehan; graduate students Nicolas Castro, Laura Rosado, and Brandon Rios; and professor of mechanical engineering Martin Culpepper.

    Muscle pull
    When left alone in a petri dish in favorable conditions, muscle tissue will contract on its own but in directions that are not entirely predictable or of much use.
    “If muscle is not attached to anything, it will move a lot, but with huge variability, where it’s just flailing around in liquid,” Raman says.
    To get a muscle to work like a mechanical actuator, engineers typically attach a band of muscle tissue between two small, flexible posts. As the muscle band naturally contracts, it bends the posts and pulls them together, producing movement that would ideally power part of a robotic skeleton. But in these designs, muscles have produced limited movement, mainly because the tissues are so variable in how they contact the posts. Depending on where the muscles are placed on the posts, and how much of the muscle surface touches each post, the muscles may succeed in pulling the posts together, or they may wobble around in uncontrollable ways.
    Raman’s group looked to design a skeleton that focuses and maximizes a muscle’s contractions regardless of exactly where and how it is placed on a skeleton, to generate the most movement in a predictable, reliable way.
    “The question is: How do we design a skeleton that most efficiently uses the force the muscle is generating?” Raman says.

    The researchers first considered the multiple directions that a muscle can naturally move. They reasoned that if a muscle is to pull two posts together along a specific direction, the posts should be connected to a spring that only allows them to move in that direction when pulled.
    “We need a device that is very soft and flexible in one direction, and very stiff in all other directions, so that when a muscle contracts, all that force gets efficiently converted into motion in one direction,” Raman says.
    Soft flex
    As it turns out, Raman found many such devices in Professor Martin Culpepper’s lab. Culpepper’s group at MIT specializes in the design and fabrication of machine elements such as miniature actuators, bearings, and other mechanisms that can be built into machines and systems to enable ultraprecise movement, measurement, and control for a wide variety of applications. Among the group’s precision-machined elements are flexures: spring-like devices, often made from parallel beams, that can flex and stretch with nanometer precision.
    “Depending on how thin and far apart the beams are, you can change how stiff the spring appears to be,” Raman says.
    She and Culpepper teamed up to design a flexure specifically tailored with a configuration and stiffness to enable muscle tissue to naturally contract and maximally stretch the spring. The team designed the device’s configuration and dimensions based on numerous calculations they carried out to relate a muscle’s natural forces with a flexure’s stiffness and degree of movement.
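    The stiffness relationship Raman describes (thinner, longer beams make a softer flexure) follows from standard beam theory. As a minimal sketch, for a parallel-beam flexure made of two fixed-guided beams, each beam contributes a lateral stiffness of 12·E·I/L³ with I = w·t³/12, so the pair gives k = 2·E·w·t³/L³. The modulus and dimensions below are illustrative placeholders, not values from the paper:

```python
def flexure_stiffness(E, w, t, L, n_beams=2):
    """Lateral stiffness (N/m) of n parallel fixed-guided beams.

    E: elastic modulus (Pa), w: beam width (m),
    t: beam thickness (m), L: beam length (m).
    """
    I = w * t**3 / 12              # second moment of area of one beam
    return n_beams * 12 * E * I / L**3

# Example: a soft polymer flexure (E ~ 3 MPa, PDMS-like; hypothetical),
# with beams 5 mm long, 2 mm wide, and 0.2 mm thick.
k = flexure_stiffness(E=3e6, w=2e-3, t=0.2e-3, L=5e-3)
print(f"{k:.2f} N/m")  # → 0.77 N/m
```

Because stiffness scales with t³ and 1/L³, small changes in beam thickness or length tune the spring over orders of magnitude, which is how a flexure can be made roughly 1/100 the stiffness of the attached muscle tissue.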
    The flexure they ultimately designed is 1/100 the stiffness of muscle tissue itself. The device resembles a miniature, accordion-like structure, the corners of which are pinned to an underlying base by a small post, which sits near a neighboring post that is fit directly onto the base. Raman then wrapped a band of muscle around the two corner posts (the team molded the bands from live muscle fibers that they grew from mouse cells), and measured how close the posts were pulled together as the muscle band contracted.
    The team found that the flexure’s configuration enabled the muscle band to contract mostly along the direction between the two posts. This focused contraction allowed the muscle to pull the posts five times closer together than in previous muscle actuator designs.
    “The flexure is a skeleton that we designed to be very soft and flexible in one direction, and very stiff in all other directions,” Raman says. “When the muscle contracts, all the force is converted into movement in that direction. It’s a huge magnification.”
    The team found they could use the device to precisely measure muscle performance and endurance. When they varied the frequency of muscle contractions (for instance, stimulating the bands to contract once versus four times per second), they observed that the muscles “grew tired” at higher frequencies, and didn’t generate as much pull.
    “Looking at how quickly our muscles get tired, and how we can exercise them to have high-endurance responses — this is what we can uncover with this platform,” Raman says.
    The researchers are now adapting and combining flexures to build precise, articulated, and reliable robots, powered by natural muscles.
    “An example of a robot we are trying to build in the future is a surgical robot that can perform minimally invasive procedures inside the body,” Raman says. “Technically, muscles can power robots of any size, but we are particularly excited about making small robots, as this is where biological actuators excel in terms of strength, efficiency, and adaptability.”


    Researchers developed new method for detecting heart failure with a smartphone

    The new technology, which was created at the University of Turku and developed by the company CardioSignal, uses a smartphone to analyse heart movement and detect heart failure. The study involved five organisations from Finland and the United States.
    Heart failure is a condition affecting tens of millions of people worldwide, in which the heart is unable to perform its normal function of pumping blood to the body. It is a serious condition that develops as a result of a number of cardiovascular diseases and its symptoms may require repeated hospitalisation.
    Heart failure is challenging to diagnose because its symptoms, such as shortness of breath, abnormal fatigue on exertion, and swelling, can be caused by a number of conditions. There is no simple test available to detect it and diagnostics relies on an examination by a doctor, blood tests, and sophisticated imaging, such as an ultrasound scan of the heart.
    Gyrocardiography is a non-invasive technique for measuring cardiac vibrations on the chest. The smartphone’s built-in motion sensors can detect and record these vibrations, including those that doctors cannot hear with a stethoscope. The method has been developed over the last 10 years by researchers at the University of Turku and CardioSignal.
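    The signal-processing idea behind gyrocardiography can be sketched in a few lines. The example below is purely illustrative and is not CardioSignal's algorithm: it fabricates a synthetic chest-vibration trace (slow respiratory drift plus a sharp cardiac pulse every 0.8 s), removes the drift with a moving-average high-pass filter, and counts beats by threshold crossings with a refractory period:

```python
import math

FS = 100          # sample rate (Hz) of the phone's motion sensor
DURATION = 10     # recording length in seconds

# Synthetic chest-vibration trace (illustrative, not real sensor data):
# 0.25 Hz respiratory drift plus a cardiac pulse every 0.8 s (75 bpm).
def sample(i):
    t = i / FS
    drift = 0.5 * math.sin(2 * math.pi * 0.25 * t)
    pulse = 1.0 if (t % 0.8) < 0.05 else 0.0
    return drift + pulse

signal = [sample(i) for i in range(FS * DURATION)]

# Remove slow drift with a trailing moving-average high-pass filter.
WIN = FS // 2
detrended = []
for i, x in enumerate(signal):
    window = signal[max(0, i - WIN):i + 1]
    detrended.append(x - sum(window) / len(window))

# Count beats: threshold crossings with a 0.4 s refractory period.
beats, last = 0, -10**9
for i, x in enumerate(detrended):
    if x > 0.5 and i - last > 0.4 * FS:
        beats += 1
        last = i

bpm = beats * 60 / DURATION
print(f"estimated heart rate: {bpm:.0f} bpm")
```

Real gyrocardiography works on six-axis accelerometer and gyroscope data and extracts far richer timing and amplitude features than a beat count, but the pipeline shape (record, detrend, detect cardiac events, derive features) is the same.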
    The researchers’ latest study on using smartphone motion sensors to detect heart failure was carried out at the Turku and Helsinki University Hospitals in Finland and Stanford University Hospital in the US. Approximately 1,000 people took part in the study, of whom around 200 were patients suffering from heart failure. The study compared the data provided by the motion sensors in the heart failure patients and patients without heart disease.
    “The results we obtained with this new method are promising and may in the future make it easier to detect heart failure,” says Cardiologist Antti Saraste, one of the two main authors of the research article and Professor of Cardiovascular Medicine at the University of Turku, Finland.
    Precise detection uncovers heart failure
    The researchers found that heart failure is associated with typical changes in the motion sensor data collected by a smartphone. On the basis of this data, the researchers were able to identify the majority of patients with heart failure.

    The analysis of the movements detected by the gyroscope and accelerometer is so accurate that in the future it could provide healthcare professionals with a quick and easy way to detect heart failure.
    “Primary healthcare has very limited tools for detecting heart failure. We can create completely new treatment options for remote monitoring of at-risk groups and for monitoring already diagnosed patients after hospitalisation,” says CardioSignal’s founding member and CEO, Cardiologist Juuso Blomster.
    As in several other European countries, heart failure affects around 1-2% of the population in Finland, but it is much more common in older adults, affecting around one in ten people aged 70. Detecting heart failure is important, as effective treatment can help to alleviate its symptoms. Accurate diagnosis and timely access to treatment can also reduce healthcare costs, which are driven up by emergency room visits and hospital stays, especially during exacerbations.
    The joint research projects between CardioSignal and the University of Turku aim to promote people’s health and reduce healthcare costs through innovation, improved disease diagnostics, and prevention of serious complications.