More stories

  • Computer modeling used to predict reef health

    A UBC Okanagan researcher has developed a way to predict the future health of the planet’s coral reefs.
    Working with scientists from Australia’s Flinders University and the privately owned research firm Nova Blue Environment, biology doctoral student Bruno Carturan has been studying the ecosystems of the world’s endangered reefs.
    “Coral reefs are among the most diverse ecosystems on Earth and they support the livelihoods of more than 500 million people,” says Carturan. “But coral reefs are also in peril. About 75 per cent of the world’s coral reefs are threatened by habitat loss, climate change and other human-caused disturbances.”
    Carturan, who studies resilience, biodiversity and complex systems under UBCO Professors Lael Parrott and Jason Pither, says nearly all the world’s reefs will be dangerously affected by 2050 if no effective measures are taken.
    There is hope, however, as he has determined a way to examine the reefs and explore why some reef ecosystems appear to be more resilient than others. Uncovering why, he says, could help stem the losses.
    “In other ecosystems, including forests and wetlands, experiments have shown that diversity is key to resilience,” says Carturan. “With more species, comes a greater variety of form and function — what ecologists call traits. And with this, there is a greater likelihood that some particular traits, or combination of traits, help the ecosystem better withstand and bounce back from disturbances.”
    The importance of diversity for the health and stability of ecosystems has been extensively investigated by ecologists, he explains. While the consensus is that ecosystems with more diversity are more resilient and function better, the hypothesis has rarely been tested experimentally with corals.

    Using an experiment to recreate the conditions found in real coral reefs is challenging for several reasons — one being that the required size, timeframe and number of different samples and replicates are just unmanageable.
    That’s where computer simulation modelling comes in.
    “Technically called an ‘agent-based model’, it can be thought of as a virtual experimental arena that enables us to manipulate species and different types of disturbances, and then examine their different influences on resilience in ways that are just not feasible in real reefs,” explains Carturan.
    In his simulation arena, individual coral colonies and algae grow, compete with one another, reproduce and die. And they do all this in realistic ways. By using agent-based models — with data collected by many researchers over decades — scientists can manipulate the initial diversity of corals, including their number and identity, and see how the virtual reef communities respond to threats.
    “This is crucial because these traits are the building blocks that give rise to ecosystem structure and function. For instance, corals come in a variety of forms — from simple spheres to complex branching — and this influences the variety of fish species these reefs host, and their susceptibility to disturbances such as cyclones and coral bleaching.”
    By running simulations over and over again, the model can identify combinations that can provide the greatest resilience. This will help ecologists design reef management and restoration strategies using predictions from the model, says collaborating Flinders researcher Professor Corey Bradshaw.
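    To make the idea concrete, a toy agent-based reef model can be sketched in a few dozen lines of code. The version below is only an illustration of the approach, not the published eLife model: the trait values, growth rules and disturbance regime are invented, but it shows how repeated runs compare the post-disturbance cover of low- and high-diversity virtual communities.
```python
# Toy agent-based reef model: a sketch of the idea only, not the published eLife model.
# Traits, rates and the disturbance regime below are invented for illustration.
import random

GRID = 30          # 30 x 30 cells of reef substrate
STEPS = 200
DISTURB_EVERY = 50 # a "cyclone" every 50 steps

def run(species):
    """species: list of (growth_prob, disturbance_survival_prob) trait pairs."""
    # Each cell holds None (bare substrate) or an index into `species`.
    grid = [[random.choice([None] + list(range(len(species))))
             for _ in range(GRID)] for _ in range(GRID)]
    cover = []
    for t in range(STEPS):
        # Growth: each colony tries to colonise one neighbouring bare cell.
        for i in range(GRID):
            for j in range(GRID):
                s = grid[i][j]
                if s is None:
                    continue
                ni = (i + random.choice([-1, 0, 1])) % GRID
                nj = (j + random.choice([-1, 0, 1])) % GRID
                if grid[ni][nj] is None and random.random() < species[s][0]:
                    grid[ni][nj] = s
        # Disturbance: survival depends on each colony's trait.
        if t % DISTURB_EVERY == 0 and t > 0:
            for i in range(GRID):
                for j in range(GRID):
                    s = grid[i][j]
                    if s is not None and random.random() > species[s][1]:
                        grid[i][j] = None
        cover.append(sum(c is not None for row in grid for c in row) / GRID**2)
    return cover

# Compare a low-diversity community with a mixed-trait one over many replicates.
low  = [(0.30, 0.40)]                                  # one fast-growing, fragile morphology
high = [(0.30, 0.40), (0.15, 0.80), (0.22, 0.60)]      # a mix of growth/survival traits
for name, sp in [("low diversity", low), ("high diversity", high)]:
    final = [run(sp)[-1] for _ in range(20)]
    print(name, "mean final cover:", round(sum(final) / len(final), 3))
```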

    “Sophisticated models like ours will be useful for coral-reef management around the world,” Bradshaw adds. “For example, Australia’s iconic Great Barrier Reef is in deep trouble from invasive species, climate change-driven mass bleaching and overfishing.”
    “This high-resolution coral ‘video game’ allows us to peek into the future to make the best possible decisions and avoid catastrophes.”
    The research, supported by grants from the Natural Sciences and Engineering Research Council of Canada and the Canada Foundation for Innovation, was published recently in eLife.

  • Faster, more efficient energy storage could stem from holistic study of layered materials

    A team led by the Department of Energy’s Oak Ridge National Laboratory has developed a novel, integrated approach to track energy-transporting ions within an ultra-thin material, which could unlock the material’s energy storage potential and lead toward faster-charging, longer-lasting devices.
    Scientists have for a decade studied the energy-storing possibilities of an emerging class of two-dimensional materials — those constructed in layers that are only a few atoms thick — called MXenes, pronounced “max-eens.”
    The ORNL-led team integrated theoretical data from computational modeling with experimental data to pinpoint potential locations of a variety of charged ions in titanium carbide, the most studied MXene phase. Through this holistic approach, they could track and analyze the ions’ motion and behavior from the single-atom to the device scale.
    “By comparing all the methods we employed, we were able to form links between theory and different types of materials characterization, ranging from very simple to very complex over a wide range of length and time scales,” said Nina Balke, ORNL co-author of the published study that was conducted within the Fluid Interface Reactions, Structures and Transport, or FIRST, Center. FIRST is a DOE-funded Energy Frontier Research Center located at ORNL.
    “We pulled all those links together to understand how ion storage works in layered MXene electrodes,” she added. The study’s results allowed the team to predict the material’s capacitance, or its ability to store energy. “And, in the end, after much discussion, we were able to unify all these techniques into one cohesive picture, which was really cool.”
    Layered materials can enhance energy stored and power delivered because the gaps between the layers allow charged particles, or ions, to move freely and quickly. However, ions can be difficult to detect and characterize, especially in a confined environment with multiple processes at play. A better understanding of these processes can advance the energy storage potential of lithium-ion batteries and supercapacitors.
    As a FIRST center project, the team focused on the development of supercapacitors — devices that charge quickly for short-term, high-power energy needs. In contrast, lithium-ion batteries have a higher energy capacity and provide electrical power longer, but the rates of discharge, and therefore their power levels, are lower.
    MXenes have the potential to bridge the benefits of these two concepts, Balke said, which is the overarching goal of fast-charging devices with greater, more efficient energy storage capacity. This would benefit a range of applications from electronics to electric vehicle batteries.
    Using computational modeling, the team simulated the conditions of five different charged ions within the layers confined in an aqueous solution, or “water shell.” The theoretical model is simple, but combined with experimental data, it created a baseline that provided evidence of where the ions within the MXene layers went and how they behaved in a complex environment.
    “One surprising outcome was we could see, within the simulation limits, different behavior for the different ions,” said ORNL theorist and co-author Paul Kent.
    The team hopes their integrated approach can guide scientists toward future MXene studies. “What we developed is a joint model. If we have a little bit of data from an experiment using a certain MXene, and if we knew the capacitance for one ion, we can predict it for the other ones, which is something that we weren’t able to do before,” Kent said.
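    As a rough sketch of what such a calibrate-once-and-transfer model could look like, the snippet below scales one measured capacitance to other ions using a theory-derived uptake descriptor. The descriptor values, valences and the Li+ anchor value are placeholders chosen for illustration; they are not taken from the FIRST study.
```python
# Sketch of the "know the capacitance for one ion, predict it for the others" idea.
# The uptake numbers and valences below are placeholders, not values from the study.

# Theory-derived descriptor: ions intercalated per Ti3C2 formula unit (hypothetical).
uptake = {"Li+": 1.0, "Na+": 0.8, "K+": 0.6, "Mg2+": 0.45, "Ca2+": 0.4}
valence = {"Li+": 1, "Na+": 1, "K+": 1, "Mg2+": 2, "Ca2+": 2}

def predict_capacitance(known_ion, known_cap_F_per_g, target_ion):
    """Scale a single measured capacitance by the relative charge each ion stores."""
    charge = lambda ion: uptake[ion] * valence[ion]
    return known_cap_F_per_g * charge(target_ion) / charge(known_ion)

# Example: one experimental value for Li+ anchors predictions for the other ions.
for ion in uptake:
    print(ion, round(predict_capacitance("Li+", 300.0, ion), 1), "F/g (illustrative)")
```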
    “Eventually, we’ll be able to trace those behaviors to more real-world, observable changes in the material’s properties,” he added.

  • Bold proposal to tackle one of the biggest barriers to more renewable energy

    The phrase “too much of a good thing” may sound like a contradiction, but it encapsulates one of the key hurdles preventing the expansion of renewable energy generation. Too much of a service or commodity makes it harder for companies to sell it, so they curtail production.
    Usually that works out fine: The market reaches equilibrium and economists are happy. But external factors are bottlenecking renewable electricity despite the widespread desire to increase its capacity.
    UC Santa Barbara’s Sangwon Suh is all too familiar with this issue. The professor of industrial ecology has focused on it and related challenges for at least the past two years at the Bren School of Environmental Science & Management. “Curtailment is the biggest problem of renewable energy we are facing,” said Suh, who noted it will only escalate as renewable energy capacity increases.
    Now Suh, along with Bren doctoral student Jiajia Zheng and Andrew Chien at the University of Chicago, has presented an innovative proposal to address this issue by routing workloads between data centers in different regions. The concept, published in the journal Joule, is cheap, efficient and requires minimal new infrastructure. Yet it could reduce thousands of tons of greenhouse gas emissions per year, all while saving companies money and encouraging the expansion of renewable energy.
    The main roadblock
    Curtailment comes into play when renewable energy sources generate more electricity than is required to meet demand. Modern power grids balance energy supply and demand in real-time, every minute of every day. Extra electricity would overwhelm them, so it needs to be either stored, sold off or curtailed.

    This occurs because reliable energy sources — like fossil fuel and nuclear power plants — are critical to grid stability, as well as meeting nighttime demand. These facilities have to operate above a minimum capacity, since shutting down and restarting them is both costly and inefficient. This sets a minimum for electricity from conventional power sources, and if renewables continue to generate more power, then the extra energy is effectively useless.
    California is a case study in the challenges of variable renewable electricity and the problem of curtailment. Presumably the state could sell its surplus electricity to neighbors. Unfortunately, many power grids are encountering the same problem, and the transmission network has limited capacity. As a result, the state has resorted to selling excess electricity at a negative price, essentially paying other states to take the energy.
    There are two other solutions for dealing with excess electricity aside from simply curtailing energy generation, Suh explained. Energy can be stored in batteries and even hydroelectric reservoirs. That said, batteries are incredibly expensive, and hydropower storage is only suitable for certain locations.
    The other option is to use the extra electricity to generate things of value that can be used later. “Whatever we produce will have to be stored and transported to where it’s needed,” Suh pointed out. “And this can be very expensive.
    “But,” he added, “transporting data and information is very cheap because we can use fiber optics to transmit the data literally at the speed of light.” As the authors wrote in the study, the idea behind data load migration is “moving bits, not watts.”
    An innovative idea

    The task ahead of the authors was clear. “The question we were trying to answer was can we process data using excess electricity?” Suh said. “If we can, then it’s probably the cheapest solution for transporting the product or service made using excess electricity.”
    Currently, Northern Virginia hosts most of the nation’s data centers. Unlike California’s grid, CAISO, the grid Northern Virginia sits on, PJM, relies heavily on coal-fired power plants, which produce “the dirtiest electricity that we can ever imagine,” in Suh’s words.
    Suh, Zheng and Chien propose that workloads from the PJM region could be sent to centers out west whenever California has excess electricity. The jobs can be accomplished using electricity that otherwise would have been curtailed or sold for a loss, and then the processed data can be sent wherever the service is needed. Data centers usually have average server usage rates below 50%, Zheng explained, meaning there is plenty of idle capacity ready to be tapped.
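    The scheduling logic this implies is simple to sketch: each hour, move no more flexible compute load westward than the curtailed energy, the idle western capacity, or the deferrable PJM workload allows. The hourly figures below are invented for illustration and are not the model in the Joule paper.
```python
# Illustrative "moving bits, not watts" scheduler: decide how much flexible data-center
# load to route from PJM to California each hour. All hourly figures are invented.

hours = [
    # (hour, CAISO curtailment available [MWh], idle western DC capacity [MWh],
    #  flexible PJM workload that could be deferred or moved [MWh])
    ("10:00", 120.0, 80.0, 200.0),
    ("11:00", 300.0, 90.0, 200.0),
    ("12:00", 450.0, 95.0, 200.0),
    ("13:00",   0.0, 85.0, 200.0),
]

total_moved = 0.0
for hour, curtailed, idle, flexible in hours:
    moved = min(curtailed, idle, flexible)   # can't exceed any one of the three limits
    total_moved += moved
    print(f"{hour}: migrate {moved:6.1f} MWh of compute load west")
print(f"Total otherwise-curtailed energy absorbed: {total_moved:.1f} MWh")
```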
    This plan is not only environmentally sound; it represents significant savings for the companies using these services. “This approach could potentially save the data center operators tens of millions of dollars,” said lead author Zheng. Since the electricity would otherwise have been useless, its cost to the company is essentially zero.
    The authors analyzed CAISO’s historical curtailment data from 2015 through 2019. They found that load migration could have absorbed up to 62% of the electricity CAISO curtailed in 2019. That’s nearly 600,000 megawatt-hours of previously wasted energy — roughly as much electricity as 100,000 Californian households consume in a year.
    At the same time, the strategy could have reduced the equivalent of up to 240,000 metric tons of CO2 emissions in 2019 using only existing data center capacity in California. “That is equivalent to the greenhouse gas emissions from 600 million miles of driving using average passenger vehicles,” Suh said. And, rather than costing money, each ton of CO2 emissions averted by switching power grids would actually provide around $240 in savings due to decreased spending on electricity.
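    Those magnitudes are easy to sanity-check with back-of-the-envelope arithmetic; the emission factor and household-consumption figure below are assumed round numbers rather than values taken from the study.
```python
# Rough consistency check of the reported magnitudes (assumed factors, not the paper's).
absorbed_mwh = 600_000          # ~62% of CAISO's 2019 curtailment, per the study
pjm_emission_factor = 0.4       # assumed tonnes CO2 per MWh of displaced PJM generation
household_use_mwh = 6.0         # assumed annual use of a Californian household, in MWh

avoided_t_co2 = absorbed_mwh * pjm_emission_factor
households = absorbed_mwh / household_use_mwh
print(f"Avoided emissions ~ {avoided_t_co2:,.0f} t CO2")     # ~240,000 t, as reported
print(f"Equivalent households ~ {households:,.0f}")          # ~100,000, as reported
print(f"Savings at $240/t ~ ${avoided_t_co2 * 240:,.0f}")    # tens of millions of dollars
```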
    Untapped potential
    These findings were within what the authors expected to see. It was the ramifications that amazed them. “What surprised me was why we were not doing this before,” Suh said. “This seems very straightforward: There’s excess electricity, and electricity is a valuable thing, and information is very cheap to transmit from one location to another. So why are we not doing this?”
    Suh suspects it may be because data center operators are less inclined to cooperate with each other under current market pressures. Despite the environmental and financial benefits, these companies may be reluctant to outsource data processing to a facility run by a different firm.
    In fact, the data center industry is somewhat of a black box. “It was very challenging for us to get detailed information on the power usage characteristics and energy consumption data from the industry,” said Zheng.
    Harnessing the potential of curtailed renewable energy will require fluid coordination between the data center operators. Shifting the system may require changing the incentives currently at work. This could take the form of new regulations, a price on carbon emissions or collaborations between rival companies.
    “Two different things need to happen in parallel,” Suh said. “One is from the private sector: They need to cooperate and come up with the technological and managerial solutions to enable this. And from the government side, they need to think about the policy changes and incentives that can enable this type of change more quickly.”
    A widespread price on carbon emissions could provide the necessary nudge. California already has a carbon price, and Suh believes that, as additional states follow suit, it will become more economically attractive for companies to start using the strategies laid out in this study.
    And these strategies have huge growth potential. Data processing and renewable electricity capacity are both growing rapidly. Researchers predict that the datasphere will expand more than fivefold from 2018 to 2025. As a result, there is a lot of room for data centers to absorb additional processing needs using excess renewable energy in the future.
    This paper offers only a conservative estimate of the financial and environmental benefits of data load migration, Suh acknowledged. “As we increase the data center capacity, I think that the ability for a data center to be used as a de facto battery is actually increasing as well,” he said.
    “If we can think ahead and be prepared, I think that a substantial portion of the curtailment problem can be addressed in a very cost-effective way by piggybacking on the growth of data centers.”

  • Deep learning algorithm to speed up materials discovery in emerging tech industries

    Solid-state inorganic materials are critical to the growth and development of electric vehicle, cellphone, laptop battery and solar energy technologies. However, finding the ideal materials with the desired functions for these industries is extremely challenging. Jianjun Hu, an associate professor of computer science at the University of South Carolina, is the lead researcher on a project to generate new hypothetical materials.
    Due to the vast chemical design space and the high sparsity of candidates, experimental trials and first-principles computational simulations cannot be used as screening tools to solve this problem. Instead, the researchers developed a deep learning-based smart algorithm that uses a generative adversarial network (GAN) model to dramatically improve the material search efficiency, by up to two orders of magnitude. It has the potential to greatly speed up the discovery of novel functional materials.
    The work, published in NPJ Computational Materials, was a collaboration between researchers at the University of South Carolina College of Engineering and Computing and Guizhou University, a research university located in Guiyang, China.
    Inspired by the deep learning technique used in Google’s AlphaGo, which learned implicit rules of the board game Go to defeat the game’s top players, the researchers used their GAN neural network to learn the implicit chemical composition rules of atoms in different elements to assemble chemically valid formulas. By training their deep learning models using the tens of thousands of known inorganic materials deposited in databases such as ICSD and OQMD, they created a generative machine learning model capable of generating millions of new hypothetical inorganic material formulas.
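    In outline, such a composition generator can be sketched as a minimal GAN over fractional composition vectors. The snippet below is a generic illustration rather than the authors’ published architecture, and a random placeholder tensor stands in for the ICSD/OQMD training compositions so that it runs on its own.
```python
# Minimal GAN over fractional composition vectors: a sketch of the approach, not the
# authors' published model. Real training data would be compositions parsed from
# ICSD/OQMD; a random placeholder tensor stands in here so the snippet runs.
import torch
import torch.nn as nn

N_ELEMENTS, NOISE_DIM, BATCH = 85, 64, 128

G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, N_ELEMENTS), nn.Softmax(dim=1))   # fractions sum to 1
D = nn.Sequential(nn.Linear(N_ELEMENTS, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.rand(10_000, N_ELEMENTS)          # placeholder for ICSD/OQMD compositions
real_data = real_data / real_data.sum(dim=1, keepdim=True)

for step in range(1000):
    real = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake = G(torch.randn(BATCH, NOISE_DIM))
    # Discriminator: tell known compositions apart from generated ones.
    d_loss = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator so its outputs follow the "rules" of real formulas.
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sampling: each output row is a candidate composition to round into a formula.
candidates = G(torch.randn(5, NOISE_DIM))
```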
    “There is almost an infinite number of new materials that could exist, but they haven’t been discovered yet,” said Jianjun Hu. “Our algorithm, it’s like a generation engine. Using this model, we can generate a lot of new hypothetical materials that have very high likelihoods to exist.”
    Without explicitly modeling or enforcing chemical constraints such as charge neutrality and electronegativity, the deep learning-based algorithm learned to observe such rules when generating millions of hypothetical material formulas. The predictive power of the algorithm has been verified both against known materials and against recent findings in the materials discovery literature. “One major advantage of our algorithm is the high validity, uniqueness and novelty, which are the three major evaluation metrics of such generative models,” said Shaobo Li, a professor at Guizhou University who was involved in this study.
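    The kind of chemical rule the network absorbs implicitly can also be checked explicitly. The helper below tests whether any combination of common oxidation states makes a formula charge-neutral; the small oxidation-state table is illustrative, not exhaustive.
```python
# Explicit charge-neutrality check of the kind the generative model learns implicitly.
# The oxidation-state table is a small illustrative subset.
from itertools import product

OXIDATION_STATES = {"Li": [1], "Na": [1], "Mg": [2], "Ti": [2, 3, 4],
                    "Fe": [2, 3], "O": [-2], "F": [-1], "S": [-2]}

def is_charge_neutral(formula):
    """formula: dict of element -> count, e.g. {'Li': 1, 'Fe': 1, 'O': 2}."""
    elements, counts = zip(*formula.items())
    for states in product(*(OXIDATION_STATES[e] for e in elements)):
        if sum(s * c for s, c in zip(states, counts)) == 0:
            return True
    return False

print(is_charge_neutral({"Li": 1, "Fe": 1, "O": 2}))   # True  (Li+ Fe3+ 2xO2-)
print(is_charge_neutral({"Na": 1, "O": 2}))            # False with this state table
```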
    This is not the first time that an algorithm has been created for materials discovery. Past algorithms were also able to produce millions of potential new materials. However, very few of the materials discovered by those algorithms were synthesizable, owing to their high free energy and instability. In contrast, nearly 70 percent of the inorganic materials identified by Hu’s team are likely to be stable and therefore potentially synthesizable.
    “You can get any number of formula combinations by putting elements’ symbols together. But it doesn’t mean the physics can exist,” said Ming Hu, an associate professor of mechanical engineering at UofSC also involved in the research. “So, our algorithm and the next step, structure prediction algorithm, will dramatically increase the speed to screening new function materials by creating synthesizable compounds.”
    These new materials will help researchers in fields such as electric vehicles, green energy, solar energy and cellphone development as they continually search for new materials with optimized functionalities. With the current materials discovery process being so slow, these industries’ growth has been limited by the materials available to them.
    The next major step for the team is to predict the crystal structure of the generated formulas, which is currently a major challenge. However, the team has already started working on this challenge along with several leading international teams. Once solved, the two steps can be combined to discover many potential materials for energy conversion, storage and other applications.
    About University of South Carolina:
    The University of South Carolina is a globally recognized, high-impact research university committed to a superior student experience and dedicated to innovation in learning, research and community engagement. Founded in 1801, the university offers more than 350 degree programs and is the state’s only top-tier Carnegie Foundation research institution. More than 50,000 students are enrolled at one of 20 locations throughout the state, including the research campus in Columbia. With 56 nationally ranked academic programs including top-ranked programs in international business, the nation’s best honors college and distinguished programs in engineering, law, medicine, public health and the arts, the university is helping to build healthier, more educated communities in South Carolina and around the world.

  • Fifty new planets confirmed in machine learning first

    Fifty potential planets have had their existence confirmed by a new machine learning algorithm developed by University of Warwick scientists.
    For the first time, astronomers have used a process based on machine learning, a form of artificial intelligence, to analyse a sample of potential planets and determine which ones are real and which are ‘fakes’, or false positives, calculating the probability that each candidate is a true planet.
    Their results are reported in a new study published in the Monthly Notices of the Royal Astronomical Society, where they also perform the first large scale comparison of such planet validation techniques. Their conclusions make the case for using multiple validation techniques, including their machine learning algorithm, when statistically confirming future exoplanet discoveries.
    Many exoplanet surveys search through huge amounts of data from telescopes for the signs of planets passing between the telescope and their star, known as transiting. This results in a telltale dip in light from the star that the telescope detects, but it could also be caused by a binary star system, interference from an object in the background, or even slight errors in the camera. These false positives can be sifted out in a planetary validation process.
    Researchers from Warwick’s Departments of Physics and Computer Science, as well as The Alan Turing Institute, built a machine learning based algorithm that can separate out real planets from fake ones in the large samples of thousands of candidates found by telescope missions such as NASA’s Kepler and TESS.
    It was trained to recognise real planets using two large samples of confirmed planets and false positives from the now-retired Kepler mission. The researchers then used the algorithm on a dataset of still-unconfirmed planetary candidates from Kepler, resulting in fifty newly confirmed planets, the first to be validated by machine learning. Previous machine learning techniques have ranked candidates, but never determined on their own the probability that a candidate was a true planet, a step required for planet validation.

    Those fifty planets range from worlds as large as Neptune to ones smaller than Earth, with orbital periods from as long as 200 days down to a single day. By confirming that these fifty planets are real, astronomers can now prioritise them for further observations with dedicated telescopes.
    Dr David Armstrong, from the University of Warwick Department of Physics, said: “The algorithm we have developed lets us take fifty candidates across the threshold for planet validation, upgrading them to real planets. We hope to apply this technique to large samples of candidates from current and future missions like TESS and PLATO.
    “In terms of planet validation, no-one has used a machine learning technique before. Machine learning has been used for ranking planetary candidates but never in a probabilistic framework, which is what you need to truly validate a planet. Rather than saying which candidates are more likely to be planets, we can now say what the precise statistical likelihood is. Where there is less than a 1% chance of a candidate being a false positive, it is considered a validated planet.”
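    That probabilistic framing is straightforward to sketch: train a classifier on labelled Kepler candidates, calibrate its scores into genuine probabilities, and validate any candidate whose false-positive probability falls below 1%. The features and data below are synthetic placeholders, not the Warwick team’s pipeline.
```python
# Sketch of probabilistic planet validation: calibrated probabilities + a 1% threshold.
# Features and data are synthetic placeholders, not the Warwick/Kepler pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
# Placeholder features: e.g. transit depth, duration, signal-to-noise, centroid shift.
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) > 0).astype(int)  # 1 = planet

# Calibration turns a ranking score into something usable as a probability.
model = CalibratedClassifierCV(GradientBoostingClassifier(), method="isotonic", cv=5)
model.fit(X, y)

candidates = rng.normal(size=(10, 4))
p_planet = model.predict_proba(candidates)[:, 1]
for i, p in enumerate(p_planet):
    status = "VALIDATED" if (1 - p) < 0.01 else "candidate"   # <1% false-positive probability
    print(f"candidate {i}: P(planet) = {p:.3f} -> {status}")
```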
    Dr Theo Damoulas from the University of Warwick Department of Computer Science, and Deputy Director, Data Centric Engineering and Turing Fellow at The Alan Turing Institute, said: “Probabilistic approaches to statistical machine learning are especially suited for an exciting problem like this in astrophysics that requires incorporation of prior knowledge — from experts like Dr Armstrong — and quantification of uncertainty in predictions. A prime example when the additional computational complexity of probabilistic methods pays off significantly.”
    Once built and trained, the algorithm is faster than existing techniques and can be completely automated, making it ideal for analysing the potentially thousands of planetary candidates observed in current surveys like TESS. The researchers argue that it should be one of the tools used collectively to validate planets in future.
    Dr Armstrong adds: “Almost 30% of the known planets to date have been validated using just one method, and that’s not ideal. Developing new methods for validation is desirable for that reason alone. But machine learning also lets us do it very quickly and prioritise candidates much faster.
    “We still have to spend time training the algorithm, but once that is done it becomes much easier to apply it to future candidates. You can also incorporate new discoveries to progressively improve it.
    “A survey like TESS is predicted to have tens of thousands of planetary candidates and it is ideal to be able to analyse them all consistently. Fast, automated systems like this that can take us all the way to validated planets in fewer steps let us do that efficiently.”

  • Teamwork can make the 5G dream work: A collaborative system architecture for 5G networks

    A research team led by Prof Jeongho Kwak from Daegu Gyeongbuk Institute of Science and Technology (DGIST) has designed a novel system architecture where collaboration between cloud service providers and mobile network operators plays a central role. Such a collaborative architecture would allow for optimizing the use of network, computing, and storage resources, thereby unlocking the potential of various novel services and applications.
    It is evident that many novel network- and cloud-dependent services will become commonplace in the next few years, including highly demanding technological feats like 8K video streaming, remote virtual reality and large-scale data processing. But it is also likely that today’s network infrastructures won’t make the cut unless significant improvements are made to enable the advanced “killer” applications expected in the imminent 5G era.
    So, instead of having cloud service providers (CSPs) and mobile network operators (MNOs), such as Google and Verizon respectively, independently improve their systems, what if they actively collaborated to achieve common goals? In a recent paper published in IEEE Network, a team of scientists, including Prof Jeongho Kwak from Daegu Gyeongbuk Institute of Science and Technology in Korea, explored the benefits and challenges of implementing a system focused on MNO-CSP collaboration.
    In their study, the scientists propose an overarching system architecture in which both CSPs and MNOs share information and exert unified control over the available network, computing, and storage resources. Prof Kwak explains, “The proposed architecture includes vertical collaboration from end devices to centralized cloud systems and horizontal collaboration between cloud providers and network providers. Hence, via vertical-horizontal optimization of the architecture, we can experience holistic improvement in the services for both current and future killer applications of 5G.” For example, by having MNOs share information about current traffic congestions and CSPs inform MNOs about their available computing resources, a collaborative system becomes more agile, flexible, and efficient.
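    As a toy illustration of what unified control could mean in practice, the sketch below picks, for each job, the serving site with the lowest estimated completion time, using the link-delay figures an MNO might expose and the spare-capacity figures a CSP might expose. All numbers are invented.
```python
# Toy joint placement decision using information an MNO and a CSP could share.
# Latencies, loads and capacities below are invented for illustration.

sites = {
    # site: (network delay to the user [ms] reported by the MNO, incl. current congestion,
    #        available compute [requests/s] reported by the CSP)
    "edge-node":      (5.0,   200),
    "regional-cloud": (15.0,  2000),
    "central-cloud":  (40.0, 20000),
}

def best_site(job_cycles, demand_rps):
    """Pick the site minimising network delay + a simple queueing-style compute delay."""
    def est_latency(site):
        net_ms, capacity = sites[site]
        if demand_rps >= capacity:                  # site saturated: effectively unusable
            return float("inf")
        service_ms = 1000.0 * job_cycles / (capacity - demand_rps)
        return net_ms + service_ms
    return min(sites, key=est_latency)

print(best_site(job_cycles=0.1, demand_rps=150))    # latency-critical AR/VR job -> edge-node
print(best_site(job_cycles=100.0, demand_rps=150))  # heavy data-processing job -> central-cloud
```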
    Through simulations, the research team went on to demonstrate how CSP-MNO collaboration could bring about potential performance improvements. Moreover, they discussed the present challenges that need to be overcome before such a system can be implemented, including calculating the financial incentives for each party and certain compatibility issues during the transition to a collaborative system architecture.
    Embracing collaboration between CSPs and MNOs might be necessary to unlock many of the features that were promised during the early development of 5G. Prof Kwak concludes, “We envision unconstrained use of augmented or virtual reality services and autonomous vehicles with almost zero latency. However, this ideal world will be possible only through the joint optimization of networking, processing, and storage resources.”
    One thing is clear: “teamwork” among the various service providers is essential if we are to keep up with the demands of the current Information Age.

  • Cutting surgical robots down to size

    Teleoperated surgical robots are becoming commonplace in operating rooms, but many are massive (sometimes taking up an entire room) and difficult to manipulate. A new collaboration between Harvard’s Wyss Institute and Sony Corporation has created the mini-RCM, a surgical robot the size of a tennis ball that weighs about as much as a penny and performed significantly better than manually operated tools in delicate mock-surgical procedures.
    Minimally invasive laparoscopic surgery, in which a surgeon uses tools and a tiny camera inserted into small incisions to perform operations, has made surgical procedures safer for both patients and doctors over the last half-century. Recently, surgical robots have started to appear in operating rooms to further assist surgeons by allowing them to manipulate multiple tools at once with greater precision, flexibility, and control than is possible with traditional techniques. However, these robotic systems are extremely large, often taking up an entire room, and their tools can be much larger than the delicate tissues and structures on which they operate.
    A collaboration between Wyss Associate Faculty member Robert Wood, Ph.D. and Robotics Engineer Hiroyuki Suzuki of Sony Corporation has brought surgical robotics down to the microscale by creating a new, origami-inspired miniature remote center of motion manipulator (the “mini-RCM”). The robot is the size of a tennis ball, weighs about as much as a penny, and successfully performed a difficult mock surgical task, as described in a recent issue of Nature Machine Intelligence.
    “The Wood lab’s unique technical capabilities for making micro-robots have led to a number of impressive inventions over the last few years, and I was convinced that it also had the potential to make a breakthrough in the field of medical manipulators as well,” said Suzuki, who began working with Wood on the mini-RCM in 2018 as part of a Harvard-Sony collaboration. “This project has been a great success.”
    A mini robot for micro tasks
    To create their miniature surgical robot, Suzuki and Wood turned to the Pop-Up MEMS manufacturing technique developed in Wood’s lab, in which materials are deposited on top of each other in layers that are bonded together, then laser-cut in a specific pattern that allows the desired three-dimensional shape to “pop up,” as in a children’s pop-up picture book. This technique greatly simplifies the mass-production of small, complex structures that would otherwise have to be painstakingly constructed by hand.

    The team created a parallelogram shape to serve as the main structure of the robot, then fabricated three linear actuators (mini-LAs) to control the robot’s movement: one parallel to the bottom of the parallelogram that raises and lowers it, one perpendicular to the parallelogram that rotates it, and one at the tip of the parallelogram that extends and retracts the tool in use. The result was a robot that is much smaller and lighter than other microsurgical devices previously developed in academia.
    The mini-LAs are themselves marvels in miniature, built around a piezoelectric ceramic material that changes shape when an electrical field is applied. The shape change pushes the mini-LA’s “runner unit” along its “rail unit” like a train on train tracks, and that linear motion is harnessed to move the robot. Because piezoelectric materials inherently deform as they change shape, the team also integrated LED-based optical sensors into the mini-LA to detect and correct any deviations from the desired movement, such as those caused by hand tremors.
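    That sensing-and-correction scheme amounts to closed-loop position control. A minimal proportional-control sketch of the idea is shown below; the gains, noise level and simplified actuator response are illustrative, not Sony or Wyss values.
```python
# Illustrative closed-loop correction for one linear actuator (mini-LA):
# an optical sensor reading is compared with the commanded position and the
# drive signal is nudged proportionally. Gains and noise levels are invented.
import random

def read_optical_sensor(true_position_um):
    """Stand-in for the LED-based sensor: true position plus a little noise."""
    return true_position_um + random.gauss(0, 0.2)

target_um = 50.0        # commanded displacement
position_um = 0.0       # actual runner position
gain = 0.4              # proportional gain (illustrative)

for step in range(40):
    measured = read_optical_sensor(position_um)
    error = target_um - measured          # deviation, e.g. from load or hand tremor
    drive = gain * error                  # correction applied to the piezo drive
    position_um += drive                  # simplified actuator response
    if step % 10 == 0:
        print(f"step {step:2d}: position = {position_um:6.2f} um, error = {error:6.2f} um")
```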
    Steadier than a surgeon’s hands
    To mimic the conditions of a teleoperated surgery, the team connected the mini-RCM to a Phantom Omni device, which manipulated the mini-RCM in response to the movements of a user’s hand controlling a pen-like tool. Their first test evaluated a human’s ability to trace a tiny square smaller than the tip of a ballpoint pen, looking through a microscope and either tracing it by hand, or tracing it using the mini-RCM. The mini-RCM tests dramatically improved user accuracy, reducing error by 68% compared to manual operation — an especially important quality given the precision required to repair small and delicate structures in the human body.
    Given the mini-RCM’s success on the tracing test, the researchers then created a mock version of a surgical procedure called retinal vein cannulation, in which a surgeon must carefully insert a needle through the eye to inject therapeutics into the tiny veins at the back of the eyeball. They fabricated a silicone tube the same size as the retinal vein (about twice the thickness of a human hair), and successfully punctured it with a needle attached to the end of the mini-RCM without causing local damage or disruption.
    In addition to its efficacy in performing delicate surgical maneuvers, the mini-RCM’s small size provides another important benefit: it is easy to set up and install and, in the case of a complication or electrical outage, the robot can be easily removed from a patient’s body by hand.
    “The Pop-Up MEMS method is proving to be a valuable approach in a number of areas that require small yet sophisticated machines, and it was very satisfying to know that it has the potential to improve the safety and efficiency of surgeries to make them even less invasive for patients,” said Wood, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).
    The researchers aim to increase the force of the robot’s actuators to cover the maximum forces experienced during an operation, and improve its positioning precision. They are also investigating using a laser with a shorter pulse during the machining process, to improve the mini-LAs’ sensing resolution.
    “This unique collaboration between the Wood lab and Sony illustrates the benefits that can arise from combining the real-world focus of industry with the innovative spirit of academia, and we look forward to seeing the impact this work will have on surgical robotics in the near future,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

  • Machines rival expert analysis of stored red blood cell quality

    Each year, nearly 120 million units* of donated blood flow from donor veins into storage bags at collection centres around the world. The fluid is packed, processed and reserved for later use. But once outside the body, stored red blood cells (RBCs) undergo continuous deterioration. By day 42 in most countries, the products are no longer usable.
    For years, labs have used expert microscopic examinations to assess the quality of stored blood. How viable is a unit by day 24? How about day 37? Depending on what technicians’ eyes perceive, answers may vary. This manual process is laborious, complex and subjective.
    Now, after three years of research, a study published in the Proceedings of the National Academy of Sciences unveils two new strategies to automate the process and achieve objective RBC quality scoring — with results that match and even surpass expert assessment.
    The methodologies showcase the potential in combining artificial intelligence with state-of-the-art imaging to solve a longstanding biomedical problem. If standardized, it could ensure more consistent, accurate assessments, with increased efficiency and better patient outcomes.
    Trained machines match expert human assessment
    The interdisciplinary collaboration spanned five countries, twelve institutions and nineteen authors, including universities, research institutes and blood collection centres in Canada, the USA, Switzerland, Germany and the UK. The research was led by computational biologist Anne Carpenter of the Broad Institute of MIT and Harvard, physicist Michael Kolios of Ryerson University’s Department of Physics, and Jason Acker of Canadian Blood Services.

    They first investigated whether a neural network could be taught to “see” in images of RBCs the same six categories of cell degradation as human experts could. To generate the vast quantity of images required, imaging flow cytometry played a crucial role. Joseph Sebastian, co-author and Ryerson undergraduate then working under Kolios, explains.
    “With this technique, RBCs are suspended and flowed through the cytometer, an instrument that takes thousands of images of individual blood cells per second. We can then examine each RBC without handling or inadvertently damaging them, which sometimes happens during microscopic examinations.”
    The researchers used 40,900 cell images to train the neural networks on classifying RBCs into the six categories — in a collection that is now the world’s largest, freely available database of RBCs individually annotated with the various categories of deterioration.
    When tested, the machine learning algorithm achieved 77% agreement with human experts. Although a 23% error rate might sound high, perfectly matching an expert’s judgment in this test is impossible: even human experts agree with one another only 83% of the time. Thus, this fully-supervised machine learning model could effectively replace tedious visual examination by humans with little loss of accuracy.
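    In outline, the fully-supervised setup is a small image classifier trained on expert-labelled single-cell images and scored by its agreement with those labels. The sketch below uses an illustrative architecture and placeholder tensors rather than the published network and imaging-flow-cytometry data.
```python
# Minimal fully-supervised classifier for the six RBC morphology classes.
# Architecture and image size are illustrative; real inputs would be
# imaging-flow-cytometry crops with expert-assigned labels.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 6),          # six degradation categories
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for annotated 64x64 single-cell images.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 6, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# "Agreement" here is simply accuracy against the expert label, as in the 77% figure.
agreement = (model(images).argmax(dim=1) == labels).float().mean()
print(f"agreement with expert labels on this batch: {agreement:.2f}")
```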
    Even so, the team wondered: could a different strategy push the upper limits of accuracy further?

    Machines surpass human vision, detect cellular subtleties
    In the study’s second part, the researchers avoided human input altogether and devised an alternative, “weakly-supervised” deep learning model in which neural networks learned about RBC degradation on their own.
    Instead of being taught the six visual categories used by experts, the machines learned solely by analyzing over one million images of RBCs, unclassed and ordered only by blood storage duration time. Eventually, the machines correctly discerned features in single RBCs that correspond to the descent from healthy to unhealthy cells.
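    The weakly-supervised variant can be sketched with the same kind of network, except that the only label is the storage day, so the model is trained as a regressor whose output doubles as a continuous quality score. Again, the architecture and data below are placeholders, not the published model.
```python
# Weak supervision sketch: the label is just storage time (days), not a morphology class.
import torch
import torch.nn as nn

regressor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),           # predicted storage day = continuous quality score
)
optimizer = torch.optim.Adam(regressor.parameters(), lr=1e-3)

images = torch.randn(32, 1, 64, 64)                    # placeholder single-cell images
storage_day = torch.randint(1, 43, (32, 1)).float()    # days 1-42, the only supervision

for epoch in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(regressor(images), storage_day)
    loss.backward()
    optimizer.step()

# Cells that score "older" than their unit's true storage day flag faster degradation,
# the kind of subtle signal the article says the self-taught model can pick up.
```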
    “Allowing the computer to teach itself the progression of stored red blood cells as they degrade is a really exciting development,” says Carpenter, “particularly because it can capture more subtle changes in cells that humans don’t recognize.”
    When tested against other relevant measures, such as a biochemical assay, the weakly-supervised machines predicted RBC quality better than the current six-category assessment method used by experts.
    Deep learning strategies: Blood quality and beyond
    Further training is still needed before the model is ready for clinical testing, but the outlook is promising. The fully-supervised machine learning model could soon automate and streamline the current manual method, minimizing sample handling, discrepancies and procedural errors in blood quality assessments.
    The second, alternative weakly-supervised framework may further eliminate human subjectivity from the process. Objective, accurate blood quality predictions would allow doctors to better personalize blood products to patients. Beyond stored blood, the time-based deep learning strategy may be transferable to other applications involving chronological progression, such as the spread of cancer.
    “People used to ask what the alternative is to the manual process,” says Kolios. “Now, we’ve developed an approach that integrates cutting-edge developments from several disciplines, including computational biology, transfusion medicine, and medical physics. It’s a testament to how technology and science are now interconnecting to solve today’s biomedical problems.”
    *Data reported by the World Health Organization