More stories

  • Biologists create new genetic systems to neutralize gene drives

    In the past decade, researchers have engineered an array of new tools that control the balance of genetic inheritance. Among them are gene drives: CRISPR-based elements poised to move from the laboratory into the wild, where they are being engineered to suppress devastating mosquito-borne diseases such as malaria, dengue, Zika, chikungunya, yellow fever and West Nile virus. Gene drives carry the power to immunize mosquitoes against malarial parasites, or to act as genetic insecticides that reduce mosquito populations.
    Although the newest gene drives have been proven to spread efficiently as designed in laboratory settings, concerns have been raised regarding the safety of releasing such systems into wild populations. Questions have emerged about the predictability and controllability of gene drives and whether, once let loose, they can be recalled in the field if they spread beyond their intended application region.
    Now, scientists at the University of California San Diego and their colleagues have developed two new active genetic systems that address such risks by halting or eliminating gene drives in the wild. On Sept. 18, 2020, in the journal Molecular Cell, research led by Xiang-Ru Xu, Emily Bulger and Valentino Gantz in the Division of Biological Sciences offers two new solutions based on elements developed in the common fruit fly.
    “One way to mitigate the perceived risks of gene drives is to develop approaches to halt their spread or to delete them if necessary,” said Distinguished Professor Ethan Bier, the paper’s senior author and science director for the Tata Institute for Genetics and Society. “There’s been a lot of concern that there are so many unknowns associated with gene drives. Now we have saturated the possibilities, both at the genetic and molecular levels, and developed mitigating elements.”
    The first neutralizing system, called e-CHACR (erasing Constructs Hitchhiking on the Autocatalytic Chain Reaction), is designed to halt the spread of a gene drive by “shooting it with its own gun.” e-CHACRs use the CRISPR enzyme Cas9 carried on a gene drive to copy themselves, while simultaneously mutating and inactivating the Cas9 gene. Xu says an e-CHACR can be placed anywhere in the genome.
    “Without a source of Cas9, it is inherited like any other normal gene,” said Xu. “However, once an e-CHACR confronts a gene drive, it inactivates the gene drive in its tracks and continues to spread across several generations ‘chasing down’ the drive element until its function is lost from the population.”
    The second neutralizing system, called ERACR (Element Reversing the Autocatalytic Chain Reaction), is designed to eliminate the gene drive altogether. ERACRs are inserted at the site of the gene drive, where they use the Cas9 carried by the gene drive to cut on either side of that same Cas9, excising it. Once the gene drive is deleted, the ERACR copies itself and replaces the gene drive.
    “If the ERACR is also given an edge by carrying a functional copy of a gene that is disrupted by the gene drive, then it races across the finish line, completely eliminating the gene drive with unflinching resolve,” said Bier.
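    The interplay between a gene drive and an e-CHACR described above can be caricatured in a deliberately crude toy simulation. All rates and starting frequencies here are hypothetical, chosen only to make the qualitative behaviour visible; this is not the authors' population-genetics model:

```python
# Toy model of a gene drive being "chased down" by an e-CHACR.
# Hypothetical parameters throughout; illustrative only.
def simulate(generations, copy_rate=0.9, erase_rate=0.8):
    drive, echacr, inert = 0.10, 0.05, 0.0   # chromosome-frequency fractions
    history = []
    for _ in range(generations):
        # an active drive uses its own Cas9 to convert wild-type chromosomes
        converted = copy_rate * drive * (1.0 - drive - inert)
        # an e-CHACR hitchhikes on that same Cas9: it spreads while mutating
        # the Cas9 gene, turning active drive copies into inert ones
        inactivated = erase_rate * echacr * drive
        drive = drive + converted - inactivated
        inert += inactivated
        echacr = min(1.0, echacr + erase_rate * echacr * drive)
        history.append(drive)
    return history
```

    In this toy run the active-drive frequency climbs at first and then collapses once the e-CHACR, spreading on the drive's own Cas9, catches up with it, the "chasing down" behaviour Xu describes.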
    The researchers rigorously tested and analyzed e-CHACRs and ERACRs, as well as the resulting DNA sequences, in meticulous detail at the molecular level. Bier estimates that the research team, which includes mathematical modelers from UC Berkeley, spent a combined 15 years of effort to comprehensively develop and analyze the new systems. Still, he cautions that unforeseen scenarios could emerge, and the neutralizing systems should not lend a false sense of security to field-implemented gene drives.
    “Such braking elements should just be developed and kept in reserve in case they are needed since it is not known whether some of the rare exceptional interactions between these elements and the gene drives they are designed to corral might have unintended activities,” he said.
    According to Bulger, gene drives have enormous potential to alleviate suffering, but responsibly deploying them depends on having control mechanisms in place should unforeseen consequences arise. ERACRs and e-CHACRs offer ways to stop the gene drive from spreading and, in the case of the ERACR, can potentially revert an engineered DNA sequence to a state much closer to the naturally occurring sequence.
    “Because ERACRs and e-CHACRs do not possess their own source of Cas9, they will only spread as far as the gene drive itself and will not edit the wild type population,” said Bulger. “These technologies are not perfect, but we now have a much more comprehensive understanding of why and how unintended outcomes influence their function and we believe they have the potential to be powerful gene drive control mechanisms should the need arise.”

  • Engineers produce a fisheye lens that's completely flat

    To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.
    Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.
    In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.
    The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.
    “This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide field of view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation. This isn’t just light-bending — it’s mind-bending.”
    Hu and his colleagues have published their results in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.


    Design on the back side
    Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.
    Hu and his colleagues instead came up with a simple design that does not require additional components and keeps a minimum element count. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.
    Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter, or propagate off one shape versus another — a phenomenon known as phase delay.
    In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
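    As a rough illustration of how a phase-delay profile defines a flat lens, here is the standard textbook phase function for a simple flat focusing metalens, quantized into a handful of discrete meta-atom types. This is generic, with hypothetical numbers, and is not the paper's wide-angle design:

```python
import math

def hyperbolic_phase(r, f, wavelength):
    """Phase (radians) a flat lens must impart at radius r to focus at
    distance f. Standard textbook metalens profile, illustrative only."""
    return (2 * math.pi / wavelength) * (f - math.sqrt(r**2 + f**2))

def quantize_to_meta_atoms(phase, levels=8):
    """Map a continuous phase to one of a few discrete meta-atom geometries,
    each providing a fixed phase delay (wrapped into [0, 2*pi))."""
    wrapped = phase % (2 * math.pi)
    step = 2 * math.pi / levels
    return round(wrapped / step) % levels

# Hypothetical mid-infrared design: focal length 2 mm, wavelength 5 um,
# meta-atoms sampled every 10 um from the lens centre.
wl, f = 5e-6, 2e-3
profile = [quantize_to_meta_atoms(hyperbolic_phase(i * 1e-5, f, wl))
           for i in range(5)]
```

    Each entry of `profile` picks one of eight meta-atom shapes for that radius; in the actual device the corresponding pattern is carved into the back side of the glass.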


    “We’ve designed the back side structures in such a way that each part can produce a perfect focus,” Hu says.
    On the front side, the team placed an optical aperture, or opening for light.
    “When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”
    Across the panorama
    In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used an imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.
    “It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.
    In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.
    “The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”
    The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.
    The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.
    “Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”

  • Promising computer simulations for stellarator plasmas

    For the fusion researchers at IPP, who want to develop a power plant based on the model of the sun, turbulence formation in its fuel — a hydrogen plasma — is a central research topic. The small eddies carry particles and heat out of the hot plasma centre and thus reduce the thermal insulation of the magnetically confined plasma. Because the size, and thus the electricity price, of a future fusion power plant depends on this heat loss, one of the most important goals is to understand, predict and influence this “turbulent transport.”
    Since an exact computational description of plasma turbulence would require solving highly complex systems of equations and executing countless computational steps, code development aims at reasonable simplifications. The GENE code developed at IPP is based on a set of simplified, so-called gyrokinetic equations, which disregard all phenomena in the plasma that do not play a major role in turbulent transport. Although this reduces the computational effort by many orders of magnitude, the world’s fastest and most powerful supercomputers have always been needed to develop the code further. GENE can now describe well the formation and propagation of small, low-frequency plasma eddies in the plasma interior, and can reproduce and explain experimental results — though originally only for tokamak-type fusion devices, whose axisymmetric construction makes them comparatively simple to model.
    For example, calculations with GENE showed that fast ions can greatly reduce turbulent transport in tokamak plasmas. Experiments at the ASDEX Upgrade tokamak at Garching confirmed this result. The required fast ions were provided by plasma heating using radio waves of the ion cyclotron frequency.
    A tokamak code for stellarators
    In stellarators, this turbulence suppression by fast ions had not been observed experimentally so far. However, the latest calculations with GENE now suggest that the effect should also exist in stellarator plasmas: in the Wendelstein 7-X stellarator at IPP at Greifswald, it could theoretically reduce turbulence by more than half. As IPP scientists Alessandro Di Siena, Alejandro Bañón Navarro and Frank Jenko show in the journal Physical Review Letters, the optimal ion temperature depends strongly on the shape of the magnetic field. Professor Frank Jenko, head of the Tokamak Theory department at IPP in Garching, says: “If this calculated result is confirmed in future experiments with Wendelstein 7-X in Greifswald, this could open up a path to interesting high-performance plasmas.”
    In order to use GENE for turbulence calculation in the more complicated shaped plasmas of stellarators, major code adjustments were necessary. Without the axial symmetry of the tokamaks, one has to cope with a much more complex geometry for stellarators.
    For Professor Per Helander, head of the Stellarator Theory department at IPP in Greifswald, the stellarator simulations performed with GENE are “very exciting physics.” He hopes that the results can be verified in the Wendelstein 7-X stellarator at Greifswald. “Whether the plasma values in Wendelstein 7-X are suitable for such experiments can be investigated when, in the coming experimental period, the radio wave heating system is put into operation in addition to the current microwave and particle heating,” says Professor Robert Wolf, whose department is responsible for plasma heating.
    GENE becomes GENE-3D
    According to Frank Jenko, it was another “enormous step” to make GENE not only approximately, but completely fit for the complex, three-dimensional shape of stellarators. After almost five years of development work, the code GENE-3D, now presented in the “Journal of Computational Physics” by Maurice Maurer and co-authors, provides a “fast and yet realistic turbulence calculation also for stellarators,” says Frank Jenko. In contrast to other stellarator turbulence codes, GENE-3D describes the full dynamics of the system, i.e. the turbulent motion of the ions and also of the electrons over the entire inner volume of the plasma, including the resulting fluctuations of the magnetic field.

    Story Source:
    Materials provided by Max-Planck-Institut für Plasmaphysik (IPP). Note: Content may be edited for style and length.

  • New mathematical tool can select the best sensors for the job

    After the 2019 Boeing 737 Max crash, data recovered from the aircraft’s black box hinted that a failed pressure sensor may have caused the ill-fated aircraft to nose dive. This incident and others have fueled a larger debate on sensor selection, number and placement to prevent the recurrence of such tragedies.
    Texas A&M University researchers have now developed a comprehensive mathematical framework that can help engineers make informed decisions about which sensors to use and where they must be positioned in aircraft and other machines.
    “During the early design stage for any control system, critical decisions have to be made about which sensors to use and where to place them so that the system is optimized for measuring certain physical quantities of interest,” said Dr. Raktim Bhattacharya, associate professor in the Department of Aerospace Engineering. “With our mathematical formulation, engineers can feed the model with information on what needs to be sensed and with what precision, and the model’s output will be the fewest sensors needed and their accuracies.”
    The researchers detailed their mathematical framework in the June issue of the Institute of Electrical and Electronics Engineers’ Control System Letters.
    Whether a car or an airplane, complex systems have internal properties that need to be measured. For instance, in an airplane, sensors for angular velocity and acceleration are placed at specific locations to estimate the velocity.
    Sensors can also have different accuracies. In technical terms, accuracy is measured by the noise, or wiggles, in the sensor measurements. This noise impacts how accurately the internal properties can be predicted. However, accuracy may be defined differently depending on the system and the application. For instance, some systems may require that noise in the predictions does not exceed a certain amount, while others may need the square of the noise to be as small as possible. In all cases, prediction accuracy has a direct impact on the cost of the sensor.


    “If you want to get sensor accuracy that is two times more accurate, the cost is likely to be more than double,” said Bhattacharya. “Furthermore, in some cases, very high accuracy is not even required. For example, an expensive 4K HD vehicle camera for object detection is unnecessary because first, fine features are not needed to distinguish humans from other cars and second, data processing from high-definition cameras becomes an issue.”
    Bhattacharya added that even if the sensors are extremely precise, knowing where to put the sensor is critical because one might place an expensive sensor at a location where it is not needed. Thus, he said the ideal solution balances cost and precision by optimizing the number of sensors and their positions.
    To test this rationale, Bhattacharya and his team designed a mathematical model using a set of equations that described the model of an F-16 aircraft. In their study, the researchers’ objective was to estimate the forward velocity, the direction of wind angle with respect to the airplane (the angle of attack), the angle between where the airplane is pointed and the horizon (the pitch angle) and pitch rate for this aircraft. Available to them were sensors that are normally in aircraft for measuring acceleration, angular velocity, pitch rate, pressure and the angle of attack. In addition, the model was also provided with expected accuracies for each sensor.
    Their model revealed that not all of the sensors were needed to accurately estimate forward velocity; readings from angular velocity sensors and pressure sensors were enough. These sensors were also enough to estimate the other physical states, like the angle of attack, precluding the need for an additional angle-of-attack sensor. In fact, these sensors, although a surrogate for measuring the angle of attack, had the effect of introducing redundancy into the system, resulting in higher system reliability.
    Bhattacharya said the mathematical framework has been designed so that it always indicates the fewest sensors needed, even when provided with a large repertoire of sensors to choose from.
    “Let’s assume a designer wants to put every type of sensor everywhere. The beauty of our mathematical model is that it will take out the unnecessary sensors and then give you the minimum number of sensors needed and their position,” he said.
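    The flavor of such a selection procedure can be sketched as a greedy loop that keeps adding whichever sensor most reduces the total estimation error until an accuracy target is met. This is a hypothetical A-optimal formulation for illustration, not the authors' exact model:

```python
import numpy as np

def greedy_sensor_selection(H, noise_vars, target_trace):
    """Greedy sketch of sensor selection (illustrative, not the paper's
    formulation).

    H            : (m, n) array; row i maps the n internal states to sensor i
    noise_vars   : per-sensor measurement-noise variances
    target_trace : required sum of state-estimate error variances
    Returns the indices of the chosen sensors, in the order added.
    """
    m, n = H.shape
    chosen = []
    info = 1e-6 * np.eye(n)            # tiny prior keeps the inverse defined
    while np.trace(np.linalg.inv(info)) > target_trace and len(chosen) < m:
        best, best_err = None, np.inf
        for i in range(m):
            if i in chosen:
                continue
            # information gained if sensor i is added
            cand = info + np.outer(H[i], H[i]) / noise_vars[i]
            err = np.trace(np.linalg.inv(cand))   # A-optimal criterion
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        info += np.outer(H[best], H[best]) / noise_vars[best]
    return chosen
```

    Given a redundant candidate suite, the loop stops as soon as the accuracy target is reached, leaving unnecessary sensors unselected.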
    Furthermore, the researchers noted that although the study is from an aerospace engineering perspective, their mathematical model is very general and can impact other systems as well.
    “As engineering systems become bigger and more complex, the question of where to put the sensor becomes more and more difficult,” said Bhattacharya. “So, for example, if you are building a really long wind turbine blade, some physical properties of the system need to be estimated using sensors and these sensors need to be placed at optimal locations to make sure the structure does not fail. This is nontrivial and that’s where our mathematical framework comes in.”

  • Shedding light on the development of efficient blue-emitting semiconductors

    Artificial light accounts for approximately 20% of the total electricity consumed globally. Considering the present environmental crisis, this makes the discovery of energy-efficient light-emitting materials particularly important, especially those that produce white light. Over the last decade, technological advances in solid-state lighting, the subfield of semiconductor research concerned with light-emitting compounds, have led to the widespread use of white LEDs. However, most of these LEDs are actually a blue LED chip coated with a yellow luminescent material; the emitted yellow light combined with the remaining blue light produces the white color.
    Therefore, a way to reduce the energy consumption of modern white LED lights is to find better blue-emitting semiconductors. Unfortunately, no known blue-emitting compounds were simultaneously highly efficient, easily processible, durable, eco-friendly, and made from abundant materials — until now.
    In a recent study published in Advanced Materials, a team of scientists from Tokyo Institute of Technology, Japan, discovered a new alkali copper halide, Cs5Cu3Cl6I2, that meets all of these criteria. Unlike Cs3Cu2I5, another promising blue-emitting candidate for future devices, the proposed compound contains two different halides, chloride and iodide. Although mixed-halide materials have been tried before, Cs5Cu3Cl6I2 has unique properties that emerge specifically from the combined use of I− and Cl− ions.
    It turns out that Cs5Cu3Cl6I2 forms a one-dimensional zigzag chain out of two different subunits, and the links in the chain are exclusively bridged by I− ions. The scientists also found another important feature: its valence band, which describes the energy levels of electrons in different positions of the material’s crystalline structure, is almost flat (of constant energy). In turn, this characteristic makes photo-generated holes — positively charged pseudoparticles that represent the absence of a photoexcited electron — “heavier.” These holes tend to become immobilized due to their strong interaction with I− ions, and they easily bond with nearby free electrons to form a small system known as an exciton.
    Excitons induce distortions in the crystal structure. Much as a person atop a large suspended net would have trouble moving because the net sags under their own weight, the excitons become trapped in place by the very distortion they create. This self-trapping is crucial for the highly efficient generation of blue light. Professor Junghwan Kim, who led the study, explains: “The self-trapped excitons are localized forms of optically excited energy; the eventual recombination of their constituting electron-hole pair causes photoluminescence, the emission of blue light in this case.”
    In addition to its efficiency, Cs5Cu3Cl6I2 has other attractive properties. It is exclusively composed of abundant materials, making it relatively inexpensive. Moreover, it is much more stable in air than Cs3Cu2I5 and other alkali copper halide compounds. The scientists found that the performance of Cs5Cu3Cl6I2 did not degrade when stored in air for three months, while similar light-emitting compounds performed worse after merely days. Finally, Cs5Cu3Cl6I2 does not require lead, a highly toxic element, making it eco-friendly overall.
    Excited about the results of the study, Prof. Kim concludes: “Our findings provide a new perspective for the development of new alkali copper halide candidates and demonstrate that Cs5Cu3Cl6I2 could be a promising blue-emitting material.” The light shed by this team of scientists will hopefully lead to more efficient and eco-friendly lighting technology.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • The brain's memory abilities inspire AI experts in making neural networks less 'forgetful'

    Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a “major, long-standing obstacle to increasing AI capabilities” by drawing inspiration from a human brain memory mechanism known as “replay.”
    First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect — “surprisingly efficiently” — deep neural networks from “catastrophic forgetting” — upon learning new lessons, the networks forget what they had learned before.
    Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting.
    They write, “One solution would be to store previously encountered examples and revisit them when learning something new. Although such ‘replay’ or ‘rehearsal’ solves catastrophic forgetting,” they add, “constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly.”
    Unlike AI neural networks, humans are able to continuously accumulate information throughout their life, building on earlier lessons. An important mechanism in the brain believed to protect memories against forgetting is the replay of neuronal activity patterns representing those memories, they explain.
    Siegelmann says the team’s major insight is in “recognizing that replay in the brain does not store data.” Rather, “the brain generates representations of memories at a high, more abstract level with no need to generate detailed memories.” Inspired by this, she and colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.
    The “abstract generative brain replay” proved extremely efficient, and the team showed that replaying just a few generated representations is sufficient to remember older memories while learning new ones. Generative replay not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning, but also allows the system to generalize learning from one situation to another, they state.
    For example, “if our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks,” says van de Ven.
    He and colleagues write, “We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks without storing data, and it provides a novel model for abstract level replay in the brain.”
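    The training loop implied by this description can be sketched minimally as follows. The model, generator, and update interfaces here are hypothetical stand-ins, not the authors' architecture:

```python
import random

def train_with_generative_replay(model, generator, new_task_data,
                                 replay_ratio=0.5, steps=200):
    """Each training step mixes genuinely new examples with samples the
    generator 'imagines' of earlier tasks; no old data is ever stored.
    `model.update` and `generator.sample` are hypothetical interfaces."""
    for _ in range(steps):
        if random.random() < replay_ratio:
            x, y = generator.sample()     # replayed abstract representation
        else:
            x, y = random.choice(new_task_data)
        model.update(x, y)                # one learning step on the mixture
```

    Because the generator, not a stored dataset, supplies the replayed examples, memory cost stays constant no matter how many tasks have been learned.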
    Van de Ven says, “Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions.”

    Story Source:
    Materials provided by University of Massachusetts Amherst. Note: Content may be edited for style and length.

  • All-optical method sets record for ultrafast high-spatial-resolution imaging: 15 trillion frames per second

    High-speed cameras can take pictures in quick succession. This makes them useful for visualizing ultrafast dynamic phenomena, such as femtosecond laser ablation for precise machining and manufacturing processes, fast ignition for nuclear fusion energy systems, shock-wave interactions in living cells, and certain chemical reactions.
    Among the various parameters in photography, the sequential imaging of microscopic ultrafast dynamic processes requires high frame rates and high spatial and temporal resolutions. In current imaging systems, these characteristics are in a tradeoff with one another.
    However, scientists at Shenzhen University, China, have recently developed an all-optical ultrafast imaging system with high spatial and temporal resolutions, as well as a high frame rate. Because the method is all-optical, it’s free from the bottlenecks that arise from scanning with mechanical and electronic components.
    Their design focuses on non-collinear optical parametric amplifiers (OPAs). An OPA is a crystal that, when simultaneously irradiated with a desired signal light beam and a higher-frequency pump light beam, amplifies the signal beam and produces another light beam known as an idler. Because the OPAs in this study operate in a non-collinear configuration, the idler is emitted in a different direction from that of the signal beam. But how is such a device useful in a high-speed imaging system?
    The answer lies in cascading OPAs. The information of the target, contained in the signal beam, is mapped onto the idler beam by the OPA while the pump beam is active. Because the idler moves in a different direction, it can be captured using a conventional charge-coupled device (CCD) camera “set to the side” while the signal beam moves toward the next stage in the OPA cascade.
    Just as water descends a waterfall, the signal beam travels on to the next OPA, where a pump beam generated from the same laser source activates it — except that a delay line makes this pump beam arrive later, so the CCD camera next to the second-stage OPA takes its picture later. Through a cascade of four OPAs with four associated CCD cameras and four different delay lines for the pump laser, the scientists created a system that can take four pictures in extremely quick succession.
    The speed of capturing consecutive pictures is limited by how small the difference between two laser delay lines can be. In this regard, the system achieved an effective frame rate of 15 trillion frames per second — a record shutter speed for high-spatial-resolution cameras. Meanwhile, the temporal resolution depends on the duration of the laser pulses triggering the OPAs and generating the idler signals; in this case, the pulse width was 50 fs (fifty millionths of a nanosecond). Coupled with the incredibly fast frame rate, this method is able to observe ultrafast physical phenomena, such as an air plasma grating and a rotating optical field spinning at 10 trillion radians per second.
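    Back-of-envelope, the quoted frame rate fixes how finely the pump delay lines must be staggered (illustrative arithmetic only):

```python
# Illustrative arithmetic from the figures quoted above.
C = 299_792_458.0                 # speed of light, m/s
frame_rate = 15e12                # 15 trillion frames per second
frame_interval = 1.0 / frame_rate           # time between consecutive frames
path_step = C * frame_interval              # extra pump-beam path per stage

print(frame_interval * 1e15)      # inter-frame time in femtoseconds (~66.7)
print(path_step * 1e6)            # delay-line increment in micrometres (~20)
```

    An inter-frame time of about 66.7 fs is only slightly longer than the 50 fs pulses themselves, which is why the delay-line increments, roughly 20 micrometres of extra optical path per stage, set the limit on the achievable frame rate.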
    According to Anatoly Zayats, Co-Editor-in-Chief of Advanced Photonics, “The team at Shenzhen University has demonstrated ultrafast photographic imaging with the record fastest shutter speed. This research opens up new opportunities for studies of ultrafast processes in various fields.”
    This imaging method has scope for improvement but could easily become a new microscopy technique. Future research will unlock the potential of this approach to give us a clearer picture of ultrafast transient phenomena.

  • Algorithm boosts efficiency, nutrition for food bank ops

    Cornell University systems engineers examined data from a busy New York state food bank and, using a new algorithm, found ways to better distribute and allocate food, and elevate nutrition among its patrons in the process.
    “In order to serve thousands of people and combat food insecurity, our algorithm helps food banks manage their food resources more efficiently — and patrons get more nutrition,” said lead researcher Faisal Alkaabneh, Cornell’s first doctoral graduate in systems engineering.
    Alkaabneh and his adviser, Oliver Gao, professor of civil and environmental engineering, are co-authors of “A Unified Framework for Efficient, Effective and Fair Resource Allocation by Food Banks Using an Approximate Dynamic Programming Approach,” published in the journal Omega.
    The researchers reviewed data from the Food Bank of the Southern Tier, which serves six counties in upstate New York. In 2019, the food bank distributed 10.9 million meals, reaching about 21,700 people each week. Nearly 19% of its patrons are seniors and about 41% are children, according to the group’s data.
    Last year, the food bank distributed 2.8 million pounds of fresh fruit through 157 partner agencies, and moved about 3.4 million pounds of food through local mobile pantries.
    The algorithm Gao and his team used to determine how to allocate several food categories efficiently, based upon pantry requests, demonstrated a 7.73% improvement in efficiency from 2018 to 2019 compared with standard food bank allocation practices. Their calculations also showed a 3% improvement in nutrition through use of a wider variety of food, Alkaabneh said.
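    The fairness-versus-efficiency tension such an algorithm manages can be illustrated with a toy equal-share waterfill. This simple rule is only a sketch; it does not reproduce the paper's approximate dynamic programming model:

```python
def allocate(supply, demands):
    """Toy equal-share waterfill: repeatedly split the remaining supply
    evenly among pantries whose requests are not yet met, never giving a
    pantry more than it asked for. Illustrative only."""
    alloc = [0.0] * len(demands)
    remaining = float(supply)
    active = {i for i, d in enumerate(demands) if d > 0}
    while remaining > 1e-9 and active:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] <= 1e-9:   # request fully met
                active.discard(i)
    return alloc
```

    For example, allocate(10, [2, 8, 8]) fills the smallest request completely and splits the remainder evenly between the other two pantries, so no pantry is starved while nothing is wasted.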
    “We hope our research is used as a baseline model for food banks improving practices,” Gao said, “and boosting nutrition and policies to help people at risk for hunger.”

    Story Source:
    Materials provided by Cornell University. Original written by Blaine Friedlander. Note: Content may be edited for style and length.