More stories

  •

    Engineering discovery challenges heat transfer paradigm that guides electronic and photonic device design

A research breakthrough from the University of Virginia School of Engineering demonstrates a new mechanism to control temperature and extend the lifetime of electronic and photonic devices such as sensors, smartphones and transistors.
The discovery, from UVA’s Experiments and Simulations in Thermal Engineering research group, challenges a fundamental assumption about heat transfer in semiconductor design. In devices, electrical contacts form at the junction of a metal and a semiconducting material. Traditionally, materials and device engineers have assumed that electron energy moves across this junction through a process called charge injection, said group leader Patrick Hopkins, professor of mechanical and aerospace engineering with courtesy appointments in materials science and engineering and in physics.
In charge injection, electrons physically jump from the metal into the semiconductor along with the flow of electrical charge, carrying their excess heat with them. This changes the electrical composition and properties of the insulating or semiconducting material, so the cooling that goes hand in hand with charge injection can significantly degrade device efficiency and performance.
    Hopkins’ group discovered a new heat transfer path that embraces the benefits of cooling associated with charge injection without any of the drawbacks of the electrons physically moving into the semiconductor device. They call this mechanism ballistic thermal injection.
As described by Hopkins’ advisee John Tomko, a Ph.D. student in materials science and engineering: “The electron gets to the bridge between the metal and the semiconductor, sees another electron across the bridge and interacts with it, transferring its heat but staying on its own side of the bridge. The semiconducting material absorbs a lot of heat, but the number of electrons remains constant.”
    “The ability to cool electrical contacts by keeping charge densities constant offers a new direction in electronic cooling without impacting the electrical and optical performance of the device,” Hopkins said. “The ability to independently optimize optical, electrical and thermal behavior of materials and devices improves device performance and longevity.”
    Tomko’s expertise in laser metrology — measuring energy transfer at the nanoscale — revealed ballistic thermal injection as a new path for device self-cooling. Tomko’s measurement technique, more specifically optical laser spectroscopy, is an entirely new way to measure heat transfer across the metal-semiconductor interface.

    “Previous methods of measurement and observation could not decompose the heat transfer mechanism separately from charge injection,” Tomko said.
    For their experiments, Hopkins’ research team selected cadmium oxide, a transparent electricity-conducting oxide that looks like glass. Cadmium oxide was a pragmatic choice because its unique optical properties are well suited to Tomko’s laser spectroscopy measurement method.
Cadmium oxide perfectly absorbs mid-infrared photons in the form of plasmons, quasiparticles of synchronized electrons that are an incredibly efficient way of coupling light into a material. Tomko used ballistic thermal injection to shift the wavelength at which this perfect absorption occurs, essentially tuning the optical properties of cadmium oxide through injected heat.
    “Our observations of tuning enable us to say definitively that heat transfer happens without swapping electrons,” Tomko said.
    Tomko probed the plasmons to extract information on the number of free electrons on each side of the bridge between the metal and the semiconductor. In this way, Tomko captured the measurement of electrons’ placement before and after the metal was heated and cooled.

    The team’s discovery offers promise for infrared sensing technologies as well. Tomko’s observations reveal that the optical tuning lasts as long as the cadmium oxide remains hot, keeping in mind that time is relative — a trillionth rather than a quadrillionth of a second.
Ballistic thermal injection can control plasmon absorption and therefore the optical response of non-metal materials. Such control enables highly efficient plasmon absorption at mid-infrared wavelengths. One benefit of this development is that night-vision devices could be made more responsive to a sudden, intense change in heat that would otherwise leave the device temporarily blind.
    “The realization of this ballistic thermal injection process across metal/cadmium oxide interfaces for ultrafast plasmonic applications opens the door for us to use this process for efficient cooling of other device-relevant material interfaces,” Hopkins said.
Tomko first-authored a paper documenting these findings. Nature Nanotechnology published the team’s paper, “Long-lived Modulation of Plasmonic Absorption by Ballistic Thermal Injection,” on November 9; the paper was also promoted in the journal editors’ News and Views. The Nature Nanotechnology paper adds to a long list of publications for Tomko, who has co-authored more than 30 papers and can now claim first authorship of two Nature Nanotechnology papers as a graduate student.
    The research paper culminates a two-year, collaborative effort funded by a U.S. Army Research Office Multi-University Research Initiative. Jon-Paul Maria, professor of materials science and engineering at Penn State University, is the principal investigator for the MURI grant, which includes the University of Southern California as well as UVA. This MURI team also collaborated with Josh Caldwell, associate professor of mechanical engineering and electrical engineering at Vanderbilt University.
The team’s breakthrough relied on Penn State’s expertise in making the cadmium oxide samples, Vanderbilt’s expertise in optical modeling, the University of Southern California’s computational modeling, and UVA’s expertise in energy transport, charge flow, and photonic interactions with plasmons at heterogeneous interfaces, including the development of a novel ultrafast pump-probe laser experiment to monitor the ballistic thermal injection process.

  •

    New tools 'turn on' quantum gases of ultracold molecules

    JILA researchers have developed tools to “turn on” quantum gases of ultracold molecules, gaining control of long-distance molecular interactions for potential applications such as encoding data for quantum computing and simulations.
    The new scheme for nudging a molecular gas down to its lowest energy state, called quantum degeneracy, while suppressing chemical reactions that break up molecules finally makes it possible to explore exotic quantum states in which all the molecules interact with one another.
    The research is described in the Dec. 10 issue of Nature. JILA is a joint institute of the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder.
    “Molecules are always celebrated for their long-range interactions, which can give rise to exotic quantum physics and novel control in quantum information science,” NIST/JILA Fellow Jun Ye said. “However, until now, nobody had figured out how to turn on these long-range interactions in a bulk gas.”
    “Now, all this has changed. Our work showed for the first time that we can turn on an electric field to manipulate molecular interactions, get them to cool down further, and start to explore collective physics where all molecules are coupled to each other.”
    The new work follows up on Ye’s many previous achievements with ultracold quantum gases. Researchers have long sought to control ultracold molecules in the same way they can control atoms. Molecules offer additional means of control, including polarity — that is, opposing electrical charges — and many different vibrations and rotations.

The JILA experiments created a dense gas of about 20,000 trapped potassium-rubidium molecules at a temperature of 250 nanokelvin above absolute zero (about minus 273 degrees Celsius or minus 459 degrees Fahrenheit). Crucially, these molecules are polar, with a positive electric charge at the rubidium atom and a negative charge at the potassium atom. The separation between these positive and negative charges, called an electric dipole moment, causes the molecules to behave like tiny compass needles sensitive to certain forces, in this case electric fields.
    When the gas is cooled to near absolute zero, the molecules stop behaving like particles and instead behave like waves that overlap. The molecules stay apart because they are fermions, a class of particles that cannot be in the same quantum state and location at the same time and therefore repel each other. But they can interact at long range through their overlapping waves, electric dipole moments and other features.
    In the past, JILA researchers created quantum gases of molecules by manipulating a gas containing both types of atoms with a magnetic field and lasers. This time the researchers first loaded the mixture of gaseous atoms into a vertical stack of thin, pancake-shaped traps formed from laser light (called an optical lattice), tightly confining the atoms along the vertical direction. Researchers then used magnetic fields and lasers to bond pairs of atoms together into molecules. Leftover atoms were heated and removed by tuning a laser to excite motion unique to each type of atom.
    Then, with the molecular cloud positioned at the center of a new six-electrode assembly formed by two glass plates and four tungsten rods, researchers generated a tunable electric field.
    The electric field set off repulsive interactions among the molecules that stabilized the gas, reducing inelastic (“bad”) collisions in which the molecules undergo a chemical reaction and escape from the trap. This technique boosted rates of elastic (“good”) interactions more than a hundredfold while suppressing chemical reactions.
    This environment allowed efficient evaporative cooling of the gas down to a temperature below the onset of quantum degeneracy. The cooling process removed the hottest molecules from the lattice trap and allowed the remaining molecules to adjust to a lower temperature through the elastic collisions. Slowly turning on a horizontal electric field over hundreds of milliseconds reduced the trap strength in one direction, long enough for hot molecules to escape and the remaining molecules to cool down. At the end of this process, the molecules returned to their most stable state but now in a denser gas.
    The new JILA method can be applied to make ultracold gases out of other types of polar molecules.
    Ultracold molecular gases may have many practical uses, including new methods for quantum computing using polar molecules as quantum bits; simulations and improved understanding of quantum phenomena such as colossal magnetoresistance (for improved data storage and processing) and superconductivity (for perfectly efficient electric power transmission); and new tools for precision measurement such as molecular clocks or molecular systems that enable searches for new theories of physics.
Funding was provided by NIST, the Defense Advanced Research Projects Agency, the Army Research Office, and the National Science Foundation.

  •

    Hidden symmetry could be key to more robust quantum systems

    Researchers have found a way to protect highly fragile quantum systems from noise, which could aid in the design and development of new quantum devices, such as ultra-powerful quantum computers.
    The researchers, from the University of Cambridge, have shown that microscopic particles can remain intrinsically linked, or entangled, over long distances even if there are random disruptions between them. Using the mathematics of quantum theory, they discovered a simple setup where entangled particles can be prepared and stabilised even in the presence of noise by taking advantage of a previously unknown symmetry in quantum systems.
Their results, reported in the journal Physical Review Letters, open a new window into the mysterious quantum world. Preserving quantum effects in noisy environments is the single biggest hurdle to developing quantum technology, and harnessing this capability will be at the heart of ultrafast quantum computers.
Quantum systems are built on the peculiar behaviour of particles at the atomic level and could revolutionise the way that complex calculations are performed. While a normal computer bit is an electrical switch that can be set to either one or zero, a quantum bit, or qubit, can be set to one, zero, or both at the same time. This dual state is part of what gives a quantum computer its power. Furthermore, when two qubits are entangled, an operation on one immediately affects the other, no matter how far apart they are. A computer built with entangled qubits instead of normal bits could perform calculations well beyond the capacities of even the most powerful supercomputers.
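The bit-versus-qubit distinction above can be made concrete with a few lines of linear algebra. This is a generic textbook illustration, not anything from the Cambridge paper: a qubit is a two-component state vector, a Hadamard gate puts it into superposition, and a CNOT gate entangles it with a second qubit into a Bell state, in which only the correlated outcomes |00> and |11> can ever be measured.

```python
import numpy as np

# A single qubit is a 2-component state vector; |0> is (1, 0).
zero = np.array([1.0, 0.0])

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# CNOT gate on two qubits: flips the second qubit when the first is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Put qubit A into superposition, then entangle it with qubit B.
pair = np.kron(H @ zero, zero)   # (|00> + |10>) / sqrt(2)
bell = CNOT @ pair               # (|00> + |11>) / sqrt(2): a Bell state

# Measurement probabilities: only |00> and |11> occur, each 50% --
# measuring one qubit fixes the outcome of the other.
probs = bell ** 2
```

The point of the sketch is the last line: the entangled pair has no independent per-qubit description, which is the property that noise so easily destroys.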
    “However, qubits are extremely finicky things, and the tiniest bit of noise in their environment can cause their entanglement to break,” said Dr Shovan Dutta from Cambridge’s Cavendish Laboratory, the paper’s first author. “Until we can find a way to make quantum systems more robust, their real-world applications will be limited.”
Several companies — most notably, IBM and Google — have developed working quantum computers, although so far these have been limited to fewer than 100 qubits. They require near-total isolation from noise, and even then have very short lifetimes of a few microseconds. Both companies have plans to develop 1,000-qubit quantum computers within the next few years, although unless the stability issues are overcome, quantum computers will not reach practical use.
    Now, Dutta and his co-author Professor Nigel Cooper have discovered a robust quantum system where multiple pairs of qubits remain entangled even with a lot of noise.
They modelled an atomic system in a lattice formation, where atoms strongly interact with each other, hopping from one site of the lattice to another. The authors found that if noise was added in the middle of the lattice, it did not affect the entanglement between particles on the left and right sides. This surprising feature results from a special type of symmetry that conserves the number of such entangled pairs.
    “We weren’t expecting this stabilised type of entanglement at all,” said Dutta. “We stumbled upon this hidden symmetry, which is very rare in these noisy systems.”
    They showed this hidden symmetry protects the entangled pairs and allows their number to be controlled from zero to a large maximum value. Similar conclusions can be applied to a broad class of physical systems and can be realised with already existing ingredients in experimental platforms, paving the way to controllable entanglement in a noisy environment.
    “Uncontrolled environmental disturbances are bad for survival of quantum effects like entanglement, but one can learn a lot by deliberately engineering specific types of disturbances and seeing how the particles respond,” said Dutta. “We’ve shown that a simple form of disturbance can actually produce — and preserve — many entangled pairs, which is a great incentive for experimental developments in this field.”
The researchers are hoping to confirm their theoretical findings with experiments within the next year.

  •

    Stretchable micro-supercapacitors to self-power wearable devices

    A stretchable system that can harvest energy from human breathing and motion for use in wearable health-monitoring devices may be possible, according to an international team of researchers, led by Huanyu “Larry” Cheng, Dorothy Quiggle Career Development Professor in Penn State’s Department of Engineering Science and Mechanics.
The research team, with members from Penn State as well as Minjiang University and Nanjing University, both in China, recently published its results in Nano Energy.
    According to Cheng, current versions of batteries and supercapacitors powering wearable and stretchable health-monitoring and diagnostic devices have many shortcomings, including low energy density and limited stretchability.
    “This is something quite different than what we have worked on before, but it is a vital part of the equation,” Cheng said, noting that his research group and collaborators tend to focus on developing the sensors in wearable devices. “While working on gas sensors and other wearable devices, we always need to combine these devices with a battery for powering. Using micro-supercapacitors gives us the ability to self-power the sensor without the need for a battery.”
    An alternative to batteries, micro-supercapacitors are energy storage devices that can complement or replace lithium-ion batteries in wearable devices. Micro-supercapacitors have a small footprint, high power density, and the ability to charge and discharge quickly. However, according to Cheng, when fabricated for wearable devices, conventional micro-supercapacitors have a “sandwich-like” stacked geometry that displays poor flexibility, long ion diffusion distances and a complex integration process when combined with wearable electronics.
This led Cheng and his team to explore alternative device architectures and integration processes to advance the use of micro-supercapacitors in wearable devices. They found that arranging micro-supercapacitor cells in a serpentine, island-bridge layout allows the configuration to stretch and bend at the bridges while reducing deformation of the micro-supercapacitors — the islands. When combined, the structure becomes what the researchers refer to as “micro-supercapacitor arrays.”
    “By using an island-bridge design when connecting cells, the micro-supercapacitor arrays displayed increased stretchability and allowed for adjustable voltage outputs,” Cheng said. “This allows the system to be reversibly stretched up to 100%.”
    By using non-layered, ultrathin zinc-phosphorus nanosheets and 3D laser-induced graphene foam — a highly porous, self-heating nanomaterial — to construct the island-bridge design of the cells, Cheng and his team saw drastic improvements in electric conductivity and the number of absorbed charged ions. This proved that these micro-supercapacitor arrays can charge and discharge efficiently and store the energy needed to power a wearable device.
    The researchers also integrated the system with a triboelectric nanogenerator, an emerging technology that converts mechanical movement to electrical energy. This combination created a self-powered system.
    “When we have this wireless charging module that’s based on the triboelectric nanogenerator, we can harvest energy based on motion, such as bending your elbow or breathing and speaking,” Cheng said. “We are able to use these everyday human motions to charge the micro-supercapacitors.”
    By combining this integrated system with a graphene-based strain sensor, the energy-storing micro-supercapacitor arrays — charged by the triboelectric nanogenerators — are able to power the sensor, Cheng said, showing the potential for this system to power wearable, stretchable devices.

    Story Source:
Materials provided by Penn State. Original written by Tessa M. Pick. Note: Content may be edited for style and length.

  •

    Algorithms and automation: Making new technology faster and cheaper

Additive manufacturing (AM) machinery has advanced over time; however, the necessary software for new machines often lags behind. To help mitigate this issue, Penn State researchers designed automated process-planning software to save money, time and design resources.
Newer, five-axis machines are designed to move linearly along the x, y and z axes and to rotate, allowing the machine to change an object’s orientation. These machines are an advancement over traditional three-axis machines, which lack rotation capabilities and require support structures.
    Such a machine can potentially lead to large cost and time savings; however, five-axis AM lacks the same design planning and automation that three-axis machines have. This is where the creation of planning software becomes critical.
    “Five-axis AM is a young area, and the software isn’t there yet,” said Xinyi Xiao, a summer 2020 Penn State doctoral recipient in industrial engineering, now an assistant professor in mechanical and manufacturing engineering at Miami University in Ohio. “Essentially, we developed a methodology to automatically map designs from CAD — computer-aided design — software to AM to help cut unnecessary steps. You save money by taking less time to make the part and by also using less materials from three-axis support structures.”
    Xiao conducted this work as part of her doctoral program in the Penn State Harold and Inge Marcus Department of Industrial and Manufacturing Engineering under the supervision of Sanjay Joshi, professor of industrial engineering. Their research was published in the Journal of Additive Manufacturing.
    “We want to automate the decision process for manufacturing designs to get to ‘push button additive manufacturing,'” Joshi said. “The idea of the software is to make five-axis AM fully automated without the need for manual work or re-designs of a product. Xinyi came to me when she needed guidance or had questions, but ultimately, she held the key.”
The software’s algorithm automatically determines a part’s sections and the sections’ orientations. From this, the software designates when each section will be printed, and in which orientation, within the printing sequence. Through a decomposition process, the part’s geometry is broken down into individual sections, each printable without support structures. As each piece is made in order, the machine can rotate throughout its axes to reorient the part and continue printing. Xiao compared it to working with Lego building blocks.
    The algorithm can help inform a designer’s process plan to manufacture a part. It allows designers opportunities to make corrections or alter the design before printing, which can positively affect cost. The algorithm can also inform a designer how feasible a part may be to create using support-free manufacturing.
    “With an algorithm, you don’t really need the expertise from the user because it’s in the software,” Joshi said. “Automation can help with trying out a bunch of different scenarios very quickly before you create anything on the machine.”
    Xiao said she intends to continue this research as some of the major application areas of this technology are aerospace and automobiles.
“Large metal components made with traditional additive manufacturing can take days to produce and waste lots of materials by using support structures,” Xiao said. “Additive manufacturing is very powerful, and it can make a lot of things due to its flexibility; however, it also has its disadvantages. There is still more work to do.”

    Story Source:
Materials provided by Penn State. Original written by Miranda Buckheit. Note: Content may be edited for style and length.

  •

    Understanding COVID-19 infection and possible mutations

The binding of the SARS-CoV-2 spike protein — a projection from the surface of the spherical virus particle — to the human cell-surface protein ACE2 is the first step in an infection that may lead to COVID-19 disease. Penn State researchers computationally assessed how changes to the virus spike’s makeup can affect binding with ACE2, and compared the results to those of the original SARS-CoV virus (SARS).
    The researchers’ original manuscript preprint, made available online in March, was among the first to computationally investigate SARS-CoV-2’s high affinity, or tendency to bind, with human ACE2. The paper was published online on Sept. 18 in the Computational and Structural Biotechnology Journal. The work was conceived and led by Costas Maranas, Donald B. Broughton Professor in the Department of Chemical Engineering, and his former graduate student Ratul Chowdhury, who is currently a postdoctoral fellow at Harvard Medical School.
    “We were interested in answering two important questions,” said Veda Sheersh Boorla, doctoral student in chemical engineering and co-author on the paper. “We wanted to first discern key structural changes that give COVID-19 a higher affinity towards human ACE2 proteins when compared with SARS, and then assess its potential affinity to livestock or other animal ACE2 proteins.”
    The researchers computationally modeled the attachment of SARS-CoV-2 protein spike to ACE2, which is located in the upper respiratory tract and serves as the entry point for other coronaviruses, including SARS. The team used a molecular modeling approach to compute the binding strength and interactions of the viral protein’s attachment to ACE2.
    The team found that the SARS-CoV-2 spike protein is highly optimized to bind with human ACE2. Simulations of viral attachment to homologous ACE2 proteins of bats, cattle, chickens, horses, felines and canines showed the highest affinity for bats and human ACE2, with lower values of affinity for cats, horses, dogs, cattle and chickens, according to Chowdhury.
    “Beyond explaining the molecular mechanism of binding with ACE2, we also explored changes in the virus spike that could change its affinity with human ACE2,” said Chowdhury, who earned his doctorate in chemical engineering at Penn State in fall 2019.
    Understanding the binding behavior of the virus spike with ACE2 and the virus tolerance of these structural spike changes could inform future research on vaccine durability and the potential for the virus to spread to other species.
    “The computational workflow that we have established should be able to handle other receptor binding-mediated entry mechanisms for other viruses that may arise in the future,” Chowdhury said.
    The Department of Agriculture, the Department of Energy and the National Science Foundation supported this work.

    Story Source:
Materials provided by Penn State. Original written by Gabrielle Stewart. Note: Content may be edited for style and length.

  •

    Breakthrough optical sensor mimics human eye, a key step toward better AI

    Researchers at Oregon State University are making key advances with a new type of optical sensor that more closely mimics the human eye’s ability to perceive changes in its visual field.
    The sensor is a major breakthrough for fields such as image recognition, robotics and artificial intelligence. Findings by OSU College of Engineering researcher John Labram and graduate student Cinthya Trujillo Herrera were published today in Applied Physics Letters.
    Previous attempts to build a human-eye type of device, called a retinomorphic sensor, have relied on software or complex hardware, said Labram, assistant professor of electrical engineering and computer science. But the new sensor’s operation is part of its fundamental design, using ultrathin layers of perovskite semiconductors — widely studied in recent years for their solar energy potential — that change from strong electrical insulators to strong conductors when placed in light.
    “You can think of it as a single pixel doing something that would currently require a microprocessor,” said Labram, who is leading the research effort with support from the National Science Foundation.
    The new sensor could be a perfect match for the neuromorphic computers that will power the next generation of artificial intelligence in applications like self-driving cars, robotics and advanced image recognition, Labram said. Unlike traditional computers, which process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain’s massively parallel networks.
    “People have tried to replicate this in hardware and have been reasonably successful,” Labram said. “However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers.”
    In other words: To reach its full potential, a computer that “thinks” more like a human brain needs an image sensor that “sees” more like a human eye.

    A spectacularly complex organ, the eye contains around 100 million photoreceptors. However, the optic nerve only has 1 million connections to the brain. This means that a significant amount of preprocessing and dynamic compression must take place in the retina before the image can be transmitted.
    As it turns out, our sense of vision is particularly well adapted to detect moving objects and is comparatively “less interested” in static images, Labram said. Thus, our optical circuitry gives priority to signals from photoreceptors detecting a change in light intensity — you can demonstrate this yourself by staring at a fixed point until objects in your peripheral vision start to disappear, a phenomenon known as the Troxler effect.
    Conventional sensing technologies, like the chips found in digital cameras and smartphones, are better suited to sequential processing, Labram said. Images are scanned across a two-dimensional array of sensors, pixel by pixel, at a set frequency. Each sensor generates a signal with an amplitude that varies directly with the intensity of the light it receives, meaning a static image will result in a more or less constant output voltage from the sensor.
    By contrast, the retinomorphic sensor stays relatively quiet under static conditions. It registers a short, sharp signal when it senses a change in illumination, then quickly reverts to its baseline state. This behavior is owed to the unique photoelectric properties of a class of semiconductors known as perovskites, which have shown great promise as next-generation, low-cost solar cell materials.
    In Labram’s retinomorphic sensor, the perovskite is applied in ultrathin layers, just a few hundred nanometers thick, and functions essentially as a capacitor that varies its capacitance under illumination. A capacitor stores energy in an electrical field.

    “The way we test it is, basically, we leave it in the dark for a second, then we turn the lights on and just leave them on,” he said. “As soon as the light goes on, you get this big voltage spike, then the voltage quickly decays, even though the intensity of the light is constant. And that’s what we want.”
    Although Labram’s lab currently can test only one sensor at a time, his team measured a number of devices and developed a numerical model to replicate their behavior, arriving at what Labram deems “a good match” between theory and experiment.
    This enabled the team to simulate an array of retinomorphic sensors to predict how a retinomorphic video camera would respond to input stimulus.
    “We can convert video to a set of light intensities and then put that into our simulation,” Labram said. “Regions where a higher-voltage output is predicted from the sensor light up, while the lower-voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals.”
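The per-pixel behavior described above can be sketched numerically. The model below is an illustrative toy, not the authors’ published device model: each pixel’s output voltage jumps in proportion to any change in light intensity and then decays exponentially back to baseline, so a static scene produces no sustained signal. The gain and time constant are arbitrary assumptions.

```python
import numpy as np

def retinomorphic_response(intensity, dt=1e-3, tau=0.05, gain=1.0):
    """Toy per-pixel retinomorphic model (illustrative only):
    the output voltage spikes on changes in light intensity,
    then relaxes exponentially back toward zero."""
    v = 0.0
    out = []
    prev = intensity[0]
    for lum in intensity:
        v += gain * (lum - prev)   # spike on intensity change
        v *= np.exp(-dt / tau)     # decay toward baseline
        out.append(v)
        prev = lum
    return np.array(out)

# Dark for 0.1 s, then constant light: a single spike that decays,
# unlike a conventional sensor, whose output would stay high for
# as long as the light stays on.
signal = np.concatenate([np.zeros(100), np.ones(100)])
v = retinomorphic_response(signal)
```

Running a whole frame sequence through an array of such pixels reproduces the qualitative behavior described below: moving regions light up, static regions fade to dark.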
    A simulation using footage of a baseball practice demonstrates the expected results: Players in the infield show up as clearly visible, bright moving objects. Relatively static objects — the baseball diamond, the bleachers, even the outfielders — fade into darkness.
    An even more striking simulation shows a bird flying into view, then all but disappearing as it stops at an invisible bird feeder. The bird reappears as it takes off. The feeder, set swaying, becomes visible only as it starts to move.
“The good thing is that, with this simulation, we can input any video into one of these arrays and process that information in essentially the same way the human eye would,” Labram said. “For example, you can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would not elicit a response; however, a moving object would register a high voltage. This would tell the robot immediately where the object was, without any complex image processing.”

  •

    New approach for more accurate epidemic modeling

A new class of epidemiological models, based on alternative thinking about how contagions propagate, particularly in the early phases of a pandemic, provides a blueprint for more accurate epidemic modeling and for improved predictions of and responses to disease spread, according to a study published recently in Scientific Reports by researchers at the University of California, Irvine and other institutions.
In the paper, the scientists said that standard epidemic models incorrectly assume that the rate at which an infectious disease spreads depends on a simple product of the numbers of infected and susceptible people. The authors instead suggest that transmission happens not through complete mingling of entire populations but at the boundary of sub-groups of infected individuals.
    “Standard epidemiological models rely on the presumption of strong mixing between infected and non-infected individuals, with widespread contact between members of those groups,” said co-author Tryphon Georgiou, UCI Distinguished Professor of mechanical & aerospace engineering. “We stress, rather, that transmission occurs in geographically concentrated cells. Therefore, in our view, the use of fractional exponents helps us more accurately predict rates of infection and disease spread.”
The researchers proposed a “fractional power alternative” to customary models that take into account susceptible, infected and recovered populations. The value of the exponent in these fractional SIR (fSIR) models depends on factors such as the nature and extent of contact between infected and healthy sub-populations.
    The authors explained that during the initial phase of an epidemic, infection proceeds outwards from contagion carriers to the general population. Since the number of susceptible people is much larger than that of the infected, the boundary of infected cells scales at a fractional power of less than one of the area of the cells.
The researchers tested their theory through a series of numerical simulations. They also fitted their fractional models to actual data from the Johns Hopkins University Center for Systems Science and Engineering, covering the first few months of the COVID-19 pandemic in Italy, Germany, France and Spain. Through both processes they found the exponent to be in the range of 0.6 to 0.8.
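The fractional-power idea can be sketched as a small simulation. The code below is an illustrative toy, not the authors’ fitted model: it integrates an SIR-like system in which new infections scale as I^kappa * S with kappa below one, and every parameter value is an assumption chosen only for demonstration.

```python
def simulate_fsir(beta=0.3, gamma_rec=0.1, kappa=0.7,
                  s0=0.999, i0=0.001, days=160, dt=0.1):
    """Forward-Euler integration of a fractional-power SIR (fSIR)
    variant: the incidence term is beta * I**kappa * S instead of
    the standard beta * I * S. All parameters are illustrative."""
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * (i ** kappa) * s   # fractional-power incidence
        ds = -new_inf
        di = new_inf - gamma_rec * i        # infections minus recoveries
        dr = gamma_rec * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((s, i, r))
    return history

# With kappa < 1, early growth is driven by the boundary of infected
# clusters rather than by full population mixing.
history = simulate_fsir()
```

Fitting would then amount to choosing beta, gamma_rec and kappa so that the simulated infection curve matches reported case data over the early months of an outbreak.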
    “The fractional exponent impacts in substantially different ways how the epidemic progresses during early and later phases, and as a result, identifying the correct exponent extends the duration over which reliable predictions can be made as compared to previous models,” Georgiou said.
    In the context of the current COVID-19 pandemic, better knowledge about propagation of infections could aid in decisions related to the institution of masking and social distancing mandates in communities.
    “Accurate epidemiological models can help policy makers choose the right course of action to help prevent further spread of infectious diseases,” Georgiou said.

    Story Source:
Materials provided by University of California – Irvine. Note: Content may be edited for style and length.