More stories

  • A topography of extremes

    An international team of scientists from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the Max Planck Institute for Chemical Physics of Solids, together with colleagues from the USA and Switzerland, has combined several extreme experimental conditions in a unique way, revealing exciting insights into the mysterious conducting properties of the crystalline metal CeRhIn5. In the journal Nature Communications, they report on their exploration of previously uncharted regions of the phase diagram of this metal, which is considered a promising model system for understanding unconventional superconductors.
    “First, we apply a thin layer of gold to a microscopically small single crystal. Then we use an ion beam to carve out tiny microstructures. At the ends of these structures, we attach ultra-thin platinum tapes to measure resistance along different directions under extremely high pressures, which we generate with a diamond anvil pressure cell. In addition, we apply very powerful magnetic fields to the sample at temperatures near absolute zero.”
    To the average person, this may sound like an overzealous physicist’s whimsical fancy, but in fact, it is an actual description of the experimental work conducted by Dr. Toni Helm from HZDR’s High Magnetic Field Laboratory (HLD) and his colleagues from Tallahassee, Los Alamos, Lausanne and Dresden. Well, at least in part, because this description only hints at the many challenges involved in combining such extremes concurrently. This great effort is, of course, not an end in itself: the researchers are trying to get to the bottom of some fundamental questions of solid state physics.
    The sample studied is cerium-rhodium-indium-five (CeRhIn5), a metal with surprising properties that are not yet fully understood. Scientists describe it as an unconventional electrical conductor with extremely heavy charge carriers, in which, under certain conditions, electrical current can flow without losses. It is assumed that the key to this superconductivity lies in the metal’s magnetic properties. The central issues investigated by physicists working with such correlated electron systems include: How do heavy electrons organize collectively? How can this cause magnetism and superconductivity? And what is the relationship between these physical phenomena?
    An expedition through the phase diagram
    The physicists are particularly interested in the metal’s phase diagram, a kind of map whose coordinates are pressure, magnetic field strength, and temperature. If the map is to be meaningful, the scientists have to uncover as many locations as possible in this system of coordinates, just like a cartographer exploring unknown territory. In fact, the emerging diagram is not unlike the terrain of a landscape.
    As they reduce the temperature to almost four degrees above absolute zero, the physicists observe magnetic order in the metal sample. At this point, they have a number of options: They can cool the sample down even further and expose it to high pressures, forcing a transition into the superconducting state. If, on the other hand, they only increase the external magnetic field to 600,000 times the strength of the earth’s magnetic field, the magnetic order is also suppressed; however, the material enters a state called “electronically nematic.”
    This term is borrowed from the physics of liquid crystals, where it describes a certain spatial orientation of molecules with a long-range order over larger areas. The scientists assume that the electronically nematic state is closely linked to the phenomenon of unconventional superconductivity. The experimental environment at HLD provides optimum conditions for such a complex measurement project. The large magnets generate relatively long-lasting pulses and offer sufficient space for complex measurement methods under extreme conditions.
    Experiments at the limit afford a glimpse of the future
    The experiments have a few additional special characteristics. For example, working with pulsed high magnetic fields creates eddy currents in the metallic parts of the experimental setup, which can generate unwanted heat. The scientists have therefore manufactured the central components from a special plastic material that suppresses this effect and functions reliably near absolute zero. Through microfabrication with focused ion beams, they produce a sample geometry that guarantees a high-quality measurement signal.
    “Microstructuring will become much more important in future experiments. That’s why we brought this technology into the laboratory right away,” says Toni Helm, adding: “So we now have ways to access and gradually penetrate into dimensions where quantum mechanical effects play a major role.” He is also certain that the know-how he and his team have acquired will contribute to research on high-temperature superconductors or novel quantum technologies.

    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Original written by Dr. Bernd Schröder. Note: Content may be edited for style and length.

  • Photonics researchers report breakthrough in miniaturizing light-based chips

    Photonic integrated circuits, which use light instead of electricity for computing and signal processing, promise greater speed, increased bandwidth, and greater energy efficiency than traditional electronic circuits.
    But they’re not yet small enough to compete in computing and other applications where electric circuits continue to reign.
    Electrical engineers at the University of Rochester believe they’ve taken a major step in addressing the problem. Using a material widely adopted by photonics researchers, the Rochester team has created the smallest electro-optical modulator yet. The modulator is a key component of a photonics-based chip, controlling how light moves through its circuits.
    In Nature Communications, the lab of Qiang Lin, professor of electrical and computer engineering, describes using a thin film of lithium niobate (LN) bonded on a silicon dioxide layer to create not only the smallest LN modulator yet, but also one that operates at high speed and is energy efficient.
    This “paves a crucial foundation for realizing large-scale LN photonic integrated circuits that are of immense importance for broad applications in data communication, microwave photonics, and quantum photonics,” writes lead author Mingxiao Li, a graduate student in Lin’s lab.
    Because of its outstanding electro-optic and nonlinear optic properties, lithium niobate has “become a workhorse material system for photonics research and development,” Lin says. “However, current LN photonic devices, made on either bulk crystals or thin-film platforms, require large dimensions and are difficult to scale down in size, which limits the modulation efficiency, energy consumption, and the degree of circuit integration. A major challenge lies in making high-quality nanoscopic photonic structures with high precision.”
    The modulator project builds upon the lab’s previous use of lithium niobate to create a photonic nanocavity — another key component in photonic chips. At only about a micron in size, the nanocavity can tune wavelengths using only two to three photons at room temperature — “the first time we know of that even two or three photons have been manipulated in this way at room temperatures,” Lin says. That device was described in a paper in Optica.
    The modulator could be used in conjunction with a nanocavity in creating a photonic chip at the nanoscale.
    The project was supported with funding from the National Science Foundation, Defense Threat Reduction Agency, and Defense Advanced Research Projects Agency (DARPA); fabrication of the device was done in part at the Cornell NanoScale Facility.

    Story Source:
    Materials provided by University of Rochester. Original written by Bob Marcotte. Note: Content may be edited for style and length.

  • Using math to examine the sex differences in dinosaurs

    Male lions typically have manes. Male peacocks have six-foot-long tail feathers. Female eagles and hawks can be about 30% bigger than males. But if you only had these animals’ fossils to go off of, it would be hard to confidently say that those differences were because of the animals’ sex. That’s the problem that paleontologists face: it’s hard to tell if dinosaurs with different features were separate species, different ages, males and females of the same species, or just varied in a way that had nothing to do with sex. A lot of the work trying to show differences between male and female dinosaurs has come back inconclusive. But in a new paper, scientists show how using a different kind of statistical analysis can often estimate the degree of sexual variation in a dataset of fossils.
    “It’s a whole new way of looking at fossils and judging the likelihood that the traits we see correlate with sex,” says Evan Saitta, a research associate at Chicago’s Field Museum and the lead author of the new paper in the Biological Journal of the Linnean Society. “This paper is part of a larger revolution of sorts about how to use statistics in science, but applied in the context of paleontology.”
    Unless you find a dinosaur skeleton that contains the fossilized eggs that it was about to lay, or a similar dead giveaway, it’s hard to be sure about an individual dinosaur’s sex. But many birds, the only living dinosaurs, vary a lot between males and females on average, a phenomenon called sexual dimorphism. Dinosaurs’ cousins, the crocodilians, show sexual dimorphism too. So it stands to reason that in many species of dinosaurs, males and females would differ from each other in a variety of traits.
    But not all differences in animals of the same species are linked to their sex. For example, in humans, average height is related to sex, but other traits like eye color and hair color don’t neatly map onto men versus women. We often don’t know precisely how the traits we see in dinosaurs relate to their sex, either. Since we don’t know if, say, larger dinosaurs were female, or dinosaurs with bigger crests on their heads were male, Saitta and his colleagues looked for patterns in the differences between individuals of the same species. To do that, they examined measurements from a bunch of fossils and modern species and did a lot of math.
    Other paleontologists have tried to look for sexual dimorphism in dinosaurs using a form of statistics (called significance testing, for all you stats nerds) where you collect all your data points and then calculate the probability that those results could have happened by pure chance rather than an actual cause (like how doctors determine whether a new medicine is more helpful than a placebo). This kind of analysis sometimes works for big, clean datasets. But, says Saitta, “with a lot of these dinosaur tests, our data is pretty bad” — there aren’t that many fossil specimens, or they’re incomplete or poorly preserved. Using significance testing in these cases, Saitta argues, results in a lot of false negatives: since the samples are small, it takes an extreme amount of variation between the sexes to trigger a positive test result. (Significance testing isn’t just a consideration for paleontologists — concerns over a “replication crisis” have plagued researchers in psychology and medicine, where certain studies are difficult to reproduce.)
    Instead, Saitta and his colleagues experimented with another form of stats, called effect size statistics. Effect size statistics is better for smaller datasets because it attempts to estimate the degree of sex differences and calculate the uncertainty in that estimate. This alternative statistical method takes natural variation into account without viewing dimorphism as black-or-white; many sexual dimorphisms can be subtle. Co-author Max Stockdale of the University of Bristol wrote the code to run the statistical simulations. Saitta and his colleagues uploaded measurements of dinosaur fossils to the program, and it yielded estimates of body-mass dimorphism, with error bars on those estimates, for datasets that would simply have been dismissed as inconclusive under significance testing.
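    To make the contrast concrete, here is a minimal sketch of the two approaches on a toy dataset. It is not the authors’ code or data: the “body mass” samples, the group sizes, and the choice of Cohen’s d with a bootstrap confidence interval are illustrative assumptions, and the published method goes further by handling specimens whose sex is unknown.

```python
# Toy contrast between a significance test and an effect size estimate.
# Synthetic "body mass" values, deliberately small samples -- illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(2500, 300, size=8)   # hypothetical masses (kg) of one group
group_b = rng.normal(2900, 300, size=7)   # hypothetical masses (kg) of the other

# Significance testing: a single yes/no verdict that is fragile at small n.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t-test p-value: {p_value:.3f}")

def cohens_d(x, y):
    """Standardized difference in means (one common effect size measure)."""
    pooled_var = (((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
                  / (len(x) + len(y) - 2))
    return (y.mean() - x.mean()) / np.sqrt(pooled_var)

# Effect size approach: estimate the magnitude of the difference and put
# error bars around it with a simple bootstrap.
boot = [cohens_d(rng.choice(group_a, len(group_a), replace=True),
                 rng.choice(group_b, len(group_b), replace=True))
        for _ in range(2000)]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f} (95% CI {low:.2f} to {high:.2f})")
```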
    “We showed that if you adopt this paradigm shift in statistics, where you attempt to estimate the magnitude of an effect and then put error bars around that, you can often produce a fairly accurate estimate of sexual variation even when the sexes of the individuals are unknown,” says Saitta.
    For instance, Saitta and his colleagues found that in the dinosaur Maiasaura, adult specimens vary a lot in size, and the analyses show that these are likelier to correspond to sexual variation than differences seen in other dinosaur species. But while the current data suggest that one sex was about 45% bigger than the other, they can’t tell if the bigger ones are males or females.
    While there’s a lot of work yet to be done, Saitta says he’s excited that the statistical simulations gave such consistent results despite the limits of the fossil data.
    “Sexual selection is such an important driver of evolution, and to limit ourselves to ineffective statistical approaches hurts our ability to understand the paleobiology of these animals,” he says. “We need to account for sexual variation in the fossil record.”
    “I’m happy to play a small part in this sort of statistical revolution,” he adds. “Effect size statistics has a major impact for psychological and medical research, so to apply it to dinosaurs and paleontology is really cool.”

    Story Source:
    Materials provided by Field Museum. Note: Content may be edited for style and length.

  • Thermodynamics of computation: A quest to find the cost of running a Turing machine

    Turing machines were first proposed by British mathematician Alan Turing in 1936, and are a theoretical mathematical model of what it means for a system to “be a computer.”
    At a high level, these machines are similar to real-world modern computers because they have storage for digital data and programs (somewhat like a hard drive), a little central processing unit (CPU) to perform computations, and can read programs from their storage, run them, and produce outputs. Amazingly, Turing proposed his model before real-world electronic computers existed.
    In a paper published in the American Physical Society’s Physical Review Research, Santa Fe Institute researchers Artemy Kolchinsky and David Wolpert present their work exploring the thermodynamics of computation within the context of Turing machines.
    “Our hunch was that the physics of Turing machines would show a lot of rich and novel structure because they have special properties that simpler models of computation lack, such as universality,” says Kolchinsky.
    Turing machines are widely believed to be universal, in the sense that any computation done by any system can also be done by a Turing machine.
    The quest to find the cost of running a Turing machine began with Wolpert trying to use information theory — the quantification, storage, and communication of information — to formalize how complex a given operation of a computer is. While not restricting his attention to Turing machines per se, it was clear that any results he derived would have to apply to them as well.
    During the process, Wolpert stumbled onto the field of stochastic thermodynamics. “I realized, very grudgingly, that I had to throw out the work I had done trying to reformulate nonequilibrium statistical physics, and instead adopt stochastic thermodynamics,” he says. “Once I did that, I had the tools to address my original question by rephrasing it as: In terms of stochastic thermodynamics cost functions, what’s the cost of running a Turing machine? In other words, I reformulated my question as a thermodynamics of computation calculation.”
    Thermodynamics of computation is a subfield of physics that explores what the fundamental laws of physics say about the relationship between energy and computation. It has important implications for the absolute minimum amount of energy required to perform computations.
    Wolpert and Kolchinsky’s work shows that relationships exist between energy and computation that can be stated in terms of algorithmic information (which defines information as compression length), rather than “Shannon information” (which defines information as reduction of uncertainty about the state of the computer).
    Put another way: The energy required by a computation depends on how much more compressible the output of the computation is than the input. “To stretch a Shakespeare analogy, imagine a Turing machine reads in the entire works of Shakespeare, and then outputs a single sonnet,” explains Kolchinsky. “The output has a much shorter compressed length than the input. Any physical process that carries out that computation would, relatively speaking, require a lot of energy.”
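    A rough numerical illustration of the idea (not the authors’ derivation): compression length can serve as a crude, computable stand-in for algorithmic information. The texts below are placeholders, and zlib gives only an upper bound on true Kolmogorov complexity, but the gap between compressed input and compressed output mirrors the quantity the paper ties to energy cost.

```python
# Compressed length as a crude proxy for algorithmic information.
import zlib

# Long, highly repetitive "input" vs. a short "output" -- placeholders only.
input_text = ("From fairest creatures we desire increase, "
              "That thereby beauty's rose might never die, ") * 200
output_text = "Shall I compare thee to a summer's day?"

def compressed_len(text: str) -> int:
    """Length in bytes after zlib compression."""
    return len(zlib.compress(text.encode("utf-8"), 9))

c_in, c_out = compressed_len(input_text), compressed_len(output_text)
print(f"compressed input:  {c_in} bytes")
print(f"compressed output: {c_out} bytes")
# In the paper's framing, a large drop in compressed length from input to
# output corresponds to a computation that tends to require more energy.
print(f"drop in compressed length: {c_in - c_out} bytes")
```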
    While important earlier work also proposed relationships between algorithmic information and energy, Wolpert and Kolchinsky derived these relationships using the formal tools of modern statistical physics. This allows them to analyze a broader range of scenarios and to be more precise about the conditions under which their results hold than was possible by earlier researchers.
    “Our results point to new kinds of relationships between energy and computation,” says Kolchinsky. “This broadens our understanding of the connection between contemporary physics and information, which is one of the most exciting research areas in physics.”

    Story Source:
    Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

  • U.S. political parties become extremist to get more votes

    New research shows that U.S. political parties are becoming increasingly polarized due to their quest for voters — not because voters themselves are becoming more extremist.
    The research team, which includes Northwestern University researchers, found that extremism is a strategy that has worked over the years even if voters’ views remain in the center. Voters are not looking for a perfect representative but a “satisficing,” meaning “good enough,” candidate.
    “Our assumption is not that people aren’t trying to make the perfect choice, but in the presence of uncertainty, misinformation or a lack of information, voters move toward satisficing,” said Northwestern’s Daniel Abrams, a senior author of the study.
    The study is now available online and will be published in SIAM Review’s printed edition on Sept. 1.
    Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern’s McCormick School of Engineering. Co-authors include Adilson Motter, the Morrison Professor of Physics and Astronomy in Northwestern’s Weinberg College of Arts and Sciences, and Vicky Chuqiao Yang, a postdoctoral fellow at the Santa Fe Institute and former student in Abrams’ laboratory.
    To accommodate voters’ “satisficing” behavior, the team developed a mathematical model using differential equations to understand how a rational political party would position itself to get the most votes. The tool is reactive, with the past influencing future behaviors of the parties.
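    To give a feel for this kind of model, here is a toy sketch of vote-seeking dynamics with satisficing voters. It is not the published system of differential equations: the voter distribution, the acceptance tolerance, and the discrete update rule below are assumptions made purely for illustration.

```python
# Toy sketch of vote-seeking party dynamics with "satisficing" voters.
# NOT the published model; all parameters and functional forms are assumed.
import numpy as np

rng = np.random.default_rng(1)
voters = rng.normal(0.0, 1.0, size=20_000)   # a moderate electorate centred at 0
TOL = 0.8                                     # a voter accepts any party within this distance

def vote_share(own, other):
    """Fraction of voters won by a party at position `own` against `other`."""
    ok_own = np.abs(voters - own) < TOL
    ok_other = np.abs(voters - other) < TOL
    # satisficing: a voter who finds both parties acceptable splits at random
    return np.mean(ok_own & ~ok_other) + 0.5 * np.mean(ok_own & ok_other)

def step(own, other, lr=0.05, eps=0.01):
    """Nudge `own` in whichever direction raises its vote share."""
    grad = (vote_share(own + eps, other) - vote_share(own - eps, other)) / (2 * eps)
    return own + lr * np.sign(grad)

left, right = -0.1, 0.1                       # both parties start near the centre
for _ in range(200):
    left, right = step(left, right), step(right, left)
print(f"final positions: left {left:+.2f}, right {right:+.2f}")
# The electorate never moves, yet chasing votes drives the two positions apart.
```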

    The team tested the model against 150 years of U.S. Congressional voting data and found its predictions consistent with the political parties’ historical trajectories: Congressional voting has shifted to the margins, but voters’ positions have not changed much.
    “The two major political parties have been getting more and more polarized since World War II, while historical data indicates the average American voter remains just as moderate on key issues and policies as they always have been,” Abrams said.
    The team found that polarization is instead tied to the ideological homogeneity within the constituencies of the two major parties. To differentiate themselves, the politicians of the parties move further away from the center.
    The new model helps explain why. The moves to the extremes can be interpreted as attempts by the Democratic and Republican parties to minimize the overlap of their constituencies. Test runs of the model show how staying within party lines creates a winning strategy.
    “Right now, we have one party with a lot of support from minorities and women, and another party with a lot of support from white men,” Motter said.

    Why not have both parties appeal to everyone? “Because of the perception that if you gain support from one group, it comes at the expense of the other group,” he added. “The model shows that the increased polarization is not voters’ fault. It is a way to get votes. This study shows that we don’t need to assume that voters have a hidden agenda driving polarization in Congress. There is no mastermind behind the policy. It is an emergent phenomenon.”
    The researchers caution that many other factors — political contributions, gerrymandering and party primaries — also contribute to election outcomes, which future work can examine.
    The work challenges a model introduced in the late 1950s by economist Anthony Downs, which assumes everyone votes and makes well-informed, completely rational choices, picking the candidate closest to their opinions. The Downsian model predicts that political parties over time would move closer to the center.
    However, U.S. voters’ behaviors don’t necessarily follow those patterns, and the parties’ positions have become dramatically polarized.
    “People aren’t perfectly rational, but they’re not totally irrational either,” Abrams said. “They’ll vote for the candidate that’s good enough — or not too bad — without making fine distinctions among those that meet their perhaps low bar for good enough. If we want to reduce political polarization between the parties, we need both parties to be more tolerant of the diversity within their own ranks.”

  • New neural network differentiates Middle and Late Stone Age toolkits

    Middle Stone Age (MSA) toolkits first appear some 300 thousand years ago, at the same time as the earliest fossils of Homo sapiens, and were still in use 30 thousand years ago. However, from 67 thousand years ago, changes in stone tool production indicate a marked shift in behaviour; the new toolkits that emerge are labelled Late Stone Age (LSA) and remained in use into the recent past. A growing body of evidence suggests that the transition from MSA to LSA was not a linear process, but occurred at different times in different places. Understanding this process is important for examining what drives cultural innovation and creativity, and what explains this critical behavioural change. Defining differences between the MSA and LSA is an important step towards this goal.
    “Eastern Africa is a key region to examine this major cultural change, not only because it hosts some of the youngest MSA sites and some of the oldest LSA sites, but also because the large number of well excavated and dated sites make it ideal for research using quantitative methods,” says Dr. Jimbob Blinkhorn, an archaeologist from the Pan African Evolution Research Group, Max Planck Institute for the Science of Human History and the Centre for Quaternary Research, Department of Geography, Royal Holloway. “This enabled us to pull together a substantial database of changing patterns of stone tool production and use, spanning 130 to 12 thousand years ago, to examine the MSA-LSA transition.”
    The study examines the presence or absence of 16 alternate tool types across 92 stone tool assemblages, but rather than focusing on them individually, emphasis is placed on the constellations of tool forms that frequently occur together.
    “We’ve employed an Artificial Neural Network (ANN) approach to train and test models that differentiate LSA assemblages from MSA assemblages, as well as examining chronological differences between older (130-71 thousand years ago) and younger (71-28 thousand years ago) MSA assemblages with a 94% success rate,” says Dr. Matt Grove, an archaeologist at the University of Liverpool.
    Artificial Neural Networks (ANNs) are computer models intended to mimic the salient features of information processing in the brain. Like the brain, their considerable processing power arises not from the complexity of any single unit but from the action of many simple units acting in parallel. Despite the widespread use of ANNs today, applications in archaeological research remain limited.
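    For a sense of how such a model is set up, here is a minimal sketch: a small neural network trained on presence/absence indicators for 16 tool types across 92 assemblages. The data are synthetic, and the scikit-learn classifier, layer size, and train/test split are assumptions for illustration, not the authors’ pipeline.

```python
# Sketch of classifying assemblages from presence/absence of tool types.
# Synthetic data -- not the published dataset or model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_assemblages, n_tool_types = 92, 16

# Hypothetical shortcut: the two periods favour opposite subsets of tool types.
labels = rng.integers(0, 2, size=n_assemblages)          # 0 = MSA, 1 = LSA
p_msa = np.linspace(0.8, 0.2, n_tool_types)              # presence probabilities, MSA
p_lsa = p_msa[::-1]                                      # presence probabilities, LSA
features = np.where(labels[:, None] == 0,
                    rng.random((n_assemblages, n_tool_types)) < p_msa,
                    rng.random((n_assemblages, n_tool_types)) < p_lsa).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```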
    “ANNs have sometimes been described as a ‘black box’ approach, as even when they are highly successful, it may not always be clear exactly why,” says Grove. “We employed a simulation approach that breaks open this black box to understand which inputs have a significant impact on the results. This enabled us to identify how patterns of stone tool assemblage composition vary between the MSA and LSA, and we hope this demonstrates how such methods can be used more widely in archaeological research in the future.”
    “The results of our study show that MSA and LSA assemblages can be differentiated based on the constellation of artefact types found within an assemblage alone,” Blinkhorn adds. “The combined occurrence of backed pieces, blade and bipolar technologies together with the combined absence of core tools, Levallois flake technology, point technology and scrapers robustly identifies LSA assemblages, with the opposite pattern identifying MSA assemblages. Significantly, this provides quantified support to qualitative differences noted by earlier researchers that key typological changes do occur with this cultural transition.”
    The team plans to expand the use of these methods to dig deeper into different regional trajectories of cultural change in the African Stone Age. “The approach we’ve employed offers a powerful toolkit to examine the categories we use to describe the archaeological record and to help us examine and explain cultural change amongst our ancestors,” says Blinkhorn.

  • Don't forget to clean robotic support pets, study says

    Robotic support pets used to reduce depression in older adults and people with dementia acquire bacteria over time, but a simple cleaning procedure can keep them from spreading illnesses, according to a new study published August 26, 2020, in the open-access journal PLOS ONE by Hannah Bradwell of the University of Plymouth, UK, and colleagues.
    There is a wealth of research on the use of social robots, or companion robots, in care and long-term nursing homes. “Paro the robot seal” and other robotic animals have been linked to reductions in depression, agitation, loneliness, nursing staff stress, and medication use — especially relevant during this period of pandemic-related social isolation.
    In the new study, researchers measured the microbial load found on the surface of eight different robot animals (Paro, MiRo, Pleo rb, Joy for All dog, Joy for All cat, Furby Connect, Perfect Petzzz dog, and Handmade Hedgehog) after interaction with four care home residents, and again after cleaning by a researcher or care home staff member. The animals ranged in material from fur to soft plastic to solid plastic. The cleaning process involved spraying with an anti-bacterial product, brushing any fur, and vigorous cleaning with anti-bacterial wipes.
    Most of the devices gathered enough harmful microbes during 20 minutes of standard use to have a microbial load above the acceptable threshold of 2.5 CFU/cm² (colony-forming units per square centimetre). Only the Joy for All cat and the MiRo robot remained below this level when microbes were measured after a 48-hour incubation period; microbial loads on the other six robots ranged from 2.56 to 17.28 CFU/cm². The post-cleaning microbial load, however, demonstrated that regardless of material type, previous microbial load, or who carried out the cleaning procedure, all robots could be brought to well below acceptable levels. Five of the eight robots had undetectable levels of microbes after cleaning and 48 hours of incubation, and the remaining three robots had only 0.04 to 0.08 CFU/cm² after this protocol.
    Hannah Bradwell, a researcher at the Centre for Health Technology, says: “Robot pets may be beneficial for older adults and people with dementia living in care homes, likely improving wellbeing and providing company. This benefit could be particularly relevant at present, in light of social isolation; however, our study has shown the strong requirement for considerations around infection control for these devices.”

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Scientists take new spin on quantum research

    Army researchers discovered a way to further enhance quantum systems to provide Soldiers with more reliable and secure capabilities on the battlefield.
    Specifically, this research informs how future quantum networks will be designed to deal with the effects of noise and decoherence, or the loss of information from a quantum system to its environment.
    As one of the U.S. Army’s priority research areas in its Modernization Strategy, quantum research will help transform the service into a multi-domain force by 2035 and deliver on its enduring responsibility as part of the joint force providing for the defense of the United States.
    “Quantum networking, and quantum information science as a whole, will potentially lead to unsurpassed capabilities in computation, communication and sensing,” said Dr. Brian Kirby, researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “Example applications of Army interest include secure secret sharing, distributed network sensing and efficient decision making.”
    This research effort considers how dispersion, a very common effect found in optical systems, impacts quantum states of three or more particles of light.
    Dispersion is an effect where a pulse of light spreads out in time as it is transmitted through a medium, such as an optical fiber. This effect can destroy time correlations in communication systems, which can result in reduced data rates or the introduction of errors.
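    A quick numerical sketch of that spreading (illustrative only, with made-up parameters unrelated to the experiment): applying the quadratic spectral phase that models group-delay dispersion to a short Gaussian pulse and comparing its duration before and after.

```python
# Dispersion broadening of a Gaussian pulse, simulated in the frequency domain.
import numpy as np

# Time grid in picoseconds and a transform-limited Gaussian pulse.
t = np.linspace(-50, 50, 4096)
t0 = 2.0                                  # initial pulse duration (ps), assumed
field = np.exp(-t**2 / (2 * t0**2))

# Group-delay dispersion acts as a quadratic phase in frequency: exp(i*phi2*omega^2/2).
omega = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
phi2 = 20.0                               # group-delay dispersion (ps^2), assumed
field_out = np.fft.ifft(np.fft.fft(field) * np.exp(0.5j * phi2 * omega**2))

def rms_width(x, intensity):
    """Root-mean-square width of an intensity profile."""
    w = intensity / intensity.sum()
    mean = np.sum(x * w)
    return np.sqrt(np.sum((x - mean) ** 2 * w))

print(f"pulse width before dispersion: {rms_width(t, np.abs(field) ** 2):.2f} ps")
print(f"pulse width after dispersion:  {rms_width(t, np.abs(field_out) ** 2):.2f} ps")
# The same pulse arrives spread out in time, washing out tight timing correlations.
```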

    To understand this, Kirby said, consider the situation where two light pulses are created simultaneously and the goal is to send them to two different detectors so that they arrive at the same time. If each light pulse goes through a different dispersive media, such as two different fiber optic paths, then each pulse will be spread in time, ultimately making the arrival time of the pulses less correlated.
    “Amazingly, it was shown that the situation is different in quantum mechanics,” Kirby said. “In quantum mechanics, it is possible to describe the behavior of individual particles of light, called photons. Here, it was shown by research team member Professor James Franson from the University of Maryland, Baltimore County, that quantum mechanics allows for certain situations where the dispersion on each photon can actually cancel out so that the arrival times remain correlated.”
    The key to this is something called entanglement, a strong correlation between quantum systems, which is not possible in classical physics, Kirby said.
    In this new work, Nonlocal Dispersion Cancellation for Three or More Photons, published in the peer-reviewed journal Physical Review A, the researchers extend the analysis to systems of three or more entangled photons and identify the scenarios in which quantum systems outperform classical ones. This sets it apart from similar research, as it considers the effects of noise on entangled systems beyond two qubits, which is where the primary focus has been.
    “This informs how future quantum networks will be designed to deal with the effects of noise and decoherence, in this case, dispersion specifically,” Kirby said.

    Additionally, based on the success of Franson’s initial work on systems of two-photons, it was reasonable to assume that dispersion on one part of a quantum system could always be cancelled out with the proper application of dispersion on another part of the system.
    “Our work clarifies that perfect compensation is not, in general, possible when you move to entangled systems of three or more photons,” Kirby said. “Therefore, dispersion mitigation in future quantum networks may need to take place in each communication channel independently.”
    Further, Kirby said, this work is valuable for quantum communications because it allows for increased data rates.
    “Precise timing is required to correlate detection events at different nodes of a network,” Kirby said. “Conventionally the reduction in time correlations between quantum systems due to dispersion would necessitate the use of larger timing windows between transmissions to avoid confusing sequential signals.”
    Since Kirby and his colleagues’ new work describes how to limit the uncertainty in joint detection times of networks, it will allow subsequent transmissions in quicker succession.
    The next step for this research is to determine if these results can be readily verified in an experimental setting.