More stories

  • Toward ultrafast computer chips that retain data even when there is no power

    Spintronic devices are attractive alternatives to conventional computer chips, providing digital information storage that is highly energy efficient and also relatively easy to manufacture on a large scale. However, these devices, which rely on magnetic memory, are still hindered by their relatively slow speeds, compared to conventional electronic chips.
    In a paper published in the journal Nature Electronics, an international team of researchers has reported a new technique for magnetization switching — the process used to “write” information into magnetic memory — that is nearly 100 times faster than state-of-the-art spintronic devices. The advance could lead to the development of ultrafast magnetic memory for computer chips that would retain data even when there is no power.
    In the study, the researchers report using extremely short, 6-picosecond electrical pulses to switch the magnetization of a thin film in a magnetic device with great energy efficiency. A picosecond is one-trillionth of a second.
    The research was led by Jon Gorchon, a researcher at the French National Centre for Scientific Research (CNRS) working at the University of Lorraine’s L’Institut Jean Lamour in France, in collaboration with Jeffrey Bokor, professor of electrical engineering and computer sciences at the University of California, Berkeley, and Richard Wilson, assistant professor of mechanical engineering and of materials science and engineering at UC Riverside. The project began at UC Berkeley when Gorchon and Wilson were postdoctoral researchers in Bokor’s lab.
    In conventional computer chips, the 0s and 1s of binary data are stored as the “on” or “off” states of individual silicon transistors. In magnetic memory, this same information can be stored as the opposite polarities of magnetization, which are usually thought of as the “up” or “down” states. This magnetic memory is the basis for magnetic hard drive memory, the technology used to store the vast amounts of data in the cloud.
    A key feature of magnetic memory is that the data is “non-volatile,” which means that information is retained even when there is no electrical power applied.


    “Integrating magnetic memory directly into computer chips has been a long-sought goal,” said Gorchon. “This would allow local data on-chip to be retained when the power is off, and it would enable the information to be accessed far more quickly than pulling it in from a remote disk drive.”
    The potential of magnetic devices for integration with electronics is being explored in the field of spintronics, in which tiny magnetic devices are controlled by conventional electronic circuits, all on the same chip.
    State-of-the-art spintronics is done with the so-called spin-orbit torque device. In such a device, a small area of a magnetic film (a magnetic bit) is deposited on top of a metallic wire. A current flowing through the wire leads to a flow of electrons with a magnetic moment, which is also called the spin. That, in turn, exerts a magnetic torque — called the spin-orbit torque — on the magnetic bit. The spin-orbit torque can then switch the polarity of the magnetic bit.
    State-of-the-art spin-orbit torque devices developed so far have required current pulses of at least a nanosecond, or a billionth of a second, to switch the magnetic bit, while the transistors in state-of-the-art computer chips switch in only 1 to 2 picoseconds. As a result, the speed of the overall circuit is limited by the slow magnetic switching.
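    As a rough back-of-the-envelope check on these timescales, the figures quoted in this article can be compared directly (a sketch; the 1 nanosecond value stands in for the earlier devices' minimum pulse width):

```python
# Timescales quoted in the article, in seconds
transistor_switch = 1.5e-12  # state-of-the-art transistors: 1 to 2 ps
prior_sot_pulse = 1e-9       # earlier spin-orbit torque devices: >= 1 ns
new_sot_pulse = 6e-12        # pulses used in this study: 6 ps

# Prior magnetic switching lagged transistors by roughly three orders of magnitude
print(round(prior_sot_pulse / transistor_switch))  # 667

# The 6 ps pulses recover most of that gap relative to a 1 ns pulse
print(round(prior_sot_pulse / new_sot_pulse))      # 167
```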
    In this study, the researchers launched the 6-picosecond-wide electrical current pulses along a transmission line into a cobalt-based magnetic bit. The magnetization of the cobalt bit was then demonstrated to be reliably switched by the spin-orbit torque mechanism.


    While heating by electric currents is a debilitating problem in most modern devices, the researchers note that, in this experiment, the ultrafast heating aids the magnetization reversal.
    “The magnet reacts differently to heating on long versus short time scales,” said Wilson. “When heating is this fast, only a small amount can change the magnetic properties to help reverse the magnet’s direction.”
    Indeed, preliminary energy usage estimates are incredibly promising; the energy needed in this “ultrafast” spin-orbit torque device is almost two orders of magnitude smaller than in conventional spintronic devices that operate at much longer time scales.
    “The high energy efficiency of this novel, ultrafast magnetic switching process was a big, and very welcome, surprise,” said Bokor. “Such a high-speed, low-energy spintronic device can potentially tackle the performance limitations of current processor level memory systems, and it could also be used for logic applications.”
    The experimental methods used by the researchers also offer a new way of triggering and probing spintronic phenomena at ultrafast time scales, which could help researchers better understand the underlying physics of effects like spin-orbit torque.

  • Machine learning helps hunt for COVID-19 therapies

    Michigan State University Foundation Professor Guowei Wei wasn’t preparing machine learning techniques for a global health crisis. Still, when one broke out, he and his team were ready to help.
    The group already has one machine learning model at work in the pandemic, predicting consequences of mutations to SARS-CoV-2. Now, Wei’s team has deployed another to help drug developers zero in on their most promising leads for attacking one of the virus’ most compelling targets. The researchers shared their findings in the peer-reviewed journal Chemical Science.
    Prior to the pandemic, Wei and his team were already developing machine learning computer models — specifically, models that use what’s known as deep learning — to help save drug developers time and money. The researchers “train” their deep learning models with datasets filled with information about proteins that drug developers want to target with therapeutics. The models can then make predictions about unknown quantities of interest to help guide drug design and testing.
    Over the past three years, the Spartans’ models have been among the top performers in a worldwide competition series for computer-aided drug design known as the Drug Design Data Resource, or D3R, Grand Challenge. Then COVID-19 came.
    “We knew this was going to be bad. China shut down an entire city with 10 million people,” said Wei, who is a professor in the Departments of Mathematics as well as Electrical and Computer Engineering. “We had a technique at hand, and we knew this was important.”
    Wei and his team have repurposed their deep learning models to focus on a specific SARS-CoV-2 protein called its main protease. The main protease is a cog in the coronavirus’s protein machinery that’s critical to how the pathogen makes copies of itself. Drugs that disable that cog could thus stop the virus from replicating.


    What makes the main protease an even more attractive target is that it’s distinct from all known human proteases, which isn’t always the case. Drugs that attack the viral protease are thus less likely to disrupt people’s natural biochemistry.
    Another advantage of the SARS-CoV-2 main protease is that it’s nearly identical to that of the coronavirus responsible for the 2003 SARS outbreak. This means that drug developers and Wei’s team weren’t starting completely from scratch. They had information about the structure of the main protease and about chemical compounds, called protease inhibitors, that interfere with the protein’s function.
    Still, gaps remained in understanding where those protease inhibitors latch onto the viral protein and how tightly. That’s where the Spartans’ deep learning models came in.
    Wei’s team used its models to predict those details for over 100 known protease inhibitors. That data also let the team rank those inhibitors and highlight the most promising ones, which can be very valuable information for labs and companies developing new drugs, Wei said.
    “In the early days of a drug discovery campaign, you might have 1,000 candidates,” Wei said. Typically, all those candidates would move to preclinical tests in animals, then maybe the most promising 10 or so can safely advance to clinical trials in humans, Wei explained.


    By focusing on drugs that are most attracted to the protease’s most vulnerable spots, drug developers can whittle down that list of 1,000 from the start, saving money and months, if not years, Wei said.
    “This is a way to help drug developers prioritize. They don’t have to waste resources to check every single candidate,” he said.
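    The kind of prioritization Wei describes can be sketched in a few lines. This is an illustrative toy, not the team’s deep learning model; the candidate names and predicted binding free energies below are invented:

```python
# Hypothetical candidates with predicted binding free energies (kcal/mol);
# more negative means tighter predicted binding to the main protease.
candidates = {
    "inhibitor_A": -9.1,
    "inhibitor_B": -6.3,
    "inhibitor_C": -8.7,
    "inhibitor_D": -5.2,
}

# Rank by predicted affinity and keep only the top candidates for follow-up,
# so experimental resources go to the most promising leads first.
ranked = sorted(candidates, key=candidates.get)
shortlist = ranked[:2]
print(shortlist)  # ['inhibitor_A', 'inhibitor_C']
```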
    But Wei also offered a reminder: the team’s models do not replace the need for experimental validation or preclinical and clinical trials. Drug developers still need to prove their products are safe before providing them to patients, which can take many years.
    For that reason, Wei said, antibody treatments that resemble what immune systems produce naturally to fight the coronavirus will most likely be the first therapies approved during the pandemic. These antibodies, however, target the virus’s spike protein rather than its main protease. Developing protease inhibitors would thus provide a welcome addition to the arsenal against a deadly and constantly evolving enemy.
    “If developers want to design a new set of drugs, we’ve shown basically what they need to do,” Wei said.

  • Artificial intelligence-based algorithm for the early diagnosis of Alzheimer's

    Alzheimer’s disease (AD) is a neurodegenerative disorder that affects a significant proportion of the older population worldwide. It causes irreparable damage to the brain and severely impairs the quality of life in patients. Unfortunately, AD cannot be cured, but early detection can allow medication to manage symptoms and slow the progression of the disease.
    Functional magnetic resonance imaging (fMRI) is a noninvasive diagnostic technique for brain disorders. It measures minute changes in blood oxygen levels within the brain over time, giving insight into the local activity of neurons. Despite its advantages, fMRI has not been used widely in clinical diagnosis. The reason is twofold. First, the changes in fMRI signals are so small that they are overly susceptible to noise, which can throw off the results. Second, fMRI data are complex to analyze. This is where deep-learning algorithms come into the picture.
    In a recent study published in the Journal of Medical Imaging, scientists from Texas Tech University employed machine-learning algorithms to classify fMRI data. They developed a type of deep-learning algorithm known as a convolutional neural network (CNN) that can differentiate among the fMRI signals of healthy people, people with mild cognitive impairment, and people with AD.
    CNNs can autonomously extract features from input data that are hidden from human observers. They obtain these features through training, for which a large amount of pre-classified data is needed. CNNs are predominantly used for 2D image classification, which means that four-dimensional fMRI data (three spatial dimensions plus time) present a challenge: fMRI data are incompatible with most existing CNN designs.
    To overcome this problem, the researchers developed a CNN architecture that can appropriately handle fMRI data with minimal pre-processing steps. The first two layers of the network focus on extracting features from the data solely based on temporal changes, without regard for 3D structural properties. Then, the three subsequent layers extract spatial features at different scales from the previously extracted time features. This yields a set of spatiotemporal characteristics that the final layers use to classify the input fMRI data as coming from a healthy subject, a subject with early or late mild cognitive impairment, or one with AD.
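    A shape-only sketch makes this temporal-then-spatial design concrete. The volume dimensions and kernel sizes below are hypothetical placeholders, not values from the paper:

```python
def conv_shape(size, kernel, stride=1):
    """Output length of a valid (unpadded) convolution along one axis."""
    return (size - kernel) // stride + 1

# Hypothetical fMRI volume: three spatial dimensions plus time
x, y, z, t = 32, 32, 24, 140

# Layers 1-2: convolve along the time axis only; spatial dims are untouched
t = conv_shape(t, kernel=5)
t = conv_shape(t, kernel=5)

# Layers 3-5: extract spatial features at different scales from the time features
for k in (5, 3, 3):
    x, y, z = [conv_shape(d, k) for d in (x, y, z)]

print((x, y, z, t))  # (24, 24, 16, 132): spatiotemporal features for the classifier
```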
    This strategy offers many advantages over previous attempts to combine machine learning with fMRI for AD diagnosis. Harshit Parmar, doctoral student at Texas Tech University and lead author of the study, explains that the most important aspect of their work lies in the qualities of their CNN architecture. The new design is simple yet effective for handling complex fMRI data, which can be fed as input to the CNN without any significant manipulation or modification of the data structure. In turn, this reduces the computational resources needed and allows the algorithm to make predictions faster.
    Can deep learning methods improve the field of AD detection and diagnosis? Parmar thinks so. “Deep learning CNNs could be used to extract functional biomarkers related to AD, which could be helpful in the early detection of AD-related dementia,” he explains.
    The researchers trained and tested their CNN with fMRI data from a public database, and the initial results were promising: the classification accuracy of their algorithm was as high as or higher than that of other methods.
    If these results hold up for larger datasets, their clinical implications could be tremendous. “Alzheimer’s has no cure yet. Although brain damage cannot be reversed, the progression of the disease can be reduced and controlled with medication,” according to the authors. “Our classifier can accurately identify the mild cognitive impairment stages which provide an early warning before progression into AD.”

  • How computer scientists and marketers can create a better CX with AI

    Researchers from Erasmus University, The Ohio State University, York University, and London Business School published a new paper in the Journal of Marketing that examines the tension between AI’s benefits and costs and then offers recommendations to guide managers and scholars investigating these challenges.
    The study, forthcoming in the Journal of Marketing, is titled “Consumers and Artificial Intelligence: An Experiential Perspective” and is authored by Stefano Puntoni, Rebecca Walker Reczek, Markus Giesler, and Simona Botti.
    Not long ago, artificial intelligence (AI) was the stuff of science fiction. Now it is changing how consumers eat, sleep, work, play, and even date. Consumers can interact with AI throughout the day, from Fitbit’s fitness tracker and Alibaba’s Tmall Genie smart speaker to Google Photo’s editing suggestions and Spotify’s music playlists. Given the growing ubiquity of AI in consumers’ lives, marketers operate in organizations with a culture increasingly shaped by computer science. Software developers’ objective of creating technical excellence, however, may not naturally align with marketers’ objective of creating valued consumer experiences. For example, computer scientists often characterize algorithms as neutral tools evaluated on efficiency and accuracy, an approach that may overlook the social and individual complexities of the contexts in which AI is increasingly deployed. Thus, whereas AI can improve consumers’ lives in very concrete and relevant ways, a failure to incorporate behavioral insight into technological developments may undermine consumers’ experiences with AI.
    This article seeks to bridge these two perspectives. On one hand, the researchers acknowledge the benefits that AI can provide to consumers. On the other hand, they build on and integrate sociological and psychological scholarship to examine the costs consumers can experience in their interactions with AI. As Puntoni explains, “A key problem with optimistic celebrations that view AI’s alleged accuracy and efficiency as automatic promoters of democracy and human inclusion is their tendency to efface intersectional complexities.”
    The article begins by presenting a framework that conceptualizes AI as an ecosystem with four capabilities: data capture, classification, delegation, and social. It focuses on the consumer experience of these capabilities, including the tensions felt. Reczek adds, “To articulate a customer-centric view of AI, we move attention away from the technology toward how the AI capabilities are experienced by consumers. Consumer experience relates to the interactions between the consumer and the company during the customer journey and encompasses multiple dimensions: emotional, cognitive, behavioral, sensorial, and social.”
    The researchers then discuss the experience of these tensions at a macro level, by exposing relevant and often explosive narratives in the sociological context, and at the micro level, by illustrating them with real-life examples grounded in relevant psychological literature. Using these insights, the researchers provide marketers with recommendations regarding how to learn about and manage the tensions. Paralleling the joint emphasis on social and individual responses, they outline both the organizational learning in which firms should engage to lead the deployment of consumer AI and concrete steps to design improved consumer AI experiences. The article closes with a research agenda that cuts across the four consumer experiences and ideas for how researchers might contribute new knowledge on this important topic.

    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  • The sweet spot of flagellar assembly

    To build the machinery that enables bacteria to swim, over 50 proteins have to be assembled in a logical and well-defined order to form the flagellum, the cellular equivalent of a boat’s outboard motor. To be functional, the flagellum is assembled piece by piece, ending with the helix called the flagellar filament, which is composed of six different subunits called flagellins. Microbiologists from the University of Geneva (UNIGE) have demonstrated that adding sugar to the flagellins is crucial for the flagellum’s assembly and functionality. This glycosylation is carried out by FlmG, a newly discovered enzyme whose role was hitherto unknown. Based on this observation, which you can read all about in the journal eLife, the researchers followed up with a second discovery, published in Developmental Cell: among the six flagellins of Caulobacter crescentus, the model bacterium in the two studies, one plays a special signalling role, triggering the final assembly of the flagellum.
    The flagellum is the locomotive engine of bacteria. Thanks to the flagellum, bacteria can swim towards food, whether in Lake Geneva (Léman) or inside a host during an infection. The flagellum, which in its complexity resembles an outboard motor, is made up of a basic structure, a rotary motor and a helical propeller. It is synthesized inside the bacteria, in their cytosol. “The 50 proteins must be produced sequentially and assembled in the right order,” begins Patrick Viollier, a researcher in UNIGE’s Department of Microbiology and Molecular Medicine. At the same time, the flagellum must be embedded within the bacterial envelope, which contains up to three cell layers, before ending up on the outside. While the flagellar subunits are known, many of the subtleties of flagellar assembly control and targeting are still poorly understood.
    Sweet surprise
    The UNIGE microbiologists studied the bacterium Caulobacter crescentus. “These bacteria are very interesting for studying flagella since they produce two different daughter cells: one has a flagellum and the other doesn’t. They’re ideal for understanding what is needed for building a flagellum,” explains Nicolas Kint, co-author of the study. Another peculiarity is that the flagellar filament of this bacterium is an assembly of six flagellin sub-units, meaning it isn’t the result of the polymerisation of a single protein, as is the case for many other bacteria. “When analysing these six flagellins, we discovered they were decorated with sugars, indicating that a glycosylation step — an enzymatic reaction adding sugars to proteins — was taking place and was needed for assembly. It was a surprising discovery, since this reaction is not very common and not well understood in bacteria,” continues Professor Viollier.
    Viollier’s research team succeeded in demonstrating that the glycosylation of the six flagellins that make up the filament is essential for the formation and functionality of the flagellum. “To demonstrate this, we first identified the gene that produces the glycosylation enzyme, FlmG. When it’s absent, it results in bacteria without flagellum. Secondly, we genetically modified another type of bacterium, Escherichia coli, to express one of the six flagellins, the glycosylation enzyme and sugar producing enzymes from Caulobacter crescentus. All these elements are required to obtain a modified flagellin,” adds Nicolas Kint.
    A versatile black sheep
    “The different elements of the flagellum are produced one after the other: the molecules of the base first, then those of the rotor and finally the propeller. The scientific literature indicates that this sequential process is important. However, we don’t know how the order of manufacturing the sub-units is controlled.” The researcher and his team focused on the synthesis of the six flagellins, discovering a black sheep among them: a sub-unit that has only 50% sequence homology with the other five. “This sub-unit serves as a checkpoint protein, a repressive molecular traffic cop restraining the synthesis of the other flagellin proteins,” says Professor Viollier. It is present before the synthesis of the other five sub-units, and it acts as a negative regulator. As long as it is present in the cytosol, the synthesis of the other sub-units is prevented. Once the elements of the flagellum, apart from the filament, are assembled, the cop is exported to the membrane and thus removed. The synthesis of the last five sub-units can then begin. “It is a sensor for protein synthesis and a component of the flagellar filament at the same time: a dual function that is one of a kind,” says the microbiologist with great enthusiasm.
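    The checkpoint behaviour described above amounts to a simple rule: the five filament flagellins are synthesized only once the rest of the flagellum is assembled and the regulator has left the cytosol. A toy sketch of that logic (the stage names are illustrative, not biological nomenclature):

```python
def flagellin_synthesis_allowed(stage, regulator_in_cytosol):
    """The five filament flagellins are made only after the base and rotor
    are complete and the checkpoint flagellin has been exported."""
    return stage == "base_and_rotor_complete" and not regulator_in_cytosol

print(flagellin_synthesis_allowed("base_only", True))                 # False
print(flagellin_synthesis_allowed("base_and_rotor_complete", True))   # False: cop still present
print(flagellin_synthesis_allowed("base_and_rotor_complete", False))  # True: cop exported
```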
    This discovery is fundamental for understanding the motility of bacteria and the assembly of proteins. “It also provides clues for understanding the synthesis and assembly of tubulin, an essential part of the cytoskeleton,” concludes Professor Viollier.

    Story Source:
    Materials provided by Université de Genève. Note: Content may be edited for style and length.

  • Scientists map structure of potent antibody against coronavirus

    Scientists at Fred Hutchinson Cancer Research Center in Seattle have shown that a potent antibody from a COVID-19 survivor interferes with a key feature on the surface of the coronavirus’s distinctive spikes and induces critical pieces of those spikes to break off in the process.
    The antibody — a tiny, Y-shaped protein that is one of the body’s premier weapons against pathogens including viruses — was isolated by the Fred Hutch team from a blood sample received from a Washington state patient in the early days of the pandemic.
    The team led by Drs. Leo Stamatatos, Andrew McGuire and Marie Pancera previously reported that, among dozens of different antibodies generated naturally by the patient, this one — dubbed CV30 — was 530 times more potent than any of its competitors.
    Using tools derived from high-energy physics, Hutch structural biologist Pancera and her postdoctoral fellow Dr. Nicholas Hurlburt have now mapped the molecular structure of CV30. They and their colleagues published their results online today in the journal Nature Communications.
    The product of their research is a set of computer-generated 3D images that look to the untrained eye like an unruly mass of noodles. But to scientists they show the precise shapes of the proteins comprising critical surface structures of antibodies, the coronavirus spike and the spike’s binding site on human cells. The models depict how these structures can fit together like pieces of a 3D puzzle.
    “Our study shows that this antibody neutralizes the virus with two mechanisms. One is that it overlaps the virus’s target site on human cells, the other is that it induces shedding or dissociation of part of the spike from the rest,” Pancera said.


    On the tip of each of the antibody’s floppy, Y-shaped arms sits an infinitesimally small patch of molecules. This patch can neatly stretch across a spot on the coronavirus spike, a site that otherwise works like a grappling hook to grab onto a docking site on human cells.
    The target for those hooks is the ACE2 receptor, a protein found on the surfaces of cells that line human lung tissues and blood vessels. But if CV30 antibodies cover those hooks, the coronavirus cannot dock easily with the ACE2 receptor. Its ability to infect cells is blunted.
    This very effective antibody not only jams the business end of the coronavirus spike; it apparently also causes a section of that spike, known as S1, to shear off. Hutch researcher McGuire and his laboratory team performed an experiment showing that, in the presence of this antibody, antibody binding diminishes over time, suggesting the S1 section is shed from the spike surface.
    The S1 protein plays a crucial role in helping the coronavirus to enter cells. Research indicates that after the spike makes initial contact with the ACE2 receptor, the S1 protein swings like a gate to help the virus fuse with the captured cell surface and slip inside. Once within a cell, the virus hijacks components of the cell’s gene- and protein-making machinery to make multiple copies of itself that are ultimately released to infect other target cells.
    The incredibly small size of antibodies is difficult to comprehend. These proteins are so small they would appear to swarm like mosquitos around a virus whose structure can only be seen using the most powerful of microscopes. The tiny molecular features on the tips of the antibody protein that Pancera’s team focused on are measured in nanometers, or billionths of a meter.


    Yet structural biologists equipped with the right tools can now build accurate 3D images of these proteins, deduce how parts of these structures fit like puzzle pieces, and even animate their interactions.
    Key to building models of these nanoscale proteins is the use of X-ray crystallography. Structural biologists determine the shapes of proteins by illuminating frozen, crystallized samples of these molecules with extremely powerful X-rays. The most powerful X-rays come from a gigantic instrument known as a synchrotron light source. Born from atom-smashing experiments dating back to the 1930s, a synchrotron is a ring of massively powerful magnets used to accelerate a stream of electrons around a circular track at close to the speed of light. Synchrotrons are so costly that only governments can build and operate them. There are only 40 of them in the world.
    Pancera’s work used the Advanced Photon Source, a synchrotron at Argonne National Laboratory near Chicago, which is run by the University of Chicago and the U.S. Department of Energy. Argonne’s ring is 1,200 feet in diameter and sits on an 80-acre site.
    As the electrons whiz around the synchrotron ring, they give off enormously powerful X-rays — far brighter than the sun but delivered in flashes of beams smaller than a pinpoint.
    Structural biologists from around the world rely on these brilliant X-ray beamlines to illuminate frozen crystals of proteins, which reveal their structure in the way the bright beams are bent as they pass through the molecules. It takes powerful computers to translate the data from these synchrotron experiments into the images of proteins that are eventually completed by structural biologists.
    The Fred Hutch team’s work on CV30 builds on that of other structural biologists who are studying a growing family of potent neutralizing antibodies against the coronavirus. The goal of most coronavirus vaccine candidates is to stimulate and train the immune system to make similar neutralizing antibodies, which can recognize the virus as an invader and stop COVID-19 infections before they can take hold.
    Neutralizing antibodies from the blood of recovered COVID-19 patients may also be infused into infected patients — an experimental approach known as convalescent plasma therapy. The donated plasma contains a wide variety of different antibodies of varying potency. Although once thought promising, recent studies have cast doubt on its effectiveness.
    However, pharmaceutical companies are experimenting with combinations of potent neutralizing antibodies that can be grown in a laboratory. These “monoclonal antibody cocktails” can be produced at industrial scale for delivery by infusion to infected patients or given as prophylactic drugs to prevent infection. After coming down with COVID-19, President Trump received an experimental monoclonal antibody drug being tested in clinical trials by the biotech company Regeneron, and he attributes his apparently quick recovery to the advanced medical treatment he received.
    The Fred Hutch research team holds out hope that the protein they discovered, CV30, may prove to be useful in the prevention or treatment of COVID-19. To find out, this antibody, along with other candidate proteins the team is studying, needs to be tested preclinically and then in human trials.
    “It is too early to tell how good they might be,” Pancera said.
    This work was supported by donations to the Fred Hutch COVID-19 Research Fund.

  • Random effects key to containing epidemics

    To control an epidemic, authorities will often impose varying degrees of lockdown. In a paper in the journal Chaos, from AIP Publishing, scientists use mathematics and computer simulations to show why dividing a large population into multiple subpopulations that do not intermix can help contain outbreaks without imposing contact restrictions within those local communities.
    “The key idea is that, at low infection numbers, fluctuations can alter the course of the epidemics significantly, even if you expect an exponential increase in infection numbers on average,” said author Ramin Golestanian.
    When infection numbers are high, random effects can be ignored. But subdividing a population can create communities so small that the random effects matter.
    “When a large population is divided into smaller communities, these random effects completely change the dynamics of the full population. Randomness causes peak infection numbers to be brought way down,” said author Philip Bittihn.
    To tease out the way randomness affects an epidemic, the investigators first considered a so-called deterministic model without random events. For this test, they assumed that individuals in each subpopulation encounter others at the same rate they would have in the large population. Even though subpopulations are not allowed to intermix, the same dynamics are observed in the subdivided population as in the initial large population.
    If, however, random effects are included in the model, dramatic changes ensue, even though the contact rate in the subpopulations is the same as in the full one.
    The investigators studied a population of 8 million individuals, 500 of them initially infected, using an infectious contact rate observed for COVID-19 under mild social distancing measures. With these parameters, the disease spreads exponentially, with infections doubling every 12 days.
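    A 12-day doubling time pins down the early exponential growth rate: in a simple SIR picture the infection count grows as exp((beta - gamma) t), so beta - gamma = ln 2 / 12 per day. The recovery rate gamma = 0.1 per day below is an illustrative assumption, not a figure from the paper:

    ```python
    import math

    gamma = 0.1                           # assumed recovery rate, per day
    beta = gamma + math.log(2) / 12.0     # contact rate giving a 12-day doubling time
    growth_rate = beta - gamma            # early exponential growth rate, per day
    doubling_time = math.log(2) / growth_rate
    print(f"beta = {beta:.4f}/day, doubling time = {doubling_time:.1f} days")
    ```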
    “If this population is allowed to mix homogeneously, the dynamics will evolve according to the deterministic prediction with a peak around 5% infected individuals,” said Bittihn.
    However, if the population is split into 100 subpopulations of 80,000 people each, the peak percentage of infected individuals drops to 3%. If the community is split up even further to 500 subgroups of 16,000 each, the infection peaks at only 1% of the initial population.
    The main reason subdividing the population works is because the epidemic is completely extinguished in a significant fraction of the subgroups. This “extinction effect” occurs when infection chains spontaneously terminate.
    Another way subdividing works is by desynchronizing the full population. Even if outbreaks occur in the smaller communities, their peaks may come at different times, so they do not add up to one large peak.
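    Both mechanisms, local extinction and desynchronization, show up in a minimal stochastic SIR sketch of the subdivision experiment. This is not the authors' actual model: the population is scaled down, and the rates `beta` and `gamma` are illustrative assumptions chosen to give roughly the 12-day doubling time quoted above:

    ```python
    import numpy as np

    def sir_stochastic(n, i0, beta=0.158, gamma=0.1, days=800, rng=None):
        """Discrete-time stochastic SIR for one well-mixed population.
        Returns the daily number of infected individuals."""
        rng = rng if rng is not None else np.random.default_rng(0)
        s, i = n - i0, i0
        history = np.zeros(days, dtype=int)
        for d in range(days):
            p_inf = 1.0 - np.exp(-beta * i / n)   # per-susceptible infection probability
            new_inf = rng.binomial(s, p_inf)
            new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
            s -= new_inf
            i += new_inf - new_rec
            history[d] = i
        return history

    def peak_fraction(n_total, n_groups, i0_total, seed=1):
        """Distribute i0_total initial infections at random over n_groups
        isolated subpopulations and return the peak infected fraction of
        the whole population (daily curves are summed across groups)."""
        rng = np.random.default_rng(seed)
        size = n_total // n_groups
        seeds = np.bincount(rng.integers(0, n_groups, size=i0_total),
                            minlength=n_groups)
        total = np.zeros(800, dtype=int)
        for i0 in seeds:
            total += sir_stochastic(size, int(i0), rng=rng)
        return total.max() / n_total

    peak_full = peak_fraction(200_000, 1, 20)    # one homogeneous population
    peak_sub = peak_fraction(200_000, 100, 20)   # 100 isolated communities
    print(f"peak, homogeneous: {peak_full:.3f}; peak, subdivided: {peak_sub:.3f}")
    ```

    The subdivided run peaks far lower because many subpopulations receive no seed infection at all, several seeded infection chains go extinct on their own, and the surviving local outbreaks peak at different times.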
    “In reality, subpopulations cannot be perfectly isolated, so local extinction might only be temporary,” Golestanian said. “Further study is ongoing to take this and suitable countermeasures into account.”

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • in

    Theoreticians show which quantum systems are suitable for quantum simulations

    A joint research group led by Prof. Jens Eisert of Freie Universität Berlin and Helmholtz-Zentrum Berlin (HZB) has shown a way to simulate the quantum physical properties of complex solid-state systems. This is done with the help of simpler quantum systems that can be created, controlled and studied experimentally. The study was published in the journal Proceedings of the National Academy of Sciences (PNAS).
    “The real goal is a robust quantum computer that generates stable results even when errors occur and corrects these errors,” explains Jens Eisert, professor at Freie Universität Berlin and head of a joint research group at HZB. So far, the development of robust quantum computers is still a long way off, because quantum bits react extremely sensitively to the smallest fluctuations in environmental parameters.
    But now a new approach could promise success: two postdocs from Jens Eisert's group, Maria Laura Baez and Marek Gluza, have taken up an idea of Richard Feynman, the brilliant American physicist of the post-war period. Feynman proposed using real systems of atoms, with their quantum physical properties, to simulate other quantum systems. These quantum systems can consist of atoms strung together like pearls on a string with special spin properties, but they could also be ion traps, Rydberg atoms, superconducting qubits, or atoms in optical lattices. What they have in common is that they can be created and controlled in the laboratory. Their quantum physical properties could be used to predict the behaviour of other quantum systems. But which quantum systems would be good candidates? Is there a way to find out in advance?
    Eisert’s team has now investigated this question using a combination of mathematical and numerical methods. In fact, the group showed that the so-called dynamic structure factor of such systems is a possible tool for making statements about other quantum systems. This factor indirectly maps how spins or other quantum quantities behave over time; it is calculated via a Fourier transformation.
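    As a concrete illustration of that last point (a toy sketch, not the paper's methodology): for a small Heisenberg spin chain one can compute the ground-state correlation ⟨S^z_j(t) S^z_0(0)⟩ by exact diagonalization and obtain a dynamic structure factor S(k, ω) as its Fourier transform over space and time. The chain length, coupling, and time grid below are arbitrary choices:

    ```python
    import numpy as np
    from functools import reduce

    sz = np.array([[0.5, 0.0], [0.0, -0.5]])   # spin-1/2 S^z operator
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # raising operator S^+
    I2 = np.eye(2)

    def site_op(op, j, n):
        """Embed a single-site operator at site j of an n-site chain."""
        ops = [I2] * n
        ops[j] = op
        return reduce(np.kron, ops)

    def heisenberg(n, J=1.0):
        """Nearest-neighbour Heisenberg chain with open boundaries."""
        H = np.zeros((2**n, 2**n))
        for j in range(n - 1):
            H += J * site_op(sz, j, n) @ site_op(sz, j + 1, n)
            H += 0.5 * J * (site_op(sp, j, n) @ site_op(sp.T, j + 1, n)
                            + site_op(sp.T, j, n) @ site_op(sp, j + 1, n))
        return H

    n = 6
    evals, evecs = np.linalg.eigh(heisenberg(n))
    gs = evecs[:, 0]                            # ground state

    # C_j(t) = <gs| S^z_j(t) S^z_0(0) |gs>, expanded in the energy eigenbasis:
    # C_j(t) = sum_m conj(<m|S^z_j|gs>) * exp(i (E_0 - E_m) t) * <m|S^z_0|gs>
    times = np.linspace(0.0, 40.0, 256)
    b = evecs.conj().T @ (site_op(sz, 0, n) @ gs)
    phases = np.exp(1j * np.outer(evals[0] - evals, times))
    C = np.zeros((n, times.size), dtype=complex)
    for j in range(n):
        a = evecs.conj().T @ (site_op(sz, j, n) @ gs)
        C[j] = (a.conj() * b) @ phases

    # Dynamic structure factor: discrete Fourier transform over sites and times
    S_kw = np.fft.fft2(C)
    ```

    At j = 0, t = 0 the correlation reduces to ⟨(S^z_0)²⟩ = 1/4, which is a handy sanity check on the construction.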
    “This work builds a bridge between two worlds,” explains Jens Eisert. “On the one hand, there is the condensed-matter community, which studies quantum systems and gains new insights from them; on the other hand, there is quantum information science, which deals with quantum information. We believe that great progress will be possible if we bring the two worlds together,” says the scientist.

    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.