More stories

  • Rewards only promote cooperation if the other person also learns about them

    Researchers at the Max Planck Institute in Plön show that reputation plays a key role in determining which rewarding policies people adopt. Using game theory, they explain why individuals learn to use rewards to specifically promote good behaviour.
    Often, we use positive incentives like rewards to promote cooperative behaviour. But why do we predominantly reward cooperation? Why is defection rarely rewarded? Or more generally, why do we bother to engage in any form of rewarding in the first place? Theoretical work done by researchers Saptarshi Pal and Christian Hilbe at the Max Planck Research Group ‘Dynamics of Social Behaviour’ suggests that reputation effects can explain why individuals learn to reward socially.
    With tools from evolutionary game theory, the researchers construct a model where individuals in a population (the players) can adopt different strategies of cooperation and rewarding over time. In this model, the players’ reputation is a key element. The players know, with a degree of certainty (characterized by the information transmissibility of the population), how their interaction partners are going to react to their behaviour (that is, which behaviours they deem worthy of rewards). If the information transmissibility is sufficiently high, players learn to reward cooperation. In contrast, without sufficient information about peers, players refrain from using rewards. The researchers show that these effects of reputation also play out in a similar way when individuals interact in groups with more than two individuals.
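    The published model is not reproduced here, but the toy simulation below illustrates the ingredients described above: strategies that combine a cooperation rule with a rewarding policy, partners whose policy is only known with a certain probability (the information transmissibility), and imitation-based strategy updating. The payoff values, strategy space, and update rule are illustrative assumptions, not the specification used by Pal and Hilbe; sweeping TRANSMISSIBILITY from low to high is the experiment the article's argument speaks to.
```python
# Toy evolutionary simulation of cooperation with rewards and reputation.
# Illustrative sketch only: all parameter values and modelling choices below
# are assumptions, not the published model.
import itertools
import math
import random

BENEFIT, COST = 3.0, 1.0           # donation game: benefit to recipient, cost to cooperator
REWARD, REWARD_COST = 1.5, 0.5     # value of a reward and the cost of providing it
TRANSMISSIBILITY = 0.9             # probability of knowing the partner's rewarding policy
SELECTION = 1.0                    # strength of selection in the imitation update

# A strategy: (cooperate if the partner is known to reward cooperation,
#              cooperate otherwise, reward cooperators, reward defectors)
STRATEGIES = list(itertools.product([True, False], repeat=4))

def play(s1, s2):
    """One donation game plus a rewarding stage; returns the two players' payoffs."""
    p1 = p2 = 0.0
    knows1 = random.random() < TRANSMISSIBILITY   # does player 1 know player 2's policy?
    knows2 = random.random() < TRANSMISSIBILITY
    c1 = s1[0] if (knows1 and s2[2]) else s1[1]   # cooperate or not
    c2 = s2[0] if (knows2 and s1[2]) else s2[1]
    if c1:
        p1 -= COST
        p2 += BENEFIT
    if c2:
        p2 -= COST
        p1 += BENEFIT
    if (c2 and s1[2]) or (not c2 and s1[3]):      # player 1 applies its rewarding policy
        p1 -= REWARD_COST
        p2 += REWARD
    if (c1 and s2[2]) or (not c1 and s2[3]):      # player 2 applies its rewarding policy
        p2 -= REWARD_COST
        p1 += REWARD
    return p1, p2

def mean_payoff(strategy, population, rounds=100):
    """Average payoff of a strategy against random members of the population."""
    return sum(play(strategy, random.choice(population))[0] for _ in range(rounds)) / rounds

def evolve(pop_size=50, steps=2000):
    """Pairwise imitation: players copy better-earning peers with a payoff-dependent probability."""
    population = [random.choice(STRATEGIES) for _ in range(pop_size)]
    for _ in range(steps):
        i, j = random.sample(range(pop_size), 2)
        gap = mean_payoff(population[j], population) - mean_payoff(population[i], population)
        if random.random() < 1.0 / (1.0 + math.exp(-SELECTION * gap)):
            population[i] = population[j]
    return population

if __name__ == "__main__":
    final = evolve()
    print("cooperate when a reward is expected:", sum(s[0] for s in final) / len(final))
    print("reward cooperation:", sum(s[2] for s in final) / len(final))
```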
    Antisocial rewarding
    In addition to highlighting the role of reputation in catalyzing cooperation and social rewarding, the scientists identify a couple of scenarios where antisocial rewarding may evolve. Antisocial rewarding requires either that populations be assorted or that rewards be mutually beneficial for both the recipient and the provider of the reward. “These conditions under which people may learn to reward defection are, however, a bit restrictive, since they additionally require information to be scarce,” adds Saptarshi Pal.
    The results from this study suggest that rewards are only effective in promoting cooperation when they can sway individuals to act opportunistically. These opportunistic players only cooperate when they anticipate a reward for their cooperation. A higher information transmissibility increases both the incentive to reward others for cooperating and the incentive to cooperate in the first place. Overall, the model suggests that when people reward cooperation in an environment where information transmissibility is high, they ultimately benefit themselves. This interpretation takes the altruism out of social rewarding: people may not use rewards to enhance others’ welfare, but to help themselves.
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • New form of universal quantum computers

    The computing power of quantum machines is currently still very low, and increasing it remains a major challenge. Physicists now present a new architecture for a universal quantum computer that overcomes these limitations and could form the basis of the next generation of quantum computers.
    Quantum bits (qubits) in a quantum computer serve as computing units and memory at the same time. Because quantum information cannot be copied, it cannot be stored in a memory as in a classical computer. Due to this limitation, all qubits in a quantum computer must be able to interact with each other, which is currently still a major challenge for building powerful quantum computers. In 2015, theoretical physicist Wolfgang Lechner, together with Philipp Hauke and Peter Zoller, addressed this difficulty and proposed a new architecture for a quantum computer, now named the LHZ architecture after the authors.
    “This architecture was originally designed for optimization problems,” recalls Wolfgang Lechner of the Department of Theoretical Physics at the University of Innsbruck, Austria. “In the process, we reduced the architecture to a minimum in order to solve these optimization problems as efficiently as possible.” The physical qubits in this architecture do not represent individual bits but encode the relative coordination between the bits. “This means that not all qubits have to interact with each other anymore,” explains Wolfgang Lechner. With his team, he has now shown that this parity concept is also suitable for a universal quantum computer.
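    A rough classical illustration of this encoding idea is sketched below: each “physical” bit records whether a pair of logical bits agree, so pairwise relations become locally readable. This is a simplification for intuition only; the actual LHZ architecture works with qubits and enforces consistency through local plaquette constraints, which are omitted here.
```python
# Sketch of the parity-encoding idea: store the relative alignment of pairs of
# logical bits instead of the bits themselves (constraint checks omitted).
from itertools import combinations

def parity_encode(logical_bits):
    """Map N logical bits to N*(N-1)//2 parity bits, one per pair.

    Parity bit (i, j) is 0 if bits i and j agree and 1 if they differ,
    i.e. it encodes the relative coordination between the two bits.
    """
    return {(i, j): logical_bits[i] ^ logical_bits[j]
            for i, j in combinations(range(len(logical_bits)), 2)}

logical = [0, 1, 1, 0]
physical = parity_encode(logical)
print(physical)

# Flipping one logical bit changes every parity bit that involves it, but any
# single pairwise relation can now be read off one "physical" bit on its own.
flipped = parity_encode([1, 1, 1, 0])
print("parity bits affected by flipping logical bit 0:",
      [pair for pair in physical if physical[pair] != flipped[pair]])
```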
    Complex operations are simplified
    Parity computers can perform operations between two or more qubits on a single qubit. “Existing quantum computers already implement such operations very well on a small scale,” Michael Fellner from Wolfgang Lechner’s team explains. “However, as the number of qubits increases, it becomes more and more complex to implement these gate operations.” In two publications in Physical Review Letters and Physical Review A, the Innsbruck scientists now show that parity computers can, for example, perform quantum Fourier transformations — a fundamental building block of many quantum algorithms — with significantly fewer computation steps and thus more quickly. “The high parallelism of our architecture means that, for example, the well-known Shor algorithm for factoring numbers can be executed very efficiently,” Fellner explains.
    Two-stage error correction
    The new concept also offers hardware-efficient error correction. Because quantum systems are very sensitive to disturbances, quantum computers must correct errors continuously. Significant resources must be devoted to protecting quantum information, which greatly increases the number of qubits required. “Our model operates with a two-stage error correction: one type of error (bit flip error or phase error) is prevented by the hardware used,” say Anette Messinger and Kilian Ender, also members of the Innsbruck research team. There are already initial experimental approaches for this on different platforms. “The other type of error can be detected and corrected via the software,” Messinger and Ender say. This would allow a next generation of universal quantum computers to be realized with manageable effort. The spin-off company ParityQC, co-founded by Wolfgang Lechner and Magdalena Hauser, is already working in Innsbruck with partners from science and industry on possible implementations of the new model.
    The research at the University of Innsbruck was financially supported by the Austrian Science Fund FWF and the Austrian Research Promotion Agency FFG.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Unveiling the dimensionality of complex networks through hyperbolic geometry

    Reducing redundant information to find simplifying patterns in data sets and complex networks is a scientific challenge in many knowledge fields. Moreover, detecting the dimensionality of the data is still a hard-to-solve problem. An article published in the journal Nature Communications presents a method to infer the dimensionality of complex networks through the application of hyperbolic geometry, which captures the complexity of the relational structures of the real world in many diverse domains.
    Among the authors of the study are the researchers M. Ángeles Serrano and Marián Boguñá, from the Faculty of Physics and the Institute of Complex Systems of the UB (UBICS), and Pedro Almargo, from the Higher Technical School of Engineering of the University of Sevilla. The study provides a multidimensional hyperbolic model of complex networks that reproduces their connectivity, with an ultra-low and customizable dimensionality for each specific network. This enables a better characterization of a network’s structure, for example at the community scale, and improves the model’s predictive capability.
    The study reveals unexpected regularities, such as the extremely low dimensions of molecular networks associated with biological tissues; the slightly higher dimensionality required by social networks and the Internet; and the discovery that brain connectomes are close to three dimensions in their automatic organisation.
    Hyperbolic versus Euclidean geometry
    The intrinsic geometry of data sets or complex networks is not obvious, which becomes an obstacle in determining the dimensionality of real networks. Another challenge is that the definition of distance has to be established according to their relational and connectivity structure, and this also requires sophisticated models.
    Now, the new approach is based on the geometry of complex networks, and more specifically, on the configurational geometric model, or S^D model. “This model, which we have developed in previous work, describes the structure of complex networks based on fundamental principles,” says M. Ángeles Serrano, ICREA researcher at the Department of Condensed Matter Physics of the UB.
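    As a flavour of how a geometric model of this kind generates a network, the sketch below implements the simplest one-dimensional variant: nodes sit on a circle, carry heterogeneous hidden degrees, and connect with a probability that decays with distance. The parameter values and normalisation are illustrative assumptions, not the settings of the published S^D model.
```python
# Minimal one-dimensional geometric network model (nodes on a circle).
# Parameters below are placeholders chosen only to produce a sensible graph.
import numpy as np

rng = np.random.default_rng(0)
N = 500
beta = 2.5      # controls clustering: larger beta favours short-range links
mu = 0.05       # controls the average degree

theta = rng.uniform(0, 2 * np.pi, N)        # angular positions on the circle
kappa = rng.pareto(2.5, N) + 1.0            # heterogeneous hidden degrees
R = N / (2 * np.pi)                         # circle radius so node density is one

# Arc distance between every pair of nodes
d_theta = np.abs(theta[:, None] - theta[None, :])
d = R * np.minimum(d_theta, 2 * np.pi - d_theta)

# Connection probability falls off with distance relative to the product of hidden degrees
chi = d / (mu * kappa[:, None] * kappa[None, :])
p = 1.0 / (1.0 + chi ** beta)
np.fill_diagonal(p, 0.0)

# Sample an undirected adjacency matrix from the pairwise probabilities
upper = np.triu(rng.random((N, N)) < p, 1)
adjacency = upper | upper.T
print("mean degree of the synthetic network:", adjacency.sum() / N)
```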

  • Mathematical modeling suggests U.S. counties are still unprepared for COVID spikes

    America was unprepared for the magnitude of the pandemic, which overwhelmed many counties and filled some hospitals to capacity. A new paper in PNAS suggests there may have been a mathematical method, of sorts, to the madness of those early COVID days.
    The study tests a model that closely matches the patterns of case counts and deaths reported, county by county, across the United States between April 2020 and June 2021. The model suggests that unprecedented COVID spikes could, even now, overwhelm local jurisdictions.
    “Our best estimate, based on the data, is that the numbers of cases and deaths per county have infinite variance, which means that a county could get hit with a tremendous number of cases or deaths,” says Rockefeller’s Joel Cohen. “We cannot reasonably anticipate that any county will have the resources to cope with extremely large, rare events, so it is crucial that counties — as well as states and even countries — develop plans, ahead of time, to share resources.”
    Predicting 99 percent of a pandemic
    Ecologists might have guessed that the spread of COVID cases and deaths would at least roughly conform to Taylor’s Law, a formula that relates a population’s mean to its variance (a measure of the scatter around the average). From how crop yields fluctuate, to the frequency of tornado outbreaks, to how cancer cells multiply, Taylor’s Law forms the backbone of many statistical models that experts use to describe thousands of species, including humans.
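    Taylor’s Law is usually written as variance = a × mean^b and estimated as a straight line on log-log axes. The sketch below runs that fit on synthetic groups of counts standing in for county-level data; the numbers are invented purely to show the procedure, not results from the PNAS study.
```python
# Fit Taylor's Law (variance = a * mean^b) to synthetic grouped count data.
import numpy as np

rng = np.random.default_rng(1)

# Fake "counties": groups of counts whose typical size spans several orders of magnitude
means, variances = [], []
for scale in np.logspace(1, 4, 60):
    counts = rng.gamma(shape=2.0, scale=scale, size=400)
    means.append(counts.mean())
    variances.append(counts.var())

# On log-log axes Taylor's Law is linear: log V = log a + b * log M
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted exponent b ≈ {b:.2f}, prefactor a ≈ {np.exp(log_a):.2f}")

# The study pairs fits like this with estimates of the distributions' tails;
# heavy tails are what can push the variance of case counts toward infinity.
```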
    But when Cohen began looking into whether Taylor’s Law could also describe the grim COVID statistics provided by The New York Times, he ran into a surprise.

  • New hybrid structures could pave the way to more stable quantum computers

    A new way to combine two materials with special electrical properties — a monolayer superconductor and a topological insulator — provides the best platform to date to explore an unusual form of superconductivity called topological superconductivity. The combination could provide the basis for topological quantum computers that are more stable than their traditional counterparts.
    Superconductors — used in powerful magnets, digital circuits, and imaging devices — allow the electric current to pass without resistance, while topological insulators are thin films only a few atoms thick that restrict the movement of electrons to their edges, which can result in unique properties. A team led by researchers at Penn State describes how it has paired the two materials in a paper appearing Oct. 27 in the journal Nature Materials.
    “The future of quantum computing depends on a kind of material that we call a topological superconductor, which can be formed by combining a topological insulator with a superconductor, but the actual process of combining these two materials is challenging,” said Cui-Zu Chang, Henry W. Knerr Early Career Professor and Associate Professor of Physics at Penn State and leader of the research team. “In this study, we used a technique called molecular beam epitaxy to synthesize both topological insulator and superconductor films and create a two-dimensional heterostructure that is an excellent platform to explore the phenomenon of topological superconductivity.”
    In previous experiments to combine the two materials, the superconductivity in thin films usually disappears once a topological insulator layer is grown on top. Physicists have been able to add a topological insulator film onto a three-dimensional “bulk” superconductor and retain the properties of both materials. However, applications for topological superconductors, such as chips with low power consumption inside quantum computers or smartphones, would need to be two-dimensional.
    In this paper, the research team stacked a topological insulator film made of bismuth selenide (Bi2Se3) with different thicknesses on a superconductor film made of monolayer niobium diselenide (NbSe2), resulting in a two-dimensional end-product. By synthesizing the heterostructures at very low temperatures, the team was able to retain both the topological and superconducting properties.
    “In superconductors, electrons form ‘Cooper pairs’ and can flow with zero resistance, but a strong magnetic field can break those pairs,” said Hemian Yi, a postdoctoral scholar in the Chang Research Group at Penn State and the first author of the paper. “The monolayer superconductor film we used is known for its ‘Ising-type superconductivity,’ which means that the Cooper pairs are very robust against the in-plane magnetic fields. We would also expect the topological superconducting phase formed in our heterostructures to be robust in this way.”
    By subtly adjusting the thickness of the topological insulator, the researchers found that the heterostructure shifted from Ising-type superconductivity — where the electron spin is perpendicular to the film — to another kind of superconductivity called “Rashba-type superconductivity” — where the electron spin is parallel to the film. This phenomenon is also observed in the researchers’ theoretical calculations and simulations.
    This heterostructure could also be a good platform for the exploration of Majorana fermions, an elusive type of particle that would be a major contributor to making a topological quantum computer more stable than its predecessors.
    “This is an excellent platform for the exploration of topological superconductors, and we are hopeful that we will find evidence of topological superconductivity in our continuing work,” said Chang. “Once we have solid evidence of topological superconductivity and demonstrate Majorana physics, then this type of system could be adapted for quantum computing and other applications.”
    In addition to Chang and Yi, the research team at Penn State includes Lun-Hui Hu, Yuanxi Wang, Run Xiao, Danielle Reifsnyder Hickey, Chengye Dong, Yi-Fan Zhao, Ling-Jie Zhou, Ruoxi Zhang, Antony Richardella, Nasim Alem, Joshua Robinson, Moses Chan, Nitin Samarth, and Chao-Xing Liu. The team also includes Jiaqi Cai and Xiaodong Xu at the University of Washington.
    This work was primarily supported by the Penn State MRSEC for Nanoscale Science and also partially supported by the National Science Foundation, the Department of Energy, the University of North Texas, and the Gordon and Betty Moore Foundation.
    Story Source:
    Materials provided by Penn State. Original written by Gail McCormick. Note: Content may be edited for style and length.

  • Music class in sync with higher math scores — but only at higher-income schools

    Music and arts classes are often first on the chopping block when schools face tight budgets and pressure to achieve high scores on standardized tests. But it’s precisely those classes that can increase student interest in school and even benefit their math achievement, according to a new study.
    Daniel Mackin Freeman, a doctoral candidate in sociology, and Dara Shifrer, an associate professor of sociology, used a large, nationally representative dataset to examine which types of arts classes affect math achievement and how that relationship varies with the socio-economic composition of the school. Schools with lower socio-economic status (SES) have a higher percentage of students eligible for free or reduced-price lunch.
    The researchers found that taking music courses at higher- or mid-SES schools relates to higher math scores. Mackin Freeman said that’s not a surprise given the ways in which music and math overlap.
    “If you think about it at an intuitive level, reading music is just doing math,” he said. “Of course, it’s a different type of math but it might be a more engaging form of math for students than learning calculus.”
    However, the positive relationship between music course-taking and math achievement is largely confined to schools that serve more socially privileged students. The study suggests this could be because arts courses in low-SES schools are of lower quality and/or under-resourced. Students in low-SES schools also take fewer music and arts classes on average than their peers, further suggesting that low-SES schools are under-resourced when it comes to arts courses.
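    The kind of moderation analysis implied here can be sketched as a regression with an interaction term between music course-taking and school SES. The data, variable names, and effect sizes below are invented for illustration and are not the study’s dataset or model specification.
```python
# Sketch of a moderation (interaction) regression: does the association between
# music course-taking and math scores grow with school SES? Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
school_ses = rng.normal(0, 1, n)          # standardized school-level SES
music_credits = rng.poisson(1.5, n)       # number of music courses taken

# Build an outcome in which the music-math link strengthens with SES (assumed effect)
math_score = (50 + 3.0 * school_ses + 1.0 * music_credits
              + 0.8 * music_credits * school_ses + rng.normal(0, 5, n))

df = pd.DataFrame({"math_score": math_score,
                   "music_credits": music_credits,
                   "school_ses": school_ses})

# The music_credits:school_ses coefficient captures how the association varies by SES
model = smf.ols("math_score ~ music_credits * school_ses", data=df).fit()
print(model.summary().tables[1])
```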
    “It’d be reasonable to expect that at under-resourced schools, the quality of the music program would differentiate any potential connection to other subjects,” Mackin Freeman said. “For programs as resource-intensive as something like band, under-resourced schools are less likely to even have working instruments, let alone an instructor who can teach students to read music in a way that they can make connections to arithmetic.”
    Mackin Freeman said the findings suggest that learning shouldn’t happen in subject silos, and that the way some schools have attempted to increase math achievement, by doubling down on math and cutting the arts, is shortsighted and counterproductive.
    “Creating an environment where students have access to a well-rounded curriculum might indirectly affect math achievement,” he said. “That could be something as simple as, they’re willing to go to class because they have band or painting class to look forward to.”
    Story Source:
    Materials provided by Portland State University. Original written by Cristina Rojas. Note: Content may be edited for style and length.

  • Building with nanoparticles, from the bottom up

    Researchers at MIT have developed a technique for precisely controlling the arrangement and placement of nanoparticles on a material, like the silicon used for computer chips, in a way that does not damage or contaminate the surface of the material.
    The technique, which combines chemistry and directed assembly processes with conventional fabrication techniques, enables the efficient formation of high-resolution, nanoscale features integrated with nanoparticles for devices like sensors, lasers, and LEDs, which could boost their performance.
    Transistors and other nanoscale devices are typically fabricated from the top down — materials are etched away to reach the desired arrangement of nanostructures. But creating the smallest nanostructures, which can enable the highest performance and new functionalities, requires expensive equipment and remains difficult to do at scale and with the desired resolution.
    A more precise way to assemble nanoscale devices is from the bottom up. In one scheme, engineers have used chemistry to “grow” nanoparticles in solution, drop that solution onto a template, arrange the nanoparticles, and then transfer them to a surface. However, this technique also involves steep challenges. First, thousands of nanoparticles must be arranged on the template efficiently. And transferring them to a surface typically requires a chemical glue, large pressure, or high temperatures, which could damage the surfaces and the resulting device.
    The MIT researchers developed a new approach to overcome these limitations. They used the powerful forces that exist at the nanoscale to efficiently arrange particles in a desired pattern and then transfer them to a surface without any chemicals or high pressures, and at lower temperatures. Because the surface material remains pristine, these nanoscale structures can be incorporated into components for electronic and optical devices, where even minuscule imperfections can hamper performance.
    “This approach allows you, through engineering of forces, to place the nanoparticles, despite their very small size, in deterministic arrangements with single-particle resolution and on diverse surfaces, to create libraries of nanoscale building blocks that can have very unique properties, whether it is their light-matter interactions, electronic properties, mechanical performance, etc.,” says Farnaz Niroui, the EE Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the MIT Research Laboratory of Electronics, and senior author on a new paper describing the work. “By integrating these building blocks with other nanostructures and materials we can then achieve devices with unique functionalities that would not be readily feasible to make if we were to use the conventional top-down fabrication strategies alone.”
    The research is published in Science Advances. Niroui’s co-authors are lead author Weikun “Spencer” Zhu, a graduate student in the Department of Chemical Engineering, as well as EECS graduate students Peter F. Satterthwaite, Patricia Jastrzebska-Perfect, and Roberto Brenes.

  • Breakthrough: World's smallest photon in a dielectric material

    Until recently, it was widely believed among physicists that it was impossible to compress light below the so-called diffraction limit, except when using metal nanoparticles, which unfortunately also absorb light. It therefore seemed impossible to compress light strongly in dielectric materials such as silicon, which are key materials in information technologies and come with the important advantage that they do not absorb light. Interestingly, it was already shown theoretically in 2006 that the diffraction limit does not apply to dielectrics either. Still, no one had succeeded in demonstrating this in the real world, simply because it requires such advanced nanotechnology that, until now, no one had been able to build the necessary dielectric nanostructures.
    A research team from DTU has successfully designed and built a structure, a so-called dielectric nanocavity, which concentrates light in a volume 12 times below the diffraction limit. The result is ground-breaking in optical research and has just been published in Nature Communications.
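    To get a feel for the scale involved, here is a back-of-the-envelope calculation assuming telecom-wavelength light of roughly 1550 nm and silicon’s refractive index of about 3.48 (assumptions for illustration; the article states only the factor of 12). The conventional diffraction-limited mode volume is then about (λ/2n)³.
```python
# Rough scale of a diffraction-limited mode volume in silicon, and a cavity 12x smaller.
wavelength_nm = 1550.0          # assumed telecom wavelength
n_silicon = 3.48                # approximate refractive index of silicon at that wavelength

diffraction_limited_volume = (wavelength_nm / (2 * n_silicon)) ** 3   # in cubic nanometres
cavity_volume = diffraction_limited_volume / 12                       # 12 times smaller

print(f"diffraction-limited volume ≈ {diffraction_limited_volume:,.0f} nm^3")
print(f"volume 12x below that limit ≈ {cavity_volume:,.0f} nm^3")
```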
    “Although computer calculations show that you can concentrate light at an infinitely small point, this only applies in theory. The actual results are limited by how small details can be made, for example, on a microchip,” says Marcus Albrechtsen, PhD student at DTU Electro and first author of the new article.
    “We programmed our knowledge of real photonic nanotechnology and its current limitations into a computer. Then we asked the computer to find a pattern that collects the photons in an unprecedentedly small area — in an optical nanocavity — which we were also able to build in the laboratory.”
    Optical nanocavities are structures specially designed to retain light so that it does not propagate as we are used to but is thrown back and forth as if you put two mirrors facing each other. The closer you place the mirrors to each other, the more intense the light between the mirrors becomes. For this experiment, the researchers have designed a so-called bowtie structure, which is particularly effective at squeezing the photons together due to its special shape.
    Interdisciplinary efforts and excellent methods
    The nanocavity is made of silicon, the dielectric material on which most advanced modern technology is based. The material for the nanocavity was developed in cleanroom laboratories at DTU, and the patterns on which the cavity is based were optimized and designed using a unique method for topology optimization developed at DTU. Initially developed to design bridges and aircraft wings, the method is now also used for nanophotonic structures.