More stories

  • Wind turbines could help capture carbon dioxide while providing power

    Wind turbines could offer a double whammy in the fight against climate change.

    Besides harnessing wind to generate clean energy, turbines may help to funnel carbon dioxide to systems that pull the greenhouse gas out of the air (SN: 8/10/21). Researchers say their simulations show that wind turbines can drag dirty air from above a city or a smokestack into the turbines’ wakes. That boosts the amount of CO2 that makes it to machines that can remove it from the atmosphere. The researchers plan to describe their simulations and a wind tunnel test of a scaled-down system at a meeting of the American Physical Society’s Division of Fluid Dynamics in Indianapolis on November 21.

    Addressing climate change will require dramatic reductions in the amount of carbon dioxide that humans put into the air — but that alone won’t be enough (SN: 3/10/22). One part of the solution could be direct air capture systems that remove some CO2 from the atmosphere (SN: 9/9/22).

    But the large amounts of CO2 produced by factories, power plants and cities are often concentrated at heights that put the gas out of reach of machinery on the ground that can remove it. “We’re looking into the fluid dynamics benefits of utilizing the wake of the wind turbine to redirect higher concentrations” down to carbon capture systems, says mechanical engineer Clarice Nelson of Purdue University in West Lafayette, Ind.

    As large, power-generating wind turbines rotate, they cause turbulence that pulls air down into the wakes behind them, says mechanical engineer Luciano Castillo, also of Purdue. It’s an effect that can concentrate carbon dioxide enough to make capture feasible, particularly near large cities like Chicago.
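    For a rough sense of scale, here is a back-of-envelope sketch (not drawn from the study; the wind speed, rotor size, air density and CO2 enrichment are assumed values) of how much CO2 passes through a turbine’s rotor plane each second:

```python
# Back-of-envelope sketch (not from the study): CO2 mass passing through a
# turbine's swept area per second, for assumed values of air density, wind
# speed, rotor radius, and a locally enhanced CO2 mass fraction.
import math

rho_air = 1.2          # kg/m^3, near-surface air density (assumed)
wind_speed = 8.0       # m/s (assumed)
rotor_radius = 60.0    # m (assumed, utility-scale turbine)
co2_mass_fraction = 1000e-6 * 44 / 29   # ~1000 ppm by volume near a source (assumed)

swept_area = math.pi * rotor_radius ** 2
air_mass_flow = rho_air * swept_area * wind_speed          # kg of air per second
co2_mass_flow = air_mass_flow * co2_mass_fraction          # kg of CO2 per second
print(f"{co2_mass_flow:.1f} kg CO2/s through the rotor plane")
```

    Under these assumed numbers, on the order of a hundred kilograms of CO2 cross the rotor plane every second, which is the kind of stream a downstream capture system would tap.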

    “The beauty is that [around Chicago], you have one of the best wind resources in the region, so you can use the wind turbine to take some of the dirty air in the city and capture it,” Castillo says. Wind turbines don’t require the cooling that nuclear and fossil fuel plants need. “So not only are you producing clean energy,” he says, “you are not using water.”

    Running the capture systems from energy produced by the wind turbines can also address the financial burden that often goes along with removing CO2 from the air. “Even with tax credits and potentially selling the CO2, there’s a huge gap between the value that you can get from capturing it and the actual cost” that comes with powering capture with energy that comes from other sources, Nelson says. “Our method would be a no-cost added benefit” to wind turbine farms.

    There are probably lots of factors that will impact CO2 transport by real-world turbines, including the interactions the turbine wakes have with water, plants and the ground, says Nicholas Hamilton, a mechanical engineer at the National Renewable Energy Laboratory in Golden, Colo., who was not involved with the new studies. “I’m interested to see how this group scaled their experiment for wind tunnel investigation.”

  • Artificial intelligence and molecule machine join forces to generalize automated chemistry

    Artificial intelligence, “building-block” chemistry and a molecule-making machine teamed up to find the best general reaction conditions for synthesizing chemicals important to biomedical and materials research — a finding that could speed innovation and drug discovery as well as make complex chemistry automated and accessible.
    With the machine-generated optimized conditions, researchers at the University of Illinois Urbana-Champaign and collaborators in Poland and Canada doubled the average yield of a special, hard-to-optimize type of reaction linking carbon atoms together in pharmaceutically important molecules. The researchers say their system provides a platform that also could be used to find general conditions for other classes of reactions and solutions for similarly complex problems. They reported their findings in the journal Science.
    “Generality is critical for automation, and thus making molecular innovation accessible even to nonchemists,” said study co-leader Dr. Martin D. Burke, an Illinois professor of chemistry and of the Carle Illinois College of Medicine, as well as a medical doctor. “The challenge is the haystack of possible reaction conditions is astronomical, and the needle is hidden somewhere inside. By leveraging the power of artificial intelligence and building-block chemistry to create a feedback loop, we were able to shrink the haystack. And we found the needle.”
    Automated synthesis machines for proteins and nucleic acids such as DNA have revolutionized research and chemical manufacturing in those fields, but many chemicals of importance for pharmaceutical, clinical, manufacturing and materials applications are small molecules with complex structures, the researchers say.
    Burke’s group has pioneered the development of simple chemical building blocks for small molecules. His lab also developed an automated molecule-making machine that snaps together the building blocks to create a wide range of possible structures.
    However, general reaction conditions to make the automated process broadly applicable have remained elusive.
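    The feedback loop Burke describes can be pictured with a minimal sketch: propose conditions, measure a yield, and keep whichever conditions perform best on average across several substrates. Everything below (the grid of ligands, bases and solvents, the simulated yield, the substrate names) is an illustrative stand-in, not the study’s actual search space or machine-learning model:

```python
# Minimal sketch of a closed-loop ("feedback") search over reaction conditions.
# The candidate grid, the simulated yield, and the scoring are hypothetical
# stand-ins; the actual study couples a machine-learning model to a robotic
# building-block synthesis platform.
import itertools
import random

# Hypothetical discrete search space of reaction conditions.
ligands = ["L1", "L2", "L3"]
bases = ["K2CO3", "K3PO4", "CsF"]
solvents = ["dioxane", "THF", "DMF"]
conditions = list(itertools.product(ligands, bases, solvents))

def run_reaction(condition):
    """Stand-in for an automated synthesis plus yield measurement."""
    random.seed(hash(condition) % 2**32)          # deterministic per condition within a run
    return random.uniform(0.0, 1.0)               # fake fractional yield

# Simple search-and-select loop: "generality" is scored as the average yield
# across several hypothetical substrates.
substrates = ["A", "B", "C"]
best_condition, best_avg = None, -1.0
for cond in random.sample(conditions, k=10):      # explore a subset of the haystack
    yields = [run_reaction((cond, s)) for s in substrates]
    avg = sum(yields) / len(yields)
    if avg > best_avg:
        best_condition, best_avg = cond, avg
print(best_condition, round(best_avg, 2))
```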

  • A faster experiment to find and study topological materials

    Topological materials, an exotic class of materials whose surfaces exhibit different electrical or functional properties than their interiors, have been a hot area of research since their experimental realization in 2007 — a finding that sparked further research and precipitated a Nobel Prize in Physics in 2016. These materials are thought to have great potential in a variety of fields, and might someday be used in ultraefficient electronic or optical devices, or key components of quantum computers.
    But there are many thousands of compounds that may theoretically have topological characteristics, and synthesizing and testing even one such material to determine its topological properties can take months of experiments and analysis. Now a team of researchers at MIT and elsewhere has come up with a new approach that can rapidly screen candidate materials and determine with more than 90 percent accuracy whether they are topological.
    Using this new method, the researchers have produced a list of candidate materials. A few of these were already known to have topological properties, but the rest are newly predicted by this approach.
    The findings are reported in the journal Advanced Materials in a paper by Mingda Li, the Class of 1947 Career Development Professor at MIT, graduate students (and twin sisters) Nina Andrejevic at MIT and Jovana Andrejevic at Harvard University, and seven others at MIT, Harvard, Princeton University, and Argonne National Laboratory.
    Topological materials are named after a branch of mathematics that describes shapes based on their invariant characteristics, which persist no matter how much an object is continuously stretched or squeezed out of its original shape. Topological materials, similarly, have properties that remain constant despite changes in their conditions, such as external perturbations or impurities.
    There are several varieties of topological materials, including semiconductors, conductors, and semimetals, among others. Initially, it was thought that there were only a handful of such materials, but recent theory and calculations have predicted that in fact thousands of different compounds may have at least some topological characteristics. The hard part is figuring out experimentally which compounds may be topological.
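    The screening step can be pictured with a minimal sketch: train a classifier on labeled examples and use it to flag likely topological candidates. The descriptors and labels below are synthetic placeholders, not the data or model the MIT-led team actually used:

```python
# Minimal sketch of a rapid screening classifier for candidate topological
# materials. Descriptors and labels are synthetic, illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical descriptors per compound (e.g. spin-orbit strength, band gap,
# electronegativity difference), generated purely synthetically here.
X = rng.normal(size=(500, 3))
# Synthetic rule standing in for the unknown ground-truth labels.
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"screening accuracy: {clf.score(X_test, y_test):.2f}")  # the study reports >90 percent
```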

  • Rewards only promote cooperation if the other person also learns about them

    Researchers at the Max Planck Institute in Plön show that reputation plays a key role in determining which rewarding policies people adopt. Using game theory, they explain why individuals learn to use rewards to specifically promote good behaviour.
    Often, we use positive incentives like rewards to promote cooperative behaviour. But why do we predominantly reward cooperation? Why is defection rarely rewarded? Or more generally, why do we bother to engage in any form of rewarding in the first place? Theoretical work done by researchers Saptarshi Pal and Christian Hilbe at the Max Planck Research Group ‘Dynamics of Social Behaviour’ suggests that reputation effects can explain why individuals learn to reward socially.
    With tools from evolutionary game theory, the researchers construct a model where individuals in a population (the players) can adopt different strategies of cooperation and rewarding over time. In this model, the players’ reputation is a key element. The players know, with a degree of certainty (characterized by the information transmissibility of the population), how their interaction partners are going to react to their behaviour (that is, which behaviours they deem worthy of rewards). If the information transmissibility is sufficiently high, players learn to reward cooperation. In contrast, without sufficient information about peers, players refrain from using rewards. The researchers show that these effects of reputation also play out in a similar way when individuals interact in groups with more than two individuals.
    Antisocial rewarding
    In addition to highlighting the role of reputation in catalyzing cooperation and social rewarding, the scientists identify a couple of scenarios where antisocial rewarding may evolve. Antisocial rewarding either requires populations to be assorted or rewards to be mutually beneficial for both the recipient and the provider of the reward. “These conditions under which people may learn to reward defection are, however, a bit restrictive, since they additionally require information to be scarce,” adds Saptarshi Pal.
    The results from this study suggest that rewards are only effective in promoting cooperation when they can sway opportunistic individuals. These opportunistic players only cooperate when they anticipate a reward for their cooperation. A higher information transmissibility increases both the incentive to reward others for cooperating and the incentive to cooperate in the first place. Overall, the model suggests that when people reward cooperation in an environment where information transmissibility is high, they ultimately benefit themselves. This interpretation takes the altruism out of social rewarding — people may not use rewards to enhance others’ welfare, but to help themselves.
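    A minimal sketch of the reputation argument, with illustrative payoff values rather than the published model’s parameters: with probability lam (the information transmissibility) an opportunistic partner learns whether the focal player rewards cooperation, and cooperates only if a reward is expected.

```python
# Minimal sketch: rewarding cooperation pays off for the rewarder when
# information about the rewarding policy spreads well. All payoff numbers are
# illustrative assumptions, not the parameters of the published model.
import random

b, c = 3.0, 1.0                 # benefit received / cost paid for cooperation (assumed)
reward, reward_cost = 1.5, 0.5  # value of a reward to the recipient / cost to the giver (assumed)

def focal_payoff(rewards_cooperation, lam, trials=100_000):
    """Average payoff of a focal player facing opportunistic partners."""
    total = 0.0
    for _ in range(trials):
        partner_informed = random.random() < lam
        partner_cooperates = partner_informed and rewards_cooperation
        payoff = b if partner_cooperates else 0.0        # benefit from partner's cooperation
        if partner_cooperates and rewards_cooperation:
            payoff -= reward_cost                        # pay the promised reward
        total += payoff
    return total / trials

for lam in (0.1, 0.5, 0.9):
    print(lam, round(focal_payoff(True, lam), 2), round(focal_payoff(False, lam), 2))
# Under these assumed numbers the payoff gap in favour of rewarding grows with lam,
# since each induced cooperation yields b - reward_cost > 0.
```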
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • New form of universal quantum computers

    The computing power of quantum machines is currently still very low, and increasing it is proving to be a major challenge. Physicists now present a new architecture for a universal quantum computer that overcomes such limitations and could soon form the basis of the next generation of quantum computers.
    Quantum bits (qubits) in a quantum computer serve as a computing unit and memory at the same time. Because quantum information cannot be copied, it cannot be stored in a memory as in a classical computer. Due to this limitation, all qubits in a quantum computer must be able to interact with each other. This is currently still a major challenge for building powerful quantum computers. In 2015, theoretical physicist Wolfgang Lechner, together with Philipp Hauke and Peter Zoller, addressed this difficulty and proposed a new architecture for a quantum computer, now named LHZ architecture after the authors.
    “This architecture was originally designed for optimization problems,” recalls Wolfgang Lechner of the Department of Theoretical Physics at the University of Innsbruck, Austria. “In the process, we reduced the architecture to a minimum in order to solve these optimization problems as efficiently as possible.” The physical qubits in this architecture do not represent individual bits but encode the relative coordination between the bits. “This means that not all qubits have to interact with each other anymore,” explains Wolfgang Lechner. With his team, he has now shown that this parity concept is also suitable for a universal quantum computer.
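    The parity idea can be pictured with a minimal sketch, assuming a toy configuration of logical spins: each physical qubit stores whether a pair of logical spins agrees or disagrees, so pairwise interactions of the logical problem become local terms on parity qubits. The all-pairs layout here is illustrative and omits the constraint plaquettes of the published LHZ scheme:

```python
# Minimal sketch of parity encoding: one physical qubit per pair of logical
# spins, storing whether the pair agrees (+1) or disagrees (-1). The spin
# configuration is an assumed example; constraint plaquettes are omitted.
from itertools import combinations

logical_spins = [+1, -1, -1, +1]   # an assumed logical configuration

parity_qubits = {
    (i, j): logical_spins[i] * logical_spins[j]
    for i, j in combinations(range(len(logical_spins)), 2)
}
print(parity_qubits)   # n logical spins -> n*(n-1)/2 parity qubits
```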
    Complex operations are simplified
    Parity computers can perform operations between two or more qubits on a single qubit. “Existing quantum computers already implement such operations very well on a small scale,” Michael Fellner from Wolfgang Lechner’s team explains. “However, as the number of qubits increases, it becomes more and more complex to implement these gate operations.” In two publications in Physical Review Letters and Physical Review A, the Innsbruck scientists now show that parity computers can, for example, perform quantum Fourier transformations — a fundamental building block of many quantum algorithms — with significantly fewer computation steps and thus more quickly. “The high parallelism of our architecture means that, for example, the well-known Shor algorithm for factoring numbers can be executed very efficiently,” Fellner explains.
    Two-stage error correction
    The new concept also offers hardware-efficient error correction. Because quantum systems are very sensitive to disturbances, quantum computers must correct errors continuously. Significant resources must be devoted to protecting quantum information, which greatly increases the number of qubits required. “Our model operates with a two-stage error correction: one type of error (bit flip error or phase error) is prevented by the hardware used,” say Anette Messinger and Kilian Ender, also members of the Innsbruck research team. There are already initial experimental approaches for this on different platforms. “The other type of error can be detected and corrected via the software,” Messinger and Ender say. This would allow a next generation of universal quantum computers to be realized with manageable effort. The spin-off company ParityQC, co-founded by Wolfgang Lechner and Magdalena Hauser, is already working in Innsbruck with partners from science and industry on possible implementations of the new model.
    The research at the University of Innsbruck was financially supported by the Austrian Science Fund FWF and the Austrian Research Promotion Agency FFG.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Unveiling the dimensionality of complex networks through hyperbolic geometry

    Reducing redundant information to find simplifying patterns in data sets and complex networks is a scientific challenge in many knowledge fields. Moreover, detecting the dimensionality of the data is still a hard-to-solve problem. An article published in the journal Nature Communications presents a method to infer the dimensionality of complex networks through the application of hyperbolic geometry, which captures the complexity of relational structures of the real world in many diverse domains.
    Among the authors of the study are the researchers M. Ángeles Serrano and Marián Boguñá, from the Faculty of Physics and the Institute of Complex Systems of the UB (UBICS), and Pedro Almagro, from the Higher Technical School of Engineering of the University of Sevilla. The research study provides a multidimensional hyperbolic model of complex networks that reproduces their connectivity, with an ultra-low and customizable dimensionality for each specific network. This enables a better characterization of their structure — e.g. at a community scale — and the improvement of their predictive capability.
    The study reveals unexpected regularities, such as the extremely low dimensions of molecular networks associated with biological tissues; the slightly higher dimensionality required by social networks and the Internet; and the discovery that brain connectomes are close to three dimensions in their automatic organisation.
    Hyperbolic versus Euclidean geometry
    The intrinsic geometry of data sets or complex networks is not obvious, which becomes an obstacle in determining the dimensionality of real networks. Another challenge is that the definition of distance has to be established according to their relational and connectivity structure, and this also requires sophisticated models.
    Now, the new approach is based on the geometry of complex networks, and more specifically, on the configurational geometric model or SD model. “This model, which we have developed in previous work, describes the structure of complex networks based on fundamental principles,” says the lecturer M. Ángeles Serrano, ICREA researcher at the Department of Condensed Matter Physics of the UB.
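    The kind of geometric model referred to can be pictured with a minimal sketch of its simplest, one-dimensional (S1) version, in which nodes receive an angular coordinate and a hidden degree and links form with a probability that falls off with effective distance; all parameter values below are illustrative assumptions, not those fitted in the paper:

```python
# Minimal sketch of the one-dimensional (S1) geometric network model:
# angular "similarity" coordinates, heavy-tailed hidden degrees, and a
# connection probability decaying with effective distance. Parameters are
# illustrative assumptions.
import math
import random

N = 200
beta, mu = 2.5, 0.05            # clustering and average-degree parameters (assumed)
R = N / (2 * math.pi)           # circle radius so that node density is 1

theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]   # similarity coordinate
kappa = [random.paretovariate(1.5) for _ in range(N)]        # hidden degrees (heavy-tailed)

def connect_prob(i, j):
    dtheta = math.pi - abs(math.pi - abs(theta[i] - theta[j]))  # arc separation on the circle
    chi = R * dtheta / (mu * kappa[i] * kappa[j])
    return 1.0 / (1.0 + chi ** beta)

edges = [(i, j) for i in range(N) for j in range(i + 1, N) if random.random() < connect_prob(i, j)]
print(len(edges), "edges generated")
```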

  • Mathematical modeling suggests U.S. counties are still unprepared for COVID spikes

    America was unprepared for the magnitude of the pandemic, which overwhelmed many counties and filled some hospitals to capacity. A new paper in PNAS suggests there may have been a mathematical method, of sorts, to the madness of those early COVID days.
    The study tests a model that closely matches the patterns of case counts and deaths reported, county by county, across the United States between April 2020 and June 2021. The model suggests that unprecedented COVID spikes could, even now, overwhelm local jurisdictions.
    “Our best estimate, based on the data, is that the numbers of cases and deaths per county have infinite variance, which means that a county could get hit with a tremendous number of cases or deaths,” says Rockefeller’s Joel Cohen. “We cannot reasonably anticipate that any county will have the resources to cope with extremely large, rare events, so it is crucial that counties — as well as states and even countries — develop plans, ahead of time, to share resources.”
    Predicting 99 percent of a pandemic
    Ecologists might have guessed that the spread of COVID cases and deaths would at least roughly conform to Taylor’s Law, a formula that relates a population’s mean to its variance (a measure of the scatter around the average). From how crop yields fluctuate, to the frequency of tornado outbreaks, to how cancer cells multiply, Taylor’s Law forms the backbone of many statistical models that experts use to describe thousands of species, including humans.
    But when Cohen began looking into whether Taylor’s Law could also describe the grim COVID statistics provided by The New York Times, he ran into a surprise.
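    Taylor’s Law itself is simple to state and to fit: variance ≈ a × mean^b, which becomes a straight line on log-log axes. A minimal sketch with synthetic count data (standing in for the county-level COVID counts used in the study):

```python
# Minimal sketch of fitting Taylor's Law, variance ≈ a * mean**b, by linear
# regression on log-log scale. The per-"county" counts below are synthetic
# stand-ins, not the New York Times COVID data used in the study.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic groups of counts whose variance grows with the mean.
means, variances = [], []
for scale in np.linspace(5, 500, 50):
    counts = rng.gamma(shape=2.0, scale=scale, size=200)
    means.append(counts.mean())
    variances.append(counts.var())

# Taylor's Law on log-log axes: log(variance) = log(a) + b * log(mean)
b, log_a = np.polyfit(np.log(means), np.log(variances), deg=1)
print(f"exponent b ≈ {b:.2f}, prefactor a ≈ {np.exp(log_a):.2f}")
```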

  • New hybrid structures could pave the way to more stable quantum computers

    A new way to combine two materials with special electrical properties — a monolayer superconductor and a topological insulator — provides the best platform to date to explore an unusual form of superconductivity called topological superconductivity. The combination could provide the basis for topological quantum computers that are more stable than their traditional counterparts.
    Superconductors — used in powerful magnets, digital circuits, and imaging devices — allow the electric current to pass without resistance, while topological insulators are thin films only a few atoms thick that restrict the movement of electrons to their edges, which can result in unique properties. A team led by researchers at Penn State describe how they have paired the two materials in a paper appearing Oct. 27 in the journal Nature Materials.
    “The future of quantum computing depends on a kind of material that we call a topological superconductor, which can be formed by combining a topological insulator with a superconductor, but the actual process of combining these two materials is challenging,” said Cui-Zu Chang, Henry W. Knerr Early Career Professor and Associate Professor of Physics at Penn State and leader of the research team. “In this study, we used a technique called molecular beam epitaxy to synthesize both topological insulator and superconductor films and create a two-dimensional heterostructure that is an excellent platform to explore the phenomenon of topological superconductivity.”
    In previous experiments to combine the two materials, the superconductivity in thin films usually disappears once a topological insulator layer is grown on top. Physicists have been able to add a topological insulator film onto a three-dimensional “bulk” superconductor and retain the properties of both materials. However, applications for topological superconductors, such as chips with low power consumption inside quantum computers or smartphones, would need to be two-dimensional.
    In this paper, the research team stacked a topological insulator film made of bismuth selenide (Bi2Se3) with different thicknesses on a superconductor film made of monolayer niobium diselenide (NbSe2), resulting in a two-dimensional end-product. By synthesizing the heterostructures at very low temperatures, the team was able to retain both the topological and superconducting properties.
    “In superconductors, electrons form ‘Cooper pairs’ and can flow with zero resistance, but a strong magnetic field can break those pairs,” said Hemian Yi, a postdoctoral scholar in the Chang Research Group at Penn State and the first author of the paper. “The monolayer superconductor film we used is known for its ‘Ising-type superconductivity,’ which means that the Cooper pairs are very robust against the in-plane magnetic fields. We would also expect the topological superconducting phase formed in our heterostructures to be robust in this way.”
    By subtly adjusting the thickness of the topological insulator, the researchers found that the heterostructure shifted from Ising-type superconductivity — where the electron spin is perpendicular to the film — to another kind of superconductivity called “Rashba-type superconductivity” — where the electron spin is parallel to the film. This phenomenon is also observed in the researchers’ theoretical calculations and simulations.
    This heterostructure could also be a good platform for the exploration of Majorana fermions, elusive particles that would be a major contributor to making a topological quantum computer more stable than its predecessors.
    “This is an excellent platform for the exploration of topological superconductors, and we are hopeful that we will find evidence of topological superconductivity in our continuing work,” said Chang. “Once we have solid evidence of topological superconductivity and demonstrate Majorana physics, then this type of system could be adapted for quantum computing and other applications.”
    In addition to Chang and Yi, the research team at Penn State includes Lun-Hui Hu, Yuanxi Wang, Run Xiao, Danielle Reifsnyder Hickey, Chengye Dong, Yi-Fan Zhao, Ling-Jie Zhou, Ruoxi Zhang, Antony Richardella, Nasim Alem, Joshua Robinson, Moses Chan, Nitin Samarth, and Chao-Xing Liu. The team also includes Jiaqi Cai and Xiaodong Xu at the University of Washington.
    This work was primarily supported by the Penn State MRSEC for Nanoscale Science and also partially supported by the National Science Foundation, the Department of Energy, the University of North Texas, and the Gordon and Betty Moore Foundation.
    Story Source:
    Materials provided by Penn State. Original written by Gail McCormick. Note: Content may be edited for style and length.