More stories

  • Dark energy: Neutron stars will tell us if it's only an illusion

    A huge amount of mysterious dark energy is needed to explain cosmological phenomena, such as the accelerated expansion of the Universe, within Einstein’s theory. But what if dark energy were just an illusion, and general relativity itself had to be modified? A new SISSA study, published in Physical Review Letters, offers a new approach to answering this question. Thanks to a huge computational and mathematical effort, scientists have produced the first-ever simulation of merging binary neutron stars in theories beyond general relativity that reproduce dark-energy-like behavior on cosmological scales. This allows a comparison of Einstein’s theory with modified versions of it and, with sufficiently accurate data, may solve the dark energy mystery.
    For about 100 years now, general relativity has been very successful at describing gravity in a variety of regimes, passing all experimental tests on Earth and in the solar system. However, to explain cosmological observations such as the observed accelerated expansion of the Universe, we need to introduce dark components, such as dark matter and dark energy, which still remain a mystery.
    Enrico Barausse, astrophysicist at SISSA (Scuola Internazionale Superiore di Studi Avanzati) and principal investigator of the ERC grant GRAMS (GRavity from Astrophysical to Microscopic Scales), questions whether dark energy is real or whether it may instead be interpreted as a breakdown of our understanding of gravity. “The existence of dark energy could be just an illusion,” he says. “The accelerated expansion of the Universe might be caused by some yet unknown modification of general relativity, a sort of ‘dark gravity’.”
    The merger of neutron stars offers a unique setting to test this hypothesis, because gravity around them is pushed to the extreme. “Neutron stars are the densest stars that exist, typically only 10 kilometers in radius but with a mass between one and two times that of our Sun,” explains the scientist. “This makes gravity and the spacetime around them extreme, allowing for abundant production of gravitational waves when two of them collide. We can use the data acquired during such events to study the workings of gravity and test Einstein’s theory in a new window.”
    In this study, published in Physical Review Letters, SISSA scientists, in collaboration with physicists from the Universitat de les Illes Balears in Palma de Mallorca, produced the first simulation of merging binary neutron stars in theories of modified gravity relevant for cosmology. “This type of simulation is extremely challenging,” clarifies Miguel Bezares, first author of the paper, “because of the highly non-linear nature of the problem. It requires a huge computational effort (months of runtime on supercomputers) that was made possible by the agreement between SISSA and the CINECA consortium, as well as by novel mathematical formulations that we developed. These challenges represented major roadblocks for many years, until our first simulation.”
    Thanks to these simulations, researchers are finally able to compare general relativity and modified gravity. “Surprisingly, we found that the ‘dark gravity’ hypothesis is as good as general relativity at explaining the data acquired by the LIGO and Virgo interferometers during past binary neutron star collisions. Indeed, the differences between the two theories in these systems are quite subtle, but they may be detectable by next-generation gravitational interferometers, such as the Einstein Telescope in Europe and Cosmic Explorer in the USA. This opens the exciting possibility of using gravitational waves to discriminate between dark energy and ‘dark gravity’,” Barausse concludes.
    Story Source:
    Materials provided by Scuola Internazionale Superiore di Studi Avanzati. Note: Content may be edited for style and length.

  • The physics of fire ant rafts could help engineers design swarming robots

    Noah rode out his flood in an ark. Winnie-the-Pooh had an upside-down umbrella. Fire ants (Solenopsis invicta), meanwhile, form floating rafts made up of thousands or even hundreds of thousands of individual insects.
    A new study by engineers at the University of Colorado Boulder lays out the simple physics-based rules that govern how these ant rafts morph over time: shrinking, expanding or growing long protrusions like an elephant’s trunk. The team’s findings could one day help researchers design robots that work together in swarms or next-generation materials in which molecules migrate to fix damaged spots.
    The results appeared recently in the journal PLOS Computational Biology.
    “The origins of such behaviors lie in fairly simple rules,” said Franck Vernerey, primary investigator on the new study and professor in the Paul M. Rady Department of Mechanical Engineering. “Single ants are not as smart as one may think, but, collectively, they become very intelligent and resilient communities.”
    Fire ants form these giant floating blobs of wriggling insects after storms in the southeastern United States to survive raging waters.
    In their latest study, Vernerey and lead author Robert Wagner drew on mathematical simulations, or models, to try to figure out the mechanics underlying these lifeboats. They discovered, for example, that the faster the ants in a raft move, the more those rafts will expand outward, often forming long protrusions.

  • The interplay between topology and magnetism has a bright future

    A new review paper on magnetic topological materials by Andrei Bernevig, Princeton University, USA, Haim Beidenkopf, Weizmann Institute of Science, Israel, and Claudia Felser, Max Planck Institute for Chemical Physics of Solids, Dresden, Germany, introduces the new theoretical concepts that interweave magnetism and topology. It identifies and surveys potential new magnetic topological materials and discusses their possible future applications in spin and quantum electronics and as materials for efficient energy conversion. The review discusses the connection between topology, symmetry, and magnetism at a level suitable for graduate students in physics, chemistry, and materials science who have a basic knowledge of condensed matter physics.

  • Taking a systems approach to cyber security

    The frequency and severity of cyber-attacks on critical infrastructure are a concern for many governments, as are the costs associated with cyber security, making the efficient allocation of resources paramount. A new study proposes a framework featuring a more holistic picture of the cybersecurity landscape, along with a model that explicitly represents multiple dimensions of the potential impacts of successful cyberattacks.
    As critical infrastructure such as electric power grids becomes more sophisticated, it is also becoming increasingly reliant on digital networks and smart sensors to optimize operations, and thus more vulnerable to cyber-attacks. Over the past couple of years, cyber-attacks on critical infrastructure have become ever more complex and disruptive, causing systems to shut down, disrupting operations, or enabling attackers to remotely control affected systems. Importantly, the impacts of successful attacks on critical cyber-physical systems are multidimensional: they are not limited to losses incurred by the operator of the compromised system, but also include economic losses to other parties relying on its services, as well as public safety and environmental hazards.
    According to the study, just published in the journal Risk Analysis, this makes it important to have a tool that distinguishes between different dimensions of cyber-risk and allows for the design of security measures that make the most efficient use of limited resources. The authors set out to answer two main questions: first, whether it is possible to find vulnerabilities whose exploitation opens the way for several attack scenarios to proceed; and second, whether it is possible to take advantage of this knowledge and deploy countermeasures that simultaneously protect the system from several threats.
    One of the ways in which cyber threats are commonly managed is to analyze individual attack scenarios through risk matrices, prioritizing the scenarios according to their perceived urgency (depending on their likelihood of occurrence and the severity of their potential impacts), and then addressing them in order until all the resources available for cybersecurity are spent. According to the authors, however, this approach may lead to suboptimal resource allocations, because potential synergies between different attack scenarios, and among the available security measures, are not taken into consideration.
    “Existing assessment frameworks and cybersecurity models assume the perspective of the operator of the system and support her cost-benefit analysis, in other words, the cost of security measures versus potential losses in the case of a successful cyber-attack. Yet this approach is not satisfactory in the context of the security of critical infrastructure, where the potential impacts are multidimensional and may affect multiple stakeholders. We endeavored to address this problem by explicitly modeling multiple relevant impact dimensions of successful cyber-attacks,” explains lead author Piotr Żebrowski, a researcher in the Exploratory Modeling of Human-natural Systems Research Group of the IIASA Advancing Systems Analysis Program.
    To overcome this shortcoming, the researchers propose a quantitative framework that offers a more holistic picture of the cybersecurity landscape, encompassing multiple attack scenarios and thus allowing for a better appreciation of vulnerabilities. To do this, the team developed a Bayesian network model representing the cybersecurity landscape of a system. This method has gained popularity in recent years thanks to its ability to describe risks in probabilistic terms and to explicitly incorporate prior knowledge about them into a model that can be used to monitor exposure to cyber threats and allows for real-time updates if some vulnerabilities have been exploited.
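    To make the idea concrete, here is a minimal sketch of how a Bayesian-network-style model of a cybersecurity landscape can support probabilistic risk estimates and real-time updating. It is not the model from the study; the vulnerability nodes, attack scenarios, and all probabilities are invented for illustration.

    ```python
    # Prior probability that each vulnerability gets exploited (invented).
    p_vuln = {"phishing": 0.3, "unpatched_vpn": 0.2}

    # Each attack scenario depends on parent vulnerabilities; the second
    # element is the success probability given all parents are exploited.
    scenarios = {
        "data_theft":  (["phishing"], 0.6),
        "grid_outage": (["phishing", "unpatched_vpn"], 0.8),
    }

    def p_success(name):
        parents, p_given_parents = scenarios[name]
        p_parents = 1.0
        for v in parents:
            p_parents *= p_vuln[v]  # parents treated as independent here
        return p_parents * p_given_parents

    for s in scenarios:
        print(s, round(p_success(s), 3))  # data_theft 0.18, grid_outage 0.048

    # Real-time updating: once an exploit is actually observed, condition
    # on it (probability 1) and recompute the downstream risks.
    p_vuln["phishing"] = 1.0
    print(round(p_success("grid_outage"), 3))  # -> 0.16
    ```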
    In addition to this, the researchers built a multi-objective optimization model on top of the Bayesian network that explicitly represents multiple dimensions of the potential impacts of successful cyberattacks. The framework adopts a broader perspective than the standard cost-benefit analysis and allows for the formulation of more nuanced security objectives. The study also proposes an algorithm that is able to identify a set of optimal portfolios of security measures that simultaneously minimize various types of expected cyberattack impacts, while also satisfying budgetary and other constraints.
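    The portfolio search itself can also be sketched. The toy below, with invented security measures, costs, and impact reductions, enumerates all affordable portfolios and keeps those that are Pareto-optimal across two impact dimensions (operator losses and public-safety impacts); the study's actual algorithm is more sophisticated, so treat every name and number here as an assumption.

    ```python
    from itertools import combinations

    # name: (cost, reduction in expected operator loss,
    #        reduction in expected public-safety impact) -- all invented.
    measures = {
        "segmentation": (4, 30, 10),
        "monitoring":   (3, 15, 25),
        "training":     (2, 10, 15),
    }
    BUDGET = 6
    BASE = (100, 100)  # expected impacts with no measures deployed

    def evaluate(portfolio):
        cost = sum(measures[m][0] for m in portfolio)
        impacts = (BASE[0] - sum(measures[m][1] for m in portfolio),
                   BASE[1] - sum(measures[m][2] for m in portfolio))
        return cost, impacts

    # Enumerate affordable portfolios, then keep the non-dominated ones:
    # no other affordable portfolio is at least as good on both dimensions.
    candidates = []
    for r in range(len(measures) + 1):
        for p in combinations(measures, r):
            cost, impacts = evaluate(p)
            if cost <= BUDGET:
                candidates.append((p, impacts))

    pareto = [(p, i) for p, i in candidates
              if not any(o[0] <= i[0] and o[1] <= i[1] and o != i
                         for _, o in candidates)]
    print(pareto)  # two portfolios trading off the two impact dimensions
    ```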
    The researchers note that while the use of models like this in cybersecurity is not entirely unheard of, the practical implementation of such models usually requires an extensive study of a system's vulnerabilities. In their study, however, the team suggests how such a model can be built from a set of attack trees, a standard representation of attack scenarios commonly used by the industry in security assessments. The researchers demonstrated their method with the help of readily available attack trees presented in security assessments of electric power grids in the US.
    “Our method offers the possibility to explicitly represent and mitigate the exposure of different stakeholders other than system operators to the consequences of successful cyber-attacks. This allows relevant stakeholders to meaningfully participate in shaping the cybersecurity of critical infrastructure,” notes Żebrowski.
    In conclusion, the researchers highlight the importance of taking a systemic perspective on the issue of cyber security. This is crucial both for establishing a more accurate landscape of the cyber threats to critical infrastructure and for the efficient and inclusive management of important systems in the interest of multiple stakeholders.

  • How to make a 'computer' out of liquid crystals

    Researchers with the University of Chicago Pritzker School of Molecular Engineering have shown for the first time how to design the basic elements needed for logic operations using a kind of material called a liquid crystal — paving the way for a completely novel way of performing computations.
    The results, published Feb. 23 in Science Advances, are not likely to become transistors or computers right away, but the technique could point the way towards devices with new functions in sensing, computing and robotics.
    “We showed you can create the elementary building blocks of a circuit — gates, amplifiers, and conductors — which means you should be able to assemble them into arrangements capable of performing more complex operations,” said Juan de Pablo, the Liew Family Professor in Molecular Engineering and senior scientist at Argonne National Laboratory, and the senior corresponding author on the paper. “It’s a really exciting step for the field of active materials.”
    The details in the defect
    The research aimed to take a closer look at a type of material called a liquid crystal. The molecules in a liquid crystal tend to be elongated, and when packed together they adopt a structure with some order, like the straight rows of atoms in a diamond crystal — but instead of being stuck in place as in a solid, this structure can also shift around as a liquid does. Scientists are always on the lookout for such oddities, because unusual properties like these can serve as the basis of new technologies; liquid crystals, for example, are in the LCD TV you may already have in your home or in the screen of your laptop.
    One consequence of this odd molecular order is that there are spots in all liquid crystals where the ordered regions bump up against each other and their orientations don’t quite match, creating what scientists call “topological defects.” These spots move around as the liquid crystal moves.

  • Bonding exercise: Quantifying biexciton binding energy

    A rare spectroscopy technique performed at Swinburne University of Technology directly quantifies the energy required to bind two excitons together, providing for the first time a direct measurement of the biexciton binding energy in WS2.
    As well as improving our fundamental understanding of biexciton dynamics and characteristic energy scales, these findings directly inform those working to realise biexciton-based devices such as more compact lasers and chemical sensors.
    The study also brings exotic new quantum materials and quantum phases with novel properties a step closer.
    The study is a collaboration between FLEET researchers at Swinburne and the Australian National University.
    Understanding Excitons
    Particles of opposite charge in close proximity will feel the ‘pull’ of electrostatic forces, binding them together. The electrons of two hydrogen atoms are pulled in by opposing protons to form H2, for example, while other compositions of such electrostatic (Coulomb-mediated) attraction can result in more exotic molecular states.

  • Mammoths, meet the metaverse

    Fearsome dire wolves and saber-toothed cats no longer prowl around La Brea Tar Pits, but thanks to new research, anyone can bring these extinct animals back to life through augmented reality (AR). Dr. Matt Davis and colleagues at the Natural History Museum of Los Angeles County and La Brea Tar Pits collaborated with researchers and designers at the University of Southern California (USC) to create more than a dozen new, scientifically accurate virtual models of Ice Age animals, published recently in Palaeontologia Electronica.
    The team is investigating how AR impacts learning in museums, but soon realized there weren’t yet any accurate Ice Age animals in the metaverse that they could use. So they took all the latest paleontological research and made their own. The models were built in a blocky, low-poly style so that they could be scientifically accurate, yet simple enough to run on normal cell phones with limited processing power.
    According to study co-author Dr. William Swartout, Chief Technology Officer at the USC Institute for Creative Technologies, “The innovation of this approach is that it allows us to create scientifically accurate artwork for the metaverse without overcommitting to details where we still lack good fossil evidence.”
    The researchers hope this article will also bring more respect to paleoart, the kind of art that recreates what extinct animals might have looked like. “Paleoart can be very influential in how the public, and even scientists, understand fossil life,” said Dr. Emily Lindsey, Assistant Curator at La Brea Tar Pits and senior author of the study. A lot of paleoart is treated as an afterthought, though, and not subjected to the same rigorous scrutiny as other scientific research. This can lead to particularly bad reconstructions of extinct animals being propagated for generations in both the popular media and academic publications.
    “We think paleoart is a crucial part of paleontological research,” said Dr. Davis, the study’s lead author. “That’s why we decided to publish all the scientific research and artistic decisions that went into creating these models. This will make it easier for other scientists and paleoartists to critique and build off our team’s work.”
    Dr. Davis notes that it is just as important to acknowledge what we don’t know about these animals’ appearances as it is to record what we do know. For example, we can accurately depict the shaggy fur of Shasta ground sloths because paleontologists have found a whole skeleton of this species with hair and skin still preserved. But for mastodons, paleontologists have only found a few strands of hair. Their thick fur pelt was an artistic decision. Dr. Davis and colleagues hope that other paleoartists and scientists will follow their example by publishing all the research that goes into their reconstructions of extinct species. It will lead to better and more accurate paleoart for everyone.
    This research was funded by an NSF AISL collaborative grant (1811014; 1810984) led by Dr. Benjamin Nye of the USC Institute for Creative Technologies, Dr. Gale Sinatra of the USC Rossier School of Education, Dr. William Swartout of the USC Institute for Creative Technologies, and Dr. Emily Lindsey of La Brea Tar Pits.
    Story Source:
    Materials provided by Natural History Museum of Los Angeles County. Note: Content may be edited for style and length.

  • Deciphering algorithms used by ants and the Internet

    Scientists found that ants and other natural systems use optimization algorithms similar to those used by engineered systems, including the Internet. These algorithms invest incrementally more resources as long as signs are encouraging but pull back quickly at the first sign of trouble. The systems are designed to be robust, allowing for portions to fail without harming the entire system. Understanding how these algorithms work in the real world may help solve engineering problems, whereas engineered systems may offer clues to understanding the behavior of ants, cells, and other natural systems.
    Engineers sometimes turn to nature for inspiration. Cold Spring Harbor Laboratory Associate Professor Saket Navlakha and research scientist Jonathan Suen found that adjustment algorithms — the same feedback control process by which the Internet optimizes data traffic — are used by several natural systems to sense and stabilize behavior, including ant colonies, cells, and neurons.
    Internet engineers route data around the world in small packets, which are analogous to ants. As Navlakha explains:
    “The goal of this work was to bring together ideas from machine learning and Internet design and relate them to the way ant colonies forage.”
    The same algorithm used by Internet engineers is used by ants when they forage for food. At first, the colony may send out a single ant. When the ant returns, it provides information about how much food it found and how long it took to get it. The colony then sends out two ants. If they return with food, the colony may send out three, then four, five, and so on. But if ten ants are sent out and most do not return, the colony does not merely decrease the number it sends to nine. Instead, it cuts the number by a large amount, to a fraction (say half) of what it sent before: only five ants. In other words, the number of ants slowly adds up while the signals are positive, but is cut dramatically when the information is negative. Navlakha and Suen note that the system works even if individual ants get lost, and that it parallels a particular type of “additive-increase/multiplicative-decrease algorithm” used on the Internet.
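    The additive-increase/multiplicative-decrease rule is simple enough to sketch in a few lines of code. The snippet below is a generic illustration of AIMD rather than the authors' model; the step size, cut factor, and success criterion are arbitrary assumptions.

    ```python
    def aimd_update(n_sent, success, increase=1, cut_factor=0.5, minimum=1):
        """One additive-increase/multiplicative-decrease step.

        n_sent:  how many ants (or data packets) went out last round.
        success: whether the round went well (most ants returned with
                 food / packets were acknowledged).
        """
        if success:
            # Positive feedback: grow slowly, one unit at a time.
            return n_sent + increase
        # Negative feedback: cut back sharply by a multiplicative factor,
        # never dropping below a minimum scouting party.
        return max(minimum, int(n_sent * cut_factor))

    # Toy run: nine good rounds build the party up to ten ants,
    # then one bad round halves it to five.
    n = 1
    for outcome in [True] * 9 + [False]:
        n = aimd_update(n, outcome)
    print(n)  # -> 5
    ```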
    Suen thinks ants might inspire new ways to protect computer systems against hackers or cyberattacks. Engineers could emulate how nature withstands a range of threats to health and viability. Suen explains:
    “Nature has been shown to be incredibly robust in a lot of aspects responding to changing environments. In cybersecurity [however] we find that a lot of our systems can be tampered with, can be easily broken, and are simply not robust. We wanted to look at nature, which survives across all sorts of natural disasters, evolutionary changes, human changes, and learn a lot from how nature changes its systems dynamically to survive.”
    While Suen plans to apply nature’s algorithms to engineering programs, Navlakha would like to see if engineering solutions might offer alternative approaches to understanding gene regulation and immune feedback control. Navlakha hopes that “successful strategies in one realm could lead to improvements in the other.”
    Story Source:
    Materials provided by Cold Spring Harbor Laboratory. Original written by Eliene Augenbraun. Note: Content may be edited for style and length.