More stories

  • Measuring tiny quantum effects with high precision

    Most quantum information technologies, including quantum computers (considered a step beyond supercomputers) and quantum communication that cannot be hacked, are based on the principle of quantum entanglement. However, entangled systems live at microscopic scales and are fragile. Quantum metrology, which provides enhanced sensitivity over conventional precision measurements, has likewise relied mainly on quantum entanglement, making it hard to implement in real-life applications. Recently, a Korean research team has proposed a method that achieves quantum-metrology precision without using entangled resources.
    A POSTECH research team led by Professor Yoon-Ho Kim and Dr. Yosep Kim (Department of Physics) has discovered a weak-value amplification (WVA) method that reaches the Heisenberg limit without using quantum entanglement. The Heisenberg limit refers to the ultimate precision achievable in quantum metrology.
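    For context, the Heisenberg limit is usually contrasted with the standard quantum limit. The scalings below are the standard textbook statements for phase estimation with N uses of a probe, not figures from the POSTECH paper:

```latex
% Phase-estimation error after N uses of the probe:
% standard quantum limit (independent probes) vs. Heisenberg limit (optimal quantum strategy)
\Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad \Delta\phi_{\mathrm{HL}} \sim \frac{1}{N}
```

    Reaching the Heisenberg limit thus means the estimation error shrinks as 1/N rather than 1/sqrt(N) as resources are added.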
    WVA-based metrology, one of the methods for measuring quantum effects, is an approach to extracting the maximum information about a quantum system with minimal disturbance: it measures the system efficiently without collapsing its quantum state.
    By using the weak value measured in this way, it is possible to amplify tiny physical effects such as ultrasmall phase shifts. Though this method produces fewer errors than conventional ones, it has a critical limitation: a lower detection probability. Methods to overcome this limitation have been proposed that utilize entanglement, but the difficulty of generating large-scale quantum entanglement has been a major obstacle to realizing Heisenberg-limited metrology.
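    The trade-off described here can be seen in a minimal single-qubit example. The sketch below computes a standard Aharonov-Albert-Vaidman weak value with nearly orthogonal pre- and post-selected states; the specific states and the use of NumPy are illustrative choices, not the scheme used in the study:

```python
# Minimal single-qubit illustration of weak-value amplification.
# The states and parameters are generic textbook choices, not those of the POSTECH experiment.
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # observable being weakly measured

def weak_value(pre, post, obs):
    """Aharonov-Albert-Vaidman weak value <post|obs|pre> / <post|pre>."""
    return (post.conj() @ obs @ pre) / (post.conj() @ pre)

eps = 0.01  # how close the post-selected state is to being orthogonal to the pre-selected one
pre = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)], dtype=complex)                        # |i>
post = np.array([np.cos(3 * np.pi / 4 - eps), np.sin(3 * np.pi / 4 - eps)], dtype=complex)   # |f>

A_w = weak_value(pre, post, sigma_z)
p_post = abs(post.conj() @ pre) ** 2  # probability that the post-selection succeeds

print("weak value A_w      =", round(A_w.real, 1))   # ~ -100: far outside the eigenvalues [-1, +1]
print("post-selection prob =", float(p_post))        # ~ 1e-4: only a tiny fraction of trials survive
```

    With these parameters the weak value is roughly -100, far outside the observable's eigenvalue range of [-1, +1], while only about one trial in ten thousand survives post-selection, which is the low detection probability mentioned above.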
    The researchers confirmed that in weak-value amplification the Heisenberg limit can be reached without entanglement, through iterative interactions between different quantum states. They explain that the enhancement results from the local, iterative interactions between each particle of an entangled system and a meter, rather than from the quantum entanglement itself.
    “This study will contribute to the practical use of quantum metrology by verifying that entanglement is not an absolute requirement for reaching the Heisenberg limit,” remarked Professor Yoon-Ho Kim who led the study.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH).

  • Computer games in the classroom: Educational success depends on the teacher

    Future teachers see educational potential in computer games, a study shows. Teacher training should therefore address their use in the classroom.
    New study results by a research team at the University of Cologne show that future teachers increasingly want to use computer games in the classroom. The study identifies particularly relevant aspects that should be addressed in teacher training programmes in order to support this intention. The study results have been published under the title ‘Teaching with digital games: How intentions to adopt digital game-based learning are related to personal characteristics of pre-service teachers’ in the British Journal of Educational Technology.
    Computer games play a major role in the lives and media use of children and adolescents. However, current school teaching rarely takes this medium into account. The future generation of teachers currently being trained at universities could change this. ‘In our current study, we focused on the teachers of tomorrow and how they can be better prepared to employ computer games in the classroom, because computer games have great potential for teaching’, said Marco Rüth from the University of Cologne’s Psychology Department.
    In previous studies, the authors had already shown that as a learning tool in the classroom, computer games can support students’ skills development. They also found that after using computer games in class, students can reflect critically and constructively on their experiences with the medium. Based on this, the researchers surveyed 402 teacher trainees from German-speaking universities online about their intention to integrate computer games as learning tools and as an object of reflection in their future school lessons. The team examined 21 personal characteristics, including perceived effectiveness of computer games, knowledge about computer games, and fear of using computer games in the classroom. ‘Above all, the perceived effectiveness of computer games and perceived connections of computer games to curricula play a central role in the intention of teacher trainees to actually want to use them in school lessons,’ Professor Kai Kaspar explained.
    The current survey also revealed differences between the scenarios in which computer games are used: ‘If teacher trainees want to use computer games to promote the competencies of students, they pay particular attention to their own fear of using computer games and the extent to which people important to them think they should use computer games,’ explained Marco Rüth. ‘If, on the other hand, they want to use computer games for media-critical discussions, the focus is instead on the effort involved for them.’
    Since computer games are currently rarely included as a relevant medium in teacher training programmes, the researchers recommend that, above all, insights into the effectiveness of computer games and their relevance to curricula should be included in teacher training programmes. Likewise, teacher trainees should be made aware of potential pitfalls in practical implementation and be able to deal with them, so that competencies for teaching with computer games are promoted in the long term. ‘This would require not only adjustments to the curriculum of the teacher training programme, but also further support services and research findings so that teachers in their later school practice know exactly when and how they can use computer games effectively in the classroom,’ said Professor Kaspar.
    Story Source:
    Materials provided by University of Cologne.

  • Dark energy: Neutron stars will tell us if it's only an illusion

    Within Einstein’s theory, a huge amount of mysterious dark energy is necessary to explain cosmological phenomena such as the accelerated expansion of the Universe. But what if dark energy were just an illusion and general relativity itself had to be modified? A new SISSA study, published in Physical Review Letters, offers a new approach to answering this question. Thanks to a huge computational and mathematical effort, scientists produced the first-ever simulation of merging binary neutron stars in theories beyond general relativity that reproduce dark-energy-like behavior on cosmological scales. This allows the comparison of Einstein’s theory and modified versions of it and, with sufficiently accurate data, may solve the dark energy mystery.
    For about 100 years now, general relativity has been very successful at describing gravity in a variety of regimes, passing all experimental tests on Earth and in the solar system. However, to explain cosmological observations such as the observed accelerated expansion of the Universe, we need to introduce dark components, such as dark matter and dark energy, which remain a mystery.
    Enrico Barausse, astrophysicist at SISSA (Scuola Internazionale Superiore di Studi Avanzati) and principal investigator of the ERC grant GRAMS (GRavity from Astrophysical to Microscopic Scales), questions whether dark energy is real or whether it may instead be interpreted as a breakdown of our understanding of gravity. “The existence of dark energy could be just an illusion,” he says. “The accelerated expansion of the Universe might be caused by some yet unknown modifications of general relativity, a sort of ‘dark gravity’.”
    The merger of neutron stars offers a unique situation to test this hypothesis because gravity around them is pushed to the extreme. “Neutron stars are the densest stars that exist, typically only 10 kilometers in radius but with a mass between one and two times the mass of our Sun,” explains the scientist. “This makes gravity and the spacetime around them extreme, allowing for abundant production of gravitational waves when two of them collide. We can use the data acquired during such events to study the workings of gravity and test Einstein’s theory in a new window.”
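    As a rough back-of-the-envelope check on how extreme this is (a generic estimate, not a figure from the study), the dimensionless compactness of such a star can be computed from those numbers:

```latex
% Compactness GM/(Rc^2) for M ~ 1.4 solar masses (~2.8e30 kg) and R ~ 10 km,
% compared with the Sun; a Schwarzschild black hole horizon sits at GM/(Rc^2) = 1/2.
\frac{GM}{Rc^{2}} \approx \frac{(6.7\times10^{-11})(2.8\times10^{30})}{(10^{4})(9\times10^{16})} \approx 0.2,
\qquad \left.\frac{GM}{Rc^{2}}\right|_{\odot} \approx 2\times10^{-6}
```

    Spacetime around a neutron star is thus distorted roughly a hundred thousand times more strongly than around the Sun, which is why neutron star mergers probe gravity in a regime inaccessible within the solar system.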
    In this study, published in Physical Review Letters, SISSA scientists, in collaboration with physicists from the Universitat de les Illes Balears in Palma de Mallorca, produced the first simulation of merging binary neutron stars in theories of modified gravity relevant for cosmology. “This type of simulation is extremely challenging,” clarifies Miguel Bezares, first author of the paper, “because of the highly non-linear nature of the problem. It requires a huge computational effort, months of runtime on supercomputers, that was made possible by the agreement between SISSA and the CINECA consortium, as well as by novel mathematical formulations that we developed. These obstacles represented major roadblocks for many years, until our first simulation.”
    Thanks to these simulations, researchers are finally able to compare general relativity and modified gravity. “Surprisingly, we found that the ‘dark gravity’ hypothesis is just as good as general relativity at explaining the data acquired by the LIGO and Virgo interferometers during past binary neutron star collisions. Indeed, the differences between the two theories in these systems are quite subtle, but they may be detectable by next-generation gravitational interferometers, such as the Einstein Telescope in Europe and Cosmic Explorer in the USA. This opens the exciting possibility of using gravitational waves to discriminate between dark energy and ‘dark gravity’,” Barausse concludes.
    Story Source:
    Materials provided by Scuola Internazionale Superiore di Studi Avanzati.

  • The physics of fire ant rafts could help engineers design swarming robots

    Noah rode out his flood in an ark. Winnie-the-Pooh had an upside-down umbrella. Fire ants (Solenopsis invicta), meanwhile, form floating rafts made up of thousands or even hundreds of thousands of individual insects.
    A new study by engineers at the University of Colorado Boulder lays out the simple physics-based rules that govern how these ant rafts morph over time: shrinking, expanding or growing long protrusions like an elephant’s trunk. The team’s findings could one day help researchers design robots that work together in swarms or next-generation materials in which molecules migrate to fix damaged spots.
    The results appeared recently in the journal PLOS Computational Biology.
    “The origins of such behaviors lie in fairly simple rules,” said Franck Vernerey, primary investigator on the new study and professor in the Paul M. Rady Department of Mechanical Engineering. “Single ants are not as smart as one may think, but, collectively, they become very intelligent and resilient communities.”
    Fire ants form these giant floating blobs of wriggling insects after storms in the southeastern United States to survive raging waters.
    In their latest study, Vernerey and lead author Robert Wagner drew on mathematical simulations, or models, to try to figure out the mechanics underlying these lifeboats. They discovered, for example, that the faster the ants in a raft move, the more those rafts will expand outward, often forming long protrusions.
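    That qualitative trend can be reproduced with a toy agent-based model: self-propelled particles held together by short-range cohesion spread out more as their activity increases. The rules and parameter values below are illustrative assumptions, not the model published in the paper:

```python
# Toy agent-based sketch: self-propelled "ants" with short-range cohesion.
# Rules and parameter values are illustrative assumptions, not the published model.
import numpy as np

def raft_spread(speed, n=150, steps=800, attract=0.05, radius=1.5, seed=0):
    """Return the mean distance of ants from the raft centroid after `steps` updates."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-2.0, 2.0, size=(n, 2))           # start as a compact blob
    for _ in range(steps):
        # cohesion: each ant drifts toward the centroid of neighbours within `radius`
        diff = pos[None, :, :] - pos[:, None, :]         # diff[i, j] = pos[j] - pos[i]
        dist = np.linalg.norm(diff, axis=-1)
        near = (dist < radius) & (dist > 0)
        pull = np.where(near[..., None], diff, 0.0).sum(axis=1)
        counts = np.maximum(near.sum(axis=1, keepdims=True), 1)
        pos += attract * pull / counts
        # activity: a random self-propelled step whose length is the ant's speed
        angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
        pos += speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()

for v in (0.01, 0.03, 0.06):
    print(f"ant speed {v:.2f} -> raft spread {raft_spread(v):.2f}")
```

    Faster agents jostle against the cohesive pull more strongly, so the cluster's steady-state size grows with activity, mirroring the expansion the researchers report from their simulations.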

  • The interplay between topology and magnetism has a bright future

    A new review paper on magnetic topological materials by Andrei Bernevig (Princeton University, USA), Haim Beidenkopf (Weizmann Institute of Science, Israel), and Claudia Felser (Max Planck Institute for Chemical Physics of Solids, Dresden, Germany) introduces the new theoretical concepts that interweave magnetism and topology. It identifies and surveys potential new magnetic topological materials and mentions their possible future applications in spin and quantum electronics and as materials for efficient energy conversion. The review discusses the connection between topology, symmetry and magnetism at a level suitable for graduate students in physics, chemistry and materials science who have a basic knowledge of condensed matter physics.


  • Taking a systems approach to cyber security

    The frequency and severity of cyber-attacks on critical infrastructure are a subject of concern for many governments, as are the costs associated with cyber security, making the efficient allocation of resources paramount. A new study proposes a framework featuring a more holistic picture of the cybersecurity landscape, along with a model that explicitly represents multiple dimensions of the potential impacts of successful cyberattacks.
    As critical infrastructure such as electric power grids becomes more sophisticated, it is also becoming increasingly reliant on digital networks and smart sensors to optimize operations, and thus more vulnerable to cyber-attacks. Over the past couple of years, cyber-attacks on critical infrastructure have become ever more complex and disruptive, causing systems to shut down, disrupting operations, or enabling attackers to remotely control affected systems. Importantly, the impacts of successful attacks on critical cyber-physical systems are multidimensional in nature: they are not limited to losses incurred by the operators of the compromised system, but also include economic losses to other parties relying on their services, as well as public safety and environmental hazards.
    According to the study just published in the journal Risk Analysis, this makes it important to have a tool that distinguishes between different dimensions of cyber-risks and also allows for the design of security measures that make the most efficient use of limited resources. The authors set out to answer two main questions in this regard: first, whether it is possible to find vulnerabilities whose exploitation opens the way for several attack scenarios to proceed; and second, whether it is possible to take advantage of this knowledge and deploy countermeasures to simultaneously protect the system from several threats.
    One of the ways in which cyber threats are commonly managed is to analyse individual attack scenarios through risk matrices, prioritizing the scenarios according to their perceived urgency (depending on their likelihood of occurrence and the severity of potential impacts), and then addressing them in order until all the resources available for cybersecurity are spent. According to the authors, this approach may however lead to suboptimal resource allocations, given that potential synergies between different attack scenarios and among available security measures are not taken into consideration.
    “Existing assessment frameworks and cybersecurity models assume the perspective of the operator of the system and support her cost-benefit analysis, in other words, the cost of security measures versus potential losses in the case of a successful cyber-attack. Yet, this approach is not satisfactory in the context of security of critical infrastructure, where the potential impacts are multidimensional and may affect multiple stakeholders. We endeavored to address this problem by explicitly modeling multiple relevant impact dimensions of successful cyber-attacks,” explains lead author Piotr Żebrowski, a researcher in the Exploratory Modeling of Human-natural Systems Research Group of the IIASA Advancing Systems Analysis Program.
    To overcome this shortcoming, the researchers propose a quantitative framework that features a more holistic picture of the cybersecurity landscape that encompasses multiple attack scenarios, thus allowing for a better appreciation of vulnerabilities. To do this, the team developed a Bayesian network model representing a cybersecurity landscape of a system. This method has gained popularity in the last few years due to its ability to describe risks in probabilistic terms and to explicitly incorporate prior knowledge about them into a model that can be used to monitor the exposure to cyber threats and allow for real-time updates if some vulnerabilities have been exploited.
    In addition to this, the researchers built a multi-objective optimization model on top of the Bayesian network that explicitly represents multiple dimensions of the potential impacts of successful cyberattacks. The framework adopts a broader perspective than the standard cost-benefit analysis and allows for the formulation of more nuanced security objectives. The study also proposes an algorithm that is able to identify a set of optimal portfolios of security measures that simultaneously minimize various types of expected cyberattack impacts, while also satisfying budgetary and other constraints.
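    A stripped-down sketch of this portfolio idea is shown below. It models a handful of attack scenarios with impacts in two dimensions, lets countermeasures scale the scenario probabilities, and enumerates the Pareto-optimal portfolios that fit a budget. All scenario names, probabilities, impact figures and costs are invented for illustration; the published framework derives the probabilities from a Bayesian network built on attack trees rather than from this flat list:

```python
# Toy multi-objective security-portfolio search (illustrative numbers only).
from itertools import combinations

# scenario name: (baseline probability, {impact dimension: loss if the attack succeeds})
scenarios = {
    "data_exfiltration": (0.30, {"operator_loss": 40, "public_safety": 0}),
    "gas_release":       (0.10, {"operator_loss": 5,  "public_safety": 50}),
    "grid_shutdown":     (0.10, {"operator_loss": 20, "public_safety": 10}),
}

# countermeasure name: (cost, {scenario: factor by which its probability is multiplied})
measures = {
    "mfa_rollout":   (4, {"data_exfiltration": 0.2}),
    "plc_hardening": (4, {"gas_release": 0.2}),
    "ot_monitoring": (2, {"grid_shutdown": 0.5, "data_exfiltration": 0.9, "gas_release": 0.9}),
}
BUDGET = 6

def expected_impacts(portfolio):
    """Expected loss in each impact dimension, given the deployed portfolio."""
    totals = {"operator_loss": 0.0, "public_safety": 0.0}
    for name, (prob, impacts) in scenarios.items():
        for m in portfolio:
            prob *= measures[m][1].get(name, 1.0)   # each measure scales the scenario probability
        for dim, loss in impacts.items():
            totals[dim] += prob * loss
    return totals

# enumerate every affordable portfolio, then keep only the Pareto-optimal ones
candidates = []
for r in range(len(measures) + 1):
    for combo in combinations(measures, r):
        if sum(measures[m][0] for m in combo) <= BUDGET:
            candidates.append((combo, expected_impacts(combo)))

pareto = [c for c in candidates
          if not any(all(o[1][d] <= c[1][d] for d in c[1]) and o[1] != c[1]
                     for o in candidates)]
for combo, imp in pareto:
    print(sorted(combo), {d: round(v, 2) for d, v in imp.items()})
```

    With these made-up numbers the budget cannot cover both of the strong, scenario-specific measures, so the search returns two non-dominated portfolios, one favouring operator losses and one favouring public safety, which is precisely the kind of trade-off a multi-objective formulation is meant to expose and a single cost-benefit figure would hide.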
    The researchers note that while the use of models like this in cybersecurity is not entirely unheard of, the practical implementation of such models usually requires extensive study of system vulnerabilities. In their study, the team, however, suggests how such a model can be built based on a set of attack trees, which is a standard representation of attack scenarios commonly used by the industry in security assessments. The researchers demonstrated their method with the help of readily available attack trees presented in security assessments of electric power grids in the US.
    “Our method offers the possibility to explicitly represent and mitigate the exposure of different stakeholders other than system operators to the consequences of successful cyber-attacks. This allows relevant stakeholders to meaningfully participate in shaping the cybersecurity of critical infrastructure,” notes Żebrowski.
    In conclusion, the researchers highlight that it is important to have a systemic perspective on the issue of cyber security. This is crucial both in terms of establishing a more accurate landscape of cyber threats to critical infrastructure and in the efficient and inclusive management of important systems in the interest of multiple stakeholders.

  • How to make a 'computer' out of liquid crystals

    Researchers with the University of Chicago Pritzker School of Molecular Engineering have shown for the first time how to design the basic elements needed for logic operations using a kind of material called a liquid crystal — paving the way for a completely novel way of performing computations.
    The results, published Feb. 23 in Science Advances, are not likely to yield transistors or computers right away, but the technique could point the way towards devices with new functions in sensing, computing and robotics.
    “We showed you can create the elementary building blocks of a circuit — gates, amplifiers, and conductors — which means you should be able to assemble them into arrangements capable of performing more complex operations,” said Juan de Pablo, the Liew Family Professor in Molecular Engineering and senior scientist at Argonne National Laboratory, and the senior corresponding author on the paper. “It’s a really exciting step for the field of active materials.”
    The details in the defect
    The research aimed to take a closer look at a type of material called a liquid crystal. The molecules in a liquid crystal tend to be elongated, and when packed together they adopt a structure that has some order, like the straight rows of atoms in a diamond crystal — but instead of being stuck in place as in a solid, this structure can also shift around as a liquid does. Scientists are always looking for these kinds of oddities because they can utilize these unusual properties as the basis of new technologies; liquid crystals, for example, are in the LCD TV you may already have in your home or in the screen of your laptop.
    One consequence of this odd molecular order is that there are spots in all liquid crystals where the ordered regions bump up against each other and their orientations don’t quite match, creating what scientists call “topological defects.” These spots move around as the liquid crystal moves.

  • Bonding exercise: Quantifying biexciton binding energy

    A rare spectroscopy technique performed at Swinburne University of Technology directly quantifies the energy required to bind two excitons together, providing for the first time a direct measurement of the biexciton binding energy in WS2.
    As well as improving our fundamental understanding of biexciton dynamics and characteristic energy scales, these findings directly inform those working to realise biexciton-based devices such as more compact lasers and chemical sensors.
    The study also brings exotic new quantum materials and quantum phases with novel properties a step closer.
    The study is a collaboration between FLEET researchers at Swinburne and the Australian National University.
    Understanding Excitons
    Particles of opposite charge in close proximity will feel the ‘pull’ of electrostatic forces, binding them together. The electrons of two hydrogen atoms are pulled in by opposing protons to form H2, for example, while other compositions of such electrostatic (Coulomb-mediated) attraction can result in more exotic molecular states.
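    For a rough sense of the energy scales involved, the hydrogen-like (Wannier-Mott) picture treats a single exciton as a charge pair bound like a hydrogen atom, with the binding energy rescaled by the electron-hole reduced mass and the dielectric screening of the host crystal. This is a generic textbook estimate, not a result of the Swinburne measurement, and the biexciton binding energy discussed above is the additional, typically much smaller, energy gained when two such excitons bind:

```latex
% Hydrogen-like (Wannier-Mott) estimate of the single-exciton binding energy:
% the hydrogen Rydberg rescaled by the electron-hole reduced mass mu and dielectric constant eps_r.
E_b \simeq \frac{\mu}{m_e}\,\frac{1}{\varepsilon_r^{2}}\,\mathrm{Ry},
\qquad \mathrm{Ry} = 13.6\ \mathrm{eV},
\qquad \frac{1}{\mu} = \frac{1}{m_e^{*}} + \frac{1}{m_h^{*}}
```

    Because dielectric screening in an atomically thin semiconductor such as WS2 is weak, these binding energies end up far larger than in bulk semiconductors, which is part of what makes biexciton physics experimentally accessible in this material.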