More stories

  • Researchers engineer electrically tunable graphene devices to study rare physics

    An international team, co-led by researchers at The University of Manchester’s National Graphene Institute (NGI) in the UK and the Penn State College of Engineering in the US, has developed a tunable graphene-based platform that allows for fine control over the interaction between light and matter in the terahertz (THz) spectrum to reveal rare phenomena known as exceptional points. The team published their results today (8 April) in Science.
    The work could advance optoelectronic technologies to better generate, control and sense light, and could potentially benefit communications, according to the researchers. They demonstrated a way to control THz waves, which exist at frequencies between those of microwaves and infrared waves. The feat could contribute to the development of ‘beyond-5G’ wireless technology for high-speed communication networks.
    Weak and strong interactions
    Light and matter can couple, interacting at different levels: weakly, where they might be correlated but do not change each other’s constituents; or strongly, where their interactions can fundamentally change the system. The ability to control how the coupling shifts from weak to strong and back again has been a major challenge to advancing optoelectronic devices — a challenge researchers have now solved.
    “We have demonstrated a new class of optoelectronic devices using concepts of topology — a branch of mathematics studying properties of geometric objects,” said co-corresponding author Coskun Kocabas, professor of 2D device materials at The University of Manchester. “Using exceptional point singularities, we show that topological concepts can be used to engineer optoelectronic devices that enable new ways to manipulate terahertz light.”
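    For orientation, an exceptional point can be illustrated with a textbook two-level non-Hermitian Hamiltonian (a generic example, not the team's device model), with mode frequency ω0, loss/gain rate γ and coupling g:

```latex
H = \begin{pmatrix} \omega_0 - i\gamma & g \\ g & \omega_0 + i\gamma \end{pmatrix},
\qquad
\lambda_{\pm} = \omega_0 \pm \sqrt{g^{2} - \gamma^{2}} .
```

    At g = γ the two eigenvalues and their eigenvectors coalesce into one; that degeneracy is the exceptional point. Electrically tuning a device parameter is, in this generic picture, one way a system could be swept through such a singularity.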
    Kocabas is also affiliated with the Henry Royce Institute for Advanced Materials, headquartered in Manchester.

  • Nanoparticle trapped between mirrors works as a quantum sensor

    Sensors are a pillar of the Internet of Things, providing the data to control all sorts of objects. Here, precision is essential, and this is where quantum technologies could make a difference. Researchers are now demonstrating how nanoparticles in tiny optical resonators can be transferred into the quantum regime and used as high-precision sensors.
    Advances in quantum physics offer new opportunities to significantly improve the precision of sensors and thus enable new technologies. A team led by Oriol Romero-Isart of the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences and the Department of Theoretical Physics at the University of Innsbruck, together with a team led by Romain Quidant of ETH Zurich, is now proposing a new concept for a high-precision quantum sensor. The researchers suggest that the motional fluctuations of a nanoparticle trapped in a microscopic optical resonator could be reduced significantly below the zero-point motion by exploiting the fast unstable dynamics of the system.
    Particle caught between mirrors
    Mechanical quantum squeezing reduces the uncertainty of motional fluctuations below the zero-point motion, and it has been experimentally demonstrated in the past with micromechanical resonators in the quantum regime. The researchers now propose a novel approach, especially tailored to levitated mechanical systems. “We demonstrate that a properly designed optical cavity can be used to rapidly and strongly squeeze the motion of a levitated nanoparticle,” says Katja Kustura of Oriol Romero-Isart’s team in Innsbruck. In an optical resonator, light is reflected between mirrors and it interacts with the levitated nanoparticle. Such interaction can give rise to dynamical instabilities, which are often considered undesirable.
    The researchers now show how they can instead be used as a resource. “In the present work, we show how, by properly controlling these instabilities, the resulting unstable dynamics of a mechanical oscillator inside an optical cavity leads to mechanical squeezing,” Kustura says. The new protocol is robust in the presence of dissipation, making it particularly feasible in levitated optomechanics. In the paper, published in the journal Physical Review Letters, the researchers apply this approach to a silica nanoparticle coupled to a microcavity via coherent scattering. “This example shows that we can squeeze the particle by orders of magnitude below the zero-point motion, even if starting from an initial thermal state,” says Oriol Romero-Isart.
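    For reference, these are the standard textbook definitions behind "squeezing below the zero-point motion" (not figures taken from the paper): a particle of mass m trapped at frequency Ω has zero-point fluctuation x_zpf, and squeezing is often quoted in decibels relative to it,

```latex
x_{\mathrm{zpf}} = \sqrt{\frac{\hbar}{2 m \Omega}},
\qquad
\text{squeezing (dB)} = 10 \log_{10}\!\left(\frac{x_{\mathrm{zpf}}^{2}}{\langle \Delta \hat{x}^{2} \rangle}\right).
```

    Squeezing "by orders of magnitude below the zero-point motion" therefore means driving the position variance far below x_zpf².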
    The work provides a new use of optical cavities as mechanical quantum squeezers, and it suggests a viable new route in levitated optomechanics beyond quantum ground-state cooling. Micro-resonators thus offer an interesting new platform for the design of quantum sensors, which could be used, for example, in satellite missions, self-driving cars and seismology. The research in Innsbruck and Zurich was financially supported by the European Union.
    Story Source:
    Materials provided by the University of Innsbruck.

  • Blockchain offers a solution to post-Brexit border digitization to build supply chain trust, research shows

    As a result of the UK leaving the European Union, logistics firms have faced additional friction at UK borders. Consequently, there have been calls for automated digital borders, but few such systems exist. Surrey researchers have now discovered that a blockchain-based platform can improve supply chain efficiency and trust development at our borders.
    Blockchain is a system in which a record of transactions, made in bitcoin or another cryptocurrency, is maintained across several computers that are linked in a peer-to-peer network. The blockchain-based platform studied in this case is known as the RFIT platform, a pilot blockchain implementation that links data together and ensures that this data is unalterable. This end-to-end visibility of unchangeable data helps to build trust between supply chain partners.
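    As a minimal sketch of the tamper-evidence idea described above (a generic hash chain, not the RFIT platform itself; the shipment records are invented), each record commits to the hash of the previous one, so editing any earlier entry breaks every later link:

```python
# Minimal hash-chain sketch: each block commits to the previous block's hash,
# so editing any earlier record invalidates all subsequent hashes.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev_hash": prev}):
            return False
    return True

chain: list = []
append_block(chain, {"shipment": "wine", "origin": "AU", "docs": "customs form"})
append_block(chain, {"shipment": "wine", "status": "cleared UK border"})
print(verify(chain))          # True
chain[0]["data"]["docs"] = "edited"
print(verify(chain))          # False: tampering is detectable
```

    Production platforms typically add consensus, digital signatures and access control on top of this basic structure.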
    Professor of Digital Transformation at the University of Surrey and co-author of the study, Glenn Parry, said:
    “Since the UK’s withdrawal from the EU Customs Union, businesses have faced increased paperwork, border delays and higher costs. A digitally managed border system that identifies trusted shipments appears an obvious solution, but we needed to define what trust actually means and how a digital system can help.
    “Supply chain participants have long recognised the importance of trust in business relationships. Trust is the primary reason companies cite when supply chain relationships break down, which is especially true at customs borders. Current supply chain friction at UK borders is replicated across the world. Delay is caused by a lack of trust in goods flows, and hence a need to inspect.”
    Surrey academics stressed that the introduction of this platform does not remove the need for trust and trust-building processes in established buyer-supplier relationships. It’s crucial that blockchain platform providers continue to build a position of trust with all participants.
    In the case of the import of wine from Australia to the UK, researchers found that the RFIT platform can employ a blockchain layer to make documentation unalterable. The platform facilitates the building of trust across the supply chain by providing a single source of validated data and increasing visibility. Reduced data asymmetry between border agencies and suppliers improves accuracy, timeliness, and integrity.
    Through its 2025 UK Border Strategy, the UK Government is seeking to establish technology leadership in reducing friction in cross-border supply chains.
    Visiting Fellow at Surrey and co-author of the study, Mike Brookbanks, said:
    “The broader findings from the case study are influencing the UK Government on how to address the current challenges with supply chains at UK customs borders. We hope our work will also influence the Government’s current focus on trust ecosystems, as part of the single trade window (STW) initiative. We truly believe that the use of this innovative digital technology will form the Government’s first step in developing a utility trade platform, encouraging broader digitisation of our borders.”
    Story Source:
    Materials provided by the University of Surrey.

  • AI predicts if — and when — someone will have cardiac arrest

    A new artificial intelligence-based approach can predict, significantly more accurately than a doctor, if and when a patient could die of cardiac arrest. The technology, built on raw images of patients’ diseased hearts and patient backgrounds, stands to revolutionize clinical decision making and increase survival from sudden and lethal cardiac arrhythmias, one of medicine’s deadliest and most puzzling conditions.
    The work, led by Johns Hopkins University researchers, is detailed today in Nature Cardiovascular Research.
    “Sudden cardiac death caused by arrhythmia accounts for as many as 20 percent of all deaths worldwide and we know little about why it’s happening or how to tell who’s at risk,” said senior author Natalia Trayanova, the Murray B. Sachs professor of Biomedical Engineering and Medicine. “There are patients who may be at low risk of sudden cardiac death getting defibrillators that they might not need and then there are high-risk patients that aren’t getting the treatment they need and could die in the prime of their life. What our algorithm can do is determine who is at risk for cardiac death and when it will occur, allowing doctors to decide exactly what needs to be done.”
    The team is the first to use neural networks to build a personalized survival assessment for each patient with heart disease. These risk measures predict, with high accuracy, the chance of sudden cardiac death over 10 years and when it is most likely to happen.
    The deep learning technology is called Survival Study of Cardiac Arrhythmia Risk (SSCAR). The name alludes to the cardiac scarring caused by heart disease, which often results in lethal arrhythmias and is the key to the algorithm’s predictions.
    The team used contrast-enhanced cardiac images that visualize scar distribution from hundreds of real patients at Johns Hopkins Hospital with cardiac scarring to train an algorithm to detect patterns and relationships not visible to the naked eye. Current clinical cardiac image analysis extracts only simple scar features like volume and mass, severely underutilizing what’s demonstrated in this work to be critical data.
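    As a rough sketch of the general idea (hypothetical code, not the published SSCAR architecture; the layer sizes, image size and risk head are invented), a model of this kind encodes the raw scar image with a small convolutional network, concatenates the encoding with clinical covariates, and outputs a risk score:

```python
# Hypothetical sketch of an image-plus-covariates risk model (not the authors'
# SSCAR network): a small CNN encodes a contrast-enhanced cardiac image, the
# encoding is concatenated with clinical covariates, and a head predicts risk.
import torch
import torch.nn as nn

class ImageCovariateRiskModel(nn.Module):
    def __init__(self, n_covariates: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (batch, 32, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_covariates, 64), nn.ReLU(),
            nn.Linear(64, 1),                   # e.g. a risk logit
        )

    def forward(self, image, covariates):
        feats = self.encoder(image).flatten(1)  # (batch, 32)
        return self.head(torch.cat([feats, covariates], dim=1))

# Dummy forward pass with random data, just to show the shapes.
model = ImageCovariateRiskModel(n_covariates=10)
risk_logit = model(torch.randn(4, 1, 128, 128), torch.randn(4, 10))
print(risk_logit.shape)  # torch.Size([4, 1])
```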

  • Deep-sea osmolyte finds applications in molecular machines

    The molecule trimethylamine N-oxide (TMAO) can be used to reversibly modulate the rigidity of microtubules, a key component of molecular machines and molecular robots.
    Kinesin and microtubules (MTs) are major components of the cytoskeleton in the cells of living organisms. Together, they play crucial roles in a wide range of cellular functions, most significantly intracellular transport. Recent developments in bioengineering and biotechnology allow these natural molecules to be used as components of molecular machines and molecular robots. The in vitro gliding assay has been the best platform to evaluate the potential of these biomolecules for molecular machines.
    A team of scientists led by Assistant Professor Arif Md. Rashedul Kabir of Hokkaido University has reported a simple and straightforward method to reversibly and dynamically control the rigidity of kinesin propelled MTs. Their findings have been published in ACS Omega, a journal published by the American Chemical Society (ACS).
    In an in vitro gliding assay, kinesin molecules are attached to a base material and propel MTs as molecular shuttles. The rigidity of the motile MTs is a crucial metric that determines the success of their applications as components of molecular machines. One of the major hurdles in regulating MT rigidity is that previous methods altered it permanently and irreversibly. A method to control the rigidity of MTs in a reversible manner would allow dynamic adjustment of MT properties and functions, and would be a major advance for molecular machines, molecular robotics and related fields.
    Kabir and his colleagues employed trimethylamine N-oxide (TMAO), a molecule that acts as an osmolyte in many deep-sea organisms, to study its effects on MTs in an in vitro gliding assay. TMAO is known to stabilize proteins under stressful or denaturing conditions of heat, pressure and chemicals. The team demonstrated that TMAO affects the rigidity of MTs without requiring any modification to the MT structure.
    At relatively low TMAO concentrations (0 mM to 200 mM), MTs remained straight and rigid, and the motion of the MTs in the gliding assay was unaffected. As the TMAO concentration was increased further, the MTs showed bending or buckling, and their velocity decreased. The team quantified this effect of TMAO on MT conformation, showing that the persistence length, a measure of rigidity, was 285 ± 47 µm in the absence of TMAO and decreased to 37 ± 4 µm in the presence of 1500 mM TMAO.
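    For context, the persistence length quoted above is a standard measure that relates a filament's flexural rigidity κ to the thermal energy, and it sets how quickly the filament's orientation decorrelates along its contour length s (general three-dimensional form shown; this relation is not specific to this study):

```latex
L_p = \frac{\kappa}{k_B T},
\qquad
\langle \hat{t}(s) \cdot \hat{t}(0) \rangle = e^{-s / L_p}.
```

    By this relation, the drop from roughly 285 µm to 37 µm corresponds to an almost eightfold reduction in effective flexural rigidity at fixed temperature.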
    The team further demonstrated that the process was completely reversible, with MTs regaining their original persistence length and velocity when the TMAO was eliminated. These results confirmed that TMAO can be used to reversibly modulate the mechanical properties and dynamic functions of MTs.
    Finally, the team investigated the mechanism by which TMAO alters the rigidity of MTs. Based on their investigations, Kabir and his team concluded that TMAO disrupts the uniformity of the force applied by the kinesins along MTs in the gliding assay; this non-uniform force appears to be responsible for the change in rigidity, or persistence length, of the kinesin-propelled MTs.
    “This study has demonstrated a facile method for regulating the MT rigidity reversibly in an in vitro gliding assay without depending on any modifications to the MT structures,” Kabir said. Future work will focus on elucidating the exact mechanism by which TMAO acts, as well as on utilizing TMAO to control the properties and functions of MTs and kinesins, which in turn will benefit molecular machines and molecular robotics.
    Story Source:
    Materials provided by Hokkaido University.

  • A mathematical shortcut for determining quantum information lifetimes

    A new, elegant equation allows scientists to easily compute the quantum information lifetime of 12,000 different materials.
    Scientists have uncovered a mathematical shortcut for calculating an all-important feature of quantum devices.
    Having crunched the numbers on the quantum properties of 12,000 elements and compounds, researchers have published a new equation for approximating the length of time the materials can maintain quantum information, called “coherence time.”
    The elegant formula allows scientists to estimate the materials’ coherence times in an instant — versus the hours or weeks it would take to calculate an exact value.
    The team, comprising scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the University of Chicago, Tohoku University in Japan and Ajou University in Korea, published their result in April in the Proceedings of the National Academy of Sciences.
    Their work was supported by the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, and by Q-NEXT, a DOE National Quantum Information Science Research Center led by Argonne.

  • The side effects of quantum error correction and how to cope with them

    It is well established that quantum error correction can improve the performance of quantum sensors. But new theory work cautions that, unexpectedly, the approach can also give rise to inaccurate and misleading results — and shows how to rectify these shortcomings.
    Quantum systems can interact with one another and with their surroundings in ways that are fundamentally different from those of their classical counterparts. In a quantum sensor, the particularities of these interactions are exploited to obtain characteristic information about the environment of the quantum system, for instance the strength of a magnetic or electric field in which it is immersed. Crucially, when such a device suitably harnesses the laws of quantum mechanics, its sensitivity can surpass what is possible, even in principle, with conventional, classical technologies. Unfortunately, quantum sensors are exquisitely sensitive not only to the physical quantities of interest, but also to noise.
    One way to suppress these unwanted contributions is to apply schemes collectively known as quantum error correction (QEC). This approach is attracting considerable and increasing attention, as it might enable practical high-precision quantum sensors in a wider range of applications than is possible today. But the benefits of error-corrected quantum sensing come with major potential side effects, as a team led by Florentin Reiter, an Ambizione fellow of the Swiss National Science Foundation working in the group of Jonathan Home in the Department of Physics at ETH Zurich, has now found. Writing in Physical Review Letters, they report theoretical work showing that in realistic settings QEC can distort the output of quantum sensors and might even lead to unphysical results. But not all is lost — the researchers also describe procedures for restoring the correct results.
    Drifting off track
    In applying QEC to quantum sensing, errors are repeatedly corrected as the sensor acquires information about the target quantity. As an analogy, imagine a car that keeps drifting away from the centre of its lane. In the ideal case, the drift is corrected by constant counter-steering. In the equivalent scenario for quantum sensing, it has been shown that by constant — or very frequent — error correction, the detrimental effects of noise can be suppressed completely, at least in principle. The story is rather different when, for practical reasons, the driver can make corrections with the steering wheel only at specific points in time. Then, as experience tells us, the sequence of driving ahead and making corrective movements has to be finely tuned. If the sequence did not matter, the motorist could simply perform all steering manoeuvres at home in the garage and then confidently put their foot down on the accelerator. The reason why this does not work is that rotation and translation are not commutative — the order in which actions of one type or the other are executed changes the outcome.
    For quantum sensors, a somewhat similar situation with non-commuting actions can arise, specifically for the ‘sensing action’ and the ‘error action’. The former is described by the Hamiltonian of the sensor, the latter by error operators. Now, Ivan Rojkov, a doctoral researcher working at ETH with Reiter and collaborating with colleagues at the Massachusetts Institute of Technology (MIT), found that the sensor output acquires a systematic bias — or ‘drift’ — when there is a delay between an error and its subsequent correction. Depending on the length of this delay, the dynamics of the quantum system, which should ideally be governed by the Hamiltonian alone, become contaminated by the error operators. The upshot is that during the delay the sensor typically acquires less information about the quantity of interest, such as a magnetic or electric field, than it would have if no error had occurred. These different rates of information acquisition then result in a distortion of the output.
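    A toy numerical illustration of this non-commutativity (a generic example with assumed parameters, not the authors' sensing protocol): take a sensing Hamiltonian proportional to σz and a bit-flip error σx, and compare "evolve, then err" with "err, then evolve".

```python
# Toy illustration (assumed parameters, not the paper's model): the sensing
# evolution U = exp(-i H t) with H = (omega/2) * sigma_z does not commute with
# a bit-flip error sigma_x, so the order of the two actions changes the outcome.
import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

omega, t = 1.0, 0.7                          # arbitrary example values
U = expm(-1j * (omega / 2) * sigma_z * t)    # sensing evolution

evolve_then_err = sigma_x @ U                # error happens after the sensing step
err_then_evolve = U @ sigma_x                # error happens before the sensing step

print(np.allclose(evolve_then_err, err_then_evolve))  # False: order matters
```

    Because the two orderings differ, an error that is corrected only after some delay leaves a residual imprint on the Hamiltonian evolution, which is the bias, or ‘drift’, described above.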
    Sensical sensing
    This QEC-induced bias matters. If unaccounted for, estimates of, for example, the minimum signal the quantum sensor can detect might end up overly optimistic, as Rojkov et al. show. For experiments that push the limits of precision, such wrong estimates are particularly deceptive. But the team also provides an escape route to overcome the bias. The amount of bias introduced by finite-rate QEC can be calculated and, through appropriate measures, rectified in post-processing — so that the sensor output again makes perfect sense. Factoring in that QEC can give rise to systematic bias can also help in devising the ideal sensing protocol ahead of the measurement.
    Given that the effect identified in this work is present in various common error-corrected quantum sensing schemes, these results should make an important contribution to extracting the highest precision from a broad range of quantum sensors — and to keeping them on track to deliver on their promise of reaching regimes that cannot be explored with classical sensors.
    Story Source:
    Materials provided by ETH Zurich Department of Physics. Original written by Andreas Trabesinger.

  • Machine learning model could better measure baseball players' performance

    In the movie “Moneyball,” a young economics graduate and a cash-strapped Major League Baseball coach introduce a new way to evaluate baseball players’ value. Their innovative idea to compute players’ statistical data and salaries enabled the Oakland A’s to recruit quality talent overlooked by other teams — completely revitalizing the team without exceeding budget.
    New research at the Penn State College of Information Sciences and Technology could make a similar impact on the sport. The team has developed a machine learning model that could better measure baseball players’ and teams’ short- and long-term performance, compared to existing statistical analysis methods for the sport. Drawing on recent advances in natural language processing and computer vision, their approach would completely change, and could enhance, the way the state of a game and a player’s impact on the game is measured.
    According to Connor Heaton, doctoral candidate in the College of IST, the existing family of methods, known as sabermetrics, relies on the number of times a player or team achieves a discrete event — such as hitting a double or home run. However, it doesn’t consider the surrounding context of each action.
    “Think about a scenario in which a player recorded a single in his last plate appearance,” said Heaton. “He could have hit a dribbler down the third base line, advancing a runner from first to second and beat the throw to first, or hit a ball to deep left field and reached first base comfortably but didn’t have the speed to push for a double. Describing both situations as resulting in ‘a single’ is accurate but does not tell the whole story.”
    Heaton’s model instead learns the meaning of in-game events based on the impact they have on the game and the context in which they occur, then outputs numerical representations of how players impact the game by viewing the game as a sequence of events.
    “We often talk about baseball in terms of ‘this player had two singles and a double yesterday,’ or ‘he went one for four,’” said Heaton. “A lot of the ways in which we talk about the game just summarize the events with one summary statistic. Our work is trying to take a more holistic picture of the game and to get a more nuanced, computational description of how players impact the game.”
    Heaton’s novel method leverages sequential modeling techniques used in natural language processing to help computers learn the role or meaning of different words. He applied that approach to teach his model the role or meaning of different events in a baseball game — for example, when a batter hits a single. He then modeled the game as a sequence of events to offer new insight into existing statistics.
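    As a loose sketch of that idea (hypothetical code, not the authors' model; the event vocabulary size, embedding dimension and GRU encoder are invented), each in-game event becomes a token, events receive learned embeddings, and a sequence model summarizes the game so far:

```python
# Hypothetical sketch (not the authors' model): treat a game as a sequence of
# discrete event tokens, learn an embedding per event type, and encode the
# sequence into a game-state vector, analogous to word embeddings plus a
# sequence model in natural language processing.
import torch
import torch.nn as nn

N_EVENT_TYPES = 300        # invented vocabulary size of in-game event types

class GameEventEncoder(nn.Module):
    def __init__(self, n_events=N_EVENT_TYPES, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_events, dim)       # "event2vec" lookup
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # sequence model over events
        self.impact_head = nn.Linear(dim, 1)           # e.g. impact score of the state

    def forward(self, event_ids):                      # (batch, seq_len) of ints
        h, _ = self.rnn(self.embed(event_ids))
        return self.impact_head(h[:, -1])              # summary of the game so far

# Dummy usage: a batch of 2 "games", each a sequence of 50 event tokens.
model = GameEventEncoder()
print(model(torch.randint(0, N_EVENT_TYPES, (2, 50))).shape)  # torch.Size([2, 1])
```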