More stories

  •

    Deep-sea osmolyte finds applications in molecular machines

    The molecule trimethylamine N-oxide (TMAO) can be used to reversibly modulate the rigidity of microtubules, a key component of molecular machines and molecular robots.
    Kinesin and microtubules (MTs) are major components of the cytoskeleton in the cells of living organisms. Together they play crucial roles in a wide range of cellular functions, most notably intracellular transport. Recent developments in bioengineering and biotechnology allow these natural molecules to be used as components of molecular machines and molecular robots. The in vitro gliding assay has been the principal platform for evaluating the potential of these biomolecules for molecular machines.
    A team of scientists led by Assistant Professor Arif Md. Rashedul Kabir of Hokkaido University has reported a simple and straightforward method to reversibly and dynamically control the rigidity of kinesin propelled MTs. Their findings have been published in ACS Omega, a journal published by the American Chemical Society (ACS).
    In an in vitro gliding assay, kinesin molecules are attached to a substrate and propel MTs as molecular shuttles. The rigidity of the motile MTs is a crucial metric that determines the success of their application as components of molecular machines. A major hurdle is that previous methods altered the rigidity of MTs permanently and irreversibly. A method to control the rigidity of MTs reversibly would allow dynamic adjustment of MT properties and functions, and would be a major advance for molecular machines, molecular robotics, and related fields.
    Kabir and his colleagues employed trimethylamine N-oxide (TMAO), a molecule that acts as an osmolyte in many deep-sea organisms, to study its effects on MTs in an in vitro gliding assay. TMAO is known to stabilize proteins under stressful or denaturing conditions of heat, pressure, and chemicals. The team demonstrated that TMAO alters the rigidity of MTs without requiring any modification of the MT structure.
    At relatively low TMAO concentrations (0 mM to 200 mM), MTs remained straight and rigid, and their motion in the gliding assay was unaffected. As the TMAO concentration was increased further, the MTs bent or buckled, and their velocity decreased. The team quantified this effect on MT conformation, showing that the persistence length, a measure of rigidity, decreased from 285 ± 47 μm in the absence of TMAO to 37 ± 4 μm in the presence of 1500 mM TMAO.
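In shape-based analyses, persistence length is typically extracted from the exponential decay of tangent-tangent correlations along the filament; for a filament confined to 2D, ⟨cos Δθ(s)⟩ = exp(−s / 2Lp). The sketch below (an illustration of the standard estimator, not the authors' analysis code) recovers Lp from synthetic contour coordinates:

```python
import numpy as np

def persistence_length_2d(points, max_lag=200):
    """Estimate 2D persistence length Lp from contour points via the
    tangent-tangent correlation <cos(theta(s) - theta(0))> = exp(-s/(2*Lp))."""
    seg = np.diff(points, axis=0)                 # segment vectors
    ds = np.linalg.norm(seg, axis=1)              # segment lengths
    theta = np.arctan2(seg[:, 1], seg[:, 0])      # tangent angles
    lags = np.arange(1, max_lag)
    s = lags * ds.mean()                          # arc-length separations
    corr = np.array([np.mean(np.cos(theta[k:] - theta[:-k])) for k in lags])
    # slope of log(corr) versus s is -1/(2*Lp)
    slope = np.polyfit(s, np.log(corr), 1)[0]
    return -1.0 / (2.0 * slope)

# synthetic 2D worm-like chain with a known persistence length (micrometres)
rng = np.random.default_rng(0)
Lp_true, step, n_steps = 50.0, 0.5, 4000
dtheta = rng.normal(0.0, np.sqrt(step / Lp_true), n_steps)
theta = np.cumsum(dtheta)
pts = np.cumsum(np.column_stack([step * np.cos(theta),
                                 step * np.sin(theta)]), axis=0)
print(round(persistence_length_2d(pts), 1))  # close to Lp_true = 50
```

The fit is restricted to short arc-length separations, where the correlation is well above the noise floor.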
    The team further demonstrated that the process was completely reversible, with MTs regaining their original persistence length and velocity when the TMAO was removed. These results confirmed that TMAO can be used to reversibly modulate the mechanical properties and dynamic functions of MTs.
    Finally, the team investigated the mechanism by which TMAO alters the rigidity of MTs. Based on their investigations, they concluded that TMAO disrupts the uniformity of the force applied by the kinesins along MTs in the gliding assay; this non-uniform force appears to be responsible for the change in rigidity, or persistence length, of the kinesin-propelled MTs.
    “This study has demonstrated a facile method for regulating the MT rigidity reversibly in an in vitro gliding assay without depending on any modifications to the MT structures,” Kabir said. Future work will focus on elucidating the exact mechanism by which TMAO acts, as well as on utilizing TMAO to control the properties and functions of MTs and kinesins, which in turn will benefit molecular machines and molecular robotics.
    Story Source:
    Materials provided by Hokkaido University. Note: Content may be edited for style and length.

  •

    A mathematical shortcut for determining quantum information lifetimes

    A new, elegant equation allows scientists to easily compute the quantum information lifetime of 12,000 different materials.
    Scientists have uncovered a mathematical shortcut for calculating an all-important feature of quantum devices.
    Having crunched the numbers on the quantum properties of 12,000 elements and compounds, researchers have published a new equation for approximating the length of time the materials can maintain quantum information, called “coherence time.”
    The elegant formula allows scientists to estimate the materials’ coherence times in an instant — versus the hours or weeks it would take to calculate an exact value.
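The article does not reproduce the equation itself; purely to illustrate why a closed-form estimate matters for screening, the sketch below ranks a handful of made-up materials with a hypothetical stand-in formula `t2_estimate` (its form, inputs, and constants are invented for illustration and are not the published result):

```python
# Illustration only: screening many candidates with a cheap closed-form
# estimate instead of hours of exact simulation per material.
def t2_estimate(spin_density, gyromagnetic_ratio):
    """Hypothetical stand-in coherence-time estimate (arbitrary units),
    NOT the published equation."""
    return 1.0 / (spin_density * gyromagnetic_ratio**2)

# made-up (spinful-nuclei density, gyromagnetic ratio) pairs
materials = {
    "A": (0.01, 1.0),
    "B": (0.50, 1.0),
    "C": (0.01, 4.0),
}
ranked = sorted(materials, key=lambda m: t2_estimate(*materials[m]), reverse=True)
print(ranked)  # longest predicted coherence time first: ['A', 'C', 'B']
```

With a formula this cheap, evaluating 12,000 materials is effectively instantaneous, which is the point the researchers make.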
    The team, comprising scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the University of Chicago, Tohoku University in Japan and Ajou University in Korea, published their result in April in the Proceedings of the National Academy of Sciences.
    Their work is supported by the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, and by Q-NEXT, a DOE National Quantum Information Science Research Center led by Argonne.

  •

    The side effects of quantum error correction and how to cope with them

    Quantum systems can interact with one another and with their surroundings in ways that are fundamentally different from those of their classical counterparts. In a quantum sensor, the particularities of these interactions are exploited to obtain characteristic information about the environment of the quantum system, for instance the strength of a magnetic or electric field in which it is immersed. Crucially, when such a device suitably harnesses the laws of quantum mechanics, its sensitivity can surpass what is possible, even in principle, with conventional, classical technologies. Unfortunately, quantum sensors are exquisitely sensitive not only to the physical quantities of interest, but also to noise.
    One way to suppress these unwanted contributions is to apply schemes collectively known as quantum error correction (QEC). This approach is attracting considerable and increasing attention, as it might enable practical high-precision quantum sensors in a wider range of applications than is possible today. But the benefits of error-corrected quantum sensing come with major potential side effects, as a team led by Florentin Reiter, an Ambizione fellow of the Swiss National Science Foundation working in the group of Jonathan Home in the Department of Physics at ETH Zurich, has now found. Writing in Physical Review Letters, they report theoretical work showing that in realistic settings QEC can distort the output of quantum sensors and might even lead to unphysical results. But not all is lost: the researchers also describe procedures for restoring the correct results.
    Drifting off track
    In applying QEC to quantum sensing, errors are repeatedly corrected as the sensor acquires information about the target quantity. As an analogy, imagine a car that keeps departing from the centre of the lane it travels in. In the ideal case, the drift is corrected by constant counter-steering. In the equivalent scenario for quantum sensing, it has been shown that by constant — or very frequent — error correction, the detrimental effects of noise can be suppressed completely, at least in principle. The story is rather different when, for practical reasons, the driver can perform correcting interventions with the steering wheel only at specific points in time. Then, as experience tells us, the sequence of driving ahead and making corrective movements has to be finely tuned. If the sequence did not matter, then the motorist could simply perform all steering manoeuvres at home in the garage and then confidently put the foot down on the accelerator. The reason why this does not work is that rotation and translation are not commutative — the order in which the actions of one type or the other are executed changes the outcome.
    For quantum sensors, a somewhat similar situation with non-commuting actions can arise, specifically between the ‘sensing action’ and the ‘error action’. The former is described by the Hamiltonian of the sensor, the latter by error operators. Now, Ivan Rojkov, a doctoral researcher working at ETH with Reiter and collaborating with colleagues at the Massachusetts Institute of Technology (MIT), found that the sensor output acquires a systematic bias — or ‘drift’ — when there is a delay between an error and its subsequent correction. Depending on the length of this delay, the dynamics of the quantum system, which should ideally be governed by the Hamiltonian alone, become contaminated by the error operators. The upshot is that during the delay the sensor typically acquires less information about the quantity of interest, such as a magnetic or electric field, than it would had no error occurred. These different speeds of information acquisition then result in a distortion of the output.
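The non-commutativity at the heart of the argument can be seen in the simplest possible setting: take a sensing Hamiltonian proportional to the Pauli operator σz and a bit-flip error σx (a standard qubit pairing, used here only as an illustration, not the specific model of the paper). The two do not commute, so "sense, then err" differs from "err, then sense":

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # sensing Hamiltonian ~ sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip error ~ sigma_x

# nonzero commutator: the two actions do not commute
commutator = sz @ sx - sx @ sz
print(np.abs(commutator).max())                  # 2.0

# consequence: applying the error before or after a stretch of sensing
# evolution exp(-i t sigma_z) gives genuinely different operations
t = 0.3
U = np.diag([np.exp(-1j * t), np.exp(1j * t)])   # exp(-i t sigma_z)
print(np.allclose(sx @ U, U @ sx))               # False
```

This is the operator analogue of the rotation-versus-translation example in the driving analogy above.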
    Sensical sensing
    This QEC-induced bias matters. If unaccounted for, estimates of the minimum signal that the quantum sensor can detect might, for example, end up being overly optimistic, as Rojkov et al. show. For experiments that push the limits of precision, such wrong estimates are particularly deceptive. But the team also provides an escape route to overcome the bias. The amount of bias introduced by finite-rate QEC can be calculated and rectified in post-processing — so that the sensor output again makes perfect sense. Moreover, factoring in that QEC can give rise to systematic bias can help in devising the ideal sensing protocol ahead of the measurement.
    Given that the effect identified in this work is present in various common error-corrected quantum sensing schemes, these results are set to provide an important contribution to extracting the highest precision from a broad range of quantum sensors — and to keep them on track to deliver on their promise of leading us into regimes that cannot be explored with classical sensors.
    Story Source:
    Materials provided by ETH Zurich Department of Physics. Original written by Andreas Trabesinger. Note: Content may be edited for style and length.

  •

    Machine learning model could better measure baseball players' performance

    In the movie “Moneyball,” a young economics graduate and a cash-strapped Major League Baseball general manager introduce a new way to evaluate baseball players’ value. Their innovative idea of weighing players’ statistical data against salaries enabled the Oakland A’s to recruit quality talent overlooked by other teams — completely revitalizing the team without exceeding its budget.
    New research at the Penn State College of Information Sciences and Technology could make a similar impact on the sport. The team has developed a machine learning model that could better measure baseball players’ and teams’ short- and long-term performance, compared to existing statistical analysis methods for the sport. Drawing on recent advances in natural language processing and computer vision, their approach would completely change, and could enhance, the way the state of a game and a player’s impact on the game is measured.
    According to Connor Heaton, doctoral candidate in the College of IST, the existing family of methods, known as sabermetrics, relies upon the number of times a player or team achieves a discrete event — such as hitting a double or home run. However, it doesn’t consider the surrounding context of each action.
    “Think about a scenario in which a player recorded a single in his last plate appearance,” said Heaton. “He could have hit a dribbler down the third base line, advancing a runner from first to second and beating the throw to first, or hit a ball to deep left field and reached first base comfortably but didn’t have the speed to push for a double. Describing both situations as resulting in ‘a single’ is accurate but does not tell the whole story.”
    Heaton’s model instead learns the meaning of in-game events based on the impact they have on the game and the context in which they occur, then outputs numerical representations of how players impact the game by viewing the game as a sequence of events.
    “We often talk about baseball in terms of ‘this player had two singles and a double yesterday,’ or ‘he went one for four,’” said Heaton. “A lot of the ways in which we talk about the game just summarize the events with one summary statistic. Our work is trying to take a more holistic picture of the game and to get a more nuanced, computational description of how players impact the game.”
    In Heaton’s novel method, he leverages sequential modeling techniques used in natural language processing to help computers learn the role or meaning of different words. He applied that approach to teach his model the role or meaning of different events in a baseball game — for example, when a batter hits a single. Then, he modeled the game as a sequence of events to offer new insight on existing statistics.
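The analogy to word embeddings can be made concrete with a toy sketch (not Heaton's actual model): treat each in-game event type as a "word," count co-occurrences over event sequences, and factor the matrix to get dense event vectors whose geometry reflects the contexts in which events occur:

```python
import numpy as np

# toy "games" as sequences of event tokens (illustrative, not real data)
games = [
    ["single", "steal", "double", "run"],
    ["walk", "single", "run"],
    ["strikeout", "walk", "steal", "run"],
    ["single", "double", "run"],
]
vocab = sorted({e for g in games for e in g})
idx = {e: i for i, e in enumerate(vocab)}

# co-occurrence counts within a +/-1 event window
C = np.zeros((len(vocab), len(vocab)))
for g in games:
    for i, e in enumerate(g):
        for j in (i - 1, i + 1):
            if 0 <= j < len(g):
                C[idx[e], idx[g[j]]] += 1

# truncated SVD of the co-occurrence matrix yields 2-D event embeddings
U, S, _ = np.linalg.svd(C)
embeddings = U[:, :2] * S[:2]
for e in vocab:
    print(e, np.round(embeddings[idx[e]], 2))
```

Events that appear in similar contexts end up with similar vectors, which is the sense in which the model "learns the meaning" of an event beyond its summary statistic.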

  •

    How did visitors experience the domestic space in Pompeii?

    Researchers at Lund University in Sweden have used virtual reality and 3D eye-tracking technology to examine what drew the attention of the visitors when entering the stunning environment of an ancient Roman house. The team recreated the House of Greek Epigrams in 3D and tracked the gaze of study participants as they viewed the home.
    Unlike homes today, Roman houses were not a place of refuge from work. Work and daily activities were intermingled during the day. Houses were designed to communicate the personal power and status of the owner and his family. The visual impression was so important that architects moved architectural elements such as columns to frame views, added fountains as focal points, or simply decorated the space by imitating those elements when it was not possible to build them.
    “By tracking how people view the house, we can get closer to unlocking what was in the minds of those who designed it. What messages are being conveyed, even in the smallest detail? We found many ways in which the owner conveyed a sense of power and wealth to visitors,” says Giacomo Landeschi, researcher at the Department of Archaeology and Ancient History, Lund University.
    The House of Greek Epigrams was destroyed in the eruption of Mount Vesuvius in AD 79. It had a room completely covered with wall paintings accompanied by Greek inscriptions that gave the house its name.
    The house was elaborately designed, featuring wall paintings that were partially visible from the outside but whose details only visitors who came close could see. There was also erotic art that natural light illuminated primarily at appropriate times. Certain visual and architectural elements echoed a tension between Greek and Roman cultures at the time.
    A follow-up study will analyse the results in more detail.
    The researchers say that the unique nature of the research could be further enhanced by adding other sensory experiences, such as auditory involvement, in the future.
    “This study shows that we can now not only recreate the physical space but also understand the actual experience of the people at the time. This is an entirely new field of research for archaeology, that opens up new possibilities,” concludes Danilo Marco Campanaro, PhD candidate at the Department of Archaeology and Ancient History, Lund University.
    About the Study
    The study marks a significant advance in the use of virtual reality in archaeology, employing its heuristic potential for more advanced spatial analysis. It set out to establish a methodology to accurately record and analyse information about participants’ gaze and attention. To do this, the researchers used a 3D eye-tracker, a game engine, and Geographic Information Systems.
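A standard step in such pipelines is aggregating eye-tracker samples into an attention map over a surface. The sketch below (hypothetical data and a generic method, not the study's actual GIS workflow) bins simulated gaze hit-points on a wall into a fixation heatmap:

```python
import numpy as np

# hypothetical gaze hit-points (x, y in metres) on a 4 m x 3 m wall:
# a tight cluster on a painting plus a weaker, diffuse cluster
rng = np.random.default_rng(1)
gaze = np.concatenate([
    rng.normal([1.2, 1.2], 0.1, size=(300, 2)),
    rng.normal([3.0, 1.0], 0.3, size=(100, 2)),
])

# bin the samples into a 0.5 m grid over the wall
heatmap, xedges, yedges = np.histogram2d(
    gaze[:, 0], gaze[:, 1], bins=(8, 6), range=[[0, 4], [0, 3]]
)

# the hottest bin marks where attention concentrated
hot = np.unravel_index(heatmap.argmax(), heatmap.shape)
print("hottest bin:", hot, "count:", int(heatmap.max()))
```

In the real study the same idea plays out in 3D, with gaze rays intersected against the reconstructed house geometry before aggregation.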
    Video: https://youtu.be/sNcAkkNR-qU
    Story Source:
    Materials provided by Lund University. Note: Content may be edited for style and length.

  •

    New approach better predicts air pollution models’ performance in health studies

    Nine out of 10 people in the world breathe air that exceeds the World Health Organization’s guidelines for air pollution. The era of big data and machine learning has facilitated predicting air pollution concentrations across both space and time. With approximately seven million people dying each year as a result of air pollution, leveraging these novel air pollution prediction models for studies of health is important. However, it is not always known whether these air pollution prediction models can be used in health studies.
    A new study from Jenna Krall, assistant professor of the Department of Global and Community Health, develops a new approach to aid air quality modelers in determining whether their air pollution prediction models can be used in epidemiologic studies, studies that assess health effects.
    “Understanding the relationship between air pollution and health often requires predicting air pollution concentrations. Our approach will be useful for determining whether an air pollution prediction model can be used in subsequent health studies. As a result, our work can help translate new prediction models to better understand air pollution health impacts,” said Krall.
    Existing air pollution prediction models are generally evaluated on how well they can predict air pollution levels. Using data from 17 locations in the US, Krall found that the new evaluation approach was able to better identify errors in air pollution prediction models most relevant for health studies.
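The underlying point, that two exposure models with similar prediction accuracy can behave very differently once their predictions feed a health regression, can be shown in a small simulation (illustrative only, not Krall's method). Classical-type measurement error attenuates the estimated health effect, while Berkson-type error of the same magnitude largely does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20000, 2.0                     # beta: true health effect per unit exposure
w = rng.normal(0, 1, n)                  # "model B" predicted exposure
x_true = w + rng.normal(0, 0.7, n)       # Berkson-type: truth scatters around prediction
y = beta * x_true + rng.normal(0, 1, n)  # simulated health outcome
xa = x_true + rng.normal(0, 0.7, n)      # "model A": classical-type error, same magnitude

def slope(pred, outcome):
    """Estimated health effect from regressing outcome on predicted exposure."""
    return np.polyfit(pred, outcome, 1)[0]

print(round(slope(xa, y), 2))  # attenuated well below beta = 2
print(round(slope(w, y), 2))   # close to beta = 2
```

Both "models" miss the true exposure by the same root-mean-square amount, so a purely predictive evaluation would rate them equally; only an evaluation aimed at the downstream health estimate, as in Krall's approach, separates them.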
    “Assessing the health estimation capacity of air pollution exposure prediction models” was published in Environmental Health in March 2022.
    Joshua P. Keller of Colorado State University and Roger D. Peng of the Johns Hopkins Bloomberg School of Public Health were a part of the research team. Krall was supported in part by the Thomas F. and Kate Miller Jeffress Memorial Trust, Bank of America, Trustee. Peng was supported in part by the US Environmental Protection Agency (EPA) through award RD835871. This work has not been formally reviewed by the EPA. The views expressed in this document are solely those of the authors and do not necessarily reflect those of the agency. EPA does not endorse any products or commercial services mentioned in this publication.
    Story Source:
    Materials provided by George Mason University. Note: Content may be edited for style and length.

  •

    Rational neural network advances machine-human discovery

    Math is the language of the physical world, and Alex Townsend sees mathematical patterns everywhere: in weather, in the way soundwaves move, and even in the spots or stripes zebra fish develop in embryos.
    “Since Newton wrote down calculus, we have been deriving calculus equations called differential equations to model physical phenomena,” said Townsend, associate professor of mathematics in the College of Arts and Sciences.
    This way of deriving laws of calculus works, Townsend said, if you already know the physics of the system. But what about learning physical systems for which the physics remains unknown?
    In the new and growing field of partial differential equation (PDE) learning, mathematicians collect data from natural systems and then use trained computer neural networks to try to derive the underlying mathematical equations. In a new paper, Townsend, together with co-authors Nicolas Boullé of the University of Oxford and Christopher Earls, professor of civil and environmental engineering in the College of Engineering, advances PDE learning with a novel “rational” neural network, which reveals its findings in a manner that mathematicians can understand: through Green’s functions — a right inverse of a differential operator.
    This machine-human partnership is a step toward the day when deep learning will enhance scientific exploration of natural phenomena such as weather systems, climate change, fluid dynamics, genetics and more. “Data-Driven Discovery of Green’s Functions With Human-Understandable Deep Learning” was published March 22 in Scientific Reports, a Nature Portfolio journal.
    A subset of machine learning, neural networks are inspired by the simple animal brain mechanism of neurons and synapses — inputs and outputs, Townsend said. Neurons — called “activation functions” in the context of computerized neural networks — collect inputs from other neurons. Between the neurons are synapses, called weights, that send signals to the next neuron.
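What makes the network "rational" is that the usual fixed activation function (such as ReLU) is replaced by a trainable ratio of polynomials P(x)/Q(x). A minimal forward-pass sketch is below; the weights and polynomial coefficients are hypothetical toy values, not the paper's architecture or initialization:

```python
import numpy as np

def rational_activation(x, p, q):
    """Rational activation P(x)/Q(x); p and q are polynomial coefficients
    (highest degree first) that would be trained along with the weights."""
    return np.polyval(p, x) / np.polyval(q, x)

# one hidden layer with a shared rational activation (random toy weights)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), np.zeros((8, 1))
W2, b2 = rng.normal(size=(1, 8)), np.zeros((1, 1))
p = np.array([1.0, 1.0, 0.5, 0.1])   # degree-3 numerator (hypothetical values)
q = np.array([1.0, 0.0, 1.0])        # denominator x**2 + 1, never zero

def forward(x):
    h = rational_activation(W1 @ x + b1, p, q)
    return W2 @ h + b2

x = np.linspace(-1, 1, 5).reshape(1, -1)
print(forward(x).shape)  # (1, 5)
```

Because the coefficients of P and Q are learned from data, the network can adapt the shape of its own activation, which is part of what makes its learned Green's functions easier for mathematicians to interpret.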

  •

    Novel framework for classifying chaos and thermalization

    One popular example of chaotic behavior is the butterfly effect — a butterfly may flap its wings somewhere over the Atlantic Ocean and cause a tornado in Colorado. This fable illustrates how the extreme sensitivity of chaotic systems can yield dramatically different results despite slight differences in initial conditions. The fundamental laws of nature governing the dynamics of physical systems are inherently nonlinear, often leading to chaos and subsequent thermalization.
    However, one may ask why there is no rampant increase in tornadoes in Colorado caused by the massive disappointment of butterflies in global affairs such as, say, global warming. This is because physical dynamics, although chaotic, are capable of settling into remarkably stable states. One example is the stability of our solar system — it obeys nonlinear laws of physics that could seemingly induce chaos in the system.
    The reason for this stability lies in the fact that weakly chaotic systems may display very ordered periodic dynamics that can last for millions of years. This discovery was made in the 1950s by the great mathematicians Kolmogorov, Arnold, and Moser. Their result, however, applies only to systems with a small number of interacting elements. If the system comprises many constituent parts, its fate is not as well understood.
    Researchers from the Center for Theoretical Physics of Complex Systems (PCS) within the Institute for Basic Science (IBS), South Korea have recently introduced a novel framework for characterizing weakly chaotic dynamics in complex systems containing a large number of constituent particles. To achieve this, they used a quantum computing-based model — Unitary Circuits Map — to simulate chaos.
    Investigating the time scales of chaoticity is a challenging task, requiring efficient computational methods. The Unitary Circuits Map model implemented in this study addresses this requirement. “The model allows for efficient and error-free propagation of states in time,” Merab Malishava explains, “which is essential for modeling extremely weak chaoticity in large systems. Such models were used to achieve record-breaking nonlinear evolution times before, which was also done in our group.”
    As a result, they were able to classify the dynamics within the system by identifying time and length scales that emerge as thermalization dramatically slows down. The researchers found that if the constituent parts are connected in a long-range network (LRN) manner (for example, all-to-all), then the thermalization dynamics are characterized by one unique time scale, the Lyapunov time. However, if the coupling is of a short-range network (SRN) nature (for example, nearest-neighbor), then an additional length scale emerges, related to the freezing of larger parts of the system over long times, with rare chaotic splashes.
    Typically, studies of such sensitive dynamics are done by analyzing the behavior of observables. These techniques date back to the 1950s, when the first experiments on chaoticity and thermalization were performed. The authors identified a novel method of analysis: investigating the scaling of the Lyapunov spectrum.
    Merab Malishava says: “Previous methods might result in ambiguous outcomes. You choose an observable, seemingly notice thermalization, and think that the dynamics are chaotic. However, if another observable is studied from another perspective, then you conclude that the system is frozen and nothing changes, meaning no thermalization. This is the ambiguity, which we overcame. The Lyapunov spectrum is a set of timescales characterizing the dynamics fully and completely. And what’s more, it’s the same from every point of view! Unique, and unambiguous.”
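A Lyapunov exponent measures the exponential rate at which nearby trajectories separate, and its inverse gives the Lyapunov time that sets the thermalization scale above. The sketch below computes the exponent for the simplest textbook case, the chaotic logistic map (a standard one-dimensional example, not the Unitary Circuits Map of the paper):

```python
import numpy as np

def lyapunov_logistic(r, x0=0.1, n=100_000, burn=1000):
    """Largest Lyapunov exponent of x -> r*x*(1-x), averaged over the
    log of the tangent-map stretching factor |f'(x)| = |r*(1 - 2x)|."""
    x, total = x0, 0.0
    for i in range(n + burn):
        if i >= burn:                        # skip transient before averaging
            total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic(4.0)
print(round(lam, 3), "-> Lyapunov time:", round(1.0 / lam, 3))
# for r = 4 the exact exponent is ln 2, approximately 0.693
```

A full Lyapunov spectrum, as used by the authors, generalizes this to one exponent per degree of freedom, which is why it characterizes the dynamics from every point of view at once.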
    The results are not only interesting from a fundamental standpoint. They also have the potential to shed light on the realizations of quantum computers. Quantum computation requires coherent dynamics, which means no thermalization. In the current work, a dramatic slowdown of thermal dynamics was studied with emerging quasi-conserved quantities. Quantizing this case could possibly explain such phenomena as many-body localization, which is one of the basic ideas for avoiding thermalization in quantum computers.
    Another great accomplishment of the study relates to the applicability of the results to a vast variety of physical models, ranging from simple oscillator networks to complex spin network dynamics. Dr. Sergej Flach, the leader of the research group and the director of PCS, explains: “We have been working for five years on developing a framework to classify weakly chaotic dynamics in macroscopic systems, which resulted in a series of works significantly advancing the area. We put aside narrowly focused case-by-case studies in favor of fostering a conceptual approach that is reliable and relatable in a great number of physical realizations. This specific work is a highly important building block in the aforementioned framework. We found that a traditional way of looking at things is sometimes not the most informative and offered a novel alternative approach. Our work by no means stops here, as we look forward to advancing science with more breakthrough ideas.”
    This research was recently published in Physical Review Letters.
    Story Source:
    Materials provided by Institute for Basic Science. Note: Content may be edited for style and length.