More stories

  • Blockchain offers a solution to post-Brexit border digitization to build supply chain trust, research shows

    As a result of the UK leaving the European Union, logistics firms have faced additional friction at UK borders. Consequently, there have been calls for automated digital borders, but few such systems exist. Surrey researchers have now discovered that a blockchain-based platform can improve supply chain efficiency and trust development at our borders.
    Blockchain is a system in which a record of transactions made in bitcoin or another cryptocurrency is maintained across several computers that are linked in a peer-to-peer network. The blockchain-based platform studied in this case, known as the RFIT platform, is a pilot implementation that links data records together and ensures that the data cannot be altered. This end-to-end visibility of unchangeable data helps to build trust between supply partners (a minimal sketch of the underlying hash-chaining idea appears at the end of this story).
    Professor of Digital Transformation at the University of Surrey and co-author of the study, Glenn Parry, said:
    “Since the UK’s withdrawal from the EU Customs Union, businesses have faced increased paperwork, border delays and higher costs. A digitally managed border system that identifies trusted shipments appears an obvious solution, but we needed to define what trust actually means and how a digital system can help.
    “Supply chain participants have long recognised the importance of trust in business relationships. Trust is the primary reason companies cite when supply chain relationships break down, which is especially true at customs borders. Current supply chain friction at UK borders is replicated across the world. Delay is caused by a lack of trust in goods flows, and hence a need to inspect.”
    Surrey academics stressed that the introduction of this platform does not remove the need for trust and trust-building processes in established buyer-supplier relationships. It’s crucial that blockchain platform providers continue to build a position of trust with all participants.
    In the case of the import of wine from Australia to the UK, researchers found that the RFIT platform can employ a blockchain layer to make documentation unalterable. The platform facilitates the building of trust across the supply chain by providing a single source of validated data and increasing visibility. Reduced data asymmetry between border agencies and suppliers improves accuracy, timeliness, and integrity.
    Through its 2025 UK Border Strategy, the UK Government is seeking to establish technology leadership in reducing friction in cross-border supply chains.
    Visiting Fellow at Surrey and co-author of the study, Mike Brookbanks, said:
    “The broader findings from the case study are influencing the UK Government on how to address the current challenges with supply chains at UK customs borders. We hope our work will also influence the Government’s current focus on trust ecosystems, as part of the single trade window (STW) initiative. We truly believe that the use of this innovative digital technology will form the Government’s first step in developing a utility trade platform, encouraging broader digitisation of our borders.”
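    The tamper evidence described above rests on a simple idea: each new record stores a hash of the previous one, so any retrospective edit breaks the chain. The following is a minimal, illustrative Python sketch of that hash-chaining principle only; it is not the RFIT platform's implementation, and all field names are hypothetical.

    ```python
    # Minimal sketch of hash-chained records (illustrative only, not the RFIT platform).
    import hashlib
    import json

    def record_hash(record: dict) -> str:
        """Deterministic SHA-256 hash of a record's contents."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append_record(chain: list, document: dict) -> None:
        """Append a shipment document, linking it to the previous record's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"document": document, "prev_hash": prev_hash}
        chain.append({**body, "hash": record_hash(body)})

    def chain_is_valid(chain: list) -> bool:
        """Any retrospective edit to an earlier document breaks the hash links."""
        for i, record in enumerate(chain):
            body = {"document": record["document"], "prev_hash": record["prev_hash"]}
            if record["hash"] != record_hash(body):
                return False
            if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    ledger = []
    append_record(ledger, {"consignment": "AU-wine-0001", "doc": "export certificate"})
    append_record(ledger, {"consignment": "AU-wine-0001", "doc": "customs declaration"})
    print(chain_is_valid(ledger))             # True
    ledger[0]["document"]["doc"] = "edited"   # tamper with an earlier document
    print(chain_is_valid(ledger))             # False
    ```

    Running the sketch shows the chain validating before the tampering step and failing afterwards, which is the property that lets border agencies and suppliers rely on a single, shared record.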
    Story Source:
    Materials provided by University of Surrey. Note: Content may be edited for style and length.

  • AI predicts if — and when — someone will have cardiac arrest

    A new artificial intelligence-based approach can predict, significantly more accurately than a doctor, if and when a patient could die of cardiac arrest. The technology, built on raw images of patients’ diseased hearts and patient backgrounds, stands to revolutionize clinical decision making and increase survival from sudden and lethal cardiac arrhythmias, one of medicine’s deadliest and most puzzling conditions.
    The work, led by Johns Hopkins University researchers, is detailed today in Nature Cardiovascular Research.
    “Sudden cardiac death caused by arrhythmia accounts for as many as 20 percent of all deaths worldwide and we know little about why it’s happening or how to tell who’s at risk,” said senior author Natalia Trayanova, the Murray B. Sachs professor of Biomedical Engineering and Medicine. “There are patients who may be at low risk of sudden cardiac death getting defibrillators that they might not need and then there are high-risk patients that aren’t getting the treatment they need and could die in the prime of their life. What our algorithm can do is determine who is at risk for cardiac death and when it will occur, allowing doctors to decide exactly what needs to be done.”
    The team is the first to use neural networks to build a personalized survival assessment for each patient with heart disease. These risk measures predict, with high accuracy, the chance of sudden cardiac death over 10 years and when it is most likely to happen.
    The deep learning technology is called Survival Study of Cardiac Arrhythmia Risk (SSCAR). The name alludes to cardiac scarring caused by heart disease, which often results in lethal arrhythmias and is the key to the algorithm’s predictions.
    The team used contrast-enhanced cardiac images that visualize scar distribution from hundreds of real patients with cardiac scarring at Johns Hopkins Hospital to train an algorithm to detect patterns and relationships not visible to the naked eye. Current clinical cardiac image analysis extracts only simple scar features, like volume and mass, severely underutilizing what this work demonstrates to be critical data (a toy sketch of image-based survival prediction appears below).
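    To make the idea of image-based survival prediction concrete, here is a toy PyTorch sketch of a discrete-time survival model: a small convolutional network maps an image to per-year hazard probabilities over a 10-year horizon, from which a survival curve follows. This illustrates the general approach only, not the SSCAR architecture; the network shape and input size are arbitrary assumptions.

    ```python
    # Toy discrete-time survival network (illustrative, not SSCAR).
    import torch
    import torch.nn as nn

    N_YEARS = 10  # prediction horizon in yearly intervals

    class ToySurvivalCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, N_YEARS)  # one logit per yearly interval

        def forward(self, image):
            h = self.features(image).flatten(1)
            hazards = torch.sigmoid(self.head(h))           # P(event in year t | alive at t)
            survival = torch.cumprod(1.0 - hazards, dim=1)  # survival curve S(t)
            return hazards, survival

    model = ToySurvivalCNN()
    image = torch.randn(1, 1, 128, 128)        # dummy stand-in for a cardiac image
    hazards, survival = model(image)
    print(survival.shape)                      # torch.Size([1, 10]): survival probability per year
    ```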

  • Deep-sea osmolyte finds applications in molecular machines

    The molecule trimethylamine N-oxide (TMAO) can be used to reversibly modulate the rigidity of microtubules, a key component of molecular machines and molecular robots.
    Kinesin and microtubules (MTs) are major components of the cytoskeleton in cells of living organisms. Together, they play crucial roles in a wide range of cellular functions, most significantly intracellular transport. Recent developments in bioengineering and biotechnology allow these natural molecules to be used as components of molecular machines and molecular robots. The in vitro gliding assay has been the best platform to evaluate the potential of these biomolecules for molecular machines.
    A team of scientists led by Assistant Professor Arif Md. Rashedul Kabir of Hokkaido University has reported a simple and straightforward method to reversibly and dynamically control the rigidity of kinesin-propelled MTs. Their findings have been published in ACS Omega, a journal published by the American Chemical Society (ACS).
    In an in vitro gliding assay, kinesin molecules are attached to a base material and propel MTs as molecular shuttles. The rigidity of the motile MTs is a crucial metric that determines the success of their application as components of molecular machines. One of the major hurdles in regulating the rigidity of MTs has been that previous methods altered it permanently and irreversibly. A method to control the rigidity of MTs in a reversible manner would allow for dynamic adjustment of MT properties and functions, and would be a major advance in molecular machines, molecular robotics, and related fields.
    Kabir and his colleagues employed trimethylamine N-oxide (TMAO), a molecule that acts as an osmolyte in many deep-sea organisms, to study its effects on MTs in an in vitro gliding assay. TMAO is known to stabilize proteins under stressful or denaturing conditions of heat, pressure, and chemicals. The team demonstrated that TMAO affects the rigidity of MTs without requiring any modification of the MT structure.
    At relatively low TMAO concentrations (0 mM to 200 mM), MTs remained straight and rigid, and the motion of the MTs in the gliding assay was unaffected. As the TMAO concentration was increased further, the MTs showed bending or buckling, and their velocity decreased. The team quantified this effect of TMAO on the conformation of the MTs, showing that the persistence length, a measure of rigidity, was 285 ± 47 µm in the absence of TMAO and decreased to 37 ± 4 µm in the presence of 1500 mM TMAO (a sketch of how persistence length can be estimated from a filament’s traced shape appears at the end of this story).
    The team further demonstrated that the process was completely reversible, with MTs regaining their original persistence length and velocity when the TMAO was eliminated. These results confirmed that TMAO can be used to reversibly modulate the mechanical property and dynamic functions of MTs.
    Finally, the team investigated the mechanism by which TMAO alters the rigidity of MTs. Based on their investigations, Kabir and his team concluded that TMAO disrupts the uniformity of the force applied by the kinesins along the MTs in the gliding assay; this non-uniform force appears to be responsible for the change in rigidity, or persistence length, of the kinesin-propelled MTs.
    “This study has demonstrated a facile method for regulating the MT rigidity reversibly in an in vitro gliding assay without depending on any modifications to the MT structures,” Kabir said. Future work will focus on elucidating the exact mechanism by which TMAO acts, as well as on utilizing TMAO to control the properties and functions of MTs and kinesins, which in turn will benefit molecular machines and molecular robotics.
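    As a rough illustration of how persistence length can be obtained from imaging data, the sketch below estimates it from a filament’s traced 2D contour by fitting the decay of the tangent-angle correlation, which for a filament observed in two dimensions follows exp(-s / (2 Lp)) with arc length s. This is a generic worm-like-chain analysis under stated assumptions, not the exact pipeline used in the study, and the synthetic contour at the end is purely illustrative.

    ```python
    # Generic persistence-length estimate from a traced 2D contour (illustrative assumptions).
    import numpy as np

    def persistence_length(xy):
        """Estimate Lp (same units as xy) from an (N, 2) array of contour points."""
        seg = np.diff(xy, axis=0)
        ds = np.linalg.norm(seg, axis=1)             # segment lengths
        theta = np.arctan2(seg[:, 1], seg[:, 0])     # tangent angle of each segment
        s_mid = np.cumsum(ds) - ds / 2               # arc-length position of each segment

        lags, corr = [], []
        for k in range(1, len(theta) // 2):          # average over all segment pairs at lag k
            lags.append(np.mean(s_mid[k:] - s_mid[:-k]))
            corr.append(np.mean(np.cos(theta[k:] - theta[:-k])))
        lags, corr = np.array(lags), np.array(corr)

        # Fit ln<cos(dtheta)> = -s / (2 * Lp), valid for filaments confined to 2D
        keep = corr > 0
        slope = np.polyfit(lags[keep], np.log(corr[keep]), 1)[0]
        return -1.0 / (2.0 * slope)

    # Synthetic, gently curved contour in micrometres (purely illustrative)
    x = np.linspace(0.0, 20.0, 200)
    contour = np.column_stack([x, 0.02 * x**2])
    print(f"estimated persistence length ~ {persistence_length(contour):.0f} um")
    ```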
    Story Source:
    Materials provided by Hokkaido University. Note: Content may be edited for style and length.

  • A mathematical shortcut for determining quantum information lifetimes

    A new, elegant equation allows scientists to easily compute the quantum information lifetime of 12,000 different materials.
    Scientists have uncovered a mathematical shortcut for calculating an all-important feature of quantum devices.
    Having crunched the numbers on the quantum properties of 12,000 elements and compounds, researchers have published a new equation for approximating the length of time the materials can maintain quantum information, called “coherence time.”
    The elegant formula allows scientists to estimate the materials’ coherence times in an instant — versus the hours or weeks it would take to calculate an exact value.
    The team, comprising scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the University of Chicago, Tohoku University in Japan and Ajou University in Korea, published their result in April in the Proceedings of the National Academy of Sciences.
    Their work is supported by the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, and by Q-NEXT, a DOE National Quantum Information Science Research Center led by Argonne.

  • The side effects of quantum error correction and how to cope with them

    It is well established that quantum error correction can improve the performance of quantum sensors. But new theory work cautions that, unexpectedly, the approach can also give rise to inaccurate and misleading results — and shows how to rectify these shortcomings.

    Quantum systems can interact with one another and with their surroundings in ways that are fundamentally different from those of their classical counterparts. In a quantum sensor, the particularities of these interactions are exploited to obtain characteristic information about the environment of the quantum system, for instance the strength of a magnetic or electric field in which it is immersed. Crucially, when such a device suitably harnesses the laws of quantum mechanics, its sensitivity can surpass what is possible, even in principle, with conventional, classical technologies. Unfortunately, quantum sensors are exquisitely sensitive not only to the physical quantities of interest, but also to noise.
    One way to suppress these unwanted contributions is to apply schemes collectively known as quantum error correction (QEC). This approach is attracting considerable and increasing attention, as it might enable practical high-precision quantum sensors in a wider range of applications than is possible today. But the benefits of error-corrected quantum sensing come with major potential side effects, as a team led by Florentin Reiter, an Ambizione fellow of the Swiss National Science Foundation working in the group of Jonathan Home in the Department of Physics at ETH Zurich, has now found. Writing in Physical Review Letters, they report theoretical work in which they show that in realistic settings QEC can distort the output of quantum sensors and might even lead to unphysical results. But not all is lost — the researchers also describe procedures for restoring the correct results.
    Drifting off track
    In applying QEC to quantum sensing, errors are repeatedly corrected as the sensor acquires information about the target quantity. As an analogy, imagine a car that keeps departing from the centre of the lane it travels in. In the ideal case, the drift is corrected by constant counter-steering. In the equivalent scenario for quantum sensing, it has been shown that by constant — or very frequent — error correction, the detrimental effects of noise can be suppressed completely, at least in principle. The story is rather different when, for practical reasons, the driver can perform correcting interventions with the steering wheel only at specific points in time. Then, as experience tells us, the sequence of driving ahead and making corrective movements has to be finely tuned. If the sequence did not matter, the motorist could simply perform all steering manoeuvres at home in the garage and then confidently put their foot down on the accelerator. The reason why this does not work is that rotation and translation are not commutative — the order in which the actions of one type or the other are executed changes the outcome.
    For quantum sensors, a somewhat similar situation with non-commuting actions can arise, specifically for the ‘sensing action’ and the ‘error action’. The former is described by the Hamiltonian of the sensor, the latter by error operators. Now, Ivan Rojkov, a doctoral researcher working at ETH with Reiter and collaborating with colleagues at the Massachusetts Institute of Technology (MIT), found that the sensor output experiences a systematic bias — or ‘drift’ — when there is a delay between an error and its subsequent correction. Depending on the length of this delay, the dynamics of the quantum system, which should ideally be governed by the Hamiltonian alone, becomes contaminated by interference from the error operators. The upshot is that during the delay the sensor typically acquires less information about the quantity of interest, such as a magnetic or electric field, than it would have had no error occurred. These different speeds of information acquisition then result in a distortion of the output (a toy numerical illustration appears at the end of this story).
    Sensical sensing
    This QEC-induced bias matters. If unaccounted for, estimates of the minimum signal that the quantum sensor can detect might, for example, end up being overly optimistic, as Rojkov et al. show. For experiments that push the limits of precision, such wrong estimates are particularly deceptive. But the team also provides an escape route to overcome the bias. The amount of bias introduced by finite-rate QEC can be calculated and, through appropriate measures, rectified in post-processing — so that the sensor output again makes perfect sense. Also, factoring in that QEC can give rise to systematic bias can help in devising the ideal sensing protocol ahead of the measurement.
    Given that the effect identified in this work is present in various common error-corrected quantum sensing schemes, these results are set to provide an important contribution to extracting the highest precision from a broad range of quantum sensors — and to keep them on track to deliver on their promise of leading us into regimes that cannot be explored with classical sensors.
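    The role of non-commutativity can be seen in a toy model. In the sketch below (a deliberately simplified illustration, not the model analysed in the paper), a single qubit accumulates phase under a sensing Hamiltonian proportional to sigma_z; a bit-flip error (sigma_x) that is corrected only after a delay tau reverses the sign of the phase picked up during that delay, so the inferred frequency comes out systematically low. All numbers are arbitrary.

    ```python
    # Toy qubit sensor with a delayed error correction (illustrative only).
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)  # bit-flip error operator

    def evolve(state, omega, t):
        """Free evolution under the sensing Hamiltonian H = (omega / 2) * sigma_z."""
        U = np.diag(np.exp(-1j * omega * t / 2 * np.array([1.0, -1.0])))
        return U @ state

    def estimate_omega(state, T):
        """Read the accumulated relative phase and convert it to a frequency."""
        return np.angle(state[1] / state[0]) / T

    omega, T, tau = 0.1, 10.0, 1.0          # true frequency, total time, correction delay
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

    ideal = evolve(plus, omega, T)
    print(estimate_omega(ideal, T))          # ~0.10, the true value

    noisy = evolve(plus, omega, 3.0)         # error strikes at t = 3 ...
    noisy = sx @ noisy
    noisy = evolve(noisy, omega, tau)        # ... but is corrected only after a delay tau
    noisy = sx @ noisy
    noisy = evolve(noisy, omega, T - 3.0 - tau)
    print(estimate_omega(noisy, T))          # ~0.08, i.e. omega * (1 - 2*tau/T): biased low
    ```

    In this toy example the bias is exactly 2·tau/T times the true frequency and could be removed in post-processing once the delay is known, mirroring the rectification strategy described above.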
    Story Source:
    Materials provided by ETH Zurich Department of Physics. Original written by Andreas Trabesinger. Note: Content may be edited for style and length.

  • Machine learning model could better measure baseball players' performance

    In the movie “Moneyball,” a young economics graduate and a cash-strapped Major League Baseball general manager introduce a new way to evaluate baseball players’ value. Their innovative idea of weighing players’ statistical data against their salaries enabled the Oakland A’s to recruit quality talent overlooked by other teams — completely revitalizing the team without exceeding its budget.
    New research at the Penn State College of Information Sciences and Technology could make a similar impact on the sport. The team has developed a machine learning model that could better measure baseball players’ and teams’ short- and long-term performance, compared to existing statistical analysis methods for the sport. Drawing on recent advances in natural language processing and computer vision, their approach would completely change, and could enhance, the way the state of a game and a player’s impact on the game is measured.
    According to Connor Heaton, doctoral candidate in the College of IST, the existing family of methods, known as sabermetrics, relies upon the number of times a player or team achieves a discrete event — such as hitting a double or home run. However, it doesn’t consider the surrounding context of each action.
    “Think about a scenario in which a player recorded a single in his last plate appearance,” said Heaton. “He could have hit a dribbler down the third base line, advancing a runner from first to second and beat the throw to first, or hit a ball to deep left field and reached first base comfortably but didn’t have the speed to push for a double. Describing both situations as resulting in ‘a single’ is accurate but does not tell the whole story.”
    Heaton’s model instead learns the meaning of in-game events based on the impact they have on the game and the context in which they occur, then outputs numerical representations of how players impact the game by viewing the game as a sequence of events.
    “We often talk about baseball in terms of ‘this player had two singles and a double yesterday,’ or ‘he went one for four,’” said Heaton. “A lot of the ways in which we talk about the game just summarize the events with one summary statistic. Our work is trying to take a more holistic picture of the game and to get a more nuanced, computational description of how players impact the game.”
    In Heaton’s novel method, he leveraged sequential modeling techniques used in natural language processing to help computers learn the role or meaning of different words. He applied that approach to teach his model the role or meaning of different events in a baseball game — for example, when a batter hits a single. Then, he modeled the game as a sequence of events to offer new insight into existing statistics.
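    As a rough illustration of this sequential-modeling idea (and not the model developed in the study), the sketch below treats in-game events as tokens, embeds them, and passes the sequence through a small Transformer encoder to obtain a contextual “state of the game” vector; the event vocabulary and dimensions are made up.

    ```python
    # Minimal event-sequence encoder sketch (illustrative, not the Penn State model).
    import torch
    import torch.nn as nn

    EVENTS = ["single", "double", "home_run", "walk", "strikeout", "ground_out", "steal"]
    event_to_id = {e: i for i, e in enumerate(EVENTS)}

    class GameEncoder(nn.Module):
        def __init__(self, vocab_size, d_model=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, event_ids):
            x = self.embed(event_ids)            # (batch, seq_len, d_model) event embeddings
            h = self.encoder(x)                  # contextualised event representations
            return h.mean(dim=1)                 # pooled "state of the game" vector

    game = ["walk", "steal", "single", "strikeout", "home_run"]
    ids = torch.tensor([[event_to_id[e] for e in game]])
    print(GameEncoder(len(EVENTS))(ids).shape)   # torch.Size([1, 32])
    ```

    In such a setup, the embeddings and encoder are what would be trained to capture how much each event, in its context, shifts the course of a game, rather than counting events in isolation.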

  • How did visitors experience the domestic space in Pompeii?

    Researchers at Lund University in Sweden have used virtual reality and 3D eye-tracking technology to examine what drew visitors’ attention when they entered the stunning environment of an ancient Roman house. The team recreated the House of Greek Epigrams in 3D and tracked the gaze of study participants as they viewed the home.
    Unlike today, Roman houses were not a place of refuge from work. Work and daily activities were intermingled during the day. Houses were designed to communicate the personal power and status of the owner and his family. The visual impression was so important that architects moved architectural elements such as columns to frame views, added fountains as focal points, or simply decorated the space by imitating those elements when it was not possible to build them.
    “By tracking how people view the house, we can get closer to unlock what was in the mind of those that designed it. What messages are being conveyed, even in the smallest detail? We found many ways in which the owner was conveying a sense of power and wealth to visitors,” says Giacomo Landeschi, researcher at the Department of Archaeology and Ancient History, Lund University.
    The House of Greek Epigrams was destroyed in the eruption of Mount Vesuvius in AD 79. It had a room completely covered with wall paintings accompanied by Greek inscriptions that gave the house its name.
    The house was elaborately designed; for example, it featured wall paintings that were partially visible from the outside but whose details only close visitors could see. There was also erotic art that natural light illuminated primarily at appropriate times. Certain visual and architectural elements echoed a tension between Greek and Roman cultures at the time.
    A follow-up study will analyse the results in more detail.
    The researchers say that the unique nature of the research could be further enhanced by adding other sensory experiences, such as auditory involvement, in the future.
    “This study shows that we can now not only recreate the physical space but also understand the actual experience of the people at the time. This is an entirely new field of research for archaeology, that opens up new possibilities,” concludes Danilo Marco Campanaro, PhD candidate at the Department of Archaeology and Ancient History, Lund University.
    About the Study
    The study marks a significant advance in the use of virtual reality in archaeology, employing its heuristic potential to carry out more advanced spatial analysis. It set out to establish a methodology for accurately recording and analysing information about participants’ gaze and attention. To do this, the researchers used a 3D eye-tracker, a game engine, and Geographic Information Systems.
    Video: https://youtu.be/sNcAkkNR-qU
    Story Source:
    Materials provided by Lund University. Note: Content may be edited for style and length.

  • New approach better predicts air pollution models’ performance in health studies

    Nine out of 10 people in the world breathe air that exceeds the World Health Organization’s guidelines for air pollution. The era of big data and machine learning has facilitated predicting air pollution concentrations across both space and time. With approximately seven million people dying each year as a result of air pollution, leveraging these novel air pollution prediction models for studies of health is important. However, it is not always known whether these air pollution prediction models can be used in health studies.
    A new study from Jenna Krall, assistant professor in the Department of Global and Community Health, develops a new approach to aid air quality modelers in determining whether their air pollution prediction models can be used in epidemiologic studies, that is, studies that assess health effects.
    “Understanding the relationship between air pollution and health often requires predicting air pollution concentrations. Our approach will be useful for determining whether an air pollution prediction model can be used in subsequent health studies. As a result, our work can help translate new prediction models to better understand air pollution health impacts,” said Krall.
    Existing air pollution prediction models are generally evaluated on how well they can predict air pollution levels. Using data from 17 locations in the US, Krall found that the new evaluation approach better identified the errors in air pollution prediction models that are most relevant for health studies (a toy illustration of this distinction appears at the end of this story).
    “Assessing the health estimation capacity of air pollution exposure prediction models” was published in Environmental Health in March 2022.
    Joshua P. Keller of Colorado State University and Roger D. Peng of the Johns Hopkins Bloomberg School of Public Health were a part of the research team. Krall was supported in part by the Thomas F. and Kate Miller Jeffress Memorial Trust, Bank of America, Trustee. Peng was supported in part by the US Environmental Protection Agency (EPA) through award RD835871. This work has not been formally reviewed by the EPA. The views expressed in this document are solely those of the authors and do not necessarily reflect those of the agency. EPA does not endorse any products or commercial services mentioned in this publication.
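    The core point, that predictive accuracy alone does not guarantee a usable exposure model for health studies, can be illustrated with a toy simulation (not the method of the paper): two hypothetical exposure models with the same RMSE produce very different bias in the health effect estimated by a downstream regression. All numbers are made up.

    ```python
    # Toy simulation: equal prediction error, unequal health-effect bias (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta = 50_000, 0.5                       # sample size, true health effect

    x = rng.normal(10.0, 2.0, n)                # true pollutant exposure
    y = beta * x + rng.normal(0.0, 1.0, n)      # health outcome

    xhat_a = x + rng.normal(0.0, 1.0, n)        # model A: noisy but unbiased predictions
    xhat_b = x.mean() + 0.5 * (x - x.mean())    # model B: over-smoothed predictions

    def rmse(pred):
        return np.sqrt(np.mean((pred - x) ** 2))

    def effect_estimate(pred):
        """Slope from regressing the health outcome on predicted exposure."""
        return np.polyfit(pred, y, 1)[0]

    print(rmse(xhat_a), rmse(xhat_b))                        # both ~1.0
    print(effect_estimate(xhat_a), effect_estimate(xhat_b))  # ~0.40 vs ~1.0 (true value 0.5)
    ```

    In this toy, model A’s noisy predictions attenuate the estimated effect while model B’s over-smoothed predictions inflate it, even though a purely predictive evaluation ranks the two models identically.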
    Story Source:
    Materials provided by George Mason University. Note: Content may be edited for style and length.