More stories


    Calculations reveal high-resolution view of quarks inside protons

    A collaboration of nuclear theorists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, Argonne National Laboratory, Temple University, Adam Mickiewicz University in Poland, and the University of Bonn in Germany has used supercomputers to predict the spatial distributions of charge, momentum, and other properties of “up” and “down” quarks within protons. The results, just published in Physical Review D, revealed key differences in the characteristics of the up and down quarks.
    “This work is the first to leverage a new theoretical approach to obtain a high-resolution map of quarks within a proton,” said Swagato Mukherjee of Brookhaven Lab’s nuclear theory group and a coauthor on the paper. “Our calculations show that the up quark is more symmetrically distributed and spread over a smaller distance than the down quark. These differences imply that up and down quarks may make different contributions to the fundamental properties and structure of the proton, including its internal energy and spin.”
    Coauthor Martha Constantinou of Temple University noted, “Our calculations provide input for interpreting data from nuclear physics experiments exploring how quarks and the gluons that hold them together are distributed within the proton, giving rise to the proton’s overall properties.”
    Such experiments are already taking place at the Continuous Electron Beam Accelerator Facility (CEBAF), a DOE Office of Science user facility at Thomas Jefferson National Accelerator Facility. Higher resolution versions are planned for the future Electron-Ion Collider (EIC) at Brookhaven Lab. In these experiments, high-energy electrons emit virtual particles of light that scatter off and change the overall momentum of a proton without breaking it apart. The way the momentum of the proton changes in response to these scatterings reveals details about the quarks and gluons — the inner components of the proton — sort of like an x-ray imaging technique for the building blocks of bulk matter.
    New theoretical approach to GPD
    Specifically, the scatterings give scientists access to the Generalized Parton Distribution (GPD) of the proton — parton being the collective name for quarks and gluons. If you picture the proton as a bag filled with marbles representing quarks and gluons, the GPD provides a description of how the energy-momentum and other characteristics of these marbles are distributed within the bag — for example, when the bag is shaken and the marbles move around. It can be compared to a map that indicates the likelihood of finding a marble with a specific energy-momentum at a particular position inside the bag. Knowing the distribution of these quark and gluon characteristics allows scientists to understand the inner workings of the proton, which may lead to new ways to apply that knowledge.
    “To obtain a detailed map, we need to analyze many scattering interactions, involving various values of momentum change of the proton,” said Shohini Bhattacharya, a research associate in Brookhaven’s nuclear theory group and the RIKEN BNL Research Center (RBRC).


    Scientists discover unusual ultrafast motion in layered magnetic materials

    A common metal paper clip will stick to a magnet. Scientists classify such iron-containing materials as ferromagnets. A little over a century ago, physicists Albert Einstein and Wander de Haas reported a surprising effect with a ferromagnet. If you suspend an iron cylinder from a wire and expose it to a magnetic field, it will start rotating if you simply reverse the direction of the magnetic field.
    “Einstein and de Haas’s experiment is almost like a magic show,” said Haidan Wen, a physicist in the Materials Science and X-ray Science divisions of the U.S. Department of Energy’s (DOE) Argonne National Laboratory. “You can cause a cylinder to rotate without ever touching it.”
    In the journal Nature, a team of researchers from Argonne and other U.S. national laboratories and universities now reports an analogous yet different effect in an “anti”-ferromagnet. This could have important applications in devices requiring ultra-precise and ultrafast motion control. One example is high-speed nanomotors for biomedical applications, such as nanorobots for minimally invasive diagnosis and surgery.
    The difference between a ferromagnet and antiferromagnet has to do with a property called electron spin. This spin has a direction. Scientists represent the direction with an arrow, which can point up or down or any direction in between. In the magnetized ferromagnet mentioned above, the arrows associated with all the electrons in the iron atoms can point in the same direction, say, up. Reversing the magnetic field reverses the direction of the electron spins. So, all arrows are pointing down. This reversal leads to the cylinder’s rotation.
    “In this experiment, a microscopic property, electron spin, is exploited to elicit a mechanical response in a cylinder, a macroscopic object,” said Alfred Zong, a Miller Research Fellow at the University of California, Berkeley.
    In antiferromagnets, instead of the electron spins all pointing up, for example, they alternate from up to down between adjacent electrons. These opposite spins cancel each other out, and antiferromagnets thus do not respond to changes in a magnetic field as ferromagnets do.
    “The question we asked ourselves is, can electron spin elicit a response in an antiferromagnet that is different from, but similar in spirit to, the cylinder rotation in the Einstein-de Haas experiment?” Wen said.


    Workaround for randomized experiments

    A new statistical tool can help researchers get meaningful results when a randomized experiment, considered the gold standard, is not possible.
    Randomized experiments split participants into groups by chance, with one undergoing an intervention and the other not. But in real-world situations, they can’t always be done. Companies might not want to use the method, or such experiments might be against the law.
    Developed by a researcher at The University of Texas at Austin, the new tool, called two-step synthetic control, adapts an existing research workaround known as the synthetic control method.
    The traditional synthetic control method creates synthetic control groups from the data, in place of real ones. The groups are weighted statistically and compared with a group undergoing an intervention.
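    The core weighting step of the traditional method can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the authors’ implementation: nonnegative weights summing to one are chosen so that the weighted mix of control units tracks the treated unit over the pre-intervention periods.

```python
# Toy pre-intervention outcomes (illustrative numbers, not study data):
# five time periods for one treated unit and two control units.
treated_pre = [10.0, 11.0, 12.0, 13.0, 14.0]
control_a = [8.0, 9.0, 10.0, 11.0, 12.0]
control_b = [14.0, 15.0, 16.0, 17.0, 18.0]

# Traditional synthetic control: nonnegative weights summing to 1,
# chosen to minimize squared pre-intervention fit error (grid search).
best_w, best_err = 0.0, float("inf")
for step in range(1001):
    w = step / 1000
    err = sum(
        (t - (w * a + (1 - w) * b)) ** 2
        for t, a, b in zip(treated_pre, control_a, control_b)
    )
    if err < best_err:
        best_w, best_err = w, err

print(f"weight on control A: {best_w:.3f}")  # about 2/3: the mix tracks the treated unit
```

With two controls a grid search suffices; real applications solve a constrained least-squares problem over many control units. The two-step method described in this story additionally checks whether such a restricted fit is adequate before relaxing the constraints.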
    But the synthetic control method does not perfectly apply to all situations, especially ones in which the intervention group is different from control groups, according to Kathleen Li, an assistant professor of marketing at the McCombs School of Business. In these scenarios, the method’s lack of flexibility could lead to less accurate results.
    “Our framework allows managers and policymakers to estimate effects they previously weren’t able to estimate accurately,” said Li, who developed the tool along with Venkatesh Shankar of Texas A&M University. “They get a more precise estimate that can help them make more informed decisions.”
    The study, published in advance online in the journal Management Science, offers a two-step synthetic control approach: First, it determines whether the traditional synthetic control method applies to a given case. If it does not, the second step uses a more flexible framework that allows the control weights to sum to more or less than 100% and allows the synthetic control group to be shifted up or down. The researchers tested the new method on a real-world situation by looking at how sales of tampons responded in 2016, when New York repealed a sales tax on them.


    Faster thin film devices for energy storage and electronics

    An international research team from the Max Planck Institute of Microstructure Physics in Halle (Saale), Germany, the University of Cambridge, UK, and the University of Pennsylvania, USA, has reported the first realization of single-crystalline T-Nb2O5 thin films with two-dimensional (2D) vertical ionic transport channels, which enable a fast and colossal insulator-metal transition via Li-ion intercalation through the 2D channels.
    Since the 1940s, scientists have been exploring the use of niobium oxide, specifically a form of niobium oxide known as T-Nb2O5, to create more efficient batteries. This unique material is known for its ability to allow lithium ions, the tiny charged particles that make batteries work, to move quickly within it. The faster these lithium ions can move, the faster a battery can be charged.
    The challenge, however, has always been to grow this niobium oxide material into thin, flat layers, or ‘films’ that are of high enough quality to be used in practical applications. This problem stems from the complex structure of T-Nb2O5 and the existence of many similar forms, or polymorphs, of niobium oxide.
    Now, in a paper published in Nature Materials, researchers from the Max Planck Institute of Microstructure Physics, University of Cambridge and the University of Pennsylvania have successfully demonstrated the growth of high-quality, single-crystal thin films of T-Nb2O5, aligned in such a way that the lithium ions can move even faster along vertical ionic transport channels.
    The T-Nb2O5 films undergo a significant electrical change at an early stage of Li insertion into the initially insulating films. This is a dramatic shift — the resistivity of the material decreases by a factor of 100 billion. The research team further demonstrates tunable, low-voltage operation of thin film devices by altering the chemical composition of the ‘gate’ electrode, a component that controls the flow of ions in a device, extending the potential applications.
    The Max Planck Institute of Microstructure Physics group realized the growth of the single-crystalline T-Nb2O5 thin films and showed how Li-ion intercalation can dramatically increase their electrical conductivity. Together with the University of Cambridge group, it discovered multiple previously unknown transitions in the material’s structure as the concentration of lithium ions was changed. These transitions change the electronic properties of the material, allowing it to switch from being an insulator to a metal, meaning that it goes from blocking electric current to conducting it. Researchers from the University of Pennsylvania rationalized the multiple phase transitions that were observed, as well as how these phases might be related to the concentration of lithium ions and their arrangement within the crystal structure.
    These results were possible only through synergies between the three international groups with their diverse specialties: thin films from the Max Planck Institute of Microstructure Physics, batteries from the University of Cambridge, and theory from the University of Pennsylvania.


    Safety of AI-supported mammography screening

    Mammography screening supported by artificial intelligence (AI) is a safe alternative to today’s conventional double reading by radiologists and can reduce heavy workloads for doctors. This has now been shown in an interim analysis of a prospective, randomised controlled trial, which addressed the clinical safety of using AI in mammography screening. The trial, led by researchers from Lund University in Sweden, has been published in The Lancet Oncology.
    Each year, around one million women in Sweden are called to mammography screening. Each screening examination is reviewed by two breast radiologists to ensure high sensitivity, so-called double reading. There is, however, a workforce shortage of breast radiologists, in Sweden and elsewhere, which can put the screening service at risk. Lately, the potential of AI to support mammography screening has attracted much attention, but how this is to be optimally conducted and what the clinical consequences will be remain unclear.
    To know with certainty what happens when radiologists work with the support of AI requires studies in which women are randomly allocated to AI-supported screening or to standard screening. The Mammography Screening with Artificial Intelligence (MASAI) trial is the first randomised controlled trial evaluating the effect of AI-supported screening.
    “In our trial, we used AI to identify screening examinations with a high risk of breast cancer, and these underwent double reading by radiologists. The remaining examinations were classified as low risk and were read by only one radiologist. In the screen reading, radiologists used AI as detection support, with the AI highlighting suspicious findings on the images,” says Kristina Lång, researcher and associate professor in diagnostic radiology at Lund University and consultant at Skåne University Hospital, who led the study.
    The 80,033 women included in the safety analysis were randomly allocated into two groups: 40,003 women in the intervention group that underwent AI-supported screening and 40,030 in the control group that underwent standard double reading without AI support.
    “We found that using AI resulted in the detection of 20% (41) more cancers compared with standard screening, without affecting false positives. A false positive in screening occurs when a woman is recalled but cleared of suspicion of cancer after workup,” says Kristina Lång.
    At the same time, the screen-reading workload for radiologists was reduced by 44%. The number of screen readings with AI-supported screening was 46,345, compared with 83,231 with standard screening.
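    The reported workload reduction follows directly from the reading counts given above:

```python
readings_with_ai = 46_345   # screen readings, AI-supported arm
readings_standard = 83_231  # screen readings, standard double-reading arm

reduction = (readings_standard - readings_with_ai) / readings_standard
print(f"screen-reading workload reduction: {reduction:.1%}")  # -> 44.3%
```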


    Machine learning, blockchain technology could help counter spread of fake news

    A proposed machine learning framework and expanded use of blockchain technology could help counter the spread of fake news by allowing content creators to focus on areas where the misinformation is likely to do the most public harm, according to new research from Binghamton University, State University of New York.
    The research, led by Thi Tran, assistant professor of management information systems at Binghamton University’s School of Management, expands on existing studies by offering tools for recognizing patterns in misinformation and helping content creators zero in on the worst offenders.
    “I hope this research helps us educate more people about being aware of the patterns,” Tran said, “so they know when to verify something before sharing it and are more alert to mismatches between the headline and the content itself, which would keep the misinformation from spreading unintentionally.”
    Tran’s research proposes machine learning — a branch of artificial intelligence (AI) and computer science that uses data and algorithms to imitate the way humans learn, gradually improving accuracy — to help determine the extent to which content could harm its audience.
    Examples could include stories that circulated during the height of the COVID-19 pandemic touting false alternative treatments in place of the vaccine.
    The framework would use data and algorithms to spot indicators of misinformation and use those examples to inform and improve the detection process. It would also draw on user characteristics from people with prior experience or knowledge of fake news to help piece together a harm index. The index would reflect the severity of possible harm to a person in a given context if they were exposed to and victimized by the misinformation.
    “We’re most likely to care about fake news if it causes a harm that impacts readers or audiences. If people perceive there’s no harm, they’re more likely to share the misinformation,” Tran said. “The harms come from whether audiences act according to claims from the misinformation, or if they refuse the proper action because of it. If we have a systematic way of identifying where misinformation will do the most harm, that will help us know where to focus on mitigation.”
    Based on the information gathered, Tran said, the machine learning system could help fake news mitigators discern which messages are likely to be the most damaging if allowed to spread unchallenged.


    Thermal imaging innovation allows AI to see through pitch darkness like broad daylight

    Researchers at Purdue University are advancing the world of robotics and autonomy with their patent-pending method that improves on traditional machine vision and perception.
    Zubin Jacob, the Elmore Associate Professor of Electrical and Computer Engineering in the Elmore Family School of Electrical and Computer Engineering, and research scientist Fanglin Bao have developed HADAR, or heat-assisted detection and ranging. Their research was featured on the cover of the July 26 issue of the peer-reviewed journal Nature. A video about HADAR is available on YouTube. Nature also has released a podcast episode that includes an interview with Jacob.
    Jacob said it is expected that one in 10 vehicles will be automated and that there will be 20 million robot helpers that serve people by 2030.
    “Each of these agents will collect information about its surrounding scene through advanced sensors to make decisions without human intervention,” Jacob said. “However, simultaneous perception of the scene by numerous agents is fundamentally prohibitive.”
    Traditional active sensors like LiDAR, or light detection and ranging, radar and sonar emit signals and subsequently receive them to collect 3D information about a scene. These methods have drawbacks that increase as they are scaled up, including signal interference and risks to people’s eye safety. In comparison, video cameras that work based on sunlight or other sources of illumination are advantageous, but low-light conditions such as nighttime, fog or rain present a serious impediment.
    Traditional thermal imaging is a fully passive sensing method that collects invisible heat radiation originating from all objects in a scene. It can sense through darkness, inclement weather and solar glare. But Jacob said fundamental challenges hinder its use today.
    “Objects and their environment constantly emit and scatter thermal radiation, leading to textureless images famously known as the ‘ghosting effect,'” Bao said. “Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost. This loss of information, texture and features is a roadblock for machine perception using heat radiation.”
    HADAR combines thermal physics, infrared imaging and machine learning to pave the way to fully passive and physics-aware machine perception.


    Scientists uncover a surprising connection between number theory and evolutionary genetics

    An interdisciplinary team of mathematicians, engineers, physicists, and medical scientists has uncovered an unexpected link between pure mathematics and genetics that reveals key insights into the structure of neutral mutations and the evolution of organisms.
    Number theory, the study of the properties of positive integers, is perhaps the purest form of mathematics. At first sight, it may seem far too abstract to apply to the natural world. In fact, the influential American number theorist Leonard Dickson wrote ‘Thank God that number theory is unsullied by any application.’ And yet, again and again, number theory finds unexpected applications in science and engineering, from leaf angles that (almost) universally follow the Fibonacci sequence, to modern encryption techniques based on factoring prime numbers. Now, researchers have demonstrated an unexpected link between number theory and evolutionary genetics.
    Specifically, the team of researchers (from Oxford, Harvard, Cambridge, GUST, MIT, Imperial, and the Alan Turing Institute) has discovered a deep connection between the sums-of-digits function from number theory and a key quantity in genetics, the phenotype mutational robustness. This quantity is defined as the average probability that a point mutation does not change a phenotype (a characteristic of an organism).
    The discovery may have important implications for evolutionary genetics. Many genetic mutations are neutral, meaning that they can slowly accumulate over time without affecting the viability of the phenotype. These neutral mutations cause genome sequences to change at a steady rate over time. Because this rate is known, scientists can compare the percentage difference in the sequence between two organisms and infer when their latest common ancestor lived.
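    The dating logic in this paragraph is the standard “molecular clock” calculation. A minimal sketch with illustrative numbers (not values from the study): divergence accumulates along both lineages since the split, hence the factor of two.

```python
# Illustrative values only -- not from the study.
divergence = 0.02   # fraction of sites differing between the two sequences
rate = 1e-9         # neutral substitutions per site per year, per lineage

# Both lineages accumulate changes since the split, hence the factor of 2.
years_since_split = divergence / (2 * rate)
print(f"last common ancestor lived ~{years_since_split:,.0f} years ago")  # ~10,000,000
```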
    But the existence of these neutral mutations posed an important question: what fraction of mutations to a sequence are neutral? This property, called the phenotype mutational robustness, is the average fraction of mutations, across all sequences, that leave the phenotype unaffected.
    Professor Ard Louis from the University of Oxford, who led the study, said: ‘We have known for some time that many biological systems exhibit remarkably high phenotype robustness, without which evolution would not be possible. But we didn’t know what the absolute maximal robustness possible would be, or if there even was a maximum.’
    It is precisely this question that the team has answered. They proved that the maximum robustness is proportional to the logarithm of the fraction of all possible sequences that map to a phenotype, with a correction given by the sums-of-digits function s_k(n), defined as the sum of the digits of a natural number n in base k. For example, for n = 123 in base 10, the digit sum is s_10(123) = 1 + 2 + 3 = 6.
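    For readers who want to check the digit-sum example, s_k(n) is straightforward to compute. A minimal sketch (the function name is ours):

```python
def digit_sum(n: int, k: int = 10) -> int:
    """Sum of the digits of the natural number n written in base k."""
    total = 0
    while n > 0:
        total += n % k  # lowest digit in base k
        n //= k         # drop that digit
    return total

print(digit_sum(123, 10))  # 1 + 2 + 3 = 6, matching the example in the text
print(digit_sum(123, 2))   # 123 is 1111011 in binary, so the digit sum is 6
```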
    Another surprise was that the maximum robustness also turns out to be related to the famous Takagi function, a bizarre function that is continuous everywhere but differentiable nowhere. This fractal function is also called the blancmange curve, because it looks like the French dessert.
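    The Takagi function is easy to approximate numerically: it is a sum of ever-finer “tent” shapes, T(x) = Σ s(2^n x)/2^n, where s(y) is the distance from y to the nearest integer. A short sketch that truncates the rapidly converging series:

```python
def takagi(x: float, terms: int = 40) -> float:
    """Approximate the Takagi (blancmange) function by truncating its series."""
    def dist_to_nearest_int(y: float) -> float:
        return abs(y - round(y))
    # Each term halves in amplitude, so 40 terms are plenty for plotting.
    return sum(dist_to_nearest_int((2 ** n) * x) / 2 ** n for n in range(terms))

print(takagi(0.5))  # 0.5 -- the peak of the first tent; all later terms vanish there
```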
    First author Dr. Vaibhav Mohanty (Harvard Medical School) added: ‘What is most surprising is that we found clear evidence in the mapping from sequences to RNA secondary structures that nature in some cases achieves the exact maximum robustness bound. It’s as if biology knows about the fractal sums-of-digits function.’
    Professor Ard Louis added: ‘The beauty of number theory lies not only in the abstract relationships it uncovers between integers, but also in the deep mathematical structures it illuminates in our natural world. We believe that many intriguing new links between number theory and genetics will be found in the future.’