More stories

  • New insights into the structure of the neutron

    All known atomic nuclei, and therefore almost all visible matter, consist of protons and neutrons, yet many of the properties of these omnipresent natural building blocks remain unknown. As an uncharged particle, the neutron in particular resists many types of measurement, and 90 years after its discovery there are still many unanswered questions regarding its size and lifetime, among other things. The neutron consists of three quarks that whirl around inside it, held together by gluons. Physicists use electromagnetic form factors to describe this dynamic inner structure of the neutron. These form factors represent an average distribution of electric charge and magnetization within the neutron and can be determined experimentally.
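    As a rough illustration (a textbook, non-relativistic picture, not the relativistic definitions used in the actual analysis), the electric form factor can be read as the Fourier transform of the neutron's charge density with respect to the momentum transfer q:

        G_E(q^2) = \int \rho(\mathbf{r}) \, e^{i \mathbf{q} \cdot \mathbf{r}} \, d^3 r ,
        \qquad G_E(0) = \int \rho(\mathbf{r}) \, d^3 r = 0 \ \text{(the neutron is uncharged)} ,

    with the magnetic form factor G_M playing the analogous role for the magnetization distribution.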
    Blank space on the form factor map filled with precise data
    “A single form factor, measured at a certain energy level, does not say much at first,” explained Professor Frank Maas, a researcher at the PRISMA+ Cluster of Excellence in Mainz, the Helmholtz Institute Mainz (HIM), and GSI Helmholtzzentrum für Schwerionenforschung Darmstadt. “Measurements of the form factors at various energies are needed in order to draw conclusions on the structure of the neutron.” In certain energy ranges, which are accessible using standard electron-proton scattering experiments, form factors can be determined fairly accurately. So far, however, this has not been the case for other ranges, which require so-called annihilation techniques, in which matter and antimatter mutually destroy each other.
    In the BESIII experiment being undertaken in China, it has recently proved possible to determine the corresponding data precisely in the energy range of 2 to 3.8 gigaelectronvolts. As the partnership points out in an article published in the current issue of Nature Physics, the new measurements are over 60 times more accurate than previous ones. “With this new data, we have, so to speak, filled a blank space on the neutron form factor ‘map’, which until now was unknown territory,” Professor Frank Maas pointed out. “This data is now as precise as that obtained in corresponding scattering experiments. As a result, our knowledge of the form factors of the neutron will change dramatically and as such we will get a far more comprehensive picture of this important building block of nature.”
    Truly pioneering work in a difficult field of research
    To fill in the missing regions of the form factor ‘map’, the physicists needed antiparticles. The international partnership therefore used the Beijing Electron-Positron Collider II for its measurements. Here, electrons and their positive antiparticles, i.e., positrons, are allowed to collide in an accelerator and destroy each other, creating new particle pairs — a process known as ‘annihilation’ in physics. Using the BESIII detector, the researchers observed and analyzed the outcome, in which the electrons and positrons form neutrons and anti-neutrons. “Annihilation experiments like these are nowhere near as well-established as the standard scattering experiments,” added Maas. “Substantial development work was needed to carry out the current experiment — the intensity of the accelerator had to be improved and the detection method for the elusive neutron had to be practically reinvented in the analysis of the experimental data. This was by no means straightforward. Our partnership has done truly pioneering work here.”
    Other interesting phenomena
    As if this were not enough, the measurements showed the physicists that the form factor does not follow a smooth trend as a function of energy, but instead displays an oscillating pattern whose fluctuations become smaller as the energy increases. They observed similar surprising behavior in the case of the proton — there, however, the fluctuations were mirrored, i.e., phase-shifted. “This new finding indicates first and foremost that nucleons do not have a simple structure,” Professor Frank Maas explained. “Now our colleagues on the theoretical side have been asked to develop models to account for this extraordinary behavior.”
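    An illustrative way to write down such behaviour (a generic parametrization of the kind used for nucleon data in the literature, not the collaboration's published fit) is a damped oscillation riding on a smooth dipole-like background:

        G_{\text{eff}}(p) \approx G_D(p) + A \, e^{-p/B} \cos(C p + D) ,

    where p is the relative momentum of the produced nucleon pair, G_D(p) is the smooth background, and A, B, C, D are fit parameters. The ‘mirrored’ proton fluctuations would then correspond to a phase D shifted by roughly half an oscillation period.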
    Finally, on the basis of their measurements, the BESIII partnership has revised the accepted ratio of the neutron form factor to the proton form factor. Many years ago, the FENICE experiment produced a ratio greater than one, which would mean that the neutron has a consistently larger form factor than the proton. “But as the proton is charged, you would expect it to be completely the other way round,” Maas asserted. “And that’s just what we see when we compare our neutron data with the proton data we’ve recently acquired through BESIII. So here we’ve rectified how we need to perceive the very smallest particles.”
    From the micro- to the macrocosm
    According to Maas, the new findings are especially important because they are so fundamental. “They provide new perspectives on the basic properties of the neutron. What’s more, by looking at the smallest building blocks of matter we can also understand phenomena that occur in the largest dimensions — such as the fusion of two neutron stars. This physics of extremes is already very fascinating.”

  • Experts master defects in semiconductors

    Researchers at The City College of New York have discovered a novel way to manipulate defects in semiconductors. The study holds promising opportunities for novel forms of precision sensing, or the transfer of quantum information between physically separate qubits, as well as for improving the fundamental understanding of charge transport in semiconductors.
    Using laser optics and confocal microscopy, the researchers demonstrated that they could make one defect eject charges (holes) under laser illumination, allowing another defect several micrometers away to catch them. Capturing a charge then switches the latter defect from a negative to a neutral charge state.
    The study utilized a special type of point defect, the nitrogen-vacancy center in diamond. These color centers possess spin, an inherent form of angular momentum carried by elementary particles, making them attractive for quantum sensing and quantum information processing. The researchers used a specific protocol to filter out the charges originating solely from the nitrogen vacancy, based on its spin projection.
    “The key was isolating the source defect, with only the nitrogen vacancy being present, which we achieved by making charge ejection conditional on the defect’s spin state,” said Artur Lozovoi, a physics postdoctoral researcher in CCNY’s Division of Science and the paper’s lead author. “Another crucial aspect was having a ‘clean’ diamond with as few defects as possible. Then, the long-range attractive Coulombic interaction between a defect and a hole substantially increases the probability of the charge going towards the target, which ultimately made our observations possible.”
    The present study uncovered that in the clean material the charge transport efficiency is a thousand times higher than observed in previous experiments, a phenomenon characterized by the researchers as a “giant capture cross-section.” This discovery could pave the way towards establishing a quantum information bus between color center qubits in semiconductors.
    “This process of a charge capture by an individual defect has only been described theoretically before,” added Lozovoi. “There is now an experimental platform that enables us to look into how these defects interact with free charges in crystals and how we can use it for quantum information processing.”
    Story Source:
    Materials provided by City College of New York.

  • Just a game? Study shows no evidence that violent video games lead to real-life violence

    As the latest Call of Duty video game is released in the UK today, and with Battlefield 2042 and a remastered Grand Theft Auto trilogy to follow later this month, new research finds no evidence that violence increases after a new video game is released.
    The mass media and the general public often link violent video games to real-life violence, although there is limited evidence to support that link.
    Debate on the topic generally intensifies after mass public shootings, with some commentators linking these violent acts to the perpetrators’ interests in violent video games.
    However, others have pointed out that different factors, such as mental health issues and/or easy access to guns, are more likely explanations.
    In the light of these conflicting claims, President Obama called in 2013 for more government funding for research on video games and violence.
    But before governments introduce any policies restricting access to violent video games, it is important to establish whether violent video games do indeed make players behave violently in the real world.
    Research by Dr Agne Suziedelyte, Senior Lecturer in the Department of Economics at City, University of London, provides evidence of the effects of violent video game releases on children’s violent behaviour using data from the US. Dr Suziedelyte examined the effects of violent video games on two types of violence: aggression against other people, and destruction of things/property.
    The study, published in the Journal of Economic Behavior & Organization, focused on boys aged 8-18 years — the group most likely to play violent video games.
    Dr Suziedelyte used econometric methods that identify plausibly causal effects of violent video games on violence, rather than only associations. She found no evidence that violence against other people increases after a new violent video game is released. Parents reported, however, that children were more likely to destroy things after playing violent video games.
    Dr Suziedelyte said: “Taken together, these results suggest that violent video games may agitate children, but this agitation does not translate into violence against other people — which is the type of violence which we care about most.
    “A likely explanation for my results is that video game playing usually takes place at home, where opportunities to engage in violence are lower. This ‘incapacitation’ effect is especially important for violence-prone boys who may be especially attracted to violent video games.
    “Therefore, policies that place restrictions on video game sales to minors are unlikely to reduce violence.”
    Story Source:
    Materials provided by City University London. Original written by Chris Lines.

  • Autonomous driving: Saving millions in test kilometers

    Driving simulator tests are popular — for understandable reasons: any scenario can be simulated at the touch of a button, independently of time of day and weather conditions, and without any safety risk to the vehicle, people or the environment. Moreover, an hour in the driving simulator is cheaper and requires less organization than an hour of real driving on a test track. “In the field of highly automated driving, however, driving simulator studies are often questioned because of the lack of realism. In addition, until recently there were no standardized test procedures that could have been used to check complex tasks such as the mutual interaction between human and system (handover procedures),” says Arno Eichberger, head of the research area “Automated Driving & Driver Assistance Systems” at the Institute of Automotive Engineering at Graz University of Technology (TU Graz).
    New regulation as initial spark
    That has recently changed: the first global regulation for Automated Lane Keeping Systems (ALKS) has been in force since the beginning of 2021. This regulation resolves the road approval dilemma, as Eichberger explains: “Until now, regulatory authorities did not know how to test and approve autonomous driving systems. The vehicle manufacturers, in turn, did not know what requirements the systems had to meet in order to be approved.” The regulation specifies, for the first time, approval criteria for highly automated systems (autonomous driving level 3) up to a maximum speed of 60 km/h, on the basis of a traffic-jam assistant. When the assistant is activated, responsibility for control is transferred to the machine. The driver may take their hands off the steering wheel but must immediately take over again in the event of a malfunction. The system must recognize whether the person behind the wheel is capable of doing this.
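    As a purely illustrative sketch of the handover procedure just described (the state names, the readiness check and the timeout value are assumptions for illustration, not terms or values from the ALKS regulation text), the logic can be thought of as a small state machine:

        # Hypothetical sketch of an ALKS-style handover procedure.
        # States, readiness check and timeout are illustrative assumptions,
        # not values or terms from the UN ALKS regulation.
        from enum import Enum, auto

        class Mode(Enum):
            AUTOMATED = auto()          # system drives; hands-off permitted
            TRANSITION_DEMAND = auto()  # system asks the driver to take over
            MANUAL = auto()             # driver drives
            MINIMAL_RISK = auto()       # safe stop if the driver never responds

        class AlksController:
            TAKEOVER_TIMEOUT_S = 10.0   # illustrative value only

            def __init__(self):
                self.mode = Mode.AUTOMATED
                self.demand_elapsed = 0.0

            def step(self, dt, malfunction, driver_ready):
                if self.mode is Mode.AUTOMATED and malfunction:
                    # On a malfunction, control must pass back to the driver.
                    self.mode = Mode.TRANSITION_DEMAND
                    self.demand_elapsed = 0.0
                elif self.mode is Mode.TRANSITION_DEMAND:
                    self.demand_elapsed += dt
                    if driver_ready:
                        self.mode = Mode.MANUAL
                    elif self.demand_elapsed > self.TAKEOVER_TIMEOUT_S:
                        # Driver monitoring indicates nobody is taking over:
                        # bring the vehicle to a minimal-risk stop instead.
                        self.mode = Mode.MINIMAL_RISK
                return self.mode

    In a simulator study, the driver_ready flag would come from the driver-monitoring system, which is exactly the readiness-to-take-over behaviour the method described next is designed to test.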
    Based on this regulation, Eichberger and his research partners from Fraunhofer Austria, AVL and JOANNEUM RESEARCH have spent the last few months developing an efficient method by which readiness to take over control can be tested safely, efficiently and with a high degree of realism in a driving simulator, and by which the results can be used for the certification of ALKS systems.
    Identical machine perception of the environment
    Procedures were required to prove the validity of the driving simulation against real test drives. The basis for this was a direct comparison: driving simulation and real driving (the AVL test track in Gratkorn, Styria, served as the test location) had to match as closely as possible. Here, the machine perception of the environment posed a challenge. Figuratively speaking, machine perception constitutes the sensory organs of the vehicle. Its task is to precisely record the vehicle’s surroundings, from the landscape and environmental objects to other road users, so that the driving assistance system can react appropriately to the situation. Eichberger: “If this is to run the same as in reality, the environments in the simulation have to match the real environment to the exact centimetre.”
    Transferring the driving routes to the driving simulator
    This accuracy is achieved using so-called “Ultra High Definition Maps” (UHDmaps®) from JOANNEUM RESEARCH (JR), one of the world’s leading research institutions in the field of digital twins. “We use a mobile mapping system to measure the test environments. Finally, a seamless 3D map with an extremely high level of detail is created from the measurement data. In addition to traffic infrastructure objects such as traffic signs, lane markings and guard rails, vegetation and buildings are also represented in this map,” says Patrick Luley, head of the research laboratory for highly automated driving at the DIGITAL Institute. While comparable accuracy can be achieved with manual 3D modelling, JR’s automated UHD mapping process is many times cheaper and faster.
    The high-resolution 3D environment is finally transferred to the driving simulator. This is where the Fraunhofer Austria team comes in. Volker Settgast from the Visual Computing business unit: “We prepare the data in such a way that the 3D environment can be displayed at high speed.” Even reflective and transparent surfaces, or trees and bushes blown by the wind, appear natural. Depending on the test scenario, additional vehicles or even people can then be added to the virtual environment.
    The method is finally validated with the help of comparative runs on the real route. “With our method, it is possible for car manufacturers to easily compare and validate a given configuration on the real track and in the driving simulator. This means that the test can ultimately be transferred from the real track to the driving simulator,” says Eichberger. The TU Graz researcher and his team are now working on setting up virtual approval tests over the next few months.
    Story Source:
    Materials provided by Graz University of Technology. Original written by Christoph Pelzl.

  • Giving robots social skills

    Robots can deliver food on a college campus and hit a hole in one on the golf course, but even the most sophisticated robot can’t perform basic social interactions that are critical to everyday human life.
    MIT researchers have now incorporated certain social interactions into a framework for robotics, enabling machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders this other robot based on its own goals.
    The researchers also showed that their model creates realistic and predictable social interactions. When they showed videos of these simulated robots interacting with one another to humans, the human viewers mostly agreed with the model about what type of social behavior was occurring.
    Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. For instance, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model may also enable scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants.
    “Robots will live in our world soon enough and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).
    Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.

  • Revolutionary identity verification technique offers robust solution to hacking

    A team of computer scientists, including Claude Crépeau of McGill University and physicist colleagues from the University of Geneva, has developed an extremely secure identity verification method based on the fundamental principle that information cannot travel faster than the speed of light. The breakthrough has the potential to greatly improve the security of financial transactions and other applications requiring proof of identity online.
    “Current identification schemes that use personal identification numbers (PINs) are incredibly insecure faced with a fake teller machine that stores the PINs of users,” says Crépeau, a professor in the School of Computer Science at McGill. “Our research found and implemented a secure mechanism to prove someone’s identity that cannot be replicated by the verifier of this identity.”
    How to prove you know something without revealing what it is you know
    The new method, published in Nature, is an advance on a concept known as zero-knowledge proof, whereby one party (a ‘prover’) can demonstrate to another (the ‘verifier’) that they possess a certain piece of information without actually revealing that information.
    The idea of zero-knowledge proof began to take hold in the field of data encryption in the 1980s. Today, many encryption systems rely on mathematical statements which the prover can show to be valid without giving away clues to the verifier as to how to prove the validity of the statement. Underlying the effectiveness of these systems is an assumption that there is no practical way for the verifier to work backwards from the information they do receive from the prover to figure out a general solution to the problem. The theory goes that there is a certain class of mathematical functions, known as one-way functions, that are easy for computers to evaluate but practically impossible to invert. However, with the development of quantum computing, scientists are beginning to question this assumption and are growing wary of the possibility that the supposed one-way functions underlying today’s encryption systems may be undone by an emerging generation of quantum computers.
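    As a concrete illustration (a toy example, not the scheme analyzed in the paper): modular exponentiation is widely conjectured to be such a one-way function. It is fast to compute in one direction, while inverting it, the discrete-logarithm problem, has no known efficient classical algorithm for well-chosen groups.

        # Toy illustration of a conjectured one-way function.
        # The small prime below is for demonstration only; real systems
        # use group sizes of thousands of bits.
        p, g = 2_147_483_647, 5     # a small prime modulus and a base
        secret = 1_234_567          # the exponent to be hidden

        public = pow(g, secret, p)  # forward direction: effectively instant

        def brute_force_dlog(target, g, p):
            # Inverse direction: recovering 'secret' from 'public'.
            # This naive search is O(p); it is feasible for this toy prime
            # but utterly infeasible when p has thousands of bits.
            x, acc = 0, 1
            while acc != target:
                acc = (acc * g) % p
                x += 1
            return x

    A quantum computer running Shor's algorithm, however, could solve exactly this kind of inversion problem efficiently, which is the concern raised above.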
    Separating witnesses to get the story straight
    The McGill-Geneva research team have reframed the zero-knowledge proof idea by creating a system involving two physically separated prover-verifier pairs. To confirm their bona fides, the two provers must demonstrate to the verifiers that they have a shared knowledge of a solution to a notoriously difficult mathematical problem: how to use only three colours to colour in an image made up of thousands of interconnected shapes such that no two adjacent shapes are of the same colour.
    “The verifiers randomly choose a large number of pairs of adjacent shapes in the image and then ask each of the two provers for the colour of one or the other shape in each pair,” explains co-author Hugo Zbinden, an associate professor of applied physics at the University of Geneva.
    If the two provers consistently name different colours in response, the verifiers can be assured that both provers do indeed know the three-colour solution. By separating the two provers physically and questioning them simultaneously, the system eliminates the possibility of collusion between the provers, because to do so they would have to transmit information to each other faster than the speed of light — a scenario ruled out by the principle of special relativity.
    “It’s like when the police interrogate two suspects at the same time in separate offices,” Zbinden says. “It’s a matter of checking their answers are consistent, without allowing them to communicate with each other.”
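    A minimal sketch of the verifiers’ consistency test (toy graph and colouring; the real protocol additionally enforces the relativistic separation of the two prover-verifier pairs and re-randomizes the colours between rounds, neither of which is modelled here):

        # Toy sketch of the two-prover three-colouring check.
        import random

        # The "image": nodes are shapes, edges join adjacent shapes.
        edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
        colouring = {0: "red", 1: "green", 2: "blue", 3: "red"}  # valid 3-colouring

        def prover(shape):
            # Both provers share the same secret colouring.
            return colouring[shape]

        def verify(rounds=1000):
            for _ in range(rounds):
                a, b = random.choice(edges)  # a random pair of adjacent shapes
                # One verifier asks its prover about shape a, the other about b.
                if prover(a) == prover(b):
                    return False             # adjacent shapes match: reject
            return True

        print(verify())  # True for honest provers who really know a colouring

    Provers who do not know a valid colouring will eventually answer with matching colours for some adjacent pair, and without faster-than-light signalling they cannot coordinate their answers round by round.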
    Story Source:
    Materials provided by McGill University.

  • When is a basin of attraction like an octopus?

    Mathematicians who study dynamical systems often focus on the rules of attraction. Namely, how does the choice of the starting point affect where a system ends up? Some systems are easier to describe than others. A swinging pendulum, for example, will always land at the lowest point no matter where it starts.
    In dynamical systems research, a “basin of attraction” is the set of all the starting points — usually close to one another — that arrive at the same final state as the system evolves through time. For straightforward systems like a swinging pendulum, the shape and size of a basin are comprehensible. Not so for more complicated systems: those with dimensions that reach into the tens or hundreds or higher can have wild geometries with fractal boundaries.
    In fact, they may look like the tentacles of an octopus, according to new work by Yuanzhao Zhang, physicist and SFI Schmidt Science Fellow, and Steven Strogatz, a mathematician and writer at Cornell University. The convoluted geometries of these high-dimensional basins can’t be easily visualized, but in a new paper published in Physical Review Letters, the researchers describe a simple argument showing why basins in systems with multiple attractors should look like high-dimensional octopi. They make their argument by analyzing a simple model — a ring of oscillators that, despite only interacting locally, can produce myriad collective states such as in-phase synchronization. A high number of coupled oscillators will have many attractors, and therefore many basins.
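    A minimal numerical sketch of such a ring (generic parameters, not those of the paper): nearest-neighbour phase oscillators settle into different ‘twisted’ states depending on their initial phases, and randomly sampling initial conditions gives a crude estimate of the relative basin sizes.

        # Illustrative sketch: a ring of N identical phase oscillators with
        # nearest-neighbour coupling. Different initial phases relax to
        # different "twisted" states, each with its own basin of attraction.
        # Parameters are illustrative, not taken from the paper.
        import numpy as np

        N, K, dt, steps = 20, 1.0, 0.05, 4000

        def simulate(theta):
            for _ in range(steps):
                dtheta = K * (np.sin(np.roll(theta, 1) - theta)
                              + np.sin(np.roll(theta, -1) - theta))
                theta = theta + dt * dtheta
            return theta

        def winding_number(theta):
            # Integer label of the attractor: net phase wound around the ring
            # (0 = in-phase synchronization, q != 0 = a twisted state).
            d = np.angle(np.exp(1j * (np.roll(theta, -1) - theta)))
            return int(round(d.sum() / (2 * np.pi)))

        # Estimate basin sizes by sampling random initial conditions.
        rng = np.random.default_rng(0)
        counts = {}
        for _ in range(200):
            q = winding_number(simulate(rng.uniform(0, 2 * np.pi, N)))
            counts[q] = counts.get(q, 0) + 1
        print(counts)  # several coexisting attractors with unequal basin sizes

    Sampling like this estimates basin sizes; characterizing the shapes of the basins, as the authors do, is what reveals the octopus-like tentacles.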
    “When you have a high-dimensional system, the tentacles dominate the basin size,” says Zhang.
    Importantly, the new work shows that the volume of a high-dimensional basin can’t be correctly approximated by a hypercube, as tempting as it is. That’s because the hypercube fails to encompass the vast majority — more than 99% — of the points in the basin, which are strung out on tentacles.
    The paper also suggests that the topic of high-dimensional basins is rife with potential for new exploration. “The geometry is very far from anything we know,” says Strogatz. “This is not so much about what we found as to remind people that so much is waiting to be found. This is the early age of exploration for basins.”
    The work may also have real-world implications. Zhang points to the power grid as an example of important high-dimensional systems with multiple basins of attraction. Understanding which starting points lead to which outcomes may help engineers figure out how to keep the lights on.
    “Depending on how you start your grid, it will either evolve to a normal operating state or a disruptive state — like a blackout,” Zhang says.
    Story Source:
    Materials provided by Santa Fe Institute.

  • Underground tests dig into how heat affects salt-bed repository behavior

    Scientists from Sandia, Los Alamos and Lawrence Berkeley national laboratories have just begun the third phase of a years-long experiment to understand how salt and very salty water behave near hot nuclear waste containers in a salt-bed repository.
    Salt’s unique physical properties can be used to provide safe disposal of radioactive waste, said Kristopher Kuhlman, a Sandia geoscientist and technical lead for the project. Salt beds remain stable for hundreds of millions of years. Salt heals its own cracks and any openings will slowly creep shut.
    For example, the salt at the Waste Isolation Pilot Plant outside Carlsbad, New Mexico — where some of the nation’s Cold War-era nuclear waste is interred — closes on the storage rooms at a rate of a few inches a year, protecting the environment from the waste. However, unlike spent nuclear fuel, the waste interred at WIPP does not produce heat.
    The Department of Energy Office of Nuclear Energy’s Spent Fuel and Waste Disposition initiative seeks to provide a sound technical basis for multiple viable disposal options in the U.S., and specifically to understand how heat changes the way liquids and gases move through and interact with salt, Kuhlman said. The understanding gained from this fundamental research will be used to refine conceptual and computer models, eventually informing policymakers about the benefits of disposing of spent nuclear fuel in salt beds. Sandia is the lead laboratory on the project.
    “Salt is a viable option for nuclear waste storage because far away from the excavation any openings are healed up,” Kuhlman said. “However, there’s this halo of damaged rock near the excavation. In the past people have avoided predicting the complex interactions within the damaged salt because 30 feet away the salt is a perfect, impermeable barrier. Now, we want to deepen our understanding of the early complexities next to the waste. The more we understand, the more long-term confidence we have in salt repositories.”
    Trial-and-error in the first experiment
    To understand the behavior of damaged salt when heated, Kuhlman and colleagues have been conducting experiments 2,150 feet underground at WIPP, in an experimental area more than 3,200 feet away from ongoing disposal activity. They also monitor the distribution and behavior of brine, the salt water trapped within the salt bed, left over from a sea that evaporated 250 million years ago. The little brine found in WIPP is 10 times saltier than seawater.