More stories

  • Autonomous driving: Saving millions in test kilometers

    Driving simulator tests are popular — for understandable reasons: any scenario can be simulated at the touch of a button. They are independent of time and weather conditions and without any safety risk for the vehicle, people or the environment. Moreover, an hour in the driving simulator is cheaper and requires less organization than a real driving lesson on a test track. “In the field of highly automated driving, however, driving simulator studies are often questioned because of the lack of realism. In addition, until recently there were no standardized test procedures that could have been used to check complex tasks such as the mutual interaction between human and system (handover procedures),” says Arno Eichberger, head of the research area “Automated Driving & Driver Assistance Systems” at the Institute of Automotive Engineering at Graz University of Technology (TU Graz).
    New regulation as initial spark
    The reason: the first global regulation for Automated Lane Keeping Systems (ALKS) has been in force since the beginning of 2021. This regulation resolves the road approval dilemma, as Eichberger explains: “Until now, regulatory authorities did not know how to test and approve autonomous driving systems. The vehicle manufacturers, in turn, did not know what requirements the systems had to meet in order to be approved.” In the regulation, approval criteria for highly automated systems (autonomous driving level 3) up to a maximum speed of 60 km/h have now been specified for the first time, on the basis of a traffic jam assistant. When the assistant is activated, responsibility for control is transferred to the machine. The driver may take their hands off the steering wheel, but must immediately take over again in the event of a malfunction. The system must recognize that the person behind the wheel is capable of doing this.
    Based on this regulation, Eichberger and his research partners from Fraunhofer Austria, AVL and JOANNEUM RESEARCH have spent the last few months developing an efficient method by which readiness to take over control can be tested safely, efficiently and with a high degree of realism in a driving simulator, and by which the results can be used for the certification of ALKS systems.
    Identical machine perception of the environment
    Methods were needed to prove the validity of the driving simulation against real test drives. The basis for this was a direct comparison — driving simulation and real driving (the AVL test track in Gratkorn, Styria, served as the test location) had to match as closely as possible. Here, the machine perception of the environment posed a challenge. Figuratively speaking, machine perception serves as the vehicle’s sensory organs. It has the task of precisely recording the vehicle’s surroundings — from the landscape and environmental objects to other road users — so that the driving assistance system can react appropriately to the situation. Eichberger: “If this is to run the same as in reality, the environments in the simulation have to match the real environment to the exact centimetre.”
    Transferring the driving routes to the driving simulator
    This accuracy is achieved using so-called “Ultra High Definition Maps” (UHDmaps®) from JOANNEUM RESEARCH (JR), one of the world’s leading research institutions in the field of digital twins. “We use a mobile mapping system to measure the test environments. Finally, a seamless 3D map with an extremely high level of detail is created from the measurement data. In addition to traffic infrastructure objects such as traffic signs, lane markings and guard rails, vegetation and buildings are also represented in this map,” says Patrick Luley, head of the research laboratory for highly automated driving at the DIGITAL Institute. While comparable accuracy can be achieved with manual 3D modelling, JR’s automated UHD mapping process is many times cheaper and faster.
    The high-resolution 3D environment is finally transferred to the driving simulator. This is where the Fraunhofer Austria team comes in. Volker Settgast from the Visual Computing business unit: “We prepare the data in such a way that the 3D environment can be displayed at high speed.” Even reflective and transparent surfaces or trees and bushes blown by the wind can be perceived naturally. Depending on the test scenario, additional vehicles or even people can then be added to the virtual environment.
    The simulation is finally validated with the help of comparative runs on the real route. “With our method, it is possible for car manufacturers to easily compare and validate a certain test run on the real track and in the driving simulator. This means that the test can ultimately be transferred from the real track to the driving simulator,” says Eichberger. The TU Graz researcher and his team are now working on setting up virtual approval tests over the next few months.
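    The article does not spell out the acceptance metric, but the core of such a validation can be pictured as a statistical comparison of takeover measurements from the two settings. The following is a minimal sketch in Python; the measurements, tolerances and function names are hypothetical illustrations, not the project’s actual procedure:

    ```python
    import statistics

    # Hypothetical takeover-time measurements in seconds (from takeover
    # request until the driver regains control), one list per setting.
    takeover_track = [2.8, 3.1, 2.6, 3.4, 2.9, 3.0]   # real test track
    takeover_sim   = [2.9, 3.2, 2.7, 3.3, 3.0, 3.1]   # driving simulator

    def simulator_valid(real, sim, mean_tol=0.2, spread_tol=0.2):
        """Accept the simulator when the mean and spread of its takeover
        times stay within (hypothetical) tolerances of the track results."""
        mean_gap = abs(statistics.mean(real) - statistics.mean(sim))
        spread_gap = abs(statistics.stdev(real) - statistics.stdev(sim))
        return mean_gap <= mean_tol and spread_gap <= spread_tol

    print("Simulator validated:", simulator_valid(takeover_track, takeover_sim))
    ```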
    Story Source:
    Materials provided by Graz University of Technology. Original written by Christoph Pelzl. Note: Content may be edited for style and length.

  • Giving robots social skills

    Robots can deliver food on a college campus and hit a hole in one on the golf course, but even the most sophisticated robot can’t perform basic social interactions that are critical to everyday human life.
    MIT researchers have now incorporated certain social interactions into a framework for robotics, enabling machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders this other robot based on its own goals.
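    The MIT framework itself is more sophisticated, but the core idea (watch a companion, infer its goal, then act to advance or oppose that goal) can be illustrated with a toy example. Everything below, from the one-dimensional world to the noisy-rational likelihoods, is a hypothetical sketch rather than the paper’s model:

    ```python
    # A partner robot walks along a line toward one of two candidate goals.
    # The observer keeps a Bayesian posterior over goals, then helps or
    # hinders accordingly.
    GOALS = {"left": 0, "right": 10}

    def update_posterior(posterior, position, move):
        """Raise the probability of goals the observed move approaches."""
        scores = {}
        for goal, target in GOALS.items():
            closer = abs(position + move - target) < abs(position - target)
            likelihood = 0.9 if closer else 0.1      # noisy-rational partner
            scores[goal] = posterior[goal] * likelihood
        total = sum(scores.values())
        return {goal: s / total for goal, s in scores.items()}

    posterior = {"left": 0.5, "right": 0.5}
    position = 5
    for move in (+1, +1, +1):                        # partner steps rightward
        posterior = update_posterior(posterior, position, move)
        position += move

    goal = max(posterior, key=posterior.get)
    print(f"Inferred goal: {goal} (p = {posterior[goal]:.3f})")
    print(f"To help:   clear obstacles on the path toward {GOALS[goal]}")
    print(f"To hinder: place an obstacle on the path toward {GOALS[goal]}")
    ```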
    The researchers also showed that their model creates realistic and predictable social interactions. When they showed videos of these simulated robots interacting with one another to humans, the human viewers mostly agreed with the model about what type of social behavior was occurring.
    Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. For instance, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model may also enable scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants.
    “Robots will live in our world soon enough and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).
    Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.

  • Revolutionary identity verification technique offers robust solution to hacking

    A team of computer scientists, including Claude Crépeau of McGill University and physicist colleagues from the University of Geneva, has developed an extremely secure identity verification method based on the fundamental principle that information cannot travel faster than the speed of light. The breakthrough has the potential to greatly improve the security of financial transactions and other applications requiring proof of identity online.
    “Current identification schemes that use personal identification numbers (PINs) are incredibly insecure when faced with a fake teller machine that stores the PINs of users,” says Crépeau, a professor in the School of Computer Science at McGill. “Our research found and implemented a secure mechanism to prove someone’s identity that cannot be replicated by the verifier of this identity.”
    How to prove you know something without revealing what it is you know
    The new method, published in Nature, is an advance on a concept known as zero-knowledge proof, whereby one party (a ‘prover’) can demonstrate to another (the ‘verifier’) that they possess a certain piece of information without actually revealing that information.
    The idea of zero-knowledge proof began to take hold in the field of data encryption in the 1980s. Today, many encryption systems rely on mathematical statements which the prover can show to be valid without giving away clues to the verifier as to how to prove the validity of the statement. Underlying the effectiveness of these systems is an assumption that there is no practical way for the verifier to work backwards from the information they do receive from the prover to figure out a general solution to the problem. The theory goes that there is a certain class of mathematical problems, known as one-way functions, that are easy for computers to evaluate but practically impossible for them to invert. However, with the development of quantum computing, scientists are beginning to question this assumption and are growing wary of the possibility that the supposed one-way functions underlying today’s encryption systems may be undone by an emerging generation of quantum computers.
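    Cryptographic hash functions are the standard informal illustration of this asymmetry: evaluating them is cheap, while inverting them is believed to require brute-force search. A minimal sketch using Python’s standard library (deployed systems use different, more elaborate constructions):

    ```python
    import hashlib

    # Evaluating the function is cheap...
    digest = hashlib.sha256(b"secret input").hexdigest()

    # ...but inverting it is believed to require trying inputs until one
    # hashes to the target. For realistic input spaces this brute-force
    # search is infeasible on classical hardware.
    def brute_force_preimage(target, candidates):
        for candidate in candidates:
            if hashlib.sha256(candidate).hexdigest() == target:
                return candidate
        return None

    # A toy search space lets the search succeed here; real attackers
    # would face an astronomically larger one.
    print(brute_force_preimage(digest, [b"wrong guess", b"secret input"]))
    ```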
    Separating witnesses to get the story straight
    The McGill-Geneva research team has reframed the zero-knowledge proof idea by creating a system involving two physically separated prover-verifier pairs. To confirm their bona fides, the two provers must demonstrate to the verifiers that they have a shared knowledge of a solution to a notoriously difficult mathematical problem: how to use only three colours to colour in an image made up of thousands of interconnected shapes such that no two adjacent shapes are of the same colour.
    “The verifiers randomly choose a large number of pairs of adjacent shapes in the image and then ask each of the two provers for the colour of one or the other shape in each pair,” explains co-author Hugo Zbinden, an associate professor of applied physics at the University of Geneva.
    If the two provers consistently name different colours in response, the verifiers can be assured that both provers do indeed know the three-colour solution. By separating the two provers physically and questioning them simultaneously, the system eliminates the possibility of collusion between the provers, because to do so they would have to transmit information to each other faster than the speed of light — a scenario ruled out by the principle of special relativity.
    “It’s like when the police interrogate two suspects at the same time in separate offices,” Zbinden says. “It’s a matter of checking their answers are consistent, without allowing them to communicate with each other.”
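    A minimal simulation of the questioning step conveys why honest provers always pass: any valid three-colouring gives different colours on the two ends of every link. The graph and colouring below are toy stand-ins; the real images involve thousands of interconnected shapes:

    ```python
    import random

    # A toy graph with a valid three-colouring (0, 1, 2 are the colours);
    # no link joins two nodes of the same colour.
    links = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    colouring = {0: 0, 1: 1, 2: 0, 3: 2}

    def ask(prover_colouring, node):
        """Each physically separated prover answers from the shared colouring."""
        return prover_colouring[node]

    def verify(rounds=1000):
        for _ in range(rounds):
            u, v = random.choice(links)     # a random pair of adjacent shapes
            # One verifier asks its prover about u while, simultaneously,
            # the other verifier asks the second prover about v.
            if ask(colouring, u) == ask(colouring, v):
                return False                # same colour on adjacent shapes
        return True

    # Honest provers with a valid colouring pass every round; provers
    # without one would eventually name matching colours and be caught.
    print("Provers accepted:", verify())
    ```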
    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.

  • When is a basin of attraction like an octopus?

    Mathematicians who study dynamical systems often focus on the rules of attraction. Namely, how does the choice of the starting point affect where a system ends up? Some systems are easier to describe than others. A swinging pendulum, for example, will always land at the lowest point no matter where it starts.
    In dynamical systems research, a “basin of attraction” is the set of all the starting points — usually close to one another — that arrive at the same final state as the system evolves through time. For straightforward systems like a swinging pendulum, the shape and size of a basin is comprehensible. Not so for more complicated systems: those with dimensions that reach into the tens or hundreds or higher can have wild geometries with fractal boundaries.
    In fact, they may look like the tentacles of an octopus, according to new work by Yuanzhao Zhang, physicist and SFI Schmidt Science Fellow, and Steven Strogatz, a mathematician and writer at Cornell University. The convoluted geometries of these high-dimensional basins can’t be easily visualized, but in a new paper published in Physical Review Letters, the researchers describe a simple argument showing why basins in systems with multiple attractors should look like high-dimensional octopi. They make their argument by analyzing a simple model — a ring of oscillators that, despite only interacting locally, can produce myriad collective states such as in-phase synchronization. A high number of coupled oscillators will have many attractors, and therefore many basins.
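    The ring-of-oscillators model lends itself to a direct numerical experiment: integrate the dynamics from many random starting points and tally which attractor each one reaches. The sketch below, with illustrative sizes not taken from the paper, classifies final states by their winding number; the in-phase synchronized state corresponds to winding number zero:

    ```python
    import numpy as np

    # Ring of n identical phase oscillators with nearest-neighbour coupling:
    #     dtheta_i/dt = sin(theta_{i+1} - theta_i) + sin(theta_{i-1} - theta_i)
    # Its attractors are "q-twisted" states, in which the phase winds q full
    # turns around the ring; q = 0 is in-phase synchronization.
    n, dt, steps, samples = 20, 0.05, 3000, 200
    rng = np.random.default_rng(seed=1)

    def winding_number(theta):
        """Net number of full 2*pi turns accumulated around the ring."""
        diffs = np.diff(np.append(theta, theta[0]))
        diffs = (diffs + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
        return int(round(diffs.sum() / (2 * np.pi)))

    basin_counts = {}
    for _ in range(samples):
        theta = rng.uniform(0, 2 * np.pi, n)            # random starting point
        for _ in range(steps):                          # forward-Euler integration
            theta = theta + dt * (np.sin(np.roll(theta, -1) - theta)
                                  + np.sin(np.roll(theta, 1) - theta))
        q = winding_number(theta)
        basin_counts[q] = basin_counts.get(q, 0) + 1

    # The relative counts estimate the basin size of each attractor.
    print({q: c / samples for q, c in sorted(basin_counts.items())})
    ```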
    “When you have a high-dimensional system, the tentacles dominate the basin size,” says Zhang.
    Importantly, the new work shows that the volume of a high-dimensional basin can’t be correctly approximated by a hypercube, as tempting as it is. That’s because the hypercube fails to encompass the vast majority — more than 99% — of the points in the basin, which are strung out on tentacles.
    The paper also suggests that the topic of high-dimensional basins is rife with potential for new exploration. “The geometry is very far from anything we know,” says Strogatz. “This is not so much about what we found as to remind people that so much is waiting to be found. This is the early age of exploration for basins.”
    The work may also have real-world implications. Zhang points to the power grid as an example of important high-dimensional systems with multiple basins of attraction. Understanding which starting points lead to which outcomes may help engineers figure out how to keep the lights on.
    “Depending on how you start your grid, it will either evolve to a normal operating state or a disruptive state — like a blackout,” Zhang says.
    Story Source:
    Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

  • Underground tests dig into how heat affects salt-bed repository behavior

    Scientists from Sandia, Los Alamos and Lawrence Berkeley national laboratories have just begun the third phase of a years-long experiment to understand how salt and very salty water behave near hot nuclear waste containers in a salt-bed repository.
    Salt’s unique physical properties can be used to provide safe disposal of radioactive waste, said Kristopher Kuhlman, a Sandia geoscientist and technical lead for the project. Salt beds remain stable for hundreds of millions of years. Salt heals its own cracks and any openings will slowly creep shut.
    For example, the salt at the Waste Isolation Pilot Plant outside Carlsbad, New Mexico — where some of the nation’s Cold War-era nuclear waste is interred — closes in on the storage rooms at a rate of a few inches a year, protecting the environment from the waste. However, unlike spent nuclear fuel, the waste interred at WIPP does not produce heat.
    The Department of Energy Office of Nuclear Energy’s Spent Fuel and Waste Disposition initiative seeks to provide a sound technical basis for multiple viable disposal options in the U.S., and specifically to understand how heat changes the way liquids and gases move through and interact with salt, Kuhlman said. The understanding gained from this fundamental research will be used to refine conceptual and computer models, eventually informing policymakers about the benefits of disposing of spent nuclear fuel in salt beds. Sandia is the lead laboratory on the project.
    “Salt is a viable option for nuclear waste storage because far away from the excavation any openings are healed up,” Kuhlman said. “However, there’s this halo of damaged rock near the excavation. In the past people have avoided predicting the complex interactions within the damaged salt because 30 feet away the salt is a perfect, impermeable barrier. Now, we want to deepen our understanding of the early complexities next to the waste. The more we understand, the more long-term confidence we have in salt repositories.”
    Trial-and-error in the first experiment
    To understand the behavior of damaged salt when heated, Kuhlman and colleagues have been conducting experiments 2,150 feet underground at WIPP in an experimental area more than 3,200 feet away from ongoing disposal activity. They also monitor the distribution and behavior of brine, which is salt water found within the salt bed, left over from an evaporated 250-million-year-old sea. The little brine that is found in WIPP is 10 times saltier than seawater.

  • A new dimension in magnetism and superconductivity launched

    An international team of scientists from Austria and Germany has launched a new paradigm in magnetism and superconductivity, putting effects of curvature, topology, and 3D geometry into the spotlight of next-decade research. The work appears as a new paper in Advanced Materials.
    Traditionally, the primary field where curvature plays a pivotal role is the theory of general relativity. In recent years, however, curvilinear geometry has made its way into various disciplines, ranging from solid-state physics through soft-matter physics to chemistry and biology, giving rise to a plethora of emerging domains, such as curvilinear cell biology, semiconductors, superfluidity, optics, plasmonics and 2D van der Waals materials. In modern magnetism, superconductivity and spintronics, extending nanostructures into the third dimension has become a major research avenue because of geometry-, curvature- and topology-induced phenomena. This approach provides a means to improve conventional functionalities and to launch novel ones by tailoring the curvature and 3D shape.
    “In recent years, there have appeared experimental and theoretical works dealing with curvilinear and three-dimensional superconducting and (anti-)ferromagnetic nano-architectures. However, these studies originate from different scientific communities, resulting in a lack of knowledge transfer between such fundamental areas of condensed matter physics as magnetism and superconductivity,” says Oleksandr Dobrovolskiy, head of the SuperSpin Lab at the University of Vienna. “In our group, we lead projects in both these topical areas, and it was the aim of our perspective article to build a ‘bridge’ between the magnetism and superconductivity communities, drawing attention to the conceptual aspects of how extending structures into the third dimension and introducing curvilinear geometry can modify existing functionalities and help establish novel ones in solid-state systems.”
    “In magnetic materials, the geometrically broken symmetry provides a new toolbox to tailor curvature-induced anisotropy and chiral responses,” says Denys Makarov, head of the department “Intelligent Materials and Systems” at the Helmholtz-Zentrum Dresden-Rossendorf. “The possibility of tuning magnetic responses by designing the geometry of a wire or magnetic thin film is one of the main advantages of curvilinear magnetism, which has a major impact on physics, materials science and technology. At present, the fundamental field of curvilinear magnetism encompasses curvilinear ferro- and antiferromagnetism, curvilinear magnonics and curvilinear spintronics.”
    “The key difference in the impact of curvilinear geometry on superconductors in comparison with (anti-)ferromagnets lies in the underlying nature of the order parameter,” adds Oleksandr Dobrovolskiy. “Namely, in contrast to magnetic materials, for which the energy functionals contain spatial derivatives of vector fields, the description of superconductors relies on energy functionals containing spatial derivatives of scalar fields. While in magnetism the order parameter is the magnetization (a vector), for a superconducting state the absolute value of the order parameter has the physical meaning of the superconducting energy gap (a scalar). In the future, extending hybrid (anti-)ferromagnet/superconductor structures into the third dimension will enable investigations of the interplay between curvature effects in systems possessing vector and scalar order parameters. Yet this progress strongly relies on the development of experimental and theoretical methods and the improvement of computation capabilities.”
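    In standard textbook form (these expressions are not quoted from the article), the contrast is between the micromagnetic exchange energy, built from gradients of the unit magnetization vector field, and the Ginzburg-Landau free energy, built from gradients of a complex scalar order parameter:

    ```latex
    % Micromagnetism: vector order parameter \mathbf{m}(\mathbf{r}), |\mathbf{m}| = 1
    E_\mathrm{ex} = A \int \lvert \nabla \mathbf{m}(\mathbf{r}) \rvert^2 \, dV

    % Ginzburg--Landau theory: complex scalar order parameter \psi(\mathbf{r}),
    % whose magnitude sets the superconducting gap
    F = \int \left[ \alpha \lvert\psi\rvert^2 + \frac{\beta}{2} \lvert\psi\rvert^4
        + \frac{1}{2m^*} \left\lvert \left( -i\hbar\nabla
        - \frac{e^*}{c}\,\mathbf{A} \right) \psi \right\rvert^2 \right] dV
    ```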
    Challenges for investigations of curvilinear and 3D nanomagnets and superconductors
    Generally, effects of curvature and torsion are expected when the sizes or features of the system become comparable with the respective length scales. Among the various nanofabrication techniques, the writing of complex-shaped 3D nano-architectures by focused particle beams has shown the most significant progress in recent years, turning these methods into the techniques of choice for basic and application-oriented studies in 3D nanomagnetism and superconductivity. However, approaching the relevant length scales in the low-nanometre range (the exchange length in ferromagnets and the superconducting coherence length in nanoprinted superconductors) is still beyond the reach of current experimental capabilities. At the same time, sophisticated techniques for the characterization of magnetic configurations and their dynamics in complex-shaped nanostructures are becoming available, including X-ray vector nanotomography and 3D imaging by soft X-ray laminography. Similar studies of superconductors are more delicate, as they require cryogenic conditions, calling for the development of such techniques in the years to come.
    Story Source:
    Materials provided by University of Vienna. Note: Content may be edited for style and length.

  • Autonomous robotic rover helps scientists with long-term monitoring of deep-sea carbon cycle and climate change

    The sheer expanse of the deep sea and the technological challenges of working in an extreme environment make these depths difficult to access and study. Scientists know more about the surface of the moon than the deep seafloor. MBARI is leveraging advancements in robotic technologies to address this disparity.
    An autonomous robotic rover, Benthic Rover II, has provided new insight into life on the abyssal seafloor, 4,000 meters (13,100 feet) beneath the surface of the ocean. A study published today in Science Robotics details the development and proven long-term operation of this rover. This innovative mobile laboratory has further revealed the role of the deep sea in cycling carbon. The data collected by this rover are fundamental to understanding the impacts of climate change on the ocean.
    “The success of this abyssal rover now permits long-term monitoring of the coupling between the water column and seafloor. Understanding these connected processes is critical to predicting the health and productivity of our planet engulfed in a changing climate,” said MBARI Senior Scientist Ken Smith.
    Despite its distance from the sunlit shallows, the deep seafloor is connected to the waters above and is vital for carbon cycling and sequestration. Bits of organic matter — including dead plants and animals, mucus, and excreted waste — slowly sink through the water column to the seafloor. The community of animals and microbes on and in the mud digests some of this carbon while the rest might get locked in deep-sea sediments for up to thousands of years.
    The deep sea plays an important role in Earth’s carbon cycle and climate, yet we still know little about processes happening thousands of meters below the surface. Engineering obstacles like extreme pressure and the corrosive nature of seawater make it difficult to send equipment to the abyssal seafloor to study and monitor the ebb and flow of carbon.
    In the past, Smith and other scientists relied on stationary instruments to study carbon consumption by deep seafloor communities. They could only deploy these instruments for a few days at a time. By building on 25 years of engineering innovation, MBARI has developed a long-term solution for monitoring the abyssal seafloor.

  • Securing data transfers with relativity

    The volume of data transferred is constantly increasing, but the absolute security of these exchanges cannot be guaranteed, as shown by cases of hacking frequently reported in the news. To counter hacking, a team from the University of Geneva (UNIGE), Switzerland, has developed a new system based on the concept of “zero-knowledge proofs,” the security of which is based on the physical principle of relativity: information cannot travel faster than the speed of light. Thus, one of the fundamental principles of modern physics allows for secure data transfer. This system allows users to identify themselves in complete confidentiality without disclosing any personal information, promising applications in the field of cryptocurrencies and blockchain. These results can be read in the journal Nature.
    When a person — the so-called ‘prover’ — wants to confirm their identity, for example when they want to withdraw money from an ATM, they must provide their personal data to the verifier, in our example the bank, which processes this information (e.g. the identification number and the PIN code). As long as only the prover and the verifier know this data, confidentiality is guaranteed. If others get hold of this information, for example by hacking into the bank’s server, security is compromised.
    Zero-knowledge proof as a solution
    To counter this problem, the prover should ideally be able to confirm their identity without revealing any information at all about their personal data. But is this even possible? Surprisingly, the answer is yes, via the concept of a zero-knowledge proof. “Imagine I want to prove a mathematical theorem to a colleague. If I show them the steps of the proof, they will be convinced, but then have access to all the information and could easily reproduce the proof,” explains Nicolas Brunner, a professor in the Department of Applied Physics at the UNIGE Faculty of Science. “On the contrary, with a zero-knowledge proof, I will be able to convince them that I know the proof, without giving away any information about it, thus preventing any possible data recovery.”
    The principle of zero-knowledge proof, invented in the mid-1980s, has been put into practice in recent years, notably for cryptocurrencies. However, these implementations suffer from a weakness, as they are based on a mathematical assumption (that a specific encoding function is difficult to decode). If this assumption is disproved — which cannot be ruled out today — security is compromised because the data would become accessible. Today, the Geneva team is demonstrating a radically different system in practice: a relativistic zero-knowledge proof. Security is based here on a physics concept, the principle of relativity, rather than on a mathematical hypothesis. The principle of relativity — that information does not travel faster than light — is a pillar of modern physics, unlikely to be ever challenged. The Geneva researchers’ protocol therefore offers perfect security and is guaranteed over the long term.
    Dual verification based on a three-colorability problem
    Implementing a relativistic zero-knowledge proof involves two distant verifier/prover pairs and a challenging mathematical problem. “We use a three-colorability problem. This type of problem consists of a graph made up of a set of nodes connected or not by links,” explains Hugo Zbinden, professor in the Department of Applied Physics at the UNIGE. Each node is given one out of three possible colours — green, blue or red — and two nodes that are linked together must be of different colours. These three-colouring problems, here featuring 5,000 nodes and 10,000 links, are in practice impossible to solve, as all possibilities must be tried. So why do we need two verifier/prover pairs?
    “To confirm their identity, the provers will no longer have to provide a code, but demonstrate to the verifier that they know a way to three-colour a certain graph,” continues Nicolas Brunner. To check this, the verifiers randomly choose a large number of pairs of nodes on the graph connected by a link, then ask their respective prover what colour the node is. Since this verification is done almost simultaneously, the provers cannot communicate with each other during the test, and therefore cannot cheat. Thus, if the two colours announced are always different, the verifiers are convinced of the identity of the provers, because they actually know a three-colouring of this graph. “It’s like when the police interrogate two criminals at the same time in separate offices: it’s a matter of checking that their answers match, without allowing them to communicate with each other,” says Hugo Zbinden. In this case, the questions are almost simultaneous, so the provers cannot communicate with each other, as this information would have to travel faster than light, which is of course impossible. Finally, to prevent the verifiers from reproducing the graph, the two provers constantly change the colour code in a correlated manner: what was green becomes blue, blue becomes red, and so on. “In this way, the proof is made and verified, without revealing any information about it,” says the Geneva-based physicist.
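    The colour-shuffling step is easy to sketch: both provers apply the same fresh random relabelling of the three colours in every round, so any single answer is uniformly random while answers on linked nodes still always differ. The graph below is a toy stand-in for the 5,000-node graphs used in the experiment:

    ```python
    import random

    # Toy graph with a valid three-colouring (0, 1, 2 are the colours).
    links = [(0, 1), (1, 2), (2, 0), (2, 3)]
    colouring = {0: 0, 1: 1, 2: 2, 3: 0}

    def one_round():
        # Both provers share a fresh random relabelling of the colours
        # each round, so each individual answer is uniformly random and
        # the verifiers can never reassemble the underlying colouring.
        relabel = random.sample([0, 1, 2], 3)
        shuffled = {node: relabel[c] for node, c in colouring.items()}
        u, v = random.choice(links)      # two linked nodes, one per prover
        return shuffled[u], shuffled[v]

    answers = [one_round() for _ in range(10_000)]
    print("Linked answers always differ:", all(a != b for a, b in answers))
    ```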
    A reliable and ultra-fast system
    In practice, this verification is carried out more than three million times, all in less than three seconds. “The idea would be to assign a graph to each person or client,” continues Nicolas Brunner. In the Geneva researchers’ experiment, the two prover/verifier pairs are 60 metres apart, to ensure that they cannot communicate. “But this system can already be used, for example, between two branches of a bank and does not require complex or expensive technology,” he says. However, the research team believes that in the very near future this distance can be reduced to one metre. Whenever a data transfer has to be made, this relativistic zero-knowledge proof system would guarantee absolute security of data processing and could not be hacked. “In a few seconds, we would guarantee absolute confidentiality,” concludes Hugo Zbinden.