More stories

  • A new dimension in magnetism and superconductivity launched

    An international team of scientists from Austria and Germany has launched a new paradigm in magnetism and superconductivity, putting effects of curvature, topology, and 3D geometry into the spotlight of next-decade research. | New paper in “Advanced Materials.”
    Traditionally, the primary field in which curvature plays a pivotal role has been the theory of general relativity. In recent years, however, curvilinear geometry has entered various disciplines, ranging from solid-state and soft-matter physics to chemistry and biology, giving rise to a plethora of emerging domains such as curvilinear cell biology, semiconductors, superfluidity, optics, plasmonics and 2D van der Waals materials. In modern magnetism, superconductivity and spintronics, extending nanostructures into the third dimension has become a major research avenue because of geometry-, curvature- and topology-induced phenomena. This approach provides a means to improve conventional functionalities and to launch novel ones by tailoring the curvature and 3D shape.
    “In recent years, there have appeared experimental and theoretical works dealing with curvilinear and three-dimensional superconducting and (anti-)ferromagnetic nano-architectures. However, these studies originate from different scientific communities, resulting in the lack of knowledge transfer between such fundamental areas of condensed matter physics as magnetism and superconductivity,” says Oleksandr Dobrovolskiy, head of the SuperSpin Lab at the University of Vienna. “In our group, we lead projects in both these topical areas and it was the aim of our perspective article to build a ‘bridge’ between the magnetism and superconductivity communities, drawing attention to the conceptual aspects of how extension of structures into the third dimension and curvilinear geometry can modify existing and aid launching novel functionalities upon solid-state systems.”
    “In magnetic materials, the geometrically-broken symmetry provides a new toolbox to tailor curvature-induced anisotropy and chiral responses,” says Denys Makarov, head of the department “Intelligent Materials and Systems” at the Helmholtz-Zentrum Dresden-Rossendorf. “The possibility to tune magnetic responses by designing the geometry of a wire or magnetic thin film, is one of the main advantages of the curvilinear magnetism, which has a major impact on physics, material science and technology. At present, under its umbrella, the fundamental field of curvilinear magnetism includes curvilinear ferro- and antiferromagnetism, curvilinear magnonics and curvilinear spintronics.”
    “The key difference in the impact of the curvilinear geometry on superconductors in comparison with (anti-)ferromagnets lies in the underlying nature of the order parameter,” expands Oleksandr Dobrovolskiy. “Namely, in contrast to magnetic materials, for which energy functionals contain spatial derivatives of vector fields, the description of superconductors also relies on the analysis of energy functionals containing spatial derivatives of scalar fields. While in magnetism the order parameter is the magnetization (vector), for a superconducting state the absolute value of the order parameter has a physical meaning of the superconducting energy gap (scalar). In the future, extension of hybrid (anti-)ferromagnet/superconductor structures into the third dimension will enable investigations of the interplay between curvature effects in systems possessing vector and scalar order parameters. Yet, this progress strongly relies on the development of experimental and theoretical methods and the improvement of computation capabilities.”
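    As a schematic reminder of the distinction described above (standard textbook expressions, not equations taken from the Advanced Materials perspective itself), the micromagnetic exchange energy penalizes spatial gradients of the magnetization vector field, whereas the Ginzburg-Landau free energy of a superconductor penalizes gradients of a complex scalar order parameter whose magnitude is tied to the superconducting gap:

```latex
% Ferromagnet: exchange energy, a functional of the unit *vector* field m(r)
E_{\mathrm{ex}} = A \int |\nabla \mathbf{m}|^{2}\,\mathrm{d}V , \qquad |\mathbf{m}| = 1

% Superconductor (zero field): Ginzburg-Landau free energy, a functional of the complex *scalar* field \psi(r)
F = \int \left[ \alpha |\psi|^{2} + \frac{\beta}{2} |\psi|^{4}
      + \frac{\hbar^{2}}{2 m^{*}} |\nabla \psi|^{2} \right] \mathrm{d}V
```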
    Challenges for investigations of curvilinear and 3D nanomagnets and superconductors
    Generally, effects of curvature and torsion are expected when the sizes or features of the system become comparable with the respective length scales. Among the various nanofabrication techniques, writing of complex-shaped 3D nano-architectures by focused particle beams has exhibited the most significant progress in recent years, turning these methods into the techniques of choice for basic and applications-oriented studies in 3D nanomagnetism and superconductivity. However, approaching the relevant length scales in the low-nm range (the exchange length in ferromagnets and the superconducting coherence length in nanoprinted superconductors) is still beyond the reach of current experimental capabilities. At the same time, sophisticated techniques for the characterization of magnetic configurations and their dynamics in complex-shaped nanostructures are becoming available, including X-ray vector nanotomography and 3D imaging by soft X-ray laminography. Similar studies of superconductors are more delicate as they require cryogenic conditions, calling for the development of such techniques in the years to come.
    Story Source:
    Materials provided by University of Vienna. Note: Content may be edited for style and length.

  • Autonomous robotic rover helps scientists with long-term monitoring of deep-sea carbon cycle and climate change

    The sheer expanse of the deep sea and the technological challenges of working in an extreme environment make these depths difficult to access and study. Scientists know more about the surface of the moon than the deep seafloor. MBARI is leveraging advancements in robotic technologies to address this disparity.
    An autonomous robotic rover, Benthic Rover II, has provided new insight into life on the abyssal seafloor, 4,000 meters (13,100 feet) beneath the surface of the ocean. A study published today in Science Robotics details the development and proven long-term operation of this rover. This innovative mobile laboratory has further revealed the role of the deep sea in cycling carbon. The data collected by this rover are fundamental to understanding the impacts of climate change on the ocean.
    “The success of this abyssal rover now permits long-term monitoring of the coupling between the water column and seafloor. Understanding these connected processes is critical to predicting the health and productivity of our planet engulfed in a changing climate,” said MBARI Senior Scientist Ken Smith.
    Despite its distance from the sunlit shallows, the deep seafloor is connected to the waters above and is vital for carbon cycling and sequestration. Bits of organic matter — including dead plants and animals, mucus, and excreted waste — slowly sink through the water column to the seafloor. The community of animals and microbes on and in the mud digests some of this carbon while the rest might get locked in deep-sea sediments for up to thousands of years.
    The deep sea plays an important role in Earth’s carbon cycle and climate, yet we still know little about processes happening thousands of meters below the surface. Engineering obstacles like extreme pressure and the corrosive nature of seawater make it difficult to send equipment to the abyssal seafloor to study and monitor the ebb and flow of carbon.
    In the past, Smith and other scientists relied on stationary instruments to study carbon consumption by deep seafloor communities. They could only deploy these instruments for a few days at a time. By building on 25 years of engineering innovation, MBARI has developed a long-term solution for monitoring the abyssal seafloor.

  • Securing data transfers with relativity

    The volume of data transferred is constantly increasing, but the absolute security of these exchanges cannot be guaranteed, as shown by cases of hacking frequently reported in the news. To counter hacking, a team from the University of Geneva (UNIGE), Switzerland, has developed a new system based on the concept of “zero-knowledge proofs,” the security of which is based on the physical principle of relativity: information cannot travel faster than the speed of light. Thus, one of the fundamental principles of modern physics allows for secure data transfer. This system allows users to identify themselves in complete confidentiality without disclosing any personal information, promising applications in the field of cryptocurrencies and blockchain. These results can be read in the journal Nature.
    When a person — the so-called ‘prover’ — wants to confirm their identity, for example when they want to withdraw money from an ATM, they must provide their personal data to the verifier, in our example the bank, which processes this information (e.g. the identification number and the PIN code). As long as only the prover and the verifier know this data, confidentiality is guaranteed. If others get hold of this information, for example by hacking into the bank’s server, security is compromised.
    Zero-knowledge proof as a solution
    To counter this problem, the prover should ideally be able to confirm their identity, without revealing any information at all about their personal data. But is this even possible? Surprisingly the answer is yes, via the concept of a zero-knowledge proof. “Imagine I want to prove a mathematical theorem to a colleague. If I show them the steps of the proof, they will be convinced, but then have access to all the information and could easily reproduce the proof,” explains Nicolas Brunner, a professor in the Department of Applied Physics at the UNIGE Faculty of Science. “On the contrary, with a zero-knowledge proof, I will be able to convince them that I know the proof, without giving away any information about it, thus preventing any possible data recovery.”
    The principle of zero-knowledge proof, invented in the mid-1980s, has been put into practice in recent years, notably for cryptocurrencies. However, these implementations suffer from a weakness, as they are based on a mathematical assumption (that a specific encoding function is difficult to decode). If this assumption is disproved — which cannot be ruled out today — security is compromised because the data would become accessible. Today, the Geneva team is demonstrating a radically different system in practice: a relativistic zero-knowledge proof. Security is based here on a physics concept, the principle of relativity, rather than on a mathematical hypothesis. The principle of relativity — that information does not travel faster than light — is a pillar of modern physics, unlikely ever to be challenged. The Geneva researchers’ protocol therefore offers perfect security and is guaranteed over the long term.
    Dual verification based on a three-colorability problem
    Implementing a relativistic zero-knowledge proof involves two distant verifier/prover pairs and a challenging mathematical problem. “We use a three-colorability problem. This type of problem consists of a graph made up of a set of nodes connected or not by links,” explains Hugo Zbinden, professor in the Department of Applied Physics at the UNIGE. Each node is given one out of three possible colours — green, blue or red — and two nodes that are linked together must be of different colours. These three-colouring problems, here featuring 5,000 nodes and 10,000 links, are in practice impossible to solve, as all possibilities must be tried. So why do we need two verifier/prover pairs?
    “To confirm their identity, the provers will no longer have to provide a code, but demonstrate to the verifier that they know a way to three-colour a certain graph,” continues Nicolas Brunner. To check this, the verifiers randomly choose a large number of pairs of nodes on the graph connected by a link, then ask their respective prover what colour the node is. Since this verification is done almost simultaneously, the provers cannot communicate with each other during the test, and therefore cannot cheat. Thus, if the two colours announced are always different, the verifiers are convinced of the identity of the provers, because the provers actually know a three-colouring of this graph. “It’s like when the police interrogate two criminals at the same time in separate offices: it’s a matter of checking that their answers match, without allowing them to communicate with each other,” says Hugo Zbinden. In this case, the questions are almost simultaneous, so the provers cannot communicate with each other, as this information would have to travel faster than light, which is of course impossible. Finally, to prevent the verifiers from reproducing the graph, the two provers constantly change the colour code in a correlated manner: what was green becomes blue, blue becomes red, etc. “In this way, the proof is made and verified, without revealing any information about it,” says the Geneva-based physicist.
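    The verification logic described above can be sketched in a few lines of code (an illustrative toy, with a tiny hypothetical graph and without the relativistic timing, the 5,000-node graph or the hardware of the actual UNIGE experiment): in each round the provers apply a fresh, shared random relabelling of the colours in their secret three-colouring, the verifiers pick a random linked pair of nodes and ask each prover for the colour of one node, and the round passes if the two answers differ.

```python
import random

# Toy graph: nodes 0-3 and their links, plus a valid secret three-colouring (0, 1, 2 stand for the colours).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
secret_colouring = {0: 0, 1: 1, 2: 2, 3: 0}

def one_round(colouring, edges):
    """One verification round with two provers answering almost simultaneously."""
    relabel = random.sample(range(3), 3)                  # shared random colour permutation for this round
    permuted = {node: relabel[c] for node, c in colouring.items()}
    u, v = random.choice(edges)                           # verifiers pick a random linked pair of nodes
    answer_from_prover_1 = permuted[u]                    # prover 1 is asked about node u
    answer_from_prover_2 = permuted[v]                    # prover 2 is asked about node v
    return answer_from_prover_1 != answer_from_prover_2   # linked nodes must show different colours

# Honest provers, who really know a three-colouring, pass every round;
# cheating provers would be caught with overwhelming probability over many rounds.
print(all(one_round(secret_colouring, edges) for _ in range(100_000)))
```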
    A reliable and ultra-fast system
    In practice, this verification is carried out more than three million times, all in less than three seconds. “The idea would be to assign a graph to each person or client,” continues Nicolas Brunner. In the Geneva researchers’ experiment, the two prover/verifier pairs are 60 metres apart, to ensure that they cannot communicate. “But this system can already be used, for example, between two branches of a bank and does not require complex or expensive technology,” he says. However, the research team believes that in the very near future this distance can be reduced to one metre. Whenever a data transfer has to be made, this relativistic zero-knowledge proof system would guarantee absolute security of data processing and could not be hacked. “In a few seconds, we would guarantee absolute confidentiality,” concludes Hugo Zbinden.

  • Thin-film, high-frequency antenna array offers new flexibility for wireless communications

    Princeton researchers have taken a step toward developing a type of antenna array that could coat an airplane’s wings, function as a skin patch transmitting signals to medical implants, or cover a room as wallpaper that communicates with internet of things (IoT) devices.
    The technology, which could enable many uses of emerging 5G and 6G wireless networks, is based on large-area electronics, a way of fabricating electronic circuits on thin, flexible materials. The researchers described its development in a paper published Oct. 7 in Nature Electronics.
    The approach overcomes limitations of conventional silicon semiconductors, which can operate at the high radio frequencies needed for 5G applications, but can only be made up to a few centimeters wide, and are difficult to assemble into the large arrays required for enhanced communication with low-power devices.
    “To achieve these large dimensions, people have tried discrete integration of hundreds of little microchips. But that’s not practical — it’s not low-cost, it’s not reliable, it’s not scalable on a wireless systems level,” said senior study author Naveen Verma, a professor of electrical and computer engineering and director of Princeton’s Keller Center for Innovation in Engineering Education.
    “What you want is a technology that can natively scale to these big dimensions. Well, we have a technology like that — it’s the one that we use for our displays” such as computer monitors and liquid-crystal display (LCD) televisions, said Verma. These use thin-film transistor technology, which Verma and colleagues adapted for use in wireless signaling.
    The researchers used zinc-oxide thin-film transistors to create a 1-foot-long (30-centimeter) row of three antennas, in a setup known as a phased array. Phased antenna arrays can transmit narrow-beam signals that can be digitally programmed to achieve desired frequencies and directions. Each antenna in the array emits a signal with a specified time delay from its neighbors, and the constructive and destructive interference between these signals adds up to a focused electromagnetic beam — akin to the interference between ripples created by water droplets in a pond.
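    The beam-steering idea can be illustrated with a generic textbook calculation (a sketch of phased-array beamforming in general, not of the Princeton thin-film design; the 28 GHz frequency and half-wavelength spacing below are assumptions chosen only for illustration): applying a progressive phase delay across three elements moves the angle at which their signals add constructively.

```python
import numpy as np

c = 3.0e8                    # speed of light, m/s
f = 28e9                     # illustrative millimetre-wave 5G frequency, Hz (assumption)
wavelength = c / f
d = wavelength / 2           # element spacing (assumption)
n_elements = 3               # a row of three antennas, as in the reported array

def array_factor(steer_deg, look_deg):
    """Relative field strength toward look_deg when the beam is steered to steer_deg."""
    k = 2 * np.pi / wavelength
    n = np.arange(n_elements)
    steer_phase = -k * d * n * np.sin(np.radians(steer_deg))   # per-element phase (time) delay
    look_phase = k * d * n * np.sin(np.radians(look_deg))
    return abs(np.sum(np.exp(1j * (look_phase + steer_phase)))) / n_elements

angles = np.linspace(-90, 90, 361)
pattern = [array_factor(30, a) for a in angles]                # steer the beam to +30 degrees
print(f"peak response at {angles[int(np.argmax(pattern))]:.1f} degrees")   # expect ~30 degrees
```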

  • New software predicts the movements of large land animals

    Large land animals have a significant impact on the ecology and biodiversity of the areas they inhabit and traverse. If, for example, the routes and stopping places of cattle, horses, sheep, and also those of wolves or bears overlap with those of people, this often leads to conflicts. Knowing and being able to predict the movement patterns of animals is, therefore, of utmost relevance. This is not only necessary for nature and landscape protection, and to safeguard agriculture and forestry, but also for the safety of human travellers and the security of human infrastructures.
    Example — the brown bear
    The Abruzzo region of Italy, the location of the Sirente Velino Regional Park, is home to the endangered and therefore protected Marsican brown bear (Ursus arctos marsicanus). Recording the bears’ patterns of movement in the 50,000-hectare, partly populated area is especially important not only for their own protection, but also for that of the people living there and the sensitive flora. Movement pattern maps can be used to determine the bears’ roaming routes and places of refuge more effectively. These can then be adequately protected and, if necessary, adjusted.
    Traditional methods are expensive
    Traditional maps of animal movements are mostly based on long-term surveys of so-called telemetry data, which comes from individuals fitted with radio transmitters. This type of map-making is often time-consuming and expensive, and a lack of radio contact in some areas means that no data can be collected at all. That was also the case in the vast and isolated Sirente Velino Regional Park.
    Researchers developed an alternative
    Researchers from iDiv, the Friedrich Schiller University Jena, Aarhus University and the University of Oxford have developed software named ‘enerscape’ with which maps can be created easily and cost-effectively. Dr Emilio Berti is a post-doctoral researcher with the Theory in Biodiversity Science research group at iDiv and the Friedrich Schiller University Jena. As first author of the study, he stressed: “What’s special is that the software requires very little data as a basis.” The energy an animal needs to expend to travel a certain distance is calculated based on the weight of that animal and its general movement behaviour. This energy expenditure is then integrated with the topographical information of an area. “From this information we can then create ‘energy landscape maps’ for individuals as well as for groups of animals. Our maps are calculated rather than measured and thus represent a cost-effective alternative to traditional maps. In particular applications, such as the conditions in the Italian regional park, our method makes the creation of movement pattern maps possible in the first place,” said Berti.
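    The general recipe can be sketched in a few lines (a simplified illustration only; the slope penalty and linear mass scaling below are placeholder assumptions, not the calibrated cost model implemented in enerscape, which is an R package): from a digital elevation model and a body mass, each grid cell is assigned an energy cost of crossing it, and the resulting raster is the ‘energy landscape’ through which least-cost routes can then be traced.

```python
import numpy as np

def energy_landscape(dem, cell_size_m, body_mass_kg, slope_penalty=15.0):
    """Toy energy-cost raster: cost per cell grows with body mass and local slope.

    The linear mass scaling and slope_penalty are illustrative assumptions,
    not the cost model used by the enerscape R package."""
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)
    slope = np.hypot(dz_dx, dz_dy)                       # rise over run, dimensionless
    flat_cost = body_mass_kg * cell_size_m               # baseline cost of crossing a cell (arbitrary units)
    return flat_cost * (1.0 + slope_penalty * slope)     # steeper cells are more expensive to cross

# Example: a synthetic hill on a 10 m grid, evaluated for a 150 kg bear.
x = np.linspace(0.0, 100.0, 11)
dem = 30.0 * np.exp(-((x[None, :] - 50.0) ** 2 + (x[:, None] - 50.0) ** 2) / 800.0)
print(energy_landscape(dem, cell_size_m=10.0, body_mass_kg=150.0).round(1))
```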
    Software helps with the designation of protection zones
    Using enerscape, the researchers found that bears choose paths that require less energy expenditure. These paths often lead through settlements, so the bears encounter humans — which frequently ends fatally for the animals. The software also predicts that bears wanting to save energy will tend to stay in valleys, far away from human settlements. Potential bear-human conflict zones as well as protection zones can now be identified using enerscape. Its maps can also be used to check whether landscape elements are still well-connected enough to enable the animals to move around the area sufficiently.
    enerscape is freely available and adaptable
    The researchers’ software enerscape is based on the widely used and openly accessible programming language ‘R’. It has a modular structure and can therefore process animal movement and topographical data from a wide variety of ecosystem types. “This makes it possible for both researchers and wildlife managers to adapt the software to a wide variety of landscapes and animals,” said Prof Fritz Vollrath from the Zoology Department of the University of Oxford and senior author of the study, emphasising the special nature of enerscape. “This means that the number of maps of animal movement in landscapes will increase in just a short time. With significantly more cartographical data, the understanding of the behavioural ecology of a species in a certain habitat will also fundamentally change. This will primarily benefit nature conservation and, in particular, rewilding measures — the reintroduction of wild animals,” said Vollrath.
    The development of enerscape was supported by iDiv, which is funded by the German Research Foundation. In addition, enerscape is part of the VILLUM Investigator project ‘Biodiversity Dynamics in a Changing World’, which is funded by the Danish VILLUM Foundation, and of the Independent Research Fund Denmark | Natural Sciences project ‘MegaComplexity’.

  • Government action needed to ensure insurance against major hacking of driverless vehicles, experts warn

    Government action is needed so driverless vehicles can be insured against malicious hacks which could have potentially catastrophic consequences, a study says.
    The software in driverless vehicles will make it possible for them to communicate with each other. The technology is already being used and tested on public transport around the world, and is likely to be available in private vehicles in the future.
    This technology can help improve transport safety, but hacking could result in accidents and damage to fleets of vehicles, financial loss, deaths and personal injury.
    Experts have called for the creation of a national compensatory body in the UK offering a guarantee fund from which victims may seek redress.
    Traditional vehicle insurance wouldn’t cover the mass hacking of driverless cars, and an incident like this could cost the industry tens of billions of pounds.
    Hackers could target vehicles via their regular software updates. Without appropriate insurance systems, driverless vehicles could pose too great a danger to road users if they suffered serious software defects or were subject to malicious hacking. Existing systems of liability are deficient or inapplicable to vehicles which operate without a driver in control.
    The research, published in the journal Computer Law & Security Review, was carried out by Matthew Channon from the University of Exeter and James Marson from Sheffield Hallam University.
    Dr Channon said: “It’s impossible to measure the risk of driverless vehicles being hacked, but it’s important to be prepared. We suggest the introduction of an insurance-backed Maliciously Compromised Connected Vehicle Agreement to compensate low-cost hacks and a government-backed guarantee fund to compensate high-cost hacks.
    “This would remove a potentially onerous burden on manufacturers and would enable the deployment and advancement of driverless vehicles in the UK.
    “If manufacturers are required to pick up the burden of compensating victims of mass-hacking, major disruptions to innovation would be likely. Disputes could result in litigation costs for both manufacturer and insurer.
    “Public confidence requires a system to be available in the event of hacking or mass hacking which compensates people and also does not stifle or limit continuing development and innovation.”
    Dr Marson said: “The UK intends to play a leading role in the development and roll-out of connected and autonomous vehicles. It was the first country to establish a statutory liability framework for the introduction of autonomous vehicles onto national roads. If it wishes to continue playing a leading role in this sector, it has the opportunity to do so by creating an insurance fund for victims of mass-hacked vehicles. This would not only protect road users and pedestrians in the event of injury following a hacking event, but would also give confidence to insurers to provide cover for a new and largely untested market.”
    Story Source:
    Materials provided by University of Exeter. Note: Content may be edited for style and length.

  • Artificial intelligence to detect colorectal cancer

    A Tulane University researcher found that artificial intelligence can accurately detect and diagnose colorectal cancer from tissue scans as well as or better than pathologists, according to a new study in the journal Nature Communications.
    The study, which was conducted by researchers from Tulane, Central South University in China, the University of Oklahoma Health Sciences Center, Temple University, and Florida State University, was designed to test whether AI could be a tool to help pathologists keep pace with the rising demand for their services.
    Pathologists evaluate and label thousands of histopathology images on a regular basis to tell whether someone has cancer. But their average workload has increased significantly, and fatigue can sometimes lead to unintended misdiagnoses.
    “Even though a lot of their work is repetitive, most pathologists are extremely busy because there’s a huge demand for what they do, but there’s a global shortage of qualified pathologists, especially in many developing countries,” said Dr. Hong-Wen Deng, professor and director of the Tulane Center of Biomedical Informatics and Genomics at Tulane University School of Medicine. “This study is revolutionary because we successfully leveraged artificial intelligence to identify and diagnose colorectal cancer in a cost-effective way, which could ultimately reduce the workload of pathologists.”
    To conduct the study, Deng and his team collected over 13,000 images of colorectal cancer from 8,803 subjects and 13 independent cancer centers in China, Germany and the United States. Using the images, which were randomly selected by technicians, they built a machine-assisted pathological recognition program that allows a computer to recognize images that show colorectal cancer, one of the most common causes of cancer-related deaths in Europe and America.
    “The challenges of this study stemmed from large image sizes, complex shapes, textures, and histological changes in nuclear staining,” Deng said. “But ultimately the study revealed that when we used AI to diagnose colorectal cancer, the performance was shown to be comparable to, and in many cases even better than, that of real pathologists.”
    The area under the receiver operating characteristic (ROC) curve, or AUC, is the performance measure that Deng and his team used to determine the success of the study. After comparing the computer’s results with the work of highly experienced pathologists who interpreted data manually, the study found that the average pathologist scored 0.969 for accurately identifying colorectal cancer manually. The average score for the machine-assisted AI computer program was 0.98, which is comparable, if not slightly more accurate.
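    For readers unfamiliar with the metric, AUC summarises how well a classifier’s scores separate positive from negative cases across all possible decision thresholds, with 1.0 meaning perfect separation and 0.5 meaning chance. A minimal, generic illustration using made-up labels and scores (not the study’s data or code):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical ground-truth labels (1 = cancer) and model probability scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_score = [0.92, 0.10, 0.85, 0.40, 0.30, 0.45, 0.78, 0.05, 0.60, 0.55]

print(roc_auc_score(y_true, y_score))   # area under the ROC curve; 0.92 for these made-up numbers
```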
    Using artificial intelligence to identify cancer is an emerging technology and hasn’t yet been widely accepted. Deng’s hope is that the study will lead to more pathologists using prescreening technology in the future to make quicker diagnoses.
    “It’s still in the research phase and we haven’t commercialized it yet because we need to make it more user-friendly and test and implement it in more clinical settings. But as we develop it further, hopefully it can also be used for different types of cancer in the future. Using AI to diagnose cancer can expedite the whole process and will save a lot of time for both patients and clinicians.”
    Story Source:
    Materials provided by Tulane University. Note: Content may be edited for style and length.

  • Better models of atmospheric ‘detergent’ can help predict climate change

    Earth’s atmosphere has a unique ability to cleanse itself by way of invisible molecules in the air that act as minuscule cleanup crews. The most important molecule in that crew is the hydroxyl radical (OH), nicknamed the “detergent of the atmosphere” because of its dominant role in removing pollutants. When the OH molecule chemically interacts with a variety of harmful gases, including the potent greenhouse gas methane, it is able to decompose the pollutants into forms that can be removed from Earth’s atmosphere.
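    The methane example can be written out explicitly (a standard textbook reaction, not a result specific to this study): the hydroxyl radical initiates the breakdown of methane by abstracting a hydrogen atom.

```latex
\mathrm{CH_4 + OH \rightarrow CH_3 + H_2O}
```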
    It is difficult to measure OH, however, and it is not directly emitted. Instead, researchers predict the presence of OH based on its chemical production from other, “precursor” gases. To make these predictions, researchers use computer simulations.
    In a new paper published in the journal PNAS, Lee Murray, an assistant professor of earth and environmental sciences at the University of Rochester, outlines why computer models used to predict future levels of OH — and, therefore, how long air pollutants and reactive greenhouse gases last in the atmosphere — have traditionally produced widely varying forecasts. The study is the latest in Murray’s efforts to develop models of the dynamics and composition of Earth’s atmosphere and has important implications in advancing policies to combat climate change.
    “We need to understand what controls changes in hydroxyl radical in Earth’s atmosphere in order to give us a better idea of the measures we need to take to rid the atmosphere of pollutants and reactive greenhouse gases,” Murray says.
    Building accurate computer models to predict OH levels is similar to baking: just as you must add precise ingredients in the proper amounts and order to make an edible cake, precise data and metrics must be input into computer models to make them more accurate.
    The various existing computer models used to predict OH levels have traditionally been designed with data input involving identical emissions levels of OH precursor gases. Murray and his colleagues, however, demonstrated that OH levels strongly depend on how much of these precursor emissions are lost before they react to produce OH. In this case, different bakers follow the same recipe of ingredients (emissions), but end up with different sizes of cake (OH levels) because some bakers throw out different portions of batter in the middle of the process.
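    The baking analogy can be restated as a toy steady-state box model (illustrative numbers and units only, not the chemistry or values in Murray’s simulations): the OH concentration settles at its production rate divided by its loss rate, and the production rate depends on how much of the emitted precursor survives long enough to make OH, so two models fed identical emissions but assuming different precursor losses predict different OH levels.

```python
def steady_state_oh(precursor_emission, fraction_lost_en_route, oh_loss_rate):
    """Toy box model: OH production comes from the emitted precursor that survives
    to react; steady-state OH = production / loss. All values are illustrative."""
    production = precursor_emission * (1.0 - fraction_lost_en_route)
    return production / oh_loss_rate

# Two "models" fed identical precursor emissions but assuming different precursor fates.
emissions = 100.0   # arbitrary units
loss_rate = 2.0     # arbitrary first-order OH loss frequency
print(steady_state_oh(emissions, fraction_lost_en_route=0.2, oh_loss_rate=loss_rate))   # 40.0
print(steady_state_oh(emissions, fraction_lost_en_route=0.5, oh_loss_rate=loss_rate))   # 25.0
```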
    “Uncertainties in future predictions are primarily driven by uncertainties in how models implement the fate of reactive gases that are directly emitted,” Murray says.
    As Murray and his colleagues show, the computer models used to predict OH levels must evaluate the loss processes of reactive precursor gases before they can be used for accurate future predictions.
    But more data is needed about these processes, Murray says.
    “Performing new measurements to constrain these processes will allow us to provide more accurate data about the amount of hydroxyl in the atmosphere and how it may change in the future,” he says.
    Story Source:
    Materials provided by University of Rochester. Original written by Lindsey Valich. Note: Content may be edited for style and length.