More stories

  •

    Thin-film, high-frequency antenna array offers new flexibility for wireless communications

    Princeton researchers have taken a step toward developing a type of antenna array that could coat an airplane’s wings, function as a skin patch transmitting signals to medical implants, or cover a room as wallpaper that communicates with internet of things (IoT) devices.
    The technology, which could enable many uses of emerging 5G and 6G wireless networks, is based on large-area electronics, a way of fabricating electronic circuits on thin, flexible materials. The researchers described its development in a paper published Oct. 7 in Nature Electronics.
    The approach overcomes limitations of conventional silicon semiconductors, which can operate at the high radio frequencies needed for 5G applications, but can only be made up to a few centimeters wide, and are difficult to assemble into the large arrays required for enhanced communication with low-power devices.
    “To achieve these large dimensions, people have tried discrete integration of hundreds of little microchips. But that’s not practical — it’s not low-cost, it’s not reliable, it’s not scalable on a wireless systems level,” said senior study author Naveen Verma, a professor of electrical and computer engineering and director of Princeton’s Keller Center for Innovation in Engineering Education.
    “What you want is a technology that can natively scale to these big dimensions. Well, we have a technology like that — it’s the one that we use for our displays” such as computer monitors and liquid-crystal display (LCD) televisions, said Verma. These use thin-film transistor technology, which Verma and colleagues adapted for use in wireless signaling.
    The researchers used zinc-oxide thin-film transistors to create a 1-foot-long (30-centimeter) row of three antennas, in a setup known as a phased array. Phased antenna arrays can transmit narrow-beam signals that can be digitally programmed to achieve desired frequencies and directions. Each antenna in the array emits a signal with a specified time delay relative to its neighbors, and the constructive and destructive interference between these signals adds up to a focused electromagnetic beam, akin to the interference between ripples created by water droplets in a pond.
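    The beam steering described above can be sketched numerically: each element is driven with a progressive phase shift (the "time delay" in the story), and the per-element contributions add constructively only toward the chosen direction. The three-element count comes from the article; the half-wavelength spacing is an illustrative assumption, not a detail from the paper.

```python
import math

def array_factor(theta_deg, n=3, d_over_lambda=0.5, steer_deg=0.0):
    """Relative field strength of an n-element phased array at angle theta.

    Each element is driven with a progressive phase shift chosen so that
    the contributions add constructively toward steer_deg.
    """
    theta = math.radians(theta_deg)
    steer = math.radians(steer_deg)
    # Phase difference between neighbouring elements for this observation angle
    psi = 2 * math.pi * d_over_lambda * (math.sin(theta) - math.sin(steer))
    re = sum(math.cos(k * psi) for k in range(n))
    im = sum(math.sin(k * psi) for k in range(n))
    return math.hypot(re, im) / n  # normalised: 1.0 at the steered angle

# On-beam, the signals interfere constructively; off-beam they partially cancel.
print(array_factor(20.0, steer_deg=20.0))   # 1.0 at the steered angle
print(array_factor(-40.0, steer_deg=20.0))  # noticeably weaker off-beam
```

Changing `steer_deg` is purely a matter of re-programming the per-element phases, which is why such arrays can redirect the beam digitally, with no moving parts.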

  •

    New software predicts the movements of large land animals

    Large land animals have a significant impact on the ecology and biodiversity of the areas they inhabit and traverse. If, for example, the routes and stopping places of cattle, horses, sheep, and also those of wolves or bears overlap with those of people, this often leads to conflicts. Knowing and being able to predict the movement patterns of animals is, therefore, of utmost relevance. This is not only necessary for nature and landscape protection, and to safeguard agriculture and forestry, but also for the safety of human travellers and the security of human infrastructures.
    Example — the brown bear
    The Abruzzo region of Italy, the location of the Sirente Velino Regional Park, is home to the endangered and therefore protected Marsican brown bear (Ursus arctos marsicanus). Recording the bears’ patterns of movement in the 50,000-hectare, partly populated area is especially important for their own protection, but also for that of the people living there and the sensitive flora. Movement pattern maps can be used to determine the bears’ roaming routes and places of refuge more effectively. These can then be adequately protected and, if necessary, adjusted.
    Traditional methods are expensive
    Traditional maps of animal movements are mostly based on long-term surveys of so-called telemetry data, which comes from individuals fitted with radio transmitters. This type of map-making is often time-consuming and expensive, and lack of radio contact in some areas means that no data can be collected at all. That was also the case in the vast and isolated Sirente Velino regional park.
    Researchers developed an alternative
    Researchers from iDiv, the Friedrich Schiller University Jena, Aarhus University and the University of Oxford have developed software, named ‘enerscape’, with which maps can be created easily and cost-effectively. Dr Emilio Berti is a post-doctoral researcher with the Theory in Biodiversity Science research group at iDiv and the Friedrich Schiller University Jena. As first author of the study, he stressed: “What’s special is that the software requires very little data as a basis.” The energy an animal needs to expend to travel a certain distance is calculated based on the weight of that animal and its general movement behaviour. This energy expenditure is then integrated with the topographical information of an area. “From this information we can then create ‘energy landscape maps’ for individuals as well as for groups of animals. Our maps are calculated rather than measured and thus represent a cost-effective alternative to traditional maps. In particular applications, such as the conditions in the Italian national park, our method makes the creation of movement pattern maps possible in the first place,” said Berti.
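    To illustrate the kind of calculation described above (the actual enerscape package is written in R; the cost constants below are placeholder allometric values, not enerscape's calibrated model), a toy "energy landscape" for a one-dimensional elevation profile might look like this:

```python
def travel_cost(mass_kg, distance_m, slope):
    """Illustrative per-step energy cost (J): a mass-dependent flat-terrain
    term plus a penalty for climbing. Constants are placeholders only.
    """
    flat = 10.7 * mass_kg ** 0.68 * distance_m               # heavier animals pay more per metre
    incline = 90.0 * mass_kg * max(slope, 0.0) * distance_m  # extra work against gravity uphill
    return flat + incline

def energy_landscape(elevation, mass_kg, cell_m=25.0):
    """Cost of stepping into each successive cell of a 1-D elevation profile."""
    costs = []
    for a, b in zip(elevation, elevation[1:]):
        slope = (b - a) / cell_m
        costs.append(travel_cost(mass_kg, cell_m, slope))
    return costs

# A hypothetical 150 kg bear crossing a small ridge: the steep climb
# (100 -> 105 -> 120 m) costs far more than the flat and downhill cells,
# so a least-cost path would route around it.
profile = [100, 105, 120, 120, 110]  # metres above sea level
print(energy_landscape(profile, 150.0))
```

Extending this to a 2-D elevation grid and running a least-cost-path search over the resulting cost surface yields exactly the sort of roaming-route prediction the article describes.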
    Software helps with the designation of protection zones
    Using enerscape, the researchers found that bears choose paths that require less energy expenditure. These paths often lead through settlements, so that the bears encounter humans — which frequently ends fatally for the animals. The software also predicts that bears wanting to save energy will tend to stay in valleys, far away from human settlements. Areas of potential bear conflict as well as protection zones can now be identified using enerscape. Its maps can also be used to check whether landscape elements are still well-connected enough to enable the animals to move around the area sufficiently.
    enerscape is freely available and adaptable
    The researchers’ software enerscape is based on the widely used and openly accessible programming language ‘R’. It has a modular structure and can therefore process animal movement and topographical data from a wide variety of ecosystem types. “This makes it possible for both researchers and wildlife managers to adapt the software to a wide variety of landscapes and animals,” said Prof Fritz Vollrath from the Zoology Department of the University of Oxford and senior author of the study, emphasising the special nature of enerscape. “This means that the number of maps of animal movement in landscapes will increase in just a short time. With significantly more cartographical data, the understanding of the behavioural ecology of a species in a certain habitat will also fundamentally change. This will primarily benefit nature conservation and, in particular, rewilding measures — the reintroduction of wild animals,” said Vollrath.
    The development of enerscape was supported by iDiv, which is funded by the German Research Foundation. In addition, enerscape is part of the VILLUM Investigator project ‘Biodiversity Dynamics in a Changing World’, funded by the Danish VILLUM Foundation, and of the Independent Research Fund Denmark | Natural Sciences project ‘MegaComplexity’.

  •

    Government action needed to ensure insurance against major hacking of driverless vehicles, experts warn

    Government action is needed so driverless vehicles can be insured against malicious hacks which could have potentially catastrophic consequences, a study says.
    The software in driverless vehicles will make it possible for them to communicate with each other. The technology is already being used and tested on public transport around the world, and is likely to be available to private vehicles in the future.
    This technology can help improve transport safety, but hacking could result in accidents and damage to fleets of vehicles, financial loss, deaths and personal injury.
    Experts have called for the creation of a national compensatory body in the UK offering a guarantee fund from which victims may seek redress.
    Traditional vehicle insurance wouldn’t cover the mass hacking of driverless cars, and an incident like this could cost the industry tens of billions of pounds.
    Hackers could target vehicles via their regular software updates. Without appropriate insurance systems driverless vehicles could pose too great a danger to road users if the vehicles suffered serious software defects or were subject to malicious hacking. Existing systems of liability are deficient or inapplicable to vehicles which operate without a driver in control.
    The research, published in the journal Computer Law & Security Review, was carried out by Matthew Channon from the University of Exeter and James Marson from Sheffield Hallam University.
    Dr Channon said: “It’s impossible to measure the risk of driverless vehicles being hacked, but it’s important to be prepared. We suggest the introduction of an insurance-backed Maliciously Compromised Connected Vehicle Agreement to compensate for low-cost hacks and a government-backed guarantee fund to compensate for high-cost hacks.
    “This would remove a potentially onerous burden on manufacturers and would enable the deployment and advancement of driverless vehicles in the UK.
    “If manufacturers are required to pick up the burden of compensating victims of mass-hacking, major disruptions to innovation would be likely. Disputes could result in litigation costs for both manufacturer and insurer.
    “Public confidence requires a system to be available in the event of hacking or mass hacking which compensates people and also does not stifle or limit continuing development and innovation.”
    Dr Marson said: “The UK intends to play a leading role in the development and roll-out of connected and autonomous vehicles. It was the first country to establish a statutory liability framework for the introduction of autonomous vehicles onto national roads. If it wishes to continue playing a leading role in this sector, it has the opportunity to do so by creating an insurance fund for victims of mass-hacked vehicles. This would not only protect road users and pedestrians in the event of injury following a hacking event, but would also give confidence to insurers to provide cover for a new and largely untested market.”
    Story Source:
    Materials provided by University of Exeter. Note: Content may be edited for style and length.

  •

    Artificial intelligence to detect colorectal cancer

    A Tulane University researcher found that artificial intelligence can accurately detect and diagnose colorectal cancer from tissue scans as well as or better than pathologists can, according to a new study in the journal Nature Communications.
    The study, which was conducted by researchers from Tulane, Central South University in China, the University of Oklahoma Health Sciences Center, Temple University, and Florida State University, was designed to test whether AI could be a tool to help pathologists keep pace with the rising demand for their services.
    Pathologists regularly evaluate and label thousands of histopathology images to tell whether someone has cancer. But their average workload has increased significantly, and fatigue can sometimes lead to unintended misdiagnoses.
    “Even though a lot of their work is repetitive, most pathologists are extremely busy because there’s a huge demand for what they do but there’s a global shortage of qualified pathologists, especially in many developing countries,” said Dr. Hong-Wen Deng, professor and director of the Tulane Center of Biomedical Informatics and Genomics at Tulane University School of Medicine. “This study is revolutionary because we successfully leveraged artificial intelligence to identify and diagnose colorectal cancer in a cost-effective way, which could ultimately reduce the workload of pathologists.”
    To conduct the study, Deng and his team collected over 13,000 images of colorectal cancer from 8,803 subjects and 13 independent cancer centers in China, Germany and the United States. Using the images, which were randomly selected by technicians, they built a machine-assisted pathological recognition program that allows a computer to recognize images that show colorectal cancer, one of the most common causes of cancer-related deaths in Europe and America.
    “The challenges of this study stemmed from large image sizes, complex shapes and textures, and histological changes in nuclear staining,” Deng said. “But ultimately the study revealed that when we used AI to diagnose colorectal cancer, its performance was comparable to, and in many cases even better than, that of real pathologists.”
    The area under the receiver operating characteristic (ROC) curve, or AUC, is the performance measure that Deng and his team used to evaluate the study. After comparing the computer’s results with the work of highly experienced pathologists who interpreted data manually, the study found that the average pathologist scored 0.969 for accurately identifying colorectal cancer manually. The average score for the machine-assisted AI program was 0.98, which is comparable, if not slightly more accurate.
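    For readers unfamiliar with the metric, AUC can be computed as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with made-up scores (not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case is scored higher than a randomly chosen negative one
    (ties count half). An AUC of 1.0 means perfect separation; 0.5 is chance.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = cancer present, scores are a model's confidence.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc(labels, scores))  # 8/9 ≈ 0.889: one positive is outranked by a negative
```

By this measure, the reported scores of 0.969 (pathologists) and 0.98 (AI) both indicate near-perfect ranking of cancerous over non-cancerous tissue images.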
    Using artificial intelligence to identify cancer is an emerging technology and hasn’t yet been widely accepted. Deng’s hope is that the study will lead to more pathologists using prescreening technology in the future to make quicker diagnoses.
    “It’s still in the research phase and we haven’t commercialized it yet because we need to make it more user friendly and test and implement in more clinical settings. But as we develop it further, hopefully it can also be used for different types of cancer in the future. Using AI to diagnose cancer can expedite the whole process and will save a lot of time for both patients and clinicians.”
    Story Source:
    Materials provided by Tulane University. Note: Content may be edited for style and length.

  •

    Better models of atmospheric ‘detergent’ can help predict climate change

    Earth’s atmosphere has a unique ability to cleanse itself by way of invisible molecules in the air that act as minuscule cleanup crews. The most important molecule in that crew is the hydroxyl radical (OH), nicknamed the “detergent of the atmosphere” because of its dominant role in removing pollutants. When the OH molecule chemically interacts with a variety of harmful gases, including the potent greenhouse gas methane, it is able to decompose the pollutants into forms that can be removed from Earth’s atmosphere.
    It is difficult to measure OH, however, and it is not directly emitted. Instead, researchers predict the presence of OH based on its chemical production from other, “precursor” gases. To make these predictions, researchers use computer simulations.
    In a new paper published in the journal PNAS, Lee Murray, an assistant professor of earth and environmental sciences at the University of Rochester, outlines why computer models used to predict future levels of OH — and, therefore, how long air pollutants and reactive greenhouse gases last in the atmosphere — have traditionally produced widely varying forecasts. The study is the latest in Murray’s efforts to develop models of the dynamics and composition of Earth’s atmosphere and has important implications in advancing policies to combat climate change.
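    The link between OH and how long a reactive gas "lasts" can be made concrete with a single-box estimate: if reaction with OH is the only loss process, the lifetime is tau = 1/(k[OH]). The numbers below are round, textbook-style illustrative values, not outputs of the models discussed in the study (real lifetimes differ because the rate constant varies with temperature and other sinks exist):

```python
# Back-of-the-envelope lifetime of methane against loss by reaction with OH.
K_CH4_OH = 6.3e-15  # cm^3 molecule^-1 s^-1, approximate rate constant (assumed)
OH_CONC = 1.0e6     # molecules cm^-3, typical global-mean OH estimate (assumed)

tau_seconds = 1.0 / (K_CH4_OH * OH_CONC)
tau_years = tau_seconds / (3600 * 24 * 365)
print(f"{tau_years:.1f} years")  # about 5 years against this one loss pathway
```

This is exactly why uncertainty in modeled OH translates directly into uncertainty in how long methane and other reactive greenhouse gases persist: halving [OH] in this estimate doubles the lifetime.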
    “We need to understand what controls changes in hydroxyl radical in Earth’s atmosphere in order to give us a better idea of the measures we need to take to rid the atmosphere of pollutants and reactive greenhouse gases,” Murray says.
    Building accurate computer models to predict OH levels is similar to baking: just as you must add precise ingredients in the proper amounts and order to make an edible cake, precise data and metrics must be input into computer models to make them more accurate.
    The various existing computer models used to predict OH levels have traditionally been supplied with identical emissions of OH precursor gases as input. Murray and his colleagues, however, demonstrated that OH levels strongly depend on how much of these precursor emissions are lost before they react to produce OH. In this case, different bakers follow the same recipe of ingredients (emissions) but end up with different sizes of cake (OH levels), because some bakers throw out different portions of batter in the middle of the process.
    “Uncertainties in future predictions are primarily driven by uncertainties in how models implement the fate of reactive gases that are directly emitted,” Murray says.
    As Murray and his colleagues show, the computer models used to predict OH levels must evaluate the loss processes of reactive precursor gases, before they may be used for accurate future predictions.
    But more data is needed about these processes, Murray says.
    “Performing new measurements to constrain these processes will allow us to provide more accurate data about the amount of hydroxyl in the atmosphere and how it may change in the future,” he says.
    Story Source:
    Materials provided by University of Rochester. Original written by Lindsey Valich. Note: Content may be edited for style and length.

  •

    Scientists build on AI modeling to understand more about protein-sugar structures

    New research building on AI algorithms has enabled scientists to create more complete models of the protein structures in our bodies — paving the way for faster design of therapeutics and vaccines.
    The study — led by the University of York — used artificial intelligence (AI) to help researchers understand more about the sugar that surrounds most proteins in our bodies.
    Up to 70 per cent of human proteins are surrounded or scaffolded with sugar, which plays an important part in how they look and act. Moreover, some viruses, like those behind AIDS, flu, Ebola and COVID-19, are also shielded behind sugars (glycans). The addition of these sugars is a form of modification known as glycosylation.
    To study the proteins, the researchers created software that adds the missing sugar components to models created with AlphaFold, an artificial intelligence program developed by Google’s DeepMind that predicts protein structures.
    Senior author Dr Jon Agirre, from the Department of Chemistry, said: “The proteins of the human body are tiny machines that, in their billions, make up our flesh and bones, transport our oxygen, allow us to function, and defend us from pathogens. And just like a hammer relies on a metal head to strike pointy objects such as nails, proteins have specialised shapes and compositions to get their jobs done.”
    “The AlphaFold method for protein structure prediction has the potential to revolutionise workflows in biology, allowing scientists to understand a protein and the impact of mutations faster than ever.”
    “However, the algorithm does not account for essential modifications that affect protein structure and function, which gives us only part of the picture. Our research has shown that this can be addressed in a relatively straightforward manner, leading to a more complete structural prediction.”
    The recent introduction of AlphaFold and the accompanying database of protein structures has enabled scientists to have accurate structure predictions for all known human proteins.
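    As a side note, those database entries can be retrieved programmatically. The sketch below assumes the public REST endpoint of the EBI-hosted AlphaFold Protein Structure Database; the URL pattern and response fields should be checked against the current API documentation before relying on them:

```python
import json
import urllib.request

# Assumed base URL of the AlphaFold DB prediction endpoint (verify against
# the live API docs; this is an illustration, not part of the York study).
BASE = "https://alphafold.ebi.ac.uk/api/prediction/"

def prediction_url(uniprot_accession):
    """URL of the AlphaFold DB prediction record for one UniProt entry."""
    return BASE + uniprot_accession

def fetch_prediction(uniprot_accession):
    """Download the prediction metadata (JSON), which links to the model file."""
    with urllib.request.urlopen(prediction_url(uniprot_accession)) as response:
        return json.load(response)

# Example (requires network): metadata for human haemoglobin subunit alpha.
# info = fetch_prediction("P69905")
print(prediction_url("P69905"))
```

A downloaded model could then be passed to glycan-building software of the kind the York team describes, since the bare AlphaFold prediction contains no sugars.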
    Dr Agirre added: “It is always great to watch an international collaboration grow to bear fruit, but this is just the beginning for us. Our software was used in the glycan structural work that underpinned the mRNA vaccines against SARS-CoV-2, but now there is so much more we can do thanks to the AlphaFold technological leap. It is still early stages, but the objective is to move on from reacting to changes in a glycan shield to anticipating them.”
    The research was conducted with Dr Elisa Fadda and Carl A. Fogarty from Maynooth University. Haroldas Bagdonas, PhD student at the York Structural Biology Laboratory, which is part of the Department of Chemistry, also worked on the study with Dr Agirre.
    Story Source:
    Materials provided by University of York. Note: Content may be edited for style and length.

  •

    Researchers move closer to controlling two-dimensional graphene

    The device you are currently reading this article on was born from the silicon revolution. To build modern electrical circuits, researchers control silicon’s current-conducting capabilities via doping, a process that introduces either negatively charged electrons or positively charged “holes” where electrons used to be. This allows the flow of electricity to be controlled; for silicon, it involves injecting other atomic elements, known as dopants, into its three-dimensional (3D) atomic lattice.
    Silicon’s 3D lattice, however, is too big for next-generation electronics, which include ultra-thin transistors, new devices for optical communication, and flexible bio-sensors that can be worn or implanted in the human body. To slim things down, researchers are experimenting with materials no thicker than a single sheet of atoms, such as graphene. But the tried-and-true method for doping 3D silicon doesn’t work with 2D graphene, which consists of a single layer of carbon atoms that doesn’t normally conduct a current.
    Rather than injecting dopants, researchers have tried layering on a “charge-transfer layer” intended to add or pull away electrons from the graphene. However, previous methods used “dirty” materials in their charge-transfer layers; impurities in these would leave the graphene unevenly doped and impede its ability to conduct electricity.
    Now, a new study in Nature Electronics proposes a better way. An interdisciplinary team of researchers, led by James Hone and James Teherani at Columbia University, and Won Jong Yoo at Sungkyunkwan University in Korea, describe a clean technique to dope graphene via a charge-transfer layer made of low-impurity tungsten oxyselenide (TOS).
    The team generated the new “clean” layer by oxidizing a single atomic layer of another 2D material, tungsten selenide. When TOS was layered on top of graphene, they found that it left the graphene riddled with electricity-conducting holes. Those holes could be fine-tuned to better control the material’s electricity-conducting properties by adding a few atomic layers of tungsten selenide in between the TOS and the graphene.
    The researchers found that graphene’s electrical mobility, or how easily charges move through it, was higher with their new doping method than previous attempts. Adding tungsten selenide spacers further increased the mobility to the point where the effect of the TOS becomes negligible, leaving mobility to be determined by the intrinsic properties of graphene itself. This combination of high doping and high mobility gives graphene greater electrical conductivity than that of highly conductive metals like copper and gold.
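    The conductivity claim rests on the simple relation sigma = n·e·mu: raising either the carrier (hole) density n through doping or the carrier mobility mu raises the sheet conductivity. A sketch with illustrative values (assumed round numbers, not the paper's measurements):

```python
E = 1.602e-19  # elementary charge, in coulombs

def sheet_conductivity(n_per_cm2, mobility_cm2_per_Vs):
    """Sheet conductivity sigma_s = n * e * mu, in siemens per square."""
    return n_per_cm2 * E * mobility_cm2_per_Vs

# Illustrative values for heavily doped, high-mobility graphene:
# ~1e13 holes/cm^2 and ~1e4 cm^2/(V*s).
sigma = sheet_conductivity(1e13, 1e4)  # S/sq
print(f"sheet resistance ~ {1 / sigma:.0f} ohms per square")
```

The point of the TOS approach is that it pushes n up without the impurity scattering that would normally drag mu down, so both factors in the product stay large at once.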
    As the doped graphene got better at conducting electricity, it also became more transparent, the researchers said. This is due to Pauli blocking, a phenomenon where charges manipulated by doping block the material from absorbing light. At the infrared wavelengths used in telecommunications, the graphene became more than 99 percent transparent. Achieving both high transparency and high conductivity is crucial to moving information through light-based photonic devices. If too much light is absorbed, information gets lost. The team found a much smaller loss for TOS-doped graphene than for other conductors, suggesting that this method could hold potential for next-generation ultra-efficient photonic devices.
    “This is a new way to tailor the properties of graphene on demand,” Hone said. “We have just begun to explore the possibilities of this new technique.”
    One promising direction is to alter graphene’s electronic and optical properties by changing the pattern of the TOS, and to imprint electrical circuits directly on the graphene itself. The team is also working to integrate the doped material into novel photonic devices, with potential applications in transparent electronics, telecommunications systems, and quantum computers.
    Story Source:
    Materials provided by Columbia University. Original written by Ellen Neff. Note: Content may be edited for style and length.

  •

    Researchers discover predictable behavior in promising material for computer memory

    In the last few years, a class of materials called antiferroelectrics has been increasingly studied for its potential applications in modern computer memory devices. Research has shown that antiferroelectric-based memories might have greater energy efficiency and faster read and write speeds than conventional memories, among other appealing attributes. Further, the same compounds that can exhibit antiferroelectric behavior are already integrated into existing semiconductor chip manufacturing processes.
    Now, a team led by Georgia Tech researchers has discovered unexpectedly familiar behavior in the antiferroelectric material known as zirconium dioxide, or zirconia. They show that as the microstructure of the material is reduced in size, it behaves similarly to much better understood materials known as ferroelectrics. The findings were recently published in the journal Advanced Electronic Materials.
    Miniaturization of circuits has played a key role in improving memory performance over the last fifty years. Knowing how the properties of an antiferroelectric change with shrinking size should enable the design of more effective memory components.
    The researchers also note that the findings should have implications in many other areas besides memory.
    “Antiferroelectrics have a range of unique properties like high reliability, high voltage endurance, and broad operating temperatures that make them useful in a wealth of different devices, including high-energy-density capacitors, transducers, and electro-optic circuits,” said Nazanin Bassiri-Gharb, coauthor of the paper and professor in the Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering at Georgia Tech. “But size scaling effects had gone largely under the radar for a long time.”
    “You can design your device and make it smaller knowing exactly how the material is going to perform,” said Asif Khan, coauthor of the paper and assistant professor in the School of Electrical and Computer Engineering and the School of Materials Science and Engineering at Georgia Tech. “From our standpoint, it really opens a new field of research.”
    Lasting Fields