More stories

  •

    Introducing a transceiver that can tap into the higher frequency bands of 5G networks

    5G networks are becoming more prevalent worldwide, and many consumer devices that support 5G already benefit from higher speeds and lower latency. However, some frequency bands allocated for 5G are not used effectively owing to technological limitations. One such band is the New Radio (NR) 39 GHz band, which actually spans 37 GHz to 43.5 GHz depending on the country. The NR band offers notable performance advantages over the lower-frequency bands that 5G networks use today: it enables ultra-low-latency communication, data rates of over 10 Gb/s, and a massive capacity to accommodate many users simultaneously.
    However, these feats come at a cost. High-frequency signals are attenuated quickly as they travel through space. It is therefore crucial that the transmitted power be concentrated in a narrow beam aimed directly at the receiver. This can, in principle, be achieved using phased-array beamformers, transmission devices composed of an array of carefully phase-controlled antennas. However, operating in the higher-frequency regions of the NR band reduces the efficiency of power amplifiers, which also tend to suffer from nonlinearity issues that distort the transmitted signal.
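    The beamforming principle can be illustrated with a short, self-contained calculation. The sketch below (not the Tokyo Tech design) uses NumPy to compute the array factor of a generic uniform linear array at 39 GHz with half-wavelength spacing, showing how per-element phase shifts concentrate the radiated power into a beam aimed at a chosen direction; the element count and steering angle are arbitrary assumptions.

    ```python
    # Minimal sketch (not the Tokyo Tech design): per-element phase shifts
    # steer a uniform linear array's beam toward a chosen direction.
    # Assumptions: 39 GHz carrier, 8 elements, half-wavelength spacing.
    import numpy as np

    c = 3e8                      # speed of light, m/s
    f = 39e9                     # NR-band carrier frequency, Hz
    lam = c / f                  # wavelength
    d = lam / 2                  # element spacing
    k = 2 * np.pi / lam          # wavenumber
    n_elem = 8
    steer_deg = 30.0             # desired beam direction

    n = np.arange(n_elem)
    # Phase applied to each element so its contribution adds in phase at steer_deg
    phase = -k * d * n * np.sin(np.radians(steer_deg))

    theta = np.radians(np.linspace(-90, 90, 1801))
    # Array factor: coherent sum of all element contributions in each direction
    af = np.exp(1j * (k * d * np.outer(np.sin(theta), n) + phase)).sum(axis=1)
    gain_db = 20 * np.log10(np.abs(af) / n_elem + 1e-12)

    peak_deg = np.degrees(theta[np.argmax(gain_db)])
    print(f"Beam peak at {peak_deg:.1f} deg (target {steer_deg} deg)")
    ```

    Steering here is purely electronic: changing `steer_deg` re-points the beam without moving any hardware.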
    To address these issues, a team of researchers led by Professor Kenichi Okada from Tokyo Institute of Technology (Tokyo Tech), Japan, has recently developed a novel phased-array beamformer for 5G base stations. Their design adapts two well-known techniques, namely the Doherty amplifier and digital predistortion (DPD), to a mmWave phased-array transceiver, but with a few twists. The researchers will present their findings at the upcoming 2022 IEEE Symposium on VLSI Technology and Circuits.
    The Doherty amplifier, developed in 1936, has seen a resurgence in modern telecommunication devices owing to its good power efficiency and suitability for signals with a high peak-to-average power ratio (such as 5G signals). The team at Tokyo Tech modified the conventional Doherty amplifier design to produce a bi-directional amplifier: the same circuit can amplify a signal to be transmitted and can also amplify a received signal with low noise, fulfilling the crucial role of amplification for both transmission and reception. “Our proposed bidirectional implementation for the amplifier is very area-efficient. Additionally, thanks to its co-design with a wafer-level chip-scale packaging technology, it enables low insertion loss. This means that less power is lost as the signal traverses the amplifier,” explains Professor Okada.
    Despite its several advantages, however, the Doherty amplifier can exacerbate nonlinearity problems that arise from mismatches between the elements of the phased-array antenna. The team addressed this problem in two ways. First, they employed the DPD technique, which involves distorting the signal before transmission to cancel out the distortion introduced by the amplifier. Unlike conventional DPD approaches, their implementation used a single look-up table (LUT) shared by all antennas, minimizing the complexity of the circuit. Second, they introduced inter-element mismatch compensation capabilities to the phased array, improving its overall linearity. “We compared the proposed device with other state-of-the-art 5G phased-array transceivers and found that, by compensating for the inter-element mismatches in the shared-LUT DPD module, ours demonstrates lower adjacent-channel leakage and transmission error,” remarks Professor Okada. “Hopefully, the device and techniques described in this study will let us all reap the benefits of 5G NR sooner!”
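    To make the shared-LUT idea concrete, here is a deliberately simplified Python sketch of LUT-based digital predistortion with a single table shared across array elements, plus a per-element complex correction for gain/phase mismatch. The cubic amplifier model, table size, and mismatch values are illustrative assumptions, not the circuit described in the paper.

    ```python
    # Illustrative sketch of look-up-table (LUT) digital predistortion (DPD)
    # with one LUT shared across all antenna elements, plus a simple per-element
    # mismatch correction. All models and parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def pa(x, a3=0.12):
        """Toy power-amplifier model: linear gain with compressive cubic distortion."""
        return x - a3 * np.abs(x) ** 2 * x

    # Shared LUT: for each input-amplitude bin, the gain that pre-compensates
    # the amplifier's compression (inverse of its AM/AM response).
    amps = np.linspace(1e-3, 1.0, 64)
    lut = amps / np.abs(pa(amps))

    def predistort(x):
        idx = np.clip(np.searchsorted(amps, np.abs(x)), 0, len(amps) - 1)
        return x * lut[idx]

    # Per-element complex gain/phase mismatches across the array (assumed values)
    n_elem = 4
    mismatch = (1 + 0.05 * rng.standard_normal(n_elem)) * \
               np.exp(1j * 0.05 * rng.standard_normal(n_elem))

    # Toy complex baseband samples with a varying envelope
    x = rng.uniform(0.1, 0.9, 1000) * np.exp(1j * 2 * np.pi * rng.random(1000))

    for e in range(n_elem):
        tx = predistort(x) / mismatch[e]   # shared LUT + per-element correction
        y = pa(mismatch[e] * tx)           # element path: mismatch, then amplifier
        evm = np.sqrt(np.mean(np.abs(y - x) ** 2) / np.mean(np.abs(x) ** 2))
        print(f"element {e}: residual error {evm:.4f}")
    ```

    The point of sharing the LUT is that the common nonlinearity is corrected once, while each element only needs a single complex factor to absorb its own gain/phase error.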
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  •

    Ultra-fast photonic computing processor uses polarization

    New research uses multiple polarisation channels to carry out parallel processing, enhancing computing density by several orders of magnitude over conventional electronic chips.
    In a paper published today in Science Advances, researchers at the University of Oxford describe a method that uses the polarisation of light to maximise information storage density and computing performance in nanowires.
    Light has an exploitable property: different wavelengths of light do not interact with each other, a characteristic used by fibre optics to carry parallel streams of data. Similarly, different polarisations of light do not interact with each other. Each polarisation can therefore be used as an independent information channel, enabling more information to be stored across multiple channels and hugely enhancing information density.
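    A toy calculation makes this channel independence concrete. The NumPy sketch below (purely illustrative, not the Oxford device) encodes two separate bit streams on the horizontal and vertical components of a Jones vector and recovers each stream by projecting onto the corresponding polarisation axis, with no crosstalk between the two.

    ```python
    # Minimal sketch: two data streams carried on orthogonal polarisations of the
    # same beam, then recovered independently by projecting onto each polarisation
    # axis (Jones-vector picture). Not a model of the Oxford processor.
    import numpy as np

    rng = np.random.default_rng(1)
    bits_h = rng.integers(0, 2, 8)           # stream on horizontal polarisation
    bits_v = rng.integers(0, 2, 8)           # stream on vertical polarisation

    # Each sample of the combined field is a Jones vector [E_h, E_v]
    field = np.stack([bits_h, bits_v], axis=1).astype(float)

    h_axis = np.array([1.0, 0.0])            # horizontal analyser
    v_axis = np.array([0.0, 1.0])            # vertical analyser

    recovered_h = field @ h_axis             # projection selects one channel...
    recovered_v = field @ v_axis             # ...with no crosstalk from the other

    print("H channel recovered:", np.array_equal(recovered_h, bits_h))
    print("V channel recovered:", np.array_equal(recovered_v, bits_v))
    ```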
    First author and DPhil student June Sang Lee, Department of Materials, University of Oxford said: ‘We all know that the advantage of photonics over electronics is that light is faster and more functional over large bandwidths. So, our aim was to fully harness such advantages of photonics combining with tunable material to realise faster and denser information processing.’
    In collaboration with Professor C David Wright of the University of Exeter, the research team developed a HAD (hybridized-active-dielectric) nanowire, using a hybrid glassy material whose properties can be switched by illumination with optical pulses. Each nanowire responds selectively to a specific polarisation direction, so information can be processed simultaneously using multiple polarisations in different directions.
    Using this concept, researchers have developed the first photonic computing processor to utilise polarisations of light.
    Photonic computing is carried out through multiple polarisation channels, leading to an enhancement in computing density by several orders of magnitude compared with conventional electronic chips. The computing speeds are faster because these nanowires are modulated by nanosecond optical pulses.
    Since the invention of the first integrated circuit in 1958, packing more transistors into a given size of an electronic chip has been the go-to means of maximising computing density — the so-called ‘Moore’s Law’. However, with Artificial Intelligence and Machine Learning requiring specialised hardware that is beginning to push the boundaries of established computing, the dominant question in this area of electronic engineering has been ‘How do we pack more functionalities into a single transistor?’
    For over a decade, researchers in Professor Harish Bhaskaran’s lab in the Department of Materials, University of Oxford have been looking into using light as a means to compute.
    Professor Bhaskaran, who led the work, said: ‘This is just the beginning of what we would like to see in future, which is the exploitation of all degrees of freedoms that light offers, including polarisation to dramatically parallelise information processing. Definitely early-stage work, but super exciting ideas that combine electronics, non-linear materials and computing. Lots of exciting prospects to work on which is always a great place to be in!’
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.

  •

    What quantum information and snowflakes have in common, and what we can do about it

    Qubits are a basic building block for quantum computers, but they’re also notoriously fragile — tricky to observe without erasing their information in the process. Now, new research from the University of Colorado Boulder and the National Institute of Standards and Technology (NIST) could be a leap forward for handling qubits with a light touch.
    In the study, a team of physicists demonstrated that they could read out the signals from a type of qubit called a superconducting qubit using laser light, without destroying the qubit in the process.
    The group’s results could be a major step toward building a quantum internet, the researchers say. Such a network would link up dozens or even hundreds of quantum chips, allowing engineers to solve problems that are beyond the reach of even the fastest supercomputers around today. They could also, theoretically, use a similar set of tools to send unbreakable codes over long distances.
    The study, which will appear June 15 in the journal Nature, was led by JILA, a joint research institute between CU Boulder and NIST.
    “Currently, there’s no way to send quantum signals between distant superconducting processors like we send signals between two classical computers,” said Robert Delaney, lead author of the study and a former graduate student at JILA.
    Delaney explained that the traditional bits that run your laptop are pretty limited: They can only take on a value of zero or one, the numbers that underlie most computer programming to date. Qubits, in contrast, can be zero, one or, through a property called “superposition,” both zero and one at the same time.
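    For readers who want to see this concretely, here is a generic textbook illustration (unrelated to the JILA hardware): a qubit state is a two-component complex vector, and the squared amplitudes give the measurement probabilities, so an equal superposition returns 0 or 1 with 50% probability each.

    ```python
    # Textbook illustration of superposition, not tied to the JILA experiment.
    import numpy as np

    zero = np.array([1, 0], dtype=complex)            # |0>
    one = np.array([0, 1], dtype=complex)             # |1>
    plus = (zero + one) / np.sqrt(2)                  # equal superposition of both

    for name, state in [("|0>", zero), ("|1>", one), ("|+>", plus)]:
        probs = np.abs(state) ** 2                    # measurement probabilities
        print(f"{name}: P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}")
    ```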

  •

    Military cannot rely on AI for strategy or judgment, study suggests

    Using artificial intelligence (AI) for warfare has been the promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues that only so much can be automated and shows the value of human judgment.
    “All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. “You need human sense-making and to make moral, ethical, and intellectual decisions in an incredibly confusing, fraught, scary situation.”
    AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto Professor Avi Goldfarb wrote in the paper, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in International Security.
    Many policy makers assume human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb state that AI should not be seen as a substitute, but rather as a complement to existing human strategy.
    “Machines are good at prediction, but they depend on data and judgment, and the most difficult problems in war are information and strategy,” he said. “The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability.”
    An example Lindsay and Goldfarb highlight is the Rio Tinto mining company, which uses self-driving trucks to transport materials, reducing costs and risks to human drivers. The environment provides abundant, predictable, and unbiased data, such as traffic patterns and maps, and requires little human intervention unless there are road closures or obstacles.
    War, however, usually lacks abundant unbiased data, and judgments about objectives and values are inherently controversial, but that doesn’t mean using AI in war is impossible. The researchers argue AI would be best employed in bureaucratically stabilized environments on a task-by-task basis.
    “All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you’re really dependent on creativity and interpretation,” Lindsay said. “But what we should be looking at is personnel systems, administration, logistics, and repairs.”
    There are also consequences to using AI for both the military and its adversaries, according to the researchers. If humans are the central element in deciding when to use AI in warfare, then military leadership structures and hierarchies could change based on the person in charge of designing and cleaning data systems and making policy decisions. This also means adversaries will aim to compromise both data and judgment, since both would largely affect the trajectory of the war. Competing against AI may push adversaries to manipulate or disrupt data to make sound judgment even harder. In effect, human intervention will be even more necessary.
    Yet this is just the start of the argument and innovations.
    “If AI is automating prediction, that’s making judgment and data really important,” Lindsay said. “We’ve already automated a lot of military action with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we’re automating prediction with AI. So, when are we going to automate judgment, or are there components of judgment that cannot be automated?”
    Until then, though, tactical and strategic decision making by humans continues to be the most important aspect of warfare.

  •

    Quantum computer programming basics

    For would-be quantum programmers scratching their heads over how to jump into the game as quantum computers proliferate and become publicly accessible, a new beginner’s guide provides a thorough introduction to quantum algorithms and their implementation on existing hardware.
    “Writing quantum algorithms is radically different from writing classical computing programs and requires some understanding of quantum principles and the mathematics behind them,” said Andrey Y. Lokhov, a scientist at Los Alamos National Laboratory and lead author of the recently published guide in ACM Transactions on Quantum Computing. “Our guide helps quantum programmers get started in the field, which is bound to grow as more and more quantum computers with more and more qubits become commonplace.”
    In succinct, stand-alone sections, the guide surveys 20 quantum algorithms, including famous, foundational algorithms such as Grover’s Algorithm for database searching and Shor’s Algorithm for factoring integers, among others. Making the real-world connection, the guide then walks programmers through implementing the algorithms on IBM’s publicly available 5-qubit IBMQX4 quantum computer and others. In each case, the authors discuss the results of the implementation and explain differences between the simulator and the actual hardware runs.
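    As a flavour of what such a walkthrough looks like, here is a minimal two-qubit Grover search written in Qiskit that marks the state |11>. It is a generic sketch in the spirit of the guide rather than the guide’s own code, and it only simulates the statevector (the IBMQX4 device used in the guide has since been retired); it assumes a recent Qiskit installation.

    ```python
    # Minimal two-qubit Grover search (generic sketch, not the guide's code).
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)

    # Prepare the uniform superposition over all four basis states
    qc.h([0, 1])

    # Oracle: flip the phase of the marked state |11>
    qc.cz(0, 1)

    # Diffusion operator: reflect amplitudes about their mean
    qc.h([0, 1])
    qc.x([0, 1])
    qc.cz(0, 1)
    qc.x([0, 1])
    qc.h([0, 1])

    probs = Statevector.from_instruction(qc).probabilities_dict()
    print(probs)   # the marked state '11' should carry (nearly) all probability
    ```

    On real hardware the same circuit is run many times and the marked state shows up as the most frequent measurement outcome, with noise accounting for the difference from the ideal simulation.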
    “This article was the result of a rapid-response effort by the Information Science and Technology Institute at Los Alamos, where about 20 Lab staff members self-selected to learn about and implement a standard quantum algorithm on the IBM Q quantum system,” said Stephan Eidenbenz, a senior quantum computing scientist at Los Alamos, a coauthor of the article and director of ISTI when work on it began.
    The goal was to prepare the Los Alamos workforce for the quantum era by guiding those staff members with little or no quantum computing experience all the way through implementation of a quantum algorithm on a real-life quantum computer, Eidenbenz said.
    These staff members, in addition to a few students and well-established quantum experts, make up the long author list of this “crowd-sourced” overview article that has already been heavily cited, Eidenbenz said.
    The first section of the guide covers the basics of quantum computer programming, explaining qubits and qubit systems, fundamental quantum concepts of superposition and entanglement and quantum measurements before tackling the deeper material of unitary transformations and gates, quantum circuits and quantum algorithms.
    The section on the IBM quantum computer covers the set of gates available for algorithms, the actual physical gates implemented, how the qubits are connected and the sources of noise, or errors.
    Another section looks at the various types of quantum algorithms. From there, the guide dives into the 20 selected algorithms, with a problem definition, description and steps for implementing each one on the IBM or, in a few cases, other computers.
    Extensive references at the end of the guide will help interested readers go deeper in their explorations of quantum algorithms.
    The work was funded by the Information Science and Technology Institute at Los Alamos National Laboratory through the Laboratory Directed Research and Development program.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  •

    Calculating the 'fingerprints' of molecules with artificial intelligence

    With conventional methods, it is extremely time-consuming to calculate the spectral fingerprint of larger molecules. But this is a prerequisite for correctly interpreting experimentally obtained data. Now, a team at HZB has achieved very good results in significantly less time using self-learning graphical neural networks.
    “Macromolecules but also quantum dots, which often consist of thousands of atoms, can hardly be calculated in advance using conventional methods such as DFT,” says PD Dr. Annika Bande at HZB. With her team she has now investigated how the computing time can be shortened by using methods from artificial intelligence.
    The idea: a computer programme from the family of “graphical neural networks” (GNNs) receives small molecules as input, with the task of determining their spectral responses. In the next step, the GNN programme compares the calculated spectra with the known target spectra (from DFT or experiment) and corrects the calculation path accordingly. Round after round, the result improves. The GNN programme thus learns on its own how to calculate spectra reliably with the help of known spectra.
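    The cycle described above is, in essence, a standard supervised training loop. The PyTorch sketch below shows that loop schematically; the tiny fully connected network stands in for a real graph neural network such as SchNet, and the random input descriptors and target spectra are placeholders, not the HZB data or code.

    ```python
    # Schematic of the training cycle: predict a spectrum, compare it with the
    # known DFT/experimental target, and correct the model. The small network
    # and random data are placeholders for illustration only.
    import torch
    from torch import nn

    n_features, n_spectrum_bins = 32, 100   # assumed sizes for illustration

    model = nn.Sequential(
        nn.Linear(n_features, 64),
        nn.ReLU(),
        nn.Linear(64, n_spectrum_bins),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in data: random "molecular descriptors" and target spectra
    inputs = torch.randn(256, n_features)
    targets = torch.randn(256, n_spectrum_bins)

    for epoch in range(5):                   # "round after round"
        optimizer.zero_grad()
        predicted = model(inputs)            # calculated spectra
        loss = loss_fn(predicted, targets)   # compare with target spectra
        loss.backward()                      # correct the calculation path
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
    ```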
    “We have trained five newer GNNs and found that enormous improvements can be achieved with one of them, the SchNet model: The accuracy increases by 20% and this is done in a fraction of the computation time,” says first author Kanishka Singh. Singh participates in the HEIBRiDS graduate school and is supervised by two experts from different backgrounds: computer science expert Prof. Ulf Leser from Humboldt University Berlin and theoretical chemist Annika Bande.
    “Recently developed GNN frameworks could do even better,” she says. “And the demand is very high. We therefore want to strengthen this line of research and are planning to create a new postdoctoral position for it from summer onwards as part of the Helmholtz project ‘eXplainable Artificial Intelligence for X-ray Absorption Spectroscopy’.”
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  •

    Automating renal access in kidney stone surgery using AI-enabled surgical robot

    Percutaneous nephrolithotomy (PCNL) is an efficient, minimally invasive, gold-standard procedure for removing large kidney stones. Creating an access path from the skin on the back to the kidney, called renal access, is a crucial yet challenging step in PCNL. An inefficiently created renal access can lead to severe complications, including massive bleeding, thoracic and bowel injuries, renal pelvis perforation, or even sepsis. It is therefore no surprise that it takes years of training and practice to perform this procedure efficiently. There are two main renal access methods adopted during PCNL: fluoroscopic guidance and ultrasound (US) guidance with or without fluoroscopy. Both approaches deliver similar postoperative outcomes but require experience-based expertise.
    Many novel methods and technologies are being tested and used in clinical practice to bridge this gap in skill requirement. While some offer better imaging guidance, others provide precise percutaneous access. Nonetheless, most techniques are still challenging for beginners. This inspired a research team led by Assistant Professors Kazumi Taguchi and Shuzo Hamamoto, and Chair and Professor Takahiro Yasui from Nagoya City University (NCU) Graduate School of Medical Sciences (Nephro-urology), to ask whether artificial intelligence (AI)-powered robotic devices could provide better guidance than conventional US guidance. Specifically, they wanted to see if the AI-powered device called the Automated Needle Targeting with X-ray (ANT-X), developed by the Singaporean medical start-up NDR Medical Technology, offers better precision in percutaneous renal access along with automated needle trajectory.
    The team conducted a randomized, single-blind, controlled trial comparing their robotic-assisted fluoroscopic-guided (RAF) method with US-guided PCNL. The results of this trial were made available online on May 13, 2022 and published on June 13, 2022 in The Journal of Urology. “This was the first human study comparing RAF with conventional ultrasound guidance for renal access during PCNL, and the first clinical application of the ANT-X,” says Dr. Taguchi.
    The trial was conducted at NCU Hospital between January 2020 and May 2021 with 71 patients — 36 in the RAF group and 35 in the US group. The primary outcome of the study was single puncture success, with stone-free rate (SFR), complication rate, parameters measured during renal access, and fluoroscopy time as secondary outcomes.
    The single puncture success rate was ~34 percent in the US group and 50 percent in the RAF group. The average number of needle punctures was significantly lower in the RAF group (1.82) than in the US group (2.51). In 14.3 percent of US-guided cases, the resident was unable to obtain renal access due to procedural difficulty and a surgeon change was needed; none of the RAF cases faced this issue. The median needle puncture duration was also significantly shorter in the RAF group (5.5 minutes vs. 8.0 minutes). There were no significant differences in the other secondary outcomes. Overall, RAF guidance reduced the mean number of needle punctures by 0.73.
    Multiple renal access attempts during PCNL are directly linked to postoperative complications, including decreased renal function. Therefore, the lower needle puncture frequency and shorter puncture duration demonstrated with the ANT-X may provide better long-term outcomes for patients. While the actual PCNL was performed by residents in both the RAF and US groups, the renal access was created by a single, novice surgeon in the RAF group using the ANT-X. This demonstrates the safety and convenience of the novel robotic device, which could reduce surgeons’ training load and allow more hospitals to offer PCNL procedures.
    Dr. Taguchi outlines the potential advantages of their RAF device, saying, “The ANT-X simplifies a complex procedure like PCNL, making it easier for more doctors to perform it and help more patients in the process. Being an AI-powered robotic technology, this technique may pave the way for automating similar interventional surgeries, which could shorten procedure times, relieve the burden on senior doctors, and perhaps reduce the occurrence of complications.” With such promising results, the ANT-X and other similar robotic-assisted platforms might be the future of percutaneous procedures in urology and other medical fields.
    Story Source:
    Materials provided by Nagoya City University. Note: Content may be edited for style and length.

  •

    New, highly tunable composite materials – with a twist

    Slide two sets of concentric circles across each other and watch the patterns that appear. Those patterns, created by two sets of lines offset from each other, are called moiré (pronounced mwar-AY) effects. As optical illusions, moiré patterns create neat simulations of movement. But at the atomic scale, when one sheet of atoms arranged in a lattice is slightly offset from another sheet, these moiré patterns can give rise to some exciting and important physics with interesting and unusual electronic properties.
    Mathematicians at the University of Utah have found that they can design a range of composite materials from moiré patterns created by rotating and stretching one lattice relative to another. The materials’ electrical and other physical properties can change, sometimes quite abruptly, depending on whether the resulting moiré patterns are regularly repeating or non-repeating. The findings are published in Communications Physics.
    The mathematics and physics of these twisted lattices apply to a wide variety of material properties, says Kenneth Golden, distinguished professor of mathematics. “The underlying theory also holds for materials on a large range of length scales, from nanometers to kilometers, demonstrating just how broad the scope is for potential technological applications of our findings.”
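    The basic moiré construction the researchers start from can be reproduced in a few lines of NumPy: superimpose a line lattice on a rotated, slightly stretched copy of itself, and the product contains a slow beat pattern, the moiré superlattice. The sketch below is purely illustrative; the twist angle, stretch factor, and grid size are arbitrary choices, not values from the paper.

    ```python
    # Minimal sketch of the moiré construction: superimpose a line grating on a
    # rotated, slightly stretched copy of itself and look at the long-wavelength
    # beat pattern. All parameters are arbitrary illustrations.
    import numpy as np

    def grating(x, y, k, angle_deg, stretch=1.0):
        """Sinusoidal line lattice with wavenumber k, rotated by angle_deg."""
        a = np.radians(angle_deg)
        return np.cos(k * stretch * (x * np.cos(a) + y * np.sin(a)))

    n = 512
    coords = np.linspace(0, 40, n)
    x, y = np.meshgrid(coords, coords)

    layer1 = grating(x, y, k=2 * np.pi, angle_deg=0.0)
    layer2 = grating(x, y, k=2 * np.pi, angle_deg=4.0, stretch=1.02)  # twist + stretch
    moire = layer1 * layer2    # the product contains a slow beat term: the moiré

    # The beat's spatial frequency is far lower than either lattice's, which is
    # what produces the large-scale moiré superlattice (plot `moire` to see it).
    print("lattice period: 1.0, moiré pattern array shape:", moire.shape)
    ```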
    With a twist
    Before we arrive at these new findings, we’ll need to chart the history of two important concepts: aperiodic geometry and twistronics.
    Aperiodic geometry means patterns that don’t repeat. An example is the Penrose tiling pattern of rhombuses. If you draw a box around a part of the pattern and start sliding it in any direction, without rotating it, you’ll never find a part of the pattern that matches it.