More stories

  • Quantum computer programming basics

    For would-be quantum programmers scratching their heads over how to jump into the game as quantum computers proliferate and become publicly accessible, a new beginner’s guide provides a thorough introduction to quantum algorithms and their implementation on existing hardware.
    “Writing quantum algorithms is radically different from writing classical computing programs and requires some understanding of quantum principles and the mathematics behind them,” said Andrey Y. Lokhov, a scientist at Los Alamos National Laboratory and lead author of the recently published guide in ACM Transactions on Quantum Computing. “Our guide helps quantum programmers get started in the field, which is bound to grow as more and more quantum computers with more and more qubits become commonplace.”
    In succinct, stand-alone sections, the guide surveys 20 quantum algorithms, including famous, foundational ones such as Grover’s algorithm for searching databases and Shor’s algorithm for factoring integers. Making the real-world connection, the guide then walks programmers through implementing the algorithms on IBM’s publicly available 5-qubit IBMQX4 quantum computer and others. In each case, the authors discuss the results of the implementation and explain differences between the simulator and the actual hardware runs.
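    To give a concrete flavor of the kind of algorithm the guide covers, here is a minimal sketch of Grover’s search on two qubits, written as a plain NumPy state-vector simulation. It is an illustrative stand-in rather than code from the guide, which instead targets IBM’s hardware and its native gate set.

    ```python
    # Minimal NumPy sketch of Grover's search on 2 qubits (marked state |11>).
    # Illustrative only -- the published guide runs such algorithms on IBM hardware.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    H2 = np.kron(H, H)                             # Hadamard on both qubits

    oracle = np.diag([1, 1, 1, -1])                # flips the phase of the marked state |11>

    uniform = np.full(4, 0.5)                      # uniform superposition state
    diffusion = 2 * np.outer(uniform, uniform) - np.eye(4)  # reflection about the mean

    state = np.zeros(4)
    state[0] = 1.0                                 # start in |00>
    state = H2 @ state                             # prepare the uniform superposition
    state = diffusion @ (oracle @ state)           # one Grover iteration

    print(np.abs(state) ** 2)                      # ~[0, 0, 0, 1]: the marked item is found
    ```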
    “This article was the result of a rapid-response effort by the Information Science and Technology Institute at Los Alamos, where about 20 Lab staff members self-selected to learn about and implement a standard quantum algorithm on the IBM Q quantum system,” said Stephan Eidenbenz, a senior quantum computing scientist at Los Alamos, a coauthor of the article and director of ISTI when work on it began.
    The goal was to prepare the Los Alamos workforce for the quantum era by guiding those staff members with little or no quantum computing experience all the way through implementation of a quantum algorithm on a real-life quantum computer, Eidenbenz said.
    These staff members, in addition to a few students and well-established quantum experts, make up the long author list of this “crowd-sourced” overview article that has already been heavily cited, Eidenbenz said.
    The first section of the guide covers the basics of quantum computer programming, explaining qubits and qubit systems, fundamental quantum concepts of superposition and entanglement and quantum measurements before tackling the deeper material of unitary transformations and gates, quantum circuits and quantum algorithms.
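    As a toy illustration of those opening concepts, and not an excerpt from the guide, the sketch below uses NumPy to build a two-qubit Bell state from a Hadamard and a CNOT gate, showing superposition, entanglement and measurement statistics in a few lines.

    ```python
    # Toy illustration of superposition, entanglement and measurement with NumPy.
    # Generic textbook example, not code from the Los Alamos guide.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: puts a qubit into superposition
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],                 # CNOT: entangles the two qubits
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.kron([1, 0], [1, 0])                # |00>
    state = CNOT @ (np.kron(H, I) @ state)         # Bell state (|00> + |11>) / sqrt(2)

    probs = np.abs(state) ** 2                     # Born rule: measurement probabilities
    outcomes = np.random.choice(["00", "01", "10", "11"], size=1000, p=probs)
    print(dict(zip(*np.unique(outcomes, return_counts=True))))  # ~50/50 split of "00" and "11"
    ```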
    The section on the IBM quantum computer covers the set of gates available for algorithms, the actual physical gates implemented, how the qubits are connected and the sources of noise, or errors.
    Another section looks at the various types of quantum algorithms. From there, the guide dives into the 20 selected algorithms, with a problem definition, description and steps for implementing each one on the IBM or, in a few cases, other computers.
    Extensive references at the end of the guide will help interested readers go deeper in their explorations of quantum algorithms.
    The work was supported by the Information Science and Technology Institute at Los Alamos National Laboratory through the Laboratory Directed Research and Development program.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Calculating the 'fingerprints' of molecules with artificial intelligence

    With conventional methods, it is extremely time-consuming to calculate the spectral fingerprint of larger molecules, yet this is a prerequisite for correctly interpreting experimentally obtained data. Now, a team at HZB has achieved very good results in significantly less time using self-learning graph neural networks.
    “Macromolecules, but also quantum dots, which often consist of thousands of atoms, can hardly be calculated in advance using conventional methods such as DFT,” says PD Dr. Annika Bande at HZB. With her team, she has now investigated how the computing time can be shortened using methods from artificial intelligence.
    The idea: a computer program from the family of graph neural networks (GNNs) receives small molecules as input and is tasked with predicting their spectral responses. In the next step, the GNN compares the calculated spectra with the known target spectra (from DFT or experiment) and corrects the calculation path accordingly. Round after round, the result improves. The GNN thus learns on its own how to reliably calculate spectra with the help of known spectra.
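    The loop described above follows the usual supervised-learning pattern; a schematic PyTorch version is sketched below. A small fully connected network stands in for a real graph neural network such as SchNet, and random tensors stand in for molecular inputs and DFT reference spectra, so the point is only to show the compare-and-correct cycle, not to reproduce the HZB models.

    ```python
    # Schematic of the compare-and-correct training cycle described above.
    # A tiny fully connected network stands in for a real GNN (e.g. SchNet), and
    # random tensors stand in for molecular descriptors and DFT reference spectra.
    import torch
    import torch.nn as nn

    n_molecules, n_features, n_bins = 256, 32, 100
    inputs = torch.randn(n_molecules, n_features)        # placeholder molecule encodings
    targets = torch.rand(n_molecules, n_bins)            # placeholder DFT/experimental spectra

    model = nn.Sequential(                               # stand-in for a graph neural network
        nn.Linear(n_features, 128), nn.ReLU(),
        nn.Linear(128, n_bins),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(50):                              # round after round, the result improves
        predicted = model(inputs)                        # calculate spectra for the input molecules
        loss = loss_fn(predicted, targets)               # compare with the known target spectra
        optimizer.zero_grad()
        loss.backward()                                  # correct the model accordingly
        optimizer.step()
    ```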
    “We have trained five newer GNNs and found that enormous improvements can be achieved with one of them, the SchNet model: The accuracy increases by 20% and this is done in a fraction of the computation time,” says first author Kanishka Singh. Singh participates in the HEIBRiDS graduate school and is supervised by two experts from different backgrounds: computer science expert Prof. Ulf Leser from Humboldt University Berlin and theoretical chemist Annika Bande.
    “Recently developed GNN frameworks could do even better,” she says. “And the demand is very high. We therefore want to strengthen this line of research and are planning to create a new postdoctoral position for it from summer onwards as part of the Helmholtz project ‘eXplainable Artificial Intelligence for X-ray Absorption Spectroscopy.’”
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  • Automating renal access in kidney stone surgery using AI-enabled surgical robot

    Percutaneous nephrolithotomy (PCNL) is an efficient, minimally invasive, gold-standard procedure for removing large kidney stones. Creating an access route from the skin on the back to the kidney, called renal access, is a crucial yet challenging step in PCNL. A poorly created renal access can lead to severe complications, including massive bleeding, thoracic and bowel injuries, renal pelvis perforation, or even sepsis. It is therefore no surprise that it takes years of training and practice to perform this procedure efficiently. There are two main renal access methods adopted during PCNL: fluoroscopic guidance and ultrasound (US) guidance with or without fluoroscopy. Both approaches deliver similar postoperative outcomes but require experience-based expertise.
    Many novel methods and technologies are being tested and used in clinical practice to bridge this gap in skill requirement. While some offer better imaging guidance, others provide precise percutaneous access. Nonetheless, most techniques are still challenging for beginners. This inspired a research team led by Assistant Professors Kazumi Taguchi and Shuzo Hamamoto, and Chair and Professor Takahiro Yasui from Nagoya City University (NCU) Graduate School of Medical Sciences (Nephro-urology), to question if artificial intelligence (AI)-powered robotic devices could be used for improved guidance compared with conventional US guidance. Specifically, they wanted to see if the AI-powered device called the Automated Needle Targeting with X-ray (ANT-X), which was developed by the Singaporean medical start-up, NDR Medical Technology, offers better precision in percutaneous renal access along with automated needle trajectory.
    The team conducted a randomized, single-blind, controlled trial comparing their robotic-assisted fluoroscopic-guided (RAF) method with US-guided PCNL. The results of this trial were made available online on May 13, 2022, and published on June 13, 2022, in The Journal of Urology. “This was the first human study comparing RAF with conventional ultrasound guidance for renal access during PCNL, and the first clinical application of the ANT-X,” says Dr. Taguchi.
    The trial was conducted at NCU Hospital between January 2020 and May 2021 with 71 patients — 36 in the RAF group and 35 in the US group. The primary outcome of the study was single puncture success, with stone-free rate (SFR), complication rate, parameters measured during renal access, and fluoroscopy time as secondary outcomes.
    The single puncture success rate was about 34 percent in the US group and 50 percent in the RAF group. The average number of needle punctures was significantly lower in the RAF group (1.82) than in the US group (2.51). In 14.3 percent of US-guided cases, the resident was unable to obtain renal access because of procedural difficulty and a change of surgeon was needed; none of the RAF cases faced this issue. The median needle puncture duration was also significantly shorter in the RAF group (5.5 minutes vs. 8.0 minutes). There were no significant differences in the other secondary outcomes. Overall, RAF guidance reduced the mean number of needle punctures by 0.73.
    Multiple renal accesses during PCNL are directly linked to postoperative complications, including decreased renal function. Therefore, the lower needle puncture frequency and shorter puncture duration demonstrated with the ANT-X may provide better long-term outcomes for patients. While the actual PCNL was performed by residents in both the RAF and US groups, the renal access in the RAF group was created by a single novice surgeon using the ANT-X. This demonstrates the safety and convenience of the novel robotic device, which could reduce surgeons’ training load and allow more hospitals to offer PCNL procedures.
    Dr. Taguchi outlines the potential advantages of their RAF device, saying, “The ANT-X simplifies a complex procedure like PCNL, making it easier for more doctors to perform it and helping more patients in the process. Being an AI-powered robotic technology, this technique may pave the way for automating similar interventional surgeries, which could shorten procedure times, ease the burden on senior doctors, and perhaps reduce the occurrence of complications.” With such promising results, ANT-X and other similar robotic-assisted platforms might be the future of percutaneous procedures in urology and other medical fields.
    Story Source:
    Materials provided by Nagoya City University. Note: Content may be edited for style and length.

  • New, highly tunable composite materials–with a twist

    When two sets of lines or circles move across each other, the offsets between them create patterns called moiré (pronounced mwar-AY) effects. As optical illusions, moiré patterns create neat simulations of movement. But at the atomic scale, when one sheet of atoms arranged in a lattice is slightly offset from another sheet, these moiré patterns can create some exciting and important physics with interesting and unusual electronic properties.
    Mathematicians at the University of Utah have found that they can design a range of composite materials from moiré patterns created by rotating and stretching one lattice relative to another. The materials’ electrical and other physical properties can change, sometimes quite abruptly, depending on whether the resulting moiré patterns are regularly repeating or non-repeating. Their findings are published in Communications Physics.
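    One way to get a feel for this sensitivity is the standard moiré relation for two identical lattices of spacing a twisted by an angle θ: the moiré period is a / (2 sin(θ/2)), so tiny twists produce patterns on much larger length scales. The NumPy sketch below is a generic illustration of that relation, not the authors’ model; the comment notes the special twist angles at which a square lattice lands back on a periodically repeating superlattice.

    ```python
    # Generic illustration (not the authors' model): two identical lattices with
    # spacing a, twisted by an angle theta, produce a moire pattern with period
    #   L_moire = a / (2 * sin(theta / 2)),
    # so small twists blow the pattern up to much larger length scales.
    import numpy as np

    a = 1.0  # lattice spacing (arbitrary units)
    for theta_deg in [0.5, 1.1, 5.0, 36.87]:
        theta = np.radians(theta_deg)
        moire_period = a / (2 * np.sin(theta / 2))
        # For a square lattice, twists whose sine and cosine are both rational
        # (e.g. sin = 3/5, cos = 4/5 from the 3-4-5 Pythagorean triple, ~36.87 degrees)
        # land on a commensurate, periodically repeating superlattice; generic
        # angles give a pattern that never exactly repeats.
        print(f"twist {theta_deg:6.2f} deg -> moire period {moire_period:8.2f} * a")
    ```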
    The mathematics and physics of these twisted lattices applies to a wide variety of material properties, says Kenneth Golden, distinguished professor of mathematics. “The underlying theory also holds for materials on a large range of length scales, from nanometers to kilometers, demonstrating just how broad the scope is for potential technological applications of our findings.”
    With a twist
    Before we arrive at these new findings, we’ll need to chart the history of two important concepts: aperiodic geometry and twistronics.
    Aperiodic geometry means patterns that don’t repeat. An example is the Penrose tiling pattern of rhombuses. If you slide the whole pattern in any direction, without rotating it, you will never get it to line up exactly with itself again.

  • The potential of probabilistic computers

    The rise of artificial intelligence (AI) and machine learning (ML) has created a crisis in computing and a significant need for more hardware that is both energy-efficient and scalable. A key step in both AI and ML is making decisions based on incomplete data, the best approach for which is to output a probability for each possible answer. Current classical computers are not able to do that in an energy-efficient way, a limitation that has led to a search for novel approaches to computing. Quantum computers, which operate on qubits, may help meet these challenges, but they are extremely sensitive to their surroundings, must be kept at extremely low temperatures and are still in the early stages of development.
    Kerem Camsari, an assistant professor of electrical and computer engineering (ECE) at UC Santa Barbara, believes that probabilistic computers (p-computers) are the solution. P-computers are powered by probabilistic bits (p-bits), which interact with other p-bits in the same system. Unlike the bits in classical computers, which are in a 0 or a 1 state, or qubits, which can be in more than one state at a time, p-bits fluctuate between positions and operate at room temperature. In an article published in Nature Electronics, Camsari and his collaborators discuss their project that demonstrated the promise of p-computers.
    “We showed that inherently probabilistic computers, built out of p-bits, can outperform state-of-the-art software that has been in development for decades,” said Camsari, who received a Young Investigator Award from the Office of Naval Research earlier this year.
    Camsari’s group collaborated with scientists at the University of Messina in Italy, with Luke Theogarajan, vice chair of UCSB’s ECE Department, and with physics professor John Martinis, who led the team that built the world’s first quantum computer to achieve quantum supremacy. Together the researchers achieved their promising results by using classical hardware to create domain-specific architectures. They developed a unique sparse Ising machine (sIm), a novel computing device used to solve optimization problems and minimize energy consumption.
    Camsari describes the sIm as a collection of probabilistic bits that can be thought of as people, each of whom has only a small set of trusted friends: the “sparse” connections in the machine.
    “The people can make decisions quickly because they each have a small set of trusted friends and they do not have to hear from everyone in an entire network,” he explained. “The process by which these agents reach consensus is similar to that used to solve a hard optimization problem that satisfies many different constraints. Sparse Ising machines allow us to formulate and solve a wide variety of such optimization problems using the same hardware.”
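    In the p-bit literature, each bit i is updated as m_i = sgn(tanh(I_i) − r), with r drawn uniformly from (−1, 1) and the input I_i summing the weighted states of its connected neighbors. The NumPy sketch below renders that update rule on a small, sparse toy Ising problem; it illustrates the general idea rather than the FPGA-based architecture the team actually built.

    ```python
    # Minimal NumPy sketch of a sparse Ising machine built from p-bits.
    # Generic update rule from the p-bit literature, not the team's FPGA design.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8                                           # number of p-bits ("people")

    # Sparse, symmetric couplings: each p-bit talks to a few "trusted friends".
    J = np.zeros((n, n))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0)]:
        J[i, j] = J[j, i] = -1.0                    # antiferromagnetic ring as a toy problem
    h = np.zeros(n)                                 # no local bias

    m = rng.choice([-1, 1], size=n)                 # random initial p-bit states
    beta = 2.0                                      # inverse "temperature"

    for sweep in range(200):
        for i in rng.permutation(n):                # update p-bits one at a time
            I_i = beta * (h[i] + J[i] @ m)          # input from connected neighbors only
            m[i] = 1 if np.tanh(I_i) > rng.uniform(-1, 1) else -1

    energy = -0.5 * m @ J @ m - h @ m               # Ising energy of the final state
    print(m, "energy:", energy)                     # typically the alternating ground state (-8)
    ```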
    The team’s prototype architecture included a field-programmable gate array (FPGA), a powerful piece of hardware that provides much more flexibility than application-specific integrated circuits.

  • Staring at yourself during virtual chats may worsen your mood

    A new study finds that the more a person stares at themself while talking with a partner in an online chat, the more their mood degrades over the course of the conversation. Alcohol use appears to worsen the problem, the researchers found.
    Reported in the journal Clinical Psychological Science, the findings point to a potentially problematic role of online meeting platforms in exacerbating psychological problems like anxiety and depression, the researchers said.
    “We used eye-tracking technology to examine the relationship between mood, alcohol and attentional focus during virtual social interaction,” said Talia Ariss, a University of Illinois Urbana-Champaign doctoral candidate who led the research with U. of I. psychology professor Catharine Fairbairn. “We found that participants who spent more time looking at themselves during the conversation felt worse after the call, even after controlling for pre-interaction negative mood. And those who were under the influence of alcohol spent more time looking at themselves.”
    The findings add to previous studies suggesting that people who focus more on themselves than on external realities — especially during social interactions — may be susceptible to mood disorders, Ariss said.
    “The more self-focused a person is, the more likely they are to report feeling emotions that are consistent with things like anxiety and even depression,” she said.
    “Users of the online video call platform Zoom increased 30-fold during the pandemic — burgeoning from 10 million in December 2019 to 300 million by April 2020,” the researchers wrote. “The pandemic has yielded a surge in levels of depression and anxiety and, given reports of heightened self-awareness and ‘fatigue’ during virtual exchange, some have posited a role for virtual interaction in exacerbating such trends.”
    In the study, participants answered questions about their emotional status before and after the online conversations. They were instructed to talk about what they liked and disliked about living in the local community during the chats, and to discuss their musical preferences. Participants could see themselves and their conversation partners on a split-screen monitor. Some consumed an alcoholic beverage before talking and others drank a nonalcoholic beverage.

  • Estimating tumor-specific total mRNA level predicts cancer outcomes

    Researchers at The University of Texas MD Anderson Cancer Center have developed a new approach to quantify tumor-specific total mRNA levels from patient tumor samples, which contain both cancer and non-cancer cells. Using this technique on tumors from more than 6,500 patients across 15 cancer types, the researchers demonstrated that higher mRNA levels in cancer cells were associated with reduced patient survival.
    The study, published today in Nature Biotechnology, suggests this computational approach could permit large-scale analyses of tumor-specific total mRNA levels from tumor samples, which could serve as a prognostic biomarker for many types of cancers.
    “Single-cell sequencing studies have shown us that total mRNA content in cancer cells is correlated with biological features of the tumor, but it’s not feasible to use single-cell approaches for analyzing large patient cohorts,” said corresponding author Wenyi Wang, Ph.D., professor of Bioinformatics & Computational Biology. “With this study, we propose a novel mathematical deconvolution technique to study this important biological feature of cancer at scale, using widely available bulk tumor sequencing data.”
    Whereas single-cell sequencing approaches can profile thousands of individual cells from a sample, bulk sequencing generates an overall picture of the tumor across a larger number of cells. Because a tumor sample contains a diverse mixture of cancer and non-cancer cells, additional steps are required to isolate the cancer-specific information from bulk sequencing data.
    Deconvolution is a computational technique designed to separate bulk sequencing data into its different components. This study is the first to report a deconvolution approach for quantifying total tumor-specific mRNA levels from bulk sequencing data, providing a scalable complement to single-cell analysis.
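    The basic idea can be shown with a toy example: treat each bulk expression profile as a non-negative mixture of reference cell-type profiles and recover the mixing weights. The sketch below uses SciPy’s non-negative least squares for that generic step; it is not the tumor-specific total mRNA method reported in the paper, which builds further modeling on top of estimates of this kind.

    ```python
    # Toy deconvolution: express a bulk RNA profile as a non-negative mixture of
    # reference cell-type profiles and recover the mixing fractions.
    # Generic illustration, not the method from the Nature Biotechnology paper.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_genes, n_cell_types = 500, 3

    # Reference expression profiles (columns: e.g. cancer, immune, stromal cells).
    references = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_cell_types))

    true_fractions = np.array([0.6, 0.3, 0.1])          # hidden cell-type proportions
    bulk = references @ true_fractions + rng.normal(0, 0.05, n_genes)  # noisy bulk sample

    weights, _ = nnls(references, bulk)                 # non-negative least squares fit
    estimated = weights / weights.sum()                 # normalize to proportions
    print("true:     ", true_fractions)
    print("estimated:", np.round(estimated, 3))
    ```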
    The study was led by Shaolong Cao, Ph.D., former postdoctoral fellow; Jennifer R. Wang, M.D., assistant professor of Head & Neck Surgery; and Shuangxi Ji, Ph.D., postdoctoral fellow in Bioinformatics & Computational Biology, together with Wang.

  • Rubbery camouflage skin exhibits smart and stretchy behaviors

    The skin of cephalopods, such as octopuses, squids and cuttlefish, is stretchy and smart, contributing to these creatures’ ability to sense and respond to their surroundings. A Penn State-led collaboration has harnessed these properties to create an artificial skin that mimics both the elasticity and the neurologic functions of cephalopod skin, with potential applications for neurorobotics, skin prosthetics, artificial organs and more.  
    Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and Biomedical Engineering, the team published its findings on June 1 in the Proceedings of the National Academy of Sciences. 
    Cephalopod skin is a soft organ that can endure complex deformations, such as expanding, contracting, bending and twisting. It also possesses cognitive sense-and-respond functions that enable the skin to sense light, react and camouflage its wearer. While artificial skins with either these physical or these cognitive capabilities have existed previously, according to Yu, until now none has simultaneously exhibited both qualities — the combination needed for advanced, artificially intelligent bioelectronic skin devices.  
    “Although several artificial camouflage skin devices have been recently developed, they lack critical noncentralized neuromorphic processing and cognition capabilities, and materials with such capabilities lack robust mechanical properties,” Yu said. “Our recently developed soft synaptic devices have achieved brain-inspired computing and artificial nervous systems that are sensitive to touch and light that retain these neuromorphic functions when biaxially stretched.”  
    To simultaneously achieve both smartness and stretchability, the researchers constructed synaptic transistors entirely from elastomeric materials. These rubbery semiconductors operate in a similar fashion to neural connections, exchanging critical messages for system-wide needs, impervious to physical changes in the system’s structure. The key to creating a soft skin device with both cognitive and stretching capabilities, according to Yu, was using elastomeric rubbery materials for every component. This approach resulted in a device that can successfully exhibit and maintain neurological synaptic behaviors, such as image sensing and memorization, even when stretched, twisted and poked 30% beyond a natural resting state.  
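    As a loose software analogy for the synaptic behavior described above, and not a model of the elastomeric transistors themselves, a synapse can be treated as a weight that is strengthened (potentiated) by incoming pulses and then slowly relaxes, so the device retains a trace of recent stimulation:

    ```python
    # Loose analogy for synaptic potentiation and decay ("memorization").
    # Not a model of the elastomeric devices; it only sketches the qualitative
    # behavior: pulses strengthen a weight, which then slowly relaxes.
    import numpy as np

    steps = 200
    pulses = np.zeros(steps)
    pulses[20:30] = 1.0            # a long burst of light/voltage pulses
    pulses[120:123] = 1.0          # a shorter burst later on

    weight = 0.0                   # synaptic "conductance"
    potentiation, decay = 0.15, 0.02
    history = []
    for p in pulses:
        weight += potentiation * p * (1.0 - weight)   # each pulse potentiates, saturating at 1
        weight *= (1.0 - decay)                       # gradual relaxation (forgetting)
        history.append(weight)

    print(f"after long burst: {history[30]:.2f}, "
          f"before second burst: {history[119]:.2f}, "
          f"after short burst: {history[123]:.2f}")
    ```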
    “With the recent surge of smart skin devices, implementing neuromorphic functions into these devices opens the door for a future direction toward more powerful biomimetics,” Yu said. “This methodology for implementing cognitive functions into smart skin devices could be extrapolated into many other areas, including neuromorphic computing wearables, artificial organs, soft neurorobotics and skin prosthetics for next-generation intelligent systems.”
    The Office of Naval Research Young Investigator Program and the National Science Foundation supported this work.
    Co-authors include Hyunseok Shim, Seonmin Jang and Shubham Patel, Penn State Department of Engineering Science and Mechanics; Anish Thukral and Bin Kan, University of Houston Department of Mechanical Engineering; Seongsik Jeong, Hyeson Jo and Hai-Jin Kim, Gyeongsang National University School of Mechanical and Aerospace Engineering; Guodan Wei, Tsinghua-Berkeley Shenzhen Institute; and Wei Lan, Lanzhou University School of Physical Science and Technology. 
    Story Source:
    Materials provided by Penn State. Original written by Mary Fetzer. Note: Content may be edited for style and length.