More stories

  • What quantum information and snowflakes have in common, and what we can do about it

    Qubits are a basic building block for quantum computers, but they’re also notoriously fragile — tricky to observe without erasing their information in the process. Now, new research from the University of Colorado Boulder and the National Institute of Standards and Technology (NIST) could be a leap forward for handling qubits with a light touch.
    In the study, a team of physicists demonstrated that they could read out the signals from a type of qubit called a superconducting qubit using laser light, without destroying the qubit in the process.
    The group’s results could be a major step toward building a quantum internet, the researchers say. Such a network would link up dozens or even hundreds of quantum chips, allowing engineers to solve problems that are beyond the reach of even the fastest supercomputers around today. They could also, theoretically, use a similar set of tools to send unbreakable codes over long distances.
    The study, which will appear June 15 in the journal Nature, was led by JILA, a joint research institute between CU Boulder and NIST.
    “Currently, there’s no way to send quantum signals between distant superconducting processors like we send signals between two classical computers,” said Robert Delaney, lead author of the study and a former graduate student at JILA.
    Delaney explained that the traditional bits that run your laptop are pretty limited: They can only take on a value of zero or one, the numbers that underlie most computer programming to date. Qubits, in contrast, can be zeros, ones or, through a property called “superposition,” zeros and ones at the same time.

  • Military cannot rely on AI for strategy or judgment, study suggests

    Using artificial intelligence (AI) for warfare has been the promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues that only so much can be automated, and demonstrates the value of human judgment.
    “All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. “You need human sense-making and to make moral, ethical, and intellectual decisions in an incredibly confusing, fraught, scary situation.”
    AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto Professor Avi Goldfarb wrote in the paper, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in International Security.
    Many policy makers assume human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb state that AI should not be seen as a substitute, but rather a complement to existing human strategy.
    “Machines are good at prediction, but they depend on data and judgment, and the most difficult problems in war are information and strategy,” Lindsay said. “The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability.”
    An example Lindsay and Goldfarb highlight is the Rio Tinto mining company, which uses self-driving trucks to transport materials, reducing costs and risks to human drivers. The data, in the form of traffic patterns and maps, are abundant, predictable, and unbiased, and require little human intervention unless there are road closures or obstacles.
    War, however, usually lacks abundant unbiased data, and judgments about objectives and values are inherently controversial, but that doesn’t mean using AI in war is impossible. The researchers argue AI would be best employed in bureaucratically stabilized environments on a task-by-task basis.
    “All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you’re really dependent on creativity and interpretation,” Lindsay said. “But what we should be looking at is personnel systems, administration, logistics, and repairs.”
    There are also consequences to using AI for both the military and its adversaries, according to the researchers. If humans are the central element to deciding when to use AI in warfare, then military leadership structure and hierarchies could change based on the person in charge of designing and cleaning data systems and making policy decisions. This also means adversaries will aim to compromise both data and judgment since they would largely affect the trajectory of the war. Competing against AI may push adversaries to manipulate or disrupt data to make sound judgment even harder. In effect, human intervention will be even more necessary.
    Yet this is just the start of the argument and of the innovations to come.
    “If AI is automating prediction, that’s making judgment and data really important,” Lindsay said. “We’ve already automated a lot of military action with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we’re automating prediction with AI. So, when are we going to automate judgment, or are there components of judgment that cannot be automated?”
    Until then, though, tactical and strategic decision making by humans continues to be the most important aspect of warfare.

  • Quantum computer programming basics

    For would-be quantum programmers scratching their heads over how to jump into the game as quantum computers proliferate and become publicly accessible, a new beginner’s guide provides a thorough introduction to quantum algorithms and their implementation on existing hardware.
    “Writing quantum algorithms is radically different from writing classical computing programs and requires some understanding of quantum principles and the mathematics behind them,” said Andrey Y. Lokhov, a scientist at Los Alamos National Laboratory and lead author of the recently published guide in ACM Transactions on Quantum Computing. “Our guide helps quantum programmers get started in the field, which is bound to grow as more and more quantum computers with more and more qubits become commonplace.”
    In succinct, stand-alone sections, the guide surveys 20 quantum algorithms, including famous, foundational quantum algorithms such as Grover’s Algorithm for database searching (and much more) and Shor’s Algorithm for factoring integers. Making the real-world connection, the guide then walks programmers through implementing the algorithms on IBM’s publicly available 5-qubit IBMQX4 quantum computer and others. In each case, the authors discuss the results of the implementation and explain differences between the simulator and the actual hardware runs.
    “This article was the result of a rapid-response effort by the Information Science and Technology Institute at Los Alamos, where about 20 Lab staff members self-selected to learn about and implement a standard quantum algorithm on the IBM Q quantum system,” said Stephan Eidenbenz, a senior quantum computing scientist at Los Alamos, a coauthor of the article and director of ISTI when work on it began.
    The goal was to prepare the Los Alamos workforce for the quantum era by guiding those staff members with little or no quantum computing experience all the way through implementation of a quantum algorithm on a real-life quantum computer, Eidenbenz said.
    These staff members, in addition to a few students and well-established quantum experts, make up the long author list of this “crowd-sourced” overview article that has already been heavily cited, Eidenbenz said.
    The first section of the guide covers the basics of quantum computer programming, explaining qubits and qubit systems, the fundamental quantum concepts of superposition and entanglement, and quantum measurements, before tackling the deeper material of unitary transformations and gates, quantum circuits, and quantum algorithms.
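    As a rough illustration of those basics (a generic sketch, not an excerpt from the guide), a qubit can be written as a two-component complex vector, gates as unitary matrices, and measurement probabilities as squared amplitudes. The short NumPy example below applies a Hadamard gate to create a superposition and a CNOT gate to create entanglement:

```python
import numpy as np

# Computational basis state |0> as a complex vector
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: a unitary matrix that puts |0> into an equal superposition
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                            # (|0> + |1>) / sqrt(2)
print("P(0), P(1):", np.abs(state) ** 2)    # Born rule -> [0.5, 0.5]

# Entanglement: CNOT applied to (H|0>) tensor |0> yields a Bell state
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(state, ket0)
print("Bell state amplitudes:", bell)       # ~0.707|00> + 0.707|11>
```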
    The section on the IBM quantum computer covers the set of gates available for algorithms, the actual physical gates implemented, how the qubits are connected and the sources of noise, or errors.
    Another section looks at the various types of quantum algorithms. From there, the guide dives into the 20 selected algorithms, with a problem definition, description and steps for implementing each one on the IBM or, in a few cases, other computers.
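    To give a flavor of what one of those implementations involves (a schematic sketch in plain NumPy, not code targeting the IBM hardware described in the guide), here is Grover’s search over the four basis states of two qubits; with one marked item, a single Grover iteration already returns it with certainty:

```python
import numpy as np

n = 2                         # two qubits -> N = 4 items in the search space
N = 2 ** n
marked = 3                    # hypothetical "winner" index, i.e. the state |11>

# Start in the uniform superposition over all basis states
state = np.ones(N, dtype=complex) / np.sqrt(N)

# Oracle: flip the phase of the marked item only
oracle = np.eye(N, dtype=complex)
oracle[marked, marked] = -1

# Diffusion operator (inversion about the mean): 2|s><s| - I
s = np.ones((N, 1), dtype=complex) / np.sqrt(N)
diffusion = 2 * (s @ s.conj().T) - np.eye(N, dtype=complex)

# One Grover iteration suffices for N = 4 (about (pi/4)*sqrt(N) in general)
state = diffusion @ (oracle @ state)

print(np.round(np.abs(state) ** 2, 3))   # -> [0. 0. 0. 1.]: |11> is found
```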
    Extensive references at the end of the guide will help interested readers go deeper in their explorations of quantum algorithms.
    The work was funded by the Information Science and Technology Institute at Los Alamos National Laboratory through the Laboratory Directed Research and Development program.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.

  • Calculating the 'fingerprints' of molecules with artificial intelligence

    With conventional methods, it is extremely time-consuming to calculate the spectral fingerprint of larger molecules, yet doing so is a prerequisite for correctly interpreting experimentally obtained data. Now, a team at HZB has achieved very good results in significantly less time using self-learning graph neural networks.
    “Macromolecules, but also quantum dots, which often consist of thousands of atoms, can hardly be calculated in advance using conventional methods such as DFT [density functional theory],” says PD Dr. Annika Bande at HZB. With her team, she has now investigated how the computing time can be shortened using methods from artificial intelligence.
    The idea: a computer program from the family of graph neural networks (GNNs) receives small molecules as input and is tasked with determining their spectral responses. In the next step, the GNN program compares the calculated spectra with the known target spectra (from DFT or experiment) and corrects the calculation path accordingly. Round after round, the result improves. The GNN program thus learns on its own how to calculate spectra reliably with the help of known spectra.
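    In machine-learning terms, that round-after-round correction is a standard supervised training loop: the network predicts a spectrum, the prediction is scored against the DFT or experimental reference with a loss function, and the error is propagated back to adjust the network. The PyTorch sketch below is purely illustrative, with made-up dimensions and a plain feed-forward model standing in for the graph neural networks used in the study:

```python
import torch
import torch.nn as nn

# Toy stand-in: molecular feature vectors -> predicted spectra.
# A real model would operate on atoms and their neighbor graph;
# the training loop is the same in spirit.
n_features, n_spectrum_bins = 64, 100        # assumed, illustrative sizes
model = nn.Sequential(
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, n_spectrum_bins),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                        # distance to the reference spectrum

# Random placeholders for molecule descriptors and reference (DFT) spectra
molecules = torch.randn(256, n_features)
reference_spectra = torch.randn(256, n_spectrum_bins)

for epoch in range(10):                           # "round after round"
    predicted = model(molecules)                  # calculate spectra
    loss = loss_fn(predicted, reference_spectra)  # compare with known targets
    optimizer.zero_grad()
    loss.backward()                               # correct the model parameters
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```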
    “We have trained five newer GNNs and found that enormous improvements can be achieved with one of them, the SchNet model: The accuracy increases by 20% and this is done in a fraction of the computation time,” says first author Kanishka Singh. Singh participates in the HEIBRiDS graduate school and is supervised by two experts from different backgrounds: computer science expert Prof. Ulf Leser from Humboldt University Berlin and theoretical chemist Annika Bande.
    “Recently developed GNN frameworks could do even better,” she says. “And the demand is very high. We therefore want to strengthen this line of research and are planning to create a new postdoctoral position for it from summer onwards, as part of the Helmholtz project ‘eXplainable Artificial Intelligence for X-ray Absorption Spectroscopy.’”
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie.

  • Automating renal access in kidney stone surgery using AI-enabled surgical robot

    Percutaneous nephrolithotomy (PCNL) is an efficient, minimally invasive, gold-standard procedure used for removing large kidney stones. Creating an access path from the skin on the back to the kidney, called renal access, is a crucial yet challenging step in PCNL. An inefficiently created renal access can lead to severe complications including massive bleeding, thoracic and bowel injuries, renal pelvis perforation, or even sepsis. It is therefore no surprise that it takes years of training and practice to perform this procedure efficiently. There are two main renal access methods adopted during PCNL: fluoroscopic guidance and ultrasound (US) guidance with or without fluoroscopy. Both approaches deliver similar postoperative outcomes but require experience-based expertise.
    Many novel methods and technologies are being tested and used in clinical practice to bridge this gap in skill requirements. While some offer better imaging guidance, others provide precise percutaneous access. Nonetheless, most techniques are still challenging for beginners. This inspired a research team led by Assistant Professors Kazumi Taguchi and Shuzo Hamamoto, and Chair and Professor Takahiro Yasui from Nagoya City University (NCU) Graduate School of Medical Sciences (Nephro-urology), to ask whether artificial intelligence (AI)-powered robotic devices could offer better guidance than conventional US guidance. Specifically, they wanted to see if the AI-powered device called the Automated Needle Targeting with X-ray (ANT-X), developed by the Singaporean medical start-up NDR Medical Technology, offers better precision in percutaneous renal access along with automated needle trajectory.
    The team conducted a randomized, single-blind, controlled trial comparing their robotic-assisted fluoroscopic-guided (RAF) method with US-guided PCNL. The results of this trial were made available online on May 13, 2022, and published on June 13, 2022 in The Journal of Urology. “This was the first human study comparing RAF with conventional ultrasound guidance for renal access during PCNL, and the first clinical application of the ANT-X,” says Dr. Taguchi.
    The trial was conducted at NCU Hospital between January 2020 and May 2021 with 71 patients — 36 in the RAF group and 35 in the US group. The primary outcome of the study was single puncture success, with stone-free rate (SFR), complication rate, parameters measured during renal access, and fluoroscopy time as secondary outcomes.
    The single puncture success rate was ~34 percent in the US group and 50 percent in the RAF group. The average number of needle punctures was significantly lower in the RAF group (1.82) than in the US group (2.51). In 14.3 percent of US-guided cases, the resident was unable to obtain renal access due to procedural difficulty and a surgeon change was needed; none of the RAF cases faced this issue. The median needle puncture duration was also significantly shorter in the RAF group (5.5 minutes vs. 8.0 minutes). There were no significant differences in the other secondary outcomes. Overall, these results revealed that RAF guidance reduced the mean number of needle punctures by 0.73.
    Multiple renal accesses during PCNL are directly linked to postoperative complications, including decreased renal function. Therefore, the low needle puncture frequency and shorter puncture duration demonstrated by the ANT-X may provide better long-term outcomes for patients. While the actual PCNL was performed by residents in both the RAF and US groups, the renal access in the RAF group was created by a single novice surgeon using the ANT-X. This demonstrates the safety and convenience of the novel robotic device, which could reduce surgeons’ training load and allow more hospitals to offer PCNL procedures.
    Dr. Taguchi outlines the potential advantages of their RAF device, saying, “The ANT-X simplifies a complex procedure like PCNL, making it easier for more doctors to perform it and help more patients in the process. Being an AI-powered robotic technology, this technique may pave the way for automating similar interventional surgeries, which could shorten procedure times, relieve the burden on senior doctors, and perhaps reduce the occurrence of complications.” With such promising results, the ANT-X and other similar robotic-assisted platforms might be the future of percutaneous procedures in urology and other medical fields.
    Story Source:
    Materials provided by Nagoya City University.

  • New, highly tunable composite materials with a twist

    When two sets of circles slide across each other, watch for the patterns that appear. Those patterns, created by two sets of lines offset from each other, are called moiré (pronounced mwar-AY) effects. As optical illusions, moiré patterns create neat simulations of movement. But at the atomic scale, when one sheet of atoms arranged in a lattice is slightly offset from another sheet, these moiré patterns can create some exciting and important physics with interesting and unusual electronic properties.
    Mathematicians at the University of Utah have found that they can design a range of composite materials from moiré patterns created by rotating and stretching one lattice relative to another. The materials’ electrical and other physical properties can change, sometimes quite abruptly, depending on whether the resulting moiré patterns are regularly repeating or non-repeating. The findings are published in Communications Physics.
    The mathematics and physics of these twisted lattices applies to a wide variety of material properties, says Kenneth Golden, distinguished professor of mathematics. “The underlying theory also holds for materials on a large range of length scales, from nanometers to kilometers, demonstrating just how broad the scope is for potential technological applications of our findings.”
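    One way to get a feel for how sensitive these patterns are to the twist is the standard small-angle estimate for the period of a moiré superlattice, L ≈ a / (2 sin(θ/2)), where a is the lattice constant and θ the twist angle. This is a generic textbook relation rather than a calculation from the paper, but it shows how a tiny twist produces a pattern hundreds of lattice spacings across:

```python
import numpy as np

def moire_period(a, theta_deg):
    """Approximate moiré superlattice period for two identical lattices,
    one rotated by theta_deg, using L = a / (2 * sin(theta / 2))."""
    theta = np.radians(theta_deg)
    return a / (2 * np.sin(theta / 2))

a = 0.246  # nm, a graphene-like lattice constant chosen purely for illustration
for angle in (0.5, 1.1, 2.0, 5.0):
    print(f"twist {angle:4.1f} deg -> moiré period ≈ {moire_period(a, angle):6.1f} nm")
```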
    With a twist
    Before we arrive at these new findings, we’ll need to chart the history of two important concepts: aperiodic geometry and twistronics.
    Aperiodic geometry means patterns that don’t repeat. An example is the Penrose tiling pattern of rhombuses. If you draw a box around a part of the pattern and start sliding it in any direction, without rotating it, you’ll never find a part of the pattern that matches it.

  • The potential of probabilistic computers

    The rise of artificial intelligence (AI) and machine learning (ML) has created a crisis in computing and a significant need for more hardware that is both energy-efficient and scalable. A key step in both AI and ML is making decisions based on incomplete data, the best approach for which is to output a probability for each possible answer. Current classical computers are not able to do that in an energy-efficient way, a limitation that has led to a search for novel approaches to computing. Quantum computers, which operate on qubits, may help meet these challenges, but they are extremely sensitive to their surroundings, must be kept at extremely low temperatures and are still in the early stages of development.
    Kerem Camsari, an assistant professor of electrical and computer engineering (ECE) at UC Santa Barbara, believes that probabilistic computers (p-computers) are the solution. P-computers are powered by probabilistic bits (p-bits), which interact with other p-bits in the same system. Unlike the bits in classical computers, which are in a 0 or a 1 state, or qubits, which can be in more than one state at a time, p-bits fluctuate between 0 and 1, and they operate at room temperature. In an article published in Nature Electronics, Camsari and his collaborators discuss their project, which demonstrated the promise of p-computers.
    “We showed that inherently probabilistic computers, built out of p-bits, can outperform state-of-the-art software that has been in development for decades,” said Camsari, who received a Young Investigator Award from the Office of Naval Research earlier this year.
    Camsari’s group collaborated with scientists at the University of Messina in Italy, with Luke Theogarajan, vice chair of UCSB’s ECE Department, and with physics professor John Martinis, who led the team that built the world’s first quantum computer to achieve quantum supremacy. Together, the researchers achieved their promising results by using classical hardware to create domain-specific architectures. They developed a unique sparse Ising machine (sIm), a novel computing device used to solve optimization problems while minimizing energy consumption.
    Camsari describes the sIm as a collection of probabilistic bits that can be thought of as people, each of whom has only a small set of trusted friends: the “sparse” connections in the machine.
    “The people can make decisions quickly because they each have a small set of trusted friends and they do not have to hear from everyone in an entire network,” he explained. “The process by which these agents reach consensus is similar to that used to solve a hard optimization problem that satisfies many different constraints. Sparse Ising machines allow us to formulate and solve a wide variety of such optimization problems using the same hardware.”
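    That consensus-building can be sketched as probabilistic updates on a sparse Ising model: each p-bit looks only at its few “trusted friends,” computes a local field from them, and flips toward lower energy with a probability set by that field. The toy Python below is illustrative only, with made-up couplings and a software simulation standing in for the team’s hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small sparse Ising problem: n spins (p-bits), each coupled to ~3 neighbors.
n = 16
J = np.zeros((n, n))                      # sparse, symmetric couplings
for i in range(n):
    for j in rng.choice(n, size=3, replace=False):
        if i != j:
            J[i, j] = J[j, i] = rng.normal()
h = rng.normal(size=n)                    # local biases

def energy(s):
    return -0.5 * s @ J @ s - h @ s

s = rng.choice([-1, 1], size=n)           # random starting state
beta = 0.1                                # inverse temperature

for step in range(2000):
    i = rng.integers(n)
    local_field = J[i] @ s + h[i]         # input from this p-bit's neighbors only
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
    s[i] = 1 if rng.random() < p_up else -1   # probabilistic "p-bit" update
    beta *= 1.002                         # anneal toward a low-energy solution

print("final energy:", energy(s))
```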
    The team’s prototype architecture included a field-programmable gate array (FPGA), a powerful piece of hardware that provides much more flexibility than application-specific integrated circuits.

  • Staring at yourself during virtual chats may worsen your mood

    A new study finds that the more a person stares at themself while talking with a partner in an online chat, the more their mood degrades over the course of the conversation. Alcohol use appears to worsen the problem, the researchers found.
    Reported in the journal Clinical Psychological Science, the findings point to a potentially problematic role of online meeting platforms in exacerbating psychological problems like anxiety and depression, the researchers said.
    “We used eye-tracking technology to examine the relationship between mood, alcohol and attentional focus during virtual social interaction,” said Talia Ariss, a University of Illinois Urbana-Champaign doctoral candidate who led the research with U. of I. psychology professor Catharine Fairbairn. “We found that participants who spent more time looking at themselves during the conversation felt worse after the call, even after controlling for pre-interaction negative mood. And those who were under the influence of alcohol spent more time looking at themselves.”
    The findings add to previous studies suggesting that people who focus more on themselves than on external realities — especially during social interactions — may be susceptible to mood disorders, Ariss said.
    “The more self-focused a person is, the more likely they are to report feeling emotions that are consistent with things like anxiety and even depression,” she said.
    “Users of the online video call platform Zoom increased 30-fold during the pandemic — burgeoning from 10 million in December 2019 to 300 million by April 2020,” the researchers wrote. “The pandemic has yielded a surge in levels of depression and anxiety and, given reports of heightened self-awareness and ‘fatigue’ during virtual exchange, some have posited a role for virtual interaction in exacerbating such trends.”
    In the study, participants answered questions about their emotional status before and after the online conversations. They were instructed to talk about what they liked and disliked about living in the local community during the chats, and to discuss their musical preferences. Participants could see themselves and their conversation partners on a split-screen monitor. Some consumed an alcoholic beverage before talking and others drank a nonalcoholic beverage.