More stories

  • Powerful new AI can predict people’s attitudes to vaccines

    A powerful new tool in artificial intelligence is able to predict whether someone is willing to be vaccinated against COVID-19.
    The predictive system uses a small set of data from demographics and personal judgments such as aversion to risk or loss.
    The findings point to a new technology that could have broad applications for predicting mental health and could lead to more effective public health campaigns.
    A team led by researchers at the University of Cincinnati and Northwestern University created a predictive model that combines machine learning with an integrated system of mathematical equations describing the lawful patterns in reward and aversion judgment.
    “We used a small number of variables and minimal computational resources to make predictions,” said lead author Nicole Vike, a senior research associate in UC’s College of Engineering and Applied Science.
    “COVID-19 is unlikely to be the last pandemic we see in the next decades. Having a new form of AI for prediction in public health provides a valuable tool that could help prepare hospitals for predicting vaccination rates and consequential infection rates.”
    The study was published in the Journal of Medical Internet Research Public Health and Surveillance.

    Researchers surveyed 3,476 adults across the United States in 2021 during the COVID-19 pandemic. At the time of the survey, the first vaccines had been available for more than a year.
    Respondents provided information such as where they live, income, highest education level completed, ethnicity and access to the internet. The respondents’ demographics mirrored those of the United States based on U.S. Census Bureau figures.
    Participants were asked if they had received either of the available COVID-19 vaccines. About 73% of respondents said they were vaccinated, slightly more than the 70% of the nation’s population that had been vaccinated in 2021.
    Further, they were asked if they routinely followed four recommendations designed to prevent the spread of the virus: wearing a mask, social distancing, washing their hands and not gathering in large groups.
    Participants were asked to rate how much they liked or disliked a randomly sequenced set of 48 pictures on a seven-point scale from -3 to 3. The pictures came from the International Affective Picture System, a large set of emotionally evocative color photographs, in six categories: sports, disasters, cute animals, aggressive animals, nature and food.
    Vike said the goal of this exercise was to quantify mathematical features of people’s judgments as they observe mildly emotional stimuli. Measures from this task include concepts familiar to behavioral economists — or even people who gamble — such as aversion to risk (the point at which someone is willing to accept potential loss for a potential reward) and aversion to loss, the willingness to avoid risk by, for example, obtaining insurance.

    “The framework by which we judge what is rewarding or aversive is fundamental to how we make medical decisions,” said co-senior author Hans Breiter, a professor of computer science at UC. “A seminal paper in 2017 hypothesized the existence of a standard model of the mind. Using a small set of variables from mathematical psychology to predict medical behavior would support such a model. The work of this collaborative team has provided such support and argues that the mind is a set of equations akin to what is used in particle physics.”
    The judgment variables and demographics were compared between respondents who were vaccinated and those who were not. Three machine learning approaches were used to test how well the respondents’ judgment, demographics and attitudes toward COVID-19 precautions predicted whether they would get the vaccine.
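    The article does not spell out the exact modeling pipeline, but a minimal sketch of that kind of comparison, using off-the-shelf scikit-learn classifiers and entirely hypothetical column and file names, could look like this:

        # Minimal sketch (not the authors' code): compare three classifiers that predict
        # vaccine uptake from a small set of demographic and judgment variables.
        import pandas as pd
        from sklearn.model_selection import cross_val_score
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

        df = pd.read_csv("survey.csv")  # hypothetical survey export
        judgment_cols = [f"judgment_{i}" for i in range(1, 16)]  # e.g., risk- and loss-aversion measures
        demographic_cols = ["age", "income", "education", "ethnicity_code", "internet_access"]
        X = df[demographic_cols + judgment_cols]
        y = df["vaccinated"]  # 1 = reported receiving a COVID-19 vaccine, 0 = did not

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
            "gradient boosting": GradientBoostingClassifier(random_state=0),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
            print(f"{name}: mean balanced accuracy = {scores.mean():.3f}")
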
    The study demonstrates that artificial intelligence can make accurate predictions about human attitudes with surprisingly little data or reliance on expensive and time-consuming clinical assessments.
    “We found that a small set of demographic variables and 15 judgment variables predict vaccine uptake with moderate to high accuracy and high precision,” the study said. “In an age of big-data machine learning approaches, the current work provides an argument for using fewer but more interpretable variables.”
    “The study is anti-big-data,” said co-senior author Aggelos Katsaggelos, an endowed professor of electrical engineering and computer science at Northwestern University. “It can work very simply. It doesn’t need super-computation, it’s inexpensive and can be applied with anyone who has a smartphone. We refer to it as computational cognition AI. It is likely you will be seeing other applications regarding alterations in judgment in the very near future.”

  • Bendable energy storage materials by cool science

    Imagine being able to wear your smartphone on your wrist, not as a watch, but literally as a flexible band that wraps around your arm. How about clothes that charge your gadgets just by wearing them?
    Recently, a collaborative team led by Professor Jin Kon Kim and Dr. Keon-Woo Kim of Pohang University of Science and Technology (POSTECH), Professor Taesung Kim and M.S./Ph.D. student Hyunho Seok of Sungkyunkwan University (SKKU), and Professor Hong Chul Moon of University of Seoul (UOS) has brought us a step closer to achieving this reality. This research work was published in Advanced Materials.
    Mesoporous metal oxides (MMOs) are characterized by pores ranging from 2 to 50 nanometers (nm) in size. Due to their extensive surface area, MMOs have various applications, such as high-performance energy storage, efficient catalysis, semiconductors, and sensors. However, integrating MMOs into wearable and flexible devices remains a great challenge, because plastic substrates cannot maintain their integrity at the elevated temperatures (350°C or above) at which MMOs are typically synthesized.
    The research team tackled this problem by using the synergistic effect of heat and plasma to synthesize various MMOs, including the renowned high-performance energy storage material vanadium oxide (V2O5) as well as V6O13, TiO2, Nb2O5 and WO3, on flexible substrates at much lower temperatures (150-200°C). The highly reactive chemical species generated by the plasma supply energy that would otherwise have to come from high temperature. The fabricated devices could be bent thousands of times without losing energy storage performance.
    Professor Jin Kon Kim, who led the research, said: “We’re on the brink of a revolution in wearable tech.”
    “Our breakthrough could lead to gadgets that are not only more flexible but also much more adaptable to our daily needs.”
    This research was supported by the National Creative Initiative Research Program, the Basic Research in Science & Engineering Program, and the Nano & Material Technology Development Program.

  • Brain-inspired wireless system to gather data from salt-sized sensors

    Tiny chips may equal a big breakthrough for a team of scientists led by Brown University engineers.
    Writing in Nature Electronics, the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt.
    The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. The sensors detect specific events as spikes and then transmit that data wirelessly in real time using radio waves, saving both energy and bandwidth.
    “Our brain works in a very sparse way,” said Jihun Lee, a postdoctoral researcher at Brown and study lead author. “Neurons do not fire all the time. They compress data and fire sparsely so that they are very efficient. We are mimicking that structure here in our wireless telecommunication approach. The sensors would not be sending out data all the time — they’d just be sending relevant data as needed as short bursts of electrical spikes, and they would be able to do so independently of the other sensors and without coordinating with a central receiver. By doing this, we would manage to save a lot of energy and avoid flooding our central receiver hub with less meaningful data.”
    This radiofrequency transmission scheme also makes the system scalable and tackles a common problem with current sensor communication networks: they all need to be perfectly synced to work well.
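    As a purely illustrative sketch of that event-driven idea (not the team’s actual radio protocol; every name and threshold below is invented for the example), each sensor stays silent until its signal crosses a threshold and only then emits a time-stamped spike for the receiver to log:

        # Toy sketch of event-driven ("spiking") telemetry: sensors transmit only when
        # an event is detected instead of streaming raw samples continuously.
        import random

        class SpikingSensor:
            def __init__(self, sensor_id, threshold=0.9):
                self.sensor_id = sensor_id
                self.threshold = threshold

            def sample(self, t):
                reading = random.random()      # stand-in for a physiological signal
                if reading > self.threshold:   # only "relevant" events become spikes
                    return {"id": self.sensor_id, "t": t}
                return None                    # otherwise the sensor stays silent

        sensors = [SpikingSensor(i) for i in range(1000)]
        spikes = []
        for t in range(100):
            for s in sensors:
                event = s.sample(t)
                if event is not None:          # receiver logs spikes asynchronously
                    spikes.append(event)

        total = len(sensors) * 100
        print(f"{len(spikes)} spikes sent instead of {total} raw samples "
              f"({100 * len(spikes) / total:.1f}% of the channel used)")
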
    The researchers say the work marks a significant step forward in large-scale wireless sensor technology and may one day help shape how scientists collect and interpret information from these little silicon devices, especially since electronic sensors have become ubiquitous as a result of modern technology.
    “We live in a world of sensors,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “They are all over the place. They’re certainly in our automobiles, they are in so many places of work and increasingly getting into our homes. The most demanding environment for these sensors will always be inside the human body.”
    That’s why the researchers believe the system can help lay the foundation for the next generation of implantable and wearable biomedical sensors. There is a growing need in medicine for microdevices that are efficient, unobtrusive and unnoticeable but that also operate as part of large ensembles to map physiological activity across an entire area of interest.

    “This is a milestone in terms of actually developing this type of spike-based wireless microsensor,” Lee said. “If we continue to use conventional methods, we cannot collect the high channel data these applications will require in these kinds of next-generation systems.”
    The events the sensors identify and transmit can be specific occurrences such as changes in the environment they are monitoring, including temperature fluctuations or the presence of certain substances.
    The sensors are able to use as little energy as they do because external transceivers supply wireless power to the sensors as they transmit their data — meaning they just need to be within range of the energy waves sent out by the transceiver to get a charge. This ability to operate without needing to be plugged into a power source or battery makes them convenient and versatile for use in many different situations.
    The team designed and simulated the complex electronics on a computer and has worked through several fabrication iterations to create the sensors. The work builds on previous research from Nurmikko’s lab at Brown that introduced a new kind of neural interface system called “neurograins.” This system used a coordinated network of tiny wireless sensors to record and stimulate brain activity.
    “These chips are pretty sophisticated as miniature microelectronic devices, and it took us a while to get here,” said Nurmikko, who is also affiliated with Brown’s Carney Institute for Brain Science. “The amount of work and effort that is required in customizing the several different functions in manipulating the electronic nature of these sensors — that being basically squeezed to a fraction of a millimeter space of silicon — is not trivial.”
    The researchers demonstrated the efficiency of their system as well as just how much it could potentially be scaled up. They tested the system using 78 sensors in the lab and found they were able to collect and send data with few errors, even when the sensors were transmitting at different times. Through simulations, they were able to show how to decode data collected from the brains of primates using about 8,000 hypothetically implanted sensors.
    The researchers say next steps include optimizing the system for reduced power consumption and exploring broader applications beyond neurotechnology.
    “The current work provides a methodology we can further build on,” Lee said.

  • Artificial nanofluidic synapses can store computational memory

    Memory, or the ability to store information in a readily accessible way, is an essential operation in computers and human brains. A key difference is that while brain information processing involves performing computations directly on stored data, computers shuttle data back and forth between a memory unit and a central processing unit (CPU). This inefficient separation (the von Neumann bottleneck) contributes to the rising energy cost of computers.
    Since the 1970s, researchers have been working on the concept of a memristor (memory resistor): an electronic component that can, like a synapse, both compute and store data. But Aleksandra Radenovic in the Laboratory of Nanoscale Biology (LBEN) in EPFL’s School of Engineering set her sights on something even more ambitious: a functional nanofluidic memristive device that relies on ions, rather than electrons and their oppositely charged counterparts (holes). Such an approach would more closely mimic the brain’s own — much more energy efficient — way of processing information.
    “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living organisms,” Radenovic says.
    “We have fabricated a new nanofluidic device for memory applications that is significantly more scalable and much more performant than previous attempts,” says LBEN postdoctoral researcher Théo Emmerich. “This has enabled us, for the very first time, to connect two such ‘artificial synapses’, paving the way for the design of brain-inspired liquid hardware.”
    The research has recently been published in Nature Electronics.
    Just add water
    Memristors can switch between two conductance states — on and off — through manipulation of an applied voltage. While electronic memristors rely on electrons and holes to process digital information, LBEN’s memristor can take advantage of a range of different ions. For their study, the researchers immersed their device in an electrolyte water solution containing potassium ions, but others could be used, including sodium and calcium.

    “We can tune the memory of our device by changing the ions we use, which affects how it switches from on to off, or how much memory it stores,” Emmerich explains.
    The device was fabricated on a chip at EPFL’s Center of MicroNanoTechnology by creating a nanopore at the center of a silicon nitride membrane. The researchers added palladium and graphite layers to create nano-channels for ions. As a current flows through the chip, the ions percolate through the channels and converge at the pore, where their pressure creates a blister between the chip surface and the graphite. As the graphite layer is forced up by the blister, the device becomes more conductive, switching its memory state to ‘on’. Since the graphite layer stays lifted, even without a current, the device ‘remembers’ its previous state. A negative voltage puts the layers back into contact, resetting the memory to the ‘off’ state.
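    The switching behaviour described above can be pictured as a simple hysteretic state machine. The toy model below only illustrates that description, not the device physics; the voltage thresholds and conductance values are arbitrary placeholders:

        # Toy model (illustration only) of the hysteretic switching described above: a
        # positive voltage above a set threshold latches the device "on", the state
        # persists at zero bias, and a sufficiently negative voltage resets it to "off".
        class NanofluidicMemristor:
            def __init__(self, v_set=0.5, v_reset=-0.5, g_on=1.0, g_off=0.01):
                self.v_set, self.v_reset = v_set, v_reset
                self.g_on, self.g_off = g_on, g_off
                self.state = "off"

            def apply(self, voltage):
                if voltage >= self.v_set:
                    self.state = "on"    # blister forms, graphite layer lifts
                elif voltage <= self.v_reset:
                    self.state = "off"   # layers pushed back into contact
                # between the thresholds the previous state is remembered
                return self.g_on if self.state == "on" else self.g_off

        device = NanofluidicMemristor()
        for v in [0.0, 0.6, 0.0, 0.2, -0.6, 0.0]:
            print(f"V = {v:+.1f} V -> conductance = {device.apply(v):.2f} ({device.state})")
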
    “Ion channels in the brain undergo structural changes inside a synapse, so this also mimics biology,” says LBEN PhD student Yunfei Teng, who worked on fabricating the devices — dubbed highly asymmetric channels (HACs) in reference to the shape of the ion flow toward the central pores.
    LBEN PhD student Nathan Ronceray adds that the team’s observation of the HAC’s memory action in real time is also a novel achievement in the field. “Because we were dealing with a completely new memory phenomenon, we built a microscope to watch it in action.”
    By collaborating with Riccardo Chiesa and Edoardo Lopriore of the Laboratory of Nanoscale Electronics and Structures, led by Andras Kis, the researchers succeeded in connecting two HACs with an electrode to form a logic circuit based on ion flow. This achievement represents the first demonstration of digital logic operations based on synapse-like ionic devices. But the researchers aren’t stopping there: their next goal is to connect a network of HACs with water channels to create fully liquid circuits. In addition to providing an in-built cooling mechanism, the use of water would facilitate the development of bio-compatible devices with potential applications in brain-computer interfaces or neuromedicine.

  • Researchers develop deep learning model to predict breast cancer

    Researchers have developed a new, interpretable artificial intelligence (AI) model to predict 5-year breast cancer risk from mammograms, according to a new study published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    One in 8 women, or approximately 13% of the female population in the U.S., will develop invasive breast cancer in their lifetime and 1 in 39 women (3%) will die from the disease, according to the American Cancer Society. Breast cancer screening with mammography, for many women, is the best way to find breast cancer early when treatment is most effective. Having regularly scheduled mammograms can significantly lower the risk of dying from breast cancer. However, it remains unclear how to precisely predict which women will develop breast cancer through screening alone.
    Mirai, a state-of-the-art, deep learning-based algorithm, has demonstrated proficiency as a tool to help predict breast cancer. But because little is known about its reasoning process, the algorithm carries a risk of overreliance by radiologists and of incorrect diagnoses.
    “Mirai is a black box — a very large and complex neural network, similar in construction to ChatGPT — and no one knew how it made its decisions,” said the study’s lead author, Jon Donnelly, B.S., a Ph.D. student in the Department of Computer Science at Duke University in Durham, North Carolina. “We developed an interpretable AI method that allows us to predict breast cancer from mammograms 1 to 5 years in advance. AsymMirai is much simpler and much easier to understand than Mirai.”
    For the study, Donnelly and colleagues in the Department of Computer Science and Department of Radiology compared their newly developed mammography-based deep learning model called AsymMirai to Mirai’s 1- to 5-year breast cancer risk predictions. AsymMirai was built on the “front end” deep learning portion of Mirai, while replacing the rest of that complicated method with an interpretable module: local bilateral dissimilarity, which looks at tissue differences between the left and right breasts.
    “Previously, differences between the left and right breast tissue were used only to help detect cancer, not to predict it in advance,” Donnelly said. “We discovered that Mirai uses comparisons between the left and right sides, which is how we were able to design a substantially simpler network that also performs comparisons between the sides.”
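    AsymMirai’s actual module operates on Mirai’s learned feature maps, but the underlying idea of local bilateral dissimilarity can be sketched very simply. The example below is a rough conceptual illustration, not the published implementation; the patch size, distance measure and array shapes are assumptions:

        # Conceptual sketch of a "local bilateral dissimilarity" score: mirror the right
        # image, compare it with the left patch by patch, and report the largest local
        # difference. Illustration only, not the AsymMirai implementation.
        import numpy as np

        def local_bilateral_dissimilarity(left, right, patch=32):
            """left, right: 2D arrays (e.g., feature maps or preprocessed mammograms)."""
            mirrored = np.fliplr(right)  # roughly align left/right anatomy
            h, w = left.shape
            scores = []
            for i in range(0, h - patch + 1, patch):
                for j in range(0, w - patch + 1, patch):
                    a = left[i:i + patch, j:j + patch]
                    b = mirrored[i:i + patch, j:j + patch]
                    scores.append(np.mean((a - b) ** 2))  # per-patch squared difference
            return max(scores)  # the most asymmetric local region

        rng = np.random.default_rng(0)  # toy "images" just to run the function
        left_img, right_img = rng.random((256, 256)), rng.random((256, 256))
        print(f"dissimilarity score: {local_bilateral_dissimilarity(left_img, right_img):.4f}")
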
    For the study, the researchers compared 210,067 mammograms from 81,824 patients in the EMory BrEast imaging Dataset (EMBED) from January 2013 to December 2020 using both Mirai and AsymMirai models. The researchers found that their simplified deep learning model performed almost as well as the state-of-the-art Mirai for 1- to 5-year breast cancer risk prediction.
    The results also supported the clinical importance of breast asymmetry and, as a result, highlight the potential of bilateral dissimilarity as a future imaging marker for breast cancer risk.
    Since the reasoning behind AsymMirai’s predictions is easy to understand, it could be a valuable adjunct to human radiologists in breast cancer diagnoses and risk prediction, Donnelly said.
    “We can, with surprisingly high accuracy, predict whether a woman will develop cancer in the next 1 to 5 years based solely on localized differences between her left and right breast tissue,” he said. “This could have public impact because it could, in the not-too-distant future, affect how often women receive mammograms.”

  • Backyard insect inspires invisibility devices, next gen tech

    Leafhoppers, a common backyard insect, secrete and coat themselves in tiny mysterious particles that could provide both the inspiration and the instructions for next-generation technology, according to a new study led by Penn State researchers. In a first, the team precisely replicated the complex geometry of these particles, called brochosomes, and gained a better understanding of how they absorb both visible and ultraviolet light.
    This could allow the development of bioinspired optical materials with possible applications ranging from invisible cloaking devices to coatings to more efficiently harvest solar energy, said Tak-Sing Wong, professor of mechanical engineering and biomedical engineering. Wong led the study, which was published today (March 18) in the Proceedings of the National Academy of Sciences of the United States of America (PNAS).
    The unique, tiny particles have an unusual soccer ball-like geometry with cavities, and their exact purpose for the insects has been something of a mystery to scientists since the 1950s. In 2017, Wong led the Penn State research team that was the first to create a basic, synthetic version of brochosomes in an effort to better understand their function.
    “This discovery could be very useful for technological innovation,” said Lin Wang, postdoctoral scholar in mechanical engineering and the lead author of the study. “With a new strategy to regulate light reflection on a surface, we might be able to hide the thermal signatures of humans or machines. Perhaps someday people could develop a thermal invisibility cloak based on the tricks used by leafhoppers. Our work shows how understanding nature can help us develop modern technologies.”
    Wang went on to explain that even though scientists have known about brochosome particles for three-quarters of a century, making them in a lab has been a challenge due to the complexity of the particle’s geometry.
    “It has been unclear why the leafhoppers produce particles with such complex structures,” Wang said. “We managed to make these brochosomes using a high-tech 3D-printing method in the lab. We found that these lab-made particles can reduce light reflection by up to 94%. This is a big discovery because it’s the first time we’ve seen nature do something like this, where it controls light in such a specific way using hollow particles.”
    Theories on why leafhoppers coat themselves in brochosome armor have ranged from keeping the insects free of contaminants and water to providing a superhero-like invisibility cloak. However, a new understanding of the particles’ geometry raises a strong possibility that their main purpose is to act as a cloak that helps the insects avoid predators, according to Wong, the study’s corresponding author.

    The researchers have found that the size of the holes in the brochosome that give it a hollow, soccer ball-like appearance is extremely important. The size is consistent across leafhopper species, no matter the size of the insect’s body. The brochosomes are roughly 600 nanometers in diameter — about half the size of a single bacterium — and the brochosome pores are around 200 nanometers.
    “That makes us ask a question,” Wong said. “Why this consistency? What is the secret of having brochosomes of about 600 nanometers with about 200-nanometer pores? Does that serve some purpose?”
    The researchers found the unique design of brochosomes serves a dual purpose — absorbing ultraviolet (UV) light, which reduces visibility to predators with UV vision, such as birds and reptiles, and scattering visible light, creating an anti-reflective shield against potential threats. The size of the holes is perfect for absorbing light at the ultraviolet frequency.
    This potentially could lead to a variety of applications for humans using synthetic brochosomes, such as more efficient solar energy harvesting systems, coatings that protect pharmaceuticals from light-induced damage, advanced sunscreens for better skin protection against sun damage and even cloaking devices, researchers said. To test this, the team first had to make synthetic brochosomes, a major challenge in and of itself.
    In their 2017 study, the researchers mimicked some features of brochosomes, particularly the dimples and their distribution, using synthetic materials. This allowed them to begin understanding the optical properties. However, they were only able to make something that looked like brochosomes, not an exact replica.
    “This is the first time we are able to make the exact geometry of the natural brochosome,” Wong said, explaining that the researchers were able to create scaled synthetic replicas of the brochosome structures by using advanced 3D-printing technology.

    They printed a scaled-up version that was 20,000 nanometers in size, or roughly one-fifth the diameter of a human hair. The researchers precisely replicated the shape and morphology, as well as the number and placement of pores using 3D printing, to produce still-small faux brochosomes that were large enough to characterize optically.
    They used a Micro-Fourier transform infrared (FTIR) spectrometer to examine how the brochosomes interacted with infrared light of different wavelengths, helping the researchers understand how the structures manipulate the light.
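    One likely reason for probing the enlarged replicas with infrared light is that electromagnetic scattering scales with structure size: a particle enlarged by some factor interacts with wavelengths longer by roughly the same factor, material dispersion aside. A back-of-the-envelope illustration of that scaling (not a calculation from the study):

        # Illustration only: structures scaled up by a factor s interact with wavelengths
        # roughly s times longer (ignoring material dispersion), so infrared measurements
        # on the enlarged replicas stand in for UV/visible behaviour of natural brochosomes.
        natural_diameter_nm = 600
        replica_diameter_nm = 20_000
        scale = replica_diameter_nm / natural_diameter_nm  # about 33x

        for label, wavelength_nm in [("UV", 350), ("visible", 550)]:
            scaled_um = wavelength_nm * scale / 1000
            print(f"{label} {wavelength_nm} nm on the natural particle corresponds to "
                  f"~{scaled_um:.0f} µm on the replica")
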
    Next, the researchers said they plan to improve the synthetic brochosome fabrication to enable production at a scale closer to the size of natural brochosomes. They will also explore additional applications for synthetic brochosomes, such as information encryption, where brochosome-like structures could be used as part of an encryption system where data is only visible under certain light wavelengths.
    Wang noted that their brochosome work demonstrates the value of a biomimetic research approach, in which scientists look to nature for inspiration.
    “Nature has been a good teacher for scientists to develop novel advanced materials,” Wang said. “In this study, we have just focused on one insect species, but there are many more amazing insects out there that are waiting for material scientists to study, and they may be able to help us solve various engineering problems. They are not just bugs; they are inspirations.”
    Along with Wong and Wang from Penn State, other researchers on the study include Sheng Shen, professor of mechanical engineering, and Zhuo Li, doctoral candidate in mechanical engineering, both at Carnegie Mellon University, who contributed to the simulations in this study. Wang and Li contributed equally to this work, for which the researchers have filed a U.S. provisional patent. The Office of Naval Research supported this research.

  • Two artificial intelligences talk to each other

    Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.
    Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, and cannot communicate the task to their conspecifics.
    A sub-field of artificial intelligence (AI) — Natural language processing — seeks to recreate this human faculty, with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.
    “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, let alone explaining it to another artificial intelligence so that it can reproduce it,” explains Alexandre Pouget, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine.
    A model brain
    The researcher and his team have succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training. “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
    In the first stage of the experiment, the neuroscientists trained this network to simulate Wernicke’s area, the part of our brain that enables us to perceive and interpret language. In the second stage, the network was trained to reproduce Broca’s area, which, under the influence of Wernicke’s area, is responsible for producing and articulating words. The entire process was carried out on conventional laptop computers. Written instructions in English were then transmitted to the AI.
    For example: pointing to the location — left or right — where a stimulus is perceived; responding in the opposite direction of a stimulus; or, more complex still, indicating the brighter of two visual stimuli with a slight difference in contrast. The scientists then evaluated the results of the model, which simulated the intention of moving, or in this case pointing. “Once these tasks had been learned, the network was able to describe them to a second network — a copy of the first — so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.
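    In spirit, the architecture pairs a pretrained sentence encoder with a small task network whose output depends on the embedded instruction. The sketch below is only a conceptual illustration of that pairing, not the published model: the encoder name, layer sizes and toy task are assumptions, and the small network would still need to be trained.

        # Conceptual sketch (not the published model): a pretrained sentence encoder turns
        # a written instruction into an embedding that conditions a small "sensorimotor"
        # network, which then chooses an action (e.g., point left or point right).
        import torch
        import torch.nn as nn
        from sentence_transformers import SentenceTransformer

        encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small stand-in for S-Bert

        class SensorimotorNet(nn.Module):
            def __init__(self, instr_dim=384, stim_dim=2, n_actions=2):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(instr_dim + stim_dim, 128), nn.ReLU(),
                    nn.Linear(128, n_actions),
                )

            def forward(self, instruction_embedding, stimulus):
                return self.net(torch.cat([instruction_embedding, stimulus], dim=-1))

        instruction = "Point to the side where the stimulus appears."
        embedding = torch.tensor(encoder.encode(instruction)).unsqueeze(0)
        stimulus = torch.tensor([[1.0, 0.0]])  # toy stimulus on the left
        logits = SensorimotorNet()(embedding, stimulus)  # untrained, so output is arbitrary
        print(logits)
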
    For future humanoids
    This model opens new horizons for understanding the interaction between language and behaviour. It is particularly promising for the robotics sector, where the development of technologies that enable machines to talk to each other is a key issue. “The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other,” conclude the two researchers.

  • Where quantum computers can score

    The travelling salesman problem is considered a prime example of a combinatorial optimisation problem. Now a Berlin team led by theoretical physicist Prof. Dr. Jens Eisert of Freie Universität Berlin and HZB has shown that a certain class of such problems can actually be solved better and much faster with quantum computers than with conventional methods.
    Quantum computers use so-called qubits, which are not either zero or one as in conventional logic circuits, but can take on any value in between. These qubits are realised by highly cooled atoms, ions or superconducting circuits, and it is still physically very complex to build a quantum computer with many qubits. However, mathematical methods can already be used to explore what fault-tolerant quantum computers could achieve in the future. “There are a lot of myths about it, and sometimes a certain amount of hot air and hype. But we have approached the issue rigorously, using mathematical methods, and delivered solid results on the subject. Above all, we have clarified in what sense there can be any advantages at all,” says Prof. Dr. Jens Eisert, who heads a joint research group at Freie Universität Berlin and Helmholtz-Zentrum Berlin.
    The well-known problem of the travelling salesman serves as a prime example: A traveller has to visit a number of cities and then return to his home town. Which is the shortest route? Although this problem is easy to understand, it becomes increasingly complex as the number of cities increases and computation time explodes. The travelling salesman problem stands for a group of optimisation problems that are of enormous economic importance, whether they involve railway networks, logistics or resource optimisation. Good enough solutions can be found using approximation methods.
    The team led by Jens Eisert and his colleague Jean-Pierre Seifert has now used purely analytical methods to evaluate how a quantum computer with qubits could solve this class of problems. A classic thought experiment with pen and paper and a lot of expertise. “We simply assume, regardless of the physical realisation, that there are enough qubits and look at the possibilities of performing computing operations with them,” explains Vincent Ulitzsch, a PhD student at the Technical University of Berlin. In doing so, they unveiled similarities to a well-known problem in cryptography, i.e. the encryption of data. “We realised that we could use the Shor algorithm to solve a subclass of these optimisation problems,” says Ulitzsch. This means that the computing time no longer “explodes” with the number of cities (exponentially, as 2^N), but only increases polynomially, i.e. as N^x, where x is a constant. The solution obtained in this way is also qualitatively much better than the approximate solution using the conventional algorithm.
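    To make that difference in growth rates concrete, here is a small illustrative comparison (the exponent x = 3 is just an example value, not a figure from the paper):

        # Illustration only: how exponential (2**N) and polynomial (N**x) step counts
        # diverge as the problem size N grows.
        x = 3
        for n in (10, 20, 40, 80):
            print(f"N = {n:3d}: 2^N = {2**n:>26,d}    N^{x} = {n**x:>8,d}")
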
    “We have shown that for a specific but very important and practically relevant class of combinatorial optimisation problems, quantum computers have a fundamental advantage over classical computers for certain instances of the problem,” says Eisert.