More stories

  •

    Researchers make a quantum computing leap with a magnetic twist

    Quantum computing could revolutionize our world. For specific and crucial tasks, it promises to be exponentially faster than the zero-or-one binary technology that underlies today’s machines, from supercomputers in laboratories to smartphones in our pockets. But developing quantum computers hinges on building a stable network of qubits — or quantum bits — to store information, access it and perform computations.
    Yet the qubit platforms unveiled to date have a common problem: They tend to be delicate and vulnerable to outside disturbances. Even a stray photon can cause trouble. Developing fault-tolerant qubits — which would be immune to external perturbations — could be the ultimate solution to this challenge.
    A team led by scientists and engineers at the University of Washington has announced a significant advancement in this quest. In a pair of papers published June 14 in Nature and June 22 in Science, they report that, in experiments with flakes of semiconductor materials — each only a single layer of atoms thick — they detected signatures of “fractional quantum anomalous Hall” (FQAH) states. The team’s discoveries mark a first and promising step in constructing a type of fault-tolerant qubit because FQAH states can host anyons — strange “quasiparticles” that have only a fraction of an electron’s charge. Some types of anyons can be used to make what are called “topologically protected” qubits, which are stable against any small, local disturbances.
    “This really establishes a new paradigm for studying quantum physics with fractional excitations in the future,” said Xiaodong Xu, the lead researcher behind these discoveries, who is also the Boeing Distinguished Professor of Physics and a professor of materials science and engineering at the UW.
    FQAH states are related to the fractional quantum Hall state, an exotic phase of matter that exists in two-dimensional systems. In these states, electrical conductivity is constrained to precise fractions of a constant known as the conductance quantum. But fractional quantum Hall systems typically require massive magnetic fields to keep them stable, making them impractical for applications in quantum computing. The FQAH state has no such requirement — it is stable even “at zero magnetic field,” according to the team.
    Hosting such an exotic phase of matter required the researchers to build an artificial lattice with exotic properties. They stacked two atomically thin flakes of the semiconductor material molybdenum ditelluride (MoTe2) at small, mutual “twist” angles relative to one another. This configuration formed a synthetic “honeycomb lattice” for electrons. When researchers cooled the stacked slices to a few degrees above absolute zero, an intrinsic magnetism arose in the system. The intrinsic magnetism takes the place of the strong magnetic field typically required for the fractional quantum Hall state. Using lasers as probes, the researchers detected signatures of the FQAH effect, a major step forward in unlocking the power of anyons for quantum computing.

    The team — which also includes scientists at the University of Hong Kong, the National Institute for Materials Science in Japan, Boston College and the Massachusetts Institute of Technology — envisions their system as a powerful platform to develop a deeper understanding of anyons, which have very different properties from everyday particles like electrons. Anyons are quasiparticles — or particle-like “excitations” — that can act as fractions of an electron. In future work with their experimental system, the researchers hope to discover an even more exotic version of this type of quasiparticle: “non-Abelian” anyons, which could be used as topological qubits. Wrapping — or “braiding” — the non-Abelian anyons around each other can generate an entangled quantum state. In this quantum state, information is essentially “spread out” over the entire system and resistant to local disturbances — forming the basis of topological qubits and a major advancement over the capabilities of current quantum computers.
    “This type of topological qubit would be fundamentally different from those that can be created now,” said UW physics doctoral student Eric Anderson, who is lead author of the Science paper and co-lead author of the Nature paper. “The strange behavior of non-Abelian anyons would make them much more robust as a quantum computing platform.”
    Three key properties, all of which existed simultaneously in the researchers’ experimental setup, allowed FQAH states to emerge:
    • Magnetism: Though MoTe2 is not a magnetic material, when the researchers loaded the system with positive charges, a “spontaneous spin order” — a form of magnetism called ferromagnetism — emerged.
    • Topology: Electrical charges within the system have “twisted bands,” similar to a Möbius strip, which helps make the system topological.
    • Interactions: The charges within the experimental system interact strongly enough to stabilize the FQAH state.
    The team hopes that, using their approach, non-Abelian anyons are awaiting discovery.
    “The observed signatures of the fractional quantum anomalous Hall effect are inspiring,” said UW physics doctoral student Jiaqi Cai, co-lead author on the Nature paper and co-author of the Science paper. “The fruitful quantum states in the system can be a laboratory-on-a-chip for discovering new physics in two dimensions, and also new devices for quantum applications.”
    “Our work provides clear evidence of the long-sought FQAH states,” said Xu, who is also a member of the Molecular Engineering and Sciences Institute, the Institute for Nano-Engineered Systems and the Clean Energy Institute, all at UW. “We are currently working on electrical transport measurements, which could provide direct and unambiguous evidence of fractional excitations at zero magnetic field.”
    The team believes that, with their approach, investigating and manipulating these unusual FQAH states can become commonplace — accelerating the quantum computing journey.
    Additional co-authors on the papers are William Holtzmann and Yinong Zhang in the UW Department of Physics; Di Xiao, Chong Wang, Xiaowei Zhang, Xiaoyu Liu and Ting Cao in the UW Department of Materials Science & Engineering; Feng-Ren Fan and Wang Yao at the University of Hong Kong and the Joint Institute of Theoretical and Computational Physics at Hong Kong; Takashi Taniguchi and Kenji Watanabe from the National Institute for Materials Science in Japan; Ying Ran of Boston College; and Liang Fu at MIT. The research was funded by the U.S. Department of Energy, the Air Force Office of Scientific Research, the National Science Foundation, the Research Grants Council of Hong Kong, the Croucher Foundation, the Tencent Foundation, the Japan Society for the Promotion of Science and the University of Washington.

  •

    How secure are voice authentication systems really?

    Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.
    Voice authentication — which allows companies to verify the identity of their clients via a supposedly unique “voiceprint” — has increasingly been used in remote banking, call centers and other security-critical scenarios.
    “When enrolling in voice authentication, you are asked to repeat a certain phrase in your own voice. The system then extracts a unique vocal signature (voiceprint) from this provided phrase and stores it on a server,” said Andre Kassis, a Computer Security and Privacy PhD candidate and the lead author of a study detailing the research.
    “For future authentication attempts, you are asked to repeat a different phrase and the features extracted from it are compared to the voiceprint you have saved in the system to determine whether access should be granted.”
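    As an illustration of that enroll-then-verify flow, here is a minimal sketch. It assumes a placeholder extract_embedding function standing in for a real trained speaker-embedding model, compares voiceprints by cosine similarity and uses an arbitrary acceptance threshold; deployed systems differ in all of these details and add the spoofing countermeasures discussed below.

```python
import numpy as np

def extract_embedding(audio_samples: np.ndarray) -> np.ndarray:
    """Placeholder for a real speaker-embedding model (normally a trained neural encoder).

    A fixed random projection is used here purely so the sketch runs end to end.
    """
    rng = np.random.default_rng(0)                      # fixed, untrained "weights"
    projection = rng.standard_normal((audio_samples.size, 128))
    vec = audio_samples @ projection
    return vec / np.linalg.norm(vec)                    # unit-length embedding

def enroll(audio_samples: np.ndarray) -> np.ndarray:
    """Extract and store the 'voiceprint' from the enrollment phrase."""
    return extract_embedding(audio_samples)

def verify(voiceprint: np.ndarray, audio_samples: np.ndarray, threshold: float = 0.8) -> bool:
    """Grant access if the new sample's embedding is close enough to the stored voiceprint."""
    candidate = extract_embedding(audio_samples)
    similarity = float(np.dot(voiceprint, candidate))   # cosine similarity of unit vectors
    return similarity >= threshold

# Toy usage: enroll on one recording, then attempt verification with a different one.
enrolled = enroll(np.random.default_rng(1).standard_normal(16000))
print(verify(enrolled, np.random.default_rng(2).standard_normal(16000)))
```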
    After the concept of voiceprints was introduced, malicious actors quickly realized they could use machine learning-enabled “deepfake” software to generate convincing copies of a victim’s voice using as little as five minutes of recorded audio.
    In response, developers introduced “spoofing countermeasures” — checks that could examine a speech sample and determine whether it was created by a human or a machine.
    The Waterloo researchers have developed a method that evades spoofing countermeasures and can fool most voice authentication systems within six attempts. They identified the markers in deepfake audio that give away its computer-generated origin and wrote a program that removes them, making the audio indistinguishable from authentic recordings.
    In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.
    Kassis contends that while voice authentication is obviously better than no additional security, the existing spoofing countermeasures are critically flawed.
    “The only way to create a secure system is to think like an attacker. If you don’t, then you’re just waiting to be attacked,” Kassis said.
    Kassis’ supervisor, computer science professor Urs Hengartner, added: “By demonstrating the insecurity of voice authentication, we hope that companies relying on voice authentication as their only authentication factor will consider deploying additional or stronger authentication measures.”

  •

    What math can teach us about standing up to bullies

    In a time of income inequality and ruthless politics, people with outsized power or an unrelenting willingness to browbeat others often seem to come out ahead.
    New research from Dartmouth, however, shows that being uncooperative can help people on the weaker side of the power dynamic achieve a more equal outcome — and even inflict some loss on their abusive counterpart.
    The findings provide a tool based in game theory — the field of mathematics focused on optimizing competitive strategies — that could be applied to help equalize the balance of power in labor negotiations or international relations and could even be used to integrate cooperation into interconnected artificial intelligence systems such as driverless cars.
    Published in the latest issue of the journal PNAS Nexus, the study takes a fresh look at what are known in game theory as “zero-determinant strategies” developed by renowned scientists William Press, now at the University of Texas at Austin, and the late Freeman Dyson at the Institute for Advanced Study in Princeton, New Jersey.
    Zero-determinant strategies dictate that “extortionists” control situations to their advantage by becoming less and less cooperative — though just cooperative enough to keep the other party engaged — and by never being the first to concede when there’s a stalemate. Theoretically, they will always outperform their opponent by demanding and receiving a larger share of what’s at stake.
    But the Dartmouth paper uses mathematical models of interactions to uncover an “Achilles heel” to these seemingly uncrackable scenarios, said senior author Feng Fu, an associate professor of mathematics. Fu and first author Xingru Chen, who received her Ph.D. in mathematics from Dartmouth in 2021, discovered an “unbending strategy” in which resistance to being steamrolled not only causes an extortionist to ultimately lose more than their opponent but can result in a more equal outcome as the overbearing party compromises in a scramble to get the best payoff.

    “Unbending players who choose not to be extorted can resist by refusing to fully cooperate. They also give up part of their own payoff, but the extortioner loses even more,” said Chen, who is now an assistant professor at the Beijing University of Posts and Telecommunications.
    “Our work shows that when an extortioner is faced with an unbending player, their best response is to offer a fair split, thereby guaranteeing an equal payoff for both parties,” she said. “In other words, fairness and cooperation can be cultivated and enforced by unbending players.”
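    The mechanism can be sketched in a short simulation. The code below plays the well-known Press-Dyson extortion strategy with extortion factor 3 (cooperation probabilities 11/13, 1/2, 7/26 and 0 after the four possible joint outcomes) against a deliberately simplified opponent who cooperates with a fixed probability. This is not the paper’s unbending strategy, only an illustration of the linear payoff relation the study exploits: as the opponent withholds cooperation, both payoffs fall, but the extortioner’s falls roughly three times faster.

```python
import random

# Iterated prisoner's dilemma payoffs (my payoff, their payoff) with T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Press-Dyson extortionate strategy with extortion factor chi = 3: probability of
# cooperating next round, given (my last move, their last move).
EXTORTIONER = {("C", "C"): 11 / 13, ("C", "D"): 1 / 2,
               ("D", "C"): 7 / 26, ("D", "D"): 0.0}

def average_payoffs(opponent_coop_prob: float, rounds: int = 200_000, seed: int = 0):
    """Long-run average payoffs when the extortioner meets an opponent who
    cooperates with a fixed probability, regardless of history."""
    rng = random.Random(seed)
    my_move, their_move = "C", "C"                      # arbitrary opening state
    my_total = their_total = 0.0
    for _ in range(rounds):
        p_coop = EXTORTIONER[(my_move, their_move)]
        my_move = "C" if rng.random() < p_coop else "D"
        their_move = "C" if rng.random() < opponent_coop_prob else "D"
        mine, theirs = PAYOFF[(my_move, their_move)]
        my_total += mine
        their_total += theirs
    return my_total / rounds, their_total / rounds

for q in (1.0, 0.7, 0.4):
    ext, opp = average_payoffs(q)
    print(f"opponent cooperates {q:.0%} of the time: extortioner {ext:.2f}, opponent {opp:.2f}")
```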
    These scenarios frequently play out in the real world, Fu said. Labor relations provide a poignant model. A large corporation can strong-arm suppliers and producers such as farmworkers to accept lower prices for their effort by threatening to replace them and cut them off from a lucrative market. But a strike or protest can turn the balance of power back toward the workers’ favor and result in more fairness and cooperation, such as when a labor union wins some concessions from an employer.
    While the power dynamic in these scenarios is never equal, Fu said, his and Chen’s work shows that unbending players can reap benefits by defecting from time to time and sabotaging what extortioners are truly after — the highest payoff for themselves.
    “The practical insight from our work is for weaker parties to be unbending and resist being the first to compromise, thereby transforming the interaction into an ultimatum game in which extortioners are incentivized to be fairer and more cooperative to avoid ‘lose-lose’ situations,” Fu said.

    “Consider the dynamics of power between dominant entities such as Donald Trump and the lack of unbending from the Republican Party, or, on the other hand, the military and political resistance to Russia’s invasion of Ukraine that has helped counteract incredible asymmetry,” he said. “These results can be applied to real-world situations, from social equity and fair pay to developing systems that promote cooperation among AI agents, such as autonomous driving.”
    Chen and Fu’s paper expands the theoretical understanding of zero-determinant interactions while also outlining how the outsized power of extortioners can be checked, said mathematician Christian Hilbe, leader of the Dynamics of Social Behavior research group at the Max Planck Institute for Evolutionary Biology in Germany.
    “Among the technical contributions, they stress that even extortioners can be outperformed in some games. I don’t think that has been fully appreciated by the community before,” said Hilbe, who was not involved in the study but is familiar with it. “Among the conceptual insights, I like the idea of unbending strategies, behaviors that encourage an extortionate player to eventually settle at a fairer outcome.”
    Behavioral research involving human participants has shown that extortioners may constitute a significant portion of our everyday interactions, said Hilbe, who published a 2016 paper in the journal PLOS ONE reporting just that. He also co-authored a 2014 study in Nature Communications that found people playing against a computerized opponent strongly resisted when the computer engaged in threatening conduct, even when it reduced their own payout.
    “The empirical evidence to date suggests that people do engage in these extortionate behaviors, especially in asymmetric situations, and that the extorted party often tries to resist it, which is then costly to both parties,” Hilbe said.

  •

    Mathematicians solve long-standing problem

    Making history with 42 digits: Scientists at Paderborn University and KU Leuven have unlocked a decades-old mystery of mathematics with the so-called ninth Dedekind number. Experts worldwide had been searching for the value since 1991. The Paderborn scientists arrived at the exact sequence of digits with the help of the Noctua supercomputer located there. The results will be presented in September at the International Workshop on Boolean Functions and their Applications (BFA) in Norway.
    What started as a master’s thesis project by Lennart Van Hirtum, then a computer science student at KU Leuven and now a research associate at the University of Paderborn, has become a huge success. The scientists join an illustrious group with their work: Earlier numbers in the series were found by mathematician Richard Dedekind himself when he defined the problem in 1897, and later by greats of early computer science such as Randolph Church and Morgan Ward. “For 32 years, the calculation of D(9) was an open challenge, and it was questionable whether it would ever be possible to calculate this number at all,” Van Hirtum says.
    The previous number in the Dedekind sequence, the 8th Dedekind number, was found in 1991 using a Cray 2, the most powerful supercomputer at the time. “It therefore seemed conceivable to us that it should be possible by now to calculate the 9th number on a large supercomputer,” says Van Hirtum, describing the motivation for the ambitious project, which he initially implemented jointly with the supervisors of his master’s thesis at KU Leuven.
    Grains of sand, chess and supercomputers
    The main subject of Dedekind numbers is so-called monotone Boolean functions. Van Hirtum explains, “Basically, you can think of a monotone Boolean function in two, three, and infinite dimensions as a game with an n-dimensional cube. You balance the cube on one corner and then color each of the remaining corners either white or red. There is only one rule: you must never place a white corner above a red one. This creates a kind of vertical red-white intersection. The object of the game is to count how many different cuts there are. Their number is what is defined as the Dedekind number. Even if it doesn’t seem like it, the numbers quickly become gigantic in the process: the 8th Dedekind number already has 23 digits.”
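    To make that cube-coloring definition concrete, here is a minimal brute-force sketch (not the team’s code, and nothing like the P-coefficient method described below): it enumerates every Boolean function of n variables and keeps the monotone ones, reproducing the first few Dedekind numbers 2, 3, 6, 20 and 168. The approach becomes hopeless long before D(9), which is exactly why the specialized method and hardware described next were needed.

```python
from itertools import product

def dedekind(n: int) -> int:
    """Count monotone Boolean functions of n variables by brute force.

    A function is monotone if flipping any input from 0 to 1 never flips the
    output from 1 to 0. Only feasible for tiny n: there are 2**(2**n) candidates.
    """
    points = list(product((0, 1), repeat=n))            # the 2**n corners of the cube
    # Pairs of corners (i, j) with points[i] <= points[j] componentwise;
    # monotonicity must hold exactly on these pairs.
    comparable = [(i, j)
                  for i, x in enumerate(points)
                  for j, y in enumerate(points)
                  if all(a <= b for a, b in zip(x, y))]
    count = 0
    for truth_table in product((0, 1), repeat=len(points)):
        if all(truth_table[i] <= truth_table[j] for i, j in comparable):
            count += 1
    return count

if __name__ == "__main__":
    for n in range(5):
        print(f"D({n}) = {dedekind(n)}")                # expected: 2, 3, 6, 20, 168
```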
    Comparably large – but incomparably easier to calculate – numbers are known from a legend concerning the invention of the game of chess. “According to this legend, the inventor of the chess game asked the king for only a few grains of rice on each square of the chess board as a reward: one grain on the first square, two grains on the second, four on the third, and twice as many on each of the following squares. The king quickly realized that this request was impossible to fulfill, because so much rice does not exist in the whole world. The number of grains of rice on the complete board would have 20 digits – an unimaginable amount, but still less than D(8). When you realize these orders of magnitude, it is obvious that both an efficient computational method and a very fast computer would be needed to find D(9),” Van Hirtum said.
    Milestone: Years become months
    To calculate D(9), the scientists used a technique developed by master’s thesis advisor Patrick De Causmaecker known as the P-coefficient formula. It provides a way to calculate Dedekind numbers not by counting, but by a very large sum. This allows D(8) to be decoded in just eight minutes on a normal laptop. But, “What takes eight minutes for D(8) becomes hundreds of thousands of years for D(9). Even if you used a large supercomputer exclusively for this task, it would still take many years to complete the calculation,” Van Hirtum points out.
    The main problem is that the number of terms in this formula grows incredibly fast. “In our case, by exploiting symmetries in the formula, we were able to reduce the number of terms to ‘only’ 5.5×10^18 – an enormous amount. By comparison, the number of grains of sand on Earth is about 7.5×10^18, which is nothing to sneeze at, but for a modern supercomputer, 5.5×10^18 operations are quite manageable,” the computer scientist said.
    The problem: calculating these terms on normal processors is slow, and GPUs – currently the fastest hardware accelerator technology for many AI applications – are not efficient for this algorithm. The solution: application-specific hardware built from highly specialized and parallel arithmetic units – so-called FPGAs (field programmable gate arrays). Van Hirtum developed an initial prototype for the hardware accelerator and began looking for a supercomputer that had the necessary FPGA cards. In the process, he became aware of the Noctua 2 computer at the Paderborn Center for Parallel Computing (PC2) at the University of Paderborn, which has one of the world’s most powerful FPGA systems.
    Prof. Dr. Christian Plessl, head of PC2, explains: “When Lennart Van Hirtum and Patrick De Causmaecker contacted us, it was immediately clear to us that we wanted to support this moonshot project. Solving hard combinatorial problems with FPGAs is a promising field of application and Noctua 2 is one of the few supercomputers worldwide with which the experiment is feasible at all. The extreme reliability and stability requirements also pose a challenge and test for our infrastructure. The FPGA expert consulting team worked closely with Lennart to adapt and optimize the application for our environment.”
    After several years of development, the program ran on the supercomputer for about five months. And then the time had come: on March 8, the scientists found the 9th Dedekind number: 286386577668298411128469151667598498812366.
    Today, three years after the start of the Dedekind project, Van Hirtum is working as a fellow of the NHR Graduate School at the Paderborn Center for Parallel Computing to develop the next generation of hardware tools in his PhD. The NHR (National High Performance Computing) Graduate School is the joint graduate school of the NHR centers. He will report on his extraordinary success together with Patrick De Causmaecker on June 27 at 2 p.m. in Lecture Hall O2 of the University of Paderborn. The interested public is cordially invited.

  •

    Act now to prevent uncontrolled rise in carbon footprint of computational science

    Cambridge scientists have set out principles for how computational science — which powers discoveries from unveiling the mysteries of the universe to developing treatments to fight cancer to improving our understanding of the human genome, but can have a substantial carbon footprint — can be made more environmentally sustainable.
    Writing in Nature Computational Science, researchers from the Department of Public Health and Primary Care at the University of Cambridge argue that the scientific community needs to act now if it is to prevent a potentially uncontrolled rise in the carbon footprint of computational science as data science and algorithms increase in usage.
    Dr Loïc Lannelongue, who is a research associate in biomedical data science and a postdoctoral associate at Jesus College, Cambridge, said: “Science has transformed our understanding of the world around us and has led to great benefits to society. But this has come with a not-insignificant — and not always well understood — impact on the environment. As scientists — as with people working in every sector — it’s important that we do what we can to reduce the carbon footprint of our work to ensure that the benefits of our discoveries are not outweighed by their environmental costs.”
    Recent studies have begun to explore the environmental impacts of scientific research, with an initial focus on scientific conferences and experimental laboratories. For example, the 2019 Fall Meeting of the American Geophysical Union was estimated to emit 80,000 tons of CO2e* (tCO2e), equivalent to the average weekly emissions of the city of Edinburgh, UK. The annual carbon footprint of a typical life science laboratory has been estimated to be around 20 tCO2e.
    But there is one aspect of research that often gets overlooked — and which can have a substantial environmental impact: high performance and cloud computing.
    In 2020, the Information and Communication Technologies sector was estimated to have made up between 1.8% and 2.8% of global greenhouse gas emissions — more than aviation (1.9%). In addition to the environmental effects of electricity usage, manufacturing and disposal of hardware, there are also concerns around data centres’ water usage and land footprint.

    Professor Michael Inouye said: “While the environmental impact of experimental ‘wet’ labs is more immediately obvious, the impact of algorithms is less clear and often underestimated. While new hardware, lower-energy data centres and more efficient high performance computing systems can help reduce their impact, the increasing ubiquity of artificial intelligence and data science more generally means their carbon footprint could grow exponentially in coming years if we don’t act now.”
    To help address this issue, the team has developed GREENER (Governance, Responsibility, Estimation, Energy and embodied impacts, New collaborations, Education and Research), a set of principles to allow the computational science community to lead the way in sustainable research practices, maximising computational science’s benefit to both humanity and the environment.
    Governance and Responsibility — Everyone involved in computational science has a role to play in making the field more sustainable: individual and institutional responsibility is a necessary step to ensure transparency and reduction of greenhouse gas emissions.
    For example, institutions themselves can be key to managing and expanding centralised data infrastructures, and in ensuring that procurement decisions take into account both the manufacturing and operational footprint of hardware purchases. IT teams in high performance computing (HPC) centres can play a key role, both in terms of training and helping scientists monitor the carbon footprint of their work. Principal Investigators can encourage their teams to think about this issue and give access to suitable training. Funding bodies can influence researchers by requiring estimates of carbon footprints to be included in funding applications.
    Estimate and report the energy consumption of algorithms — Estimating and monitoring the carbon footprint of computations identifies inefficiencies and opportunities for improvement.

    User-level metrics are crucial to understanding environmental impacts and promoting personal responsibility. The financial cost of running computations is often negligible, particularly in academia, and scientists may have the impression of unlimited and inconsequential computing capacity. Quantifying the carbon footprint of individual projects helps raise awareness of the true costs of research.
    Tackling Energy and embodied impacts through New collaborations — Minimising carbon intensity — that is, the carbon footprint of producing electricity — is one of the most immediately impactful ways to reduce greenhouse gas emissions. This could involve relocating computations to low-carbon settings and countries, but this needs to be done with equity in mind. Carbon intensities can differ by as much as three orders of magnitude between the top and bottom performing high-income countries (from 0.10 gCO2e/kWh in Iceland to 770 gCO2e/kWh in Australia).
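    As a rough illustration of the kind of user-level estimate advocated here, the sketch below multiplies runtime, an assumed hardware power draw, a data-centre overhead factor (PUE) and the grid carbon intensity. The per-core and per-gigabyte power figures and the PUE are illustrative assumptions rather than numbers from the paper; the two carbon intensities are the Iceland and Australia values quoted above.

```python
def carbon_footprint_kg(runtime_hours: float,
                        n_cores: int,
                        power_per_core_watts: float,
                        memory_gb: float,
                        power_per_gb_watts: float,
                        pue: float,
                        carbon_intensity_g_per_kwh: float) -> float:
    """Back-of-the-envelope carbon footprint of a computing job, in kg CO2e."""
    power_watts = n_cores * power_per_core_watts + memory_gb * power_per_gb_watts
    energy_kwh = runtime_hours * power_watts * pue / 1000.0
    return energy_kwh * carbon_intensity_g_per_kwh / 1000.0

# A week-long job on 16 cores with 64 GB of RAM (all hardware figures are
# illustrative assumptions, not measurements from the paper).
job = dict(runtime_hours=7 * 24, n_cores=16, power_per_core_watts=12.0,
           memory_gb=64, power_per_gb_watts=0.37, pue=1.6)

for place, ci in (("Iceland", 0.10), ("Australia", 770.0)):   # gCO2e/kWh, from the text
    footprint = carbon_footprint_kg(carbon_intensity_g_per_kwh=ci, **job)
    print(f"{place}: {footprint:.3f} kg CO2e")
```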
    The footprint of user devices is also a factor: one estimate found that almost three-quarters (72%) of the energy footprint of streaming a video to a laptop is from the laptop, with 23% used in transmission and a mere 5% at the data centre.
    Another key consideration is data storage. The carbon footprint of storing data depends on numerous factors, but the life cycle footprint of storing one terabyte of data for a year is of the order of 10 kg CO2e. This issue is exacerbated by the duplication of such datasets in order for each institution, and sometimes each research group, to have a copy. Large (hyperscale) data centres are expected to be more energy efficient, but they may also encourage unnecessary increases in the scale of computing (the ‘rebound effect’).
    Education and Research — Education is essential to raise awareness of the issues with different stakeholders. Integrating sustainability into computational training courses is a tangible first step toward reducing carbon footprints. Investing in research that will catalyse innovation in the field of environmentally sustainable computational science is a crucial role for funders and institutions to play.
    Recent studies found that the most widely-used programming languages in research, such as R and Python, tend to be the least energy efficient ones, highlighting the importance of having trained Research Software Engineers within research groups to ensure that the algorithms used are efficiently implemented. There is also scope to use current tools more efficiently by better understanding and monitoring how coding choices impact carbon footprints.
    Dr Lannelongue said: “Computational scientists have a real opportunity to lead the way in sustainability, but this is going to involve a change in our culture and the ways we work. There will need to be more transparency, more awareness, better training and resources, and improved policies.
    “Cooperation, open science, and equitable access to low-carbon computing facilities will also be crucial. We need to make sure that sustainable solutions work for everyone, as they frequently have the least benefit for populations, often in low- and middle-income countries, who suffer the most from climate change.”
    Professor Inouye added: “Everyone in the field — from funders to journals to institutions down to individuals — plays an important role and can, themselves, make a positive impact. We have an immense opportunity to make a change, but the clock is ticking.”
    The research was a collaboration with major stakeholders including Health Data Research UK, EMBL-EBI, Wellcome and UK Research and Innovation (UKRI).
    *CO2e, or CO2-equivalent, summarises the global warming impacts of a range of greenhouse gases and is the standard metric for carbon footprints, although its accuracy is sometimes debated.

  •

    ‘Toggle switch’ can help quantum computers cut through the noise

    What good is a powerful computer if you can’t read its output? Or readily reprogram it to do different jobs? People who design quantum computers face these challenges, and a new device may make them easier to solve.
    The device, introduced by a team of scientists at the National Institute of Standards and Technology (NIST), includes two superconducting quantum bits, or qubits, which are a quantum computer’s analogue to the logic bits in a classical computer’s processing chip. The heart of this new strategy relies on a “toggle switch” device that connects the qubits to a circuit called a “readout resonator” that can read the output of the qubits’ calculations.
    This toggle switch can be flipped into different states to adjust the strength of the connections between the qubits and the readout resonator. When toggled off, all three elements are isolated from each other. When the switch is toggled on to connect the two qubits, they can interact and perform calculations. Once the calculations are complete, the toggle switch can connect either of the qubits to the readout resonator to retrieve the results.
    Having a programmable toggle switch goes a long way toward reducing noise, a common problem in quantum computer circuits that makes it difficult for qubits to make calculations and show their results clearly.
    “The goal is to keep the qubits happy so that they can calculate without distractions, while still being able to read them out when we want to,” said Ray Simmonds, a NIST physicist and one of the paper’s authors. “This device architecture helps protect the qubits and promises to improve our ability to make the high-fidelity measurements required to build quantum information processors out of qubits.”
    The team, which also includes scientists from the University of Massachusetts Lowell, the University of Colorado Boulder and Raytheon BBN Technologies, describes its results in a paper published today in Nature Physics.
    Quantum computers, which are still at a nascent stage of development, would harness the bizarre properties of quantum mechanics to do jobs that even our most powerful classical computers find intractable, such as aiding in the development of new drugs by performing sophisticated simulations of chemical interactions.

    However, quantum computer designers still confront many problems. One of these is that quantum circuits are kicked around by external or even internal noise, which arises from defects in the materials used to make the computers. This noise is essentially random behavior that can create errors in qubit calculations.
    Present-day qubits are inherently noisy by themselves, but that’s not the only problem. Many quantum computer designs have what is called a static architecture, where each qubit in the processor is physically connected to its neighbors and to its readout resonator. The fabricated wiring that connects qubits together and to their readout can expose them to even more noise.
    Such static architectures have another disadvantage: They cannot be reprogrammed easily. A static architecture’s qubits could do a few related jobs, but for the computer to perform a wider range of tasks, it would need to swap in a different processor design with a different qubit organization or layout. (Imagine changing the chip in your laptop every time you needed to use a different piece of software, and then consider that the chip needs to be kept a smidgen above absolute zero, and you get why this might prove inconvenient.)
    The team’s programmable toggle switch sidesteps both of these problems. First, it prevents circuit noise from creeping into the system through the readout resonator and prevents the qubits from having a conversation with each other when they are supposed to be quiet.
    “This cuts down on a key source of noise in a quantum computer,” Simmonds said.

    Second, the opening and closing of the switches between elements are controlled with a train of microwave pulses sent from a distance, rather than through a static architecture’s physical connections. Integrating more of these toggle switches could be the basis of a more easily programmable quantum computer. The microwave pulses can also set the order and sequence of logic operations, meaning a chip built with many of the team’s toggle switches could be instructed to perform any number of tasks.
    “This makes the chip programmable,” Simmonds said. “Rather than having a completely fixed architecture on the chip, you can make changes via software.”
    One last benefit is that the toggle switch can also turn on the measurement of both qubits at the same time. This ability to ask both qubits to reveal themselves as a couple is important for tracking down quantum computational errors.
    The qubits in this demonstration, as well as the toggle switch and the readout circuit, were all made of superconducting components that conduct electricity without resistance and must be operated at very cold temperatures. The toggle switch itself is made from a superconducting quantum interference device, or “SQUID,” which is very sensitive to magnetic fields passing through its loop. Driving a microwave current through a nearby antenna loop can induce interactions between the qubits and the readout resonator when needed.
    At this point, the team has only worked with two qubits and a single readout resonator, but Simmonds said they are preparing a design with three qubits and a readout resonator, and they have plans to add more qubits and resonators as well. Further research could offer insights into how to string many of these devices together, potentially offering a way to construct a powerful quantum computer with enough qubits to solve the kinds of problems that, for now, are insurmountable.

  •

    Generative AI models are encoding biases and negative stereotypes in their users

    In the space of a few months, generative AI models such as ChatGPT, Google’s Bard and Midjourney have been adopted by more and more people in a variety of professional and personal ways. But a growing body of research shows that they are encoding biases and negative stereotypes in their users, as well as mass-generating and spreading seemingly accurate but nonsensical information. Worryingly, marginalised groups are disproportionately affected by the fabrication of this nonsensical information.
    In addition, mass fabrication has the potential to influence human belief as the models that drive it become increasingly common, populating the World Wide Web. Not only do people grab information from the web, but much of the primary training material used by AI models comes from here too. In other words, a continuous feedback loop evolves in which biases and nonsense become repeated and accepted again and again.
    These findings — and a plea for psychologists and machine learning experts to work together very swiftly to assess the scale of the issue and devise solutions — are published today in a thought-provoking Perspective in the leading international journal Science, co-authored by Abeba Birhane, who is an adjunct assistant professor in Trinity’s School of Computer Science and Statistics (working with Trinity’s Complex Software Lab) and Senior Fellow in Trustworthy AI at the Mozilla Foundation.
    Prof Birhane said: “People regularly communicate uncertainty through phrases such as ‘I think,’ response delays, corrections, and speech disfluencies. By contrast, generative models give confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. As a result, this can cause greater distortion compared with human inputs and lead to people accepting answers as factually accurate. These issues are exacerbated by financial and liability interests incentivising companies to anthropomorphise generative models as intelligent, sentient, empathetic, or even childlike.”
    One such example provided in the Perspective focuses on how statistical regularities in a model assigned Black defendants higher risk scores. Court judges who learned these patterns may then change their sentencing practices in order to match the predictions of the algorithms. This basic mechanism of statistical learning could lead a judge to believe that Black individuals are more likely to reoffend — even if use of the system is stopped by regulations like those recently adopted in California.
    Of particular concern is the fact that biases or fabricated information are not easy to shake once they have been accepted by an individual. Children are at especially high risk, as they are more vulnerable to belief distortion: they are more likely to anthropomorphise technology and are more easily influenced.
    What is needed is swift, detailed analysis that measures the impact of generative models on human beliefs and biases.
    Prof Birhane said: “Studies and subsequent interventions would be most effectively focused on impacts on the marginalised populations who are disproportionately affected by both fabrications and negative stereotypes in model outputs. Additionally, resources are needed for the education of the public, policymakers, and interdisciplinary scientists to give realistically informed views of how generative AI models work and to correct existing misinformation and hype surrounding these new technologies.”

  •

    Perovskite solar cells set new record for power conversion efficiency

    Perovskite solar cells designed by a team of scientists from the National University of Singapore (NUS) have attained a world record efficiency of 24.35% with an active area of 1 cm2. This achievement paves the way for cheaper, more efficient and durable solar cells.
    To facilitate consistent comparisons and benchmarking of different solar cell technologies, the photovoltaic (PV) community uses a standard size of at least 1 cm2 to report the efficiency of one-sun solar cells in the “Solar Cell Efficiency Tables.” Prior to the record-breaking feat by the NUS team, the best 1-cm2 perovskite solar cell recorded a power conversion efficiency of 23.7%. This ground-breaking achievement in maximising power generation from next-generation renewable energy sources will be crucial to securing the world’s energy future.
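    For a sense of what these percentages mean in absolute terms, here is a quick back-of-the-envelope calculation. It assumes the standard one-sun test irradiance of 100 mW per square centimetre and simply converts efficiency and active area into electrical output power.

```python
# Power conversion efficiency relates electrical output to incident light power.
# Standard one-sun testing assumes an irradiance of 1000 W/m^2, i.e. 100 mW/cm^2.
IRRADIANCE_MW_PER_CM2 = 100.0

def output_power_mw(efficiency: float, area_cm2: float = 1.0) -> float:
    """Electrical output (mW) of a cell with the given power conversion efficiency."""
    return efficiency * IRRADIANCE_MW_PER_CM2 * area_cm2

print(output_power_mw(0.2435))   # NUS record cell: 24.35 mW from a 1 cm^2 active area
print(output_power_mw(0.2370))   # previous best 1-cm2 perovskite cell: 23.7 mW
```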
    Perovskites are a class of materials that exhibit high light absorption efficiency and ease of fabrication, making them promising for solar cell applications. In the past decade, perovskite solar cell technology has achieved several breakthroughs, and the technology continues to evolve.
    “To address this challenge, we undertook a dedicated effort to develop innovative and scalable technologies aimed at improving the efficiency of 1-cm2 perovskite solar cells. Our objective was to bridge the efficiency gap and unlock the full potential of larger-sized devices,” said Assistant Professor Hou Yi, leader of the NUS research team comprising scientists from the Department of Chemical and Biomolecular Engineering under the NUS College of Design and Engineering as well as the Solar Energy Research Institute of Singapore (SERIS), a university-level research institute in NUS.
    He added, “Building on more than 14 years of perovskite solar cell development, this work represents the first instance of an inverted-structure perovskite solar cell exceeding the normal structured perovskite solar cells with an active area of 1 cm2, and this is mainly attributed to the innovative charge transporting material incorporated in our perovskite solar cells. Since inverted-structure perovskite solar cells always offer excellent stability and scalability, achieving a higher efficiency than for normal-structured perovskite cells represents a significant milestone in commercialising this cutting-edge technology.”
    This milestone achievement by Asst Prof Hou Yi and his team has been included in the Solar Cell Efficiency Tables (Version 62) in 2023. Published by scientific journal Progress in Photovoltaics on 21 June 2023, these consolidated tables show an extensive listing of the highest independently confirmed efficiencies for solar cells and modules.

    Low-cost, efficient and stable solar cell technology
    The record-breaking accomplishment was made by successfully incorporating a novel interface material into perovskite solar cells.
    “The introduction of this novel interface material brings forth a range of advantageous attributes, including excellent optical, electrical, and chemical properties. These properties work synergistically to enhance both the efficiency and longevity of perovskite solar cells, paving the way for significant improvements in their performance and durability,” explained team member Dr Li Jia, postdoctoral researcher at SERIS.
    The promising results reported by the NUS team mark a pivotal milestone in advancing the commercialisation of a low-cost, efficient, stable perovskite solar cell technology. “Our findings set the stage for the accelerated commercialisation and integration of solar cells into various energy systems. We are excited by the prospects of our invention that represents a major contribution to a sustainable and renewable energy future,” said team member Mr Wang Xi, an NUS doctoral student.
    Towards a greener future
    Building upon this exciting development, Asst Prof Hou and his team aim to push the boundaries of perovskite solar cell technology even further.
    Another key area of focus is to improve the stability of perovskite solar cells, as perovskite materials are sensitive to moisture and can degrade over time. Asst Prof Hou commented, “We are developing a customised accelerating aging methodology to bring this technology from the lab to the fab. One of our next goals is to deliver perovskite solar cells with 25 years of operational stability.”
    The team is also working to scale up the solar cells to modules by expanding the dimensions of the perovskite solar cells and demonstrating their viability and effectiveness on a larger scale.
    “The insights gained from our current study will serve as a roadmap for developing stable, and eventually, commercially-viable perovskite solar cell products that can serve as sustainable energy solutions to help reduce our reliance on fossil fuels,” Asst Prof Hou added.