More stories

  •

    How the brain processes numbers — New procedure improves measurement of human brain activity

    Measuring human brain activity down to the cellular level: until now, this has been possible only to a limited extent. With a new approach developed by researchers at the Technical University of Munich (TUM), it will now be much easier. The method relies on microelectrodes along with the support of brain tumor patients, who participate in studies while undergoing “awake” brain surgery. This enabled the team to identify how our brain processes numbers.
    We use numbers every day. It happens in a very concrete way when we count objects. And it happens abstractly, for example when we see the symbol “8” or do complex calculations.
    In a study published in the journal Cell Reports, a team of researchers and clinicians working with Simon Jacob, Professor of Translational Neurotechnology at the Department of Neurosurgery at TUM’s university hospital Klinikum rechts der Isar, was able to show how the brain processes numbers. The researchers found that individual neurons in the brains of participants were specialized in handling specific numbers. Each one of these neurons was particularly active when its “preferred” number of elements in a dot pattern was presented to the patient. To a somewhat lesser degree this was also the case when the subjects processed number symbols.
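    As a rough illustration of what a “preferred” number means here, a neuron’s tuning can be estimated by averaging its spike count for each presented numerosity and taking the peak. The Python sketch below does this on simulated data; the Gaussian-shaped tuning, the numbers and every variable name are illustrative assumptions, not the study’s actual analysis pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    numerosities = rng.integers(1, 10, size=500)  # dot count shown on each trial

    def preferred_numerosity(spike_counts, numerosities):
        """Each neuron's 'preferred' number: the stimulus value that
        evokes its highest mean firing rate."""
        values = np.unique(numerosities)
        tuning = np.array([spike_counts[numerosities == v].mean(axis=0)
                           for v in values])  # mean response per numerosity
        return values[np.argmax(tuning, axis=0)]

    # Toy data: 20 neurons, each noisily tuned to a random preferred number.
    true_pref = rng.integers(1, 10, size=20)
    gain = np.exp(-0.5 * ((numerosities[:, None] - true_pref[None, :]) / 1.5) ** 2)
    spike_counts = rng.poisson(1 + 10 * gain)

    print(preferred_numerosity(spike_counts, numerosities))  # recovers true_pref
    ```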
    “We already knew that animals processed numbers of objects in this way,” says Prof. Jacob. “But until now, it was not possible to demonstrate conclusively how it works in humans. This has brought us a step closer to unravelling the mechanisms of cognitive functions and developing solutions when things go wrong with these brain functions, for example.”
    Recording individual neurons is a challenge
    To get to this result, Prof. Jacob and his team first had to solve a fundamental problem. “The brain functions by means of electrical impulses,” says Simon Jacob. “So it is by detecting these signals directly that we can learn the most about cognition and perception.”
    There are, however, few opportunities for direct measurements of human brain activity. Neurons cannot be individually recorded through the skull. Some medical teams surgically implant electrodes in epilepsy patients. However, these procedures do not reach the brain region believed to be responsible for processing numbers.

    Innovative advancement of established approaches
    Simon Jacob and an interdisciplinary team therefore developed an approach that adapts established technologies and opens up entirely new possibilities in neuroscience. At the heart of the procedure are microelectrode arrays that have undergone extensive testing in animal studies.
    To ensure that the electrodes would produce reliable data in awake surgeries on the human brain, the researchers had to reconfigure them in close collaboration with the manufacturer. The trick was to increase the distance between the needle-like sensors that record the electrical activity of individual cells. “In theory, tightly packed electrodes will produce more data,” says Simon Jacob. “But in practice the large number of contacts stuns the implanted brain region, so that no usable data are recorded.”
    Patients support research
    The development of the procedure was possible only because patients with brain tumors agreed to support the research team. While undergoing brain surgery, they permitted sensors to be implanted and performed test tasks for the researchers. According to Simon Jacob, the experimental procedures did not negatively affect the work of the surgical team.
    A greater number of medical centers can conduct studies
    “Our procedure has two key advantages,” says Simon Jacob. First, such tumor surgeries provide access to a much larger area of the brain. “And second, with the electrodes we used, which have been standardized and tested in years of animal trials, many more medical centers will be able to measure neuronal activity in the future,” says Jacob. While epilepsy operations are performed only at a small number of centers and on relatively few patients, he explains, many more university hospitals perform awake operations on patients with brain tumors. “With a significantly larger number of studies with standardized methods and sensors, we can learn a lot more in the coming years about how the human brain functions,” says Simon Jacob.

  •

    Emulating how krill swim to build a robotic platform for ocean navigation

    Picture a network of interconnected, autonomous robots working together in a coordinated dance to navigate the pitch-black surroundings of the ocean while carrying out scientific surveys or search-and-rescue missions.
    In a new study published in Scientific Reports, a team led by Brown University researchers has presented important first steps in building these types of underwater navigation robots. In the study, the researchers outline the design of a small robotic platform called Pleobot that can serve both as a tool to help researchers understand krill-like swimming and as a foundation for building small, highly maneuverable underwater robots.
    Pleobot currently consists of three articulated sections that replicate the krill’s swimming technique, known as metachronal swimming. To design Pleobot, the researchers took inspiration from krill, which are remarkable aquatic athletes that display mastery in swimming, accelerating, braking and turning. In the study, they demonstrate Pleobot’s ability to emulate the leg motions of swimming krill and provide new insights into the fluid-structure interactions needed to sustain steady forward swimming in krill.
    According to the study, Pleobot has the potential to allow the scientific community to understand how to take advantage of 100 million years of evolution to engineer better robots for ocean navigation.
    “Experiments with organisms are challenging and unpredictable,” said Sara Oliveira Santos, a Ph.D. candidate at Brown’s School of Engineering and lead author of the new study. “Pleobot allows us unparalleled resolution and control to investigate all the aspects of krill-like swimming that help it excel at maneuvering underwater. Our goal was to design a comprehensive tool to understand krill-like swimming, which meant including all the details that make krill such athletic swimmers.”
    The effort is a collaboration between Brown researchers in the lab of Assistant Professor of Engineering Monica Martinez Wilhelmus and scientists in the lab of Francisco Cuenca-Jimenez at the Universidad Nacional Autónoma de México.

    A major aim of the project is to understand how metachronal swimmers, like krill, manage to function in complex marine environments and perform massive vertical migrations of over 1,000 meters — equivalent to stacking three Empire State Buildings — twice daily.
    “We have snapshots of the mechanisms they use to swim efficiently, but we do not have comprehensive data,” said Nils Tack, a postdoctoral associate in the Wilhelmus lab. “We built and programmed a robot that precisely emulates the essential movements of the legs to produce specific motions and change the shape of the appendages. This allows us to study different configurations to take measurements and make comparisons that are otherwise unobtainable with live animals.”
    The metachronal swimming technique produces the remarkable maneuverability that krill frequently display, achieved by deploying their swimming legs sequentially in a back-to-front, wave-like motion. The researchers believe that in the future, deployable swarm systems could be used to map Earth’s oceans, participate in search-and-recovery missions by covering large areas, or be sent to moons in the solar system, such as Europa, to explore their oceans.
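    To picture the kinematics, the wave can be sketched as a set of oscillating leg angles with a fixed phase lag between neighbors, so that power strokes travel from the rearmost pair to the frontmost. A minimal Python sketch follows; the frequency, amplitude and lag values are illustrative assumptions, not Pleobot’s actual specifications.

    ```python
    import numpy as np

    N_LEGS = 5            # krill beat five pairs of swimming legs (pleopods)
    FREQ_HZ = 3.0         # beat frequency
    PHASE_LAG = 0.2       # fraction of a cycle between neighboring legs
    AMPLITUDE_DEG = 35.0  # stroke amplitude about the mean leg angle

    def leg_angles(t):
        """Stroke angle (degrees) of each leg at time t (seconds);
        leg 0 is the rearmost and leads the back-to-front wave."""
        legs = np.arange(N_LEGS)
        phase = 2 * np.pi * (FREQ_HZ * t - PHASE_LAG * legs)
        return AMPLITUDE_DEG * np.sin(phase)

    for t in np.linspace(0, 1 / FREQ_HZ, 5):  # one full beat cycle
        print(f"t={t:.3f}s  " + " ".join(f"{a:6.1f}" for a in leg_angles(t)))
    ```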
    “Krill aggregations are an excellent example of swarms in nature: they are composed of organisms with a streamlined body, traveling up to one kilometer each way, with excellent underwater maneuverability,” Wilhelmus said. “This study is the starting point of our long-term research aim of developing the next generation of autonomous underwater sensing vehicles. Being able to understand fluid-structure interactions at the appendage level will allow us to make informed decisions about future designs.”
    The researchers can actively control the two leg segments and have passive control of Pleobot’s biramous fins. This is believed to be the first platform that replicates the opening and closing motion of these fins. The construction of the robotic platform was a multi-year project, involving a multi-disciplinary team in fluid mechanics, biology and mechatronics.

    The researchers built their model at 10 times the scale of krill, which are usually about the size of a paperclip. The platform is primarily made of 3D printable parts and the design is open-access, allowing other teams to use Pleobot to continue answering questions on metachronal swimming not just for krill but for other organisms like lobsters.
    In the published study, the group reveals the answer to one of the many unknown mechanisms of krill swimming: how they generate lift in order not to sink while swimming forward. Krill are slightly denser than water, so they begin to sink whenever they stop swimming; to hold their position in the water column, they must generate some lift even while swimming forward, said Oliveira Santos.
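    In terms of a simple force balance (standard fluid statics, not a result from the paper), holding depth means the lift produced while swimming must offset the animal’s slight negative buoyancy:

    ```latex
    L = (\rho_{\text{krill}} - \rho_{\text{water}})\, V g
    ```

    where V is the animal’s volume and g the gravitational acceleration; because krill density only slightly exceeds that of water, the required lift is small but must be sustained continuously.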
    “We were able to uncover that mechanism by using the robot,” said Yunxing Su, a postdoctoral associate in the lab. “We identified an important effect of a low-pressure region at the back side of the swimming legs that contributes to the lift force enhancement during the power stroke of the moving legs.”
    In the coming years, the researchers hope to build on this initial success by refining and testing the designs presented in the article. The team is currently working to integrate morphological characteristics of shrimp into the robotic platform, such as flexibility and bristles around the appendages.
    The work was partially funded by a NASA Rhode Island EPSCoR Seed Grant.

  •

    Researchers make a quantum computing leap with a magnetic twist

    Quantum computing could revolutionize our world. For specific and crucial tasks, it promises to be exponentially faster than the zero-or-one binary technology that underlies today’s machines, from supercomputers in laboratories to smartphones in our pockets. But developing quantum computers hinges on building a stable network of qubits — or quantum bits — to store information, access it and perform computations.
    Yet the qubit platforms unveiled to date have a common problem: They tend to be delicate and vulnerable to outside disturbances. Even a stray photon can cause trouble. Developing fault-tolerant qubits — which would be immune to external perturbations — could be the ultimate solution to this challenge.
    A team led by scientists and engineers at the University of Washington has announced a significant advancement in this quest. In a pair of papers published June 14 in Nature and June 22 in Science, they report that, in experiments with flakes of semiconductor materials — each only a single layer of atoms thick — they detected signatures of “fractional quantum anomalous Hall” (FQAH) states. The team’s discoveries mark a first and promising step in constructing a type of fault-tolerant qubit because FQAH states can host anyons — strange “quasiparticles” that have only a fraction of an electron’s charge. Some types of anyons can be used to make what are called “topologically protected” qubits, which are stable against any small, local disturbances.
    “This really establishes a new paradigm for studying quantum physics with fractional excitations in the future,” said Xiaodong Xu, the lead researcher behind these discoveries, who is also the Boeing Distinguished Professor of Physics and a professor of materials science and engineering at the UW.
    FQAH states are related to the fractional quantum Hall state, an exotic phase of matter that exists in two-dimensional systems. In these states, electrical conductivity is constrained to precise fractions of a constant known as the conductance quantum. But fractional quantum Hall systems typically require massive magnetic fields to keep them stable, making them impractical for applications in quantum computing. The FQAH state has no such requirement — it is stable even “at zero magnetic field,” according to the team.
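    In standard notation (textbook quantum Hall physics, not a formula quoted from the papers), this quantization reads

    ```latex
    \sigma_{xy} = \nu \, \frac{e^2}{h}, \qquad \nu = \tfrac{1}{3},\ \tfrac{2}{5},\ \tfrac{2}{3},\ \dots
    ```

    where e²/h is the conductance quantum and the filling factor ν is pinned to precise fractional values.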
    Hosting such an exotic phase of matter required the researchers to build an artificial lattice with exotic properties. They stacked two atomically thin flakes of the semiconductor material molybdenum ditelluride (MoTe2) at small, mutual “twist” angles relative to one another. This configuration formed a synthetic “honeycomb lattice” for electrons. When researchers cooled the stacked slices to a few degrees above absolute zero, an intrinsic magnetism arose in the system. The intrinsic magnetism takes the place of the strong magnetic field typically required for the fractional quantum Hall state. Using lasers as probes, the researchers detected signatures of the FQAH effect, a major step forward in unlocking the power of anyons for quantum computing.
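    The role of the small twist angle can be made concrete with the generic moiré relation (standard geometry, not a formula from the team): two identical lattices with lattice constant a, twisted by an angle θ, form a superlattice with period

    ```latex
    a_M = \frac{a}{2 \sin(\theta / 2)} \approx \frac{a}{\theta}
    ```

    so a twist of a degree or two yields moiré cells tens of times larger than the atomic spacing; this moiré pattern acts as the synthetic honeycomb lattice that the electrons feel.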

    The team — which also includes scientists at the University of Hong Kong, the National Institute for Materials Science in Japan, Boston College and the Massachusetts Institute of Technology — envisions their system as a powerful platform to develop a deeper understanding of anyons, which have very different properties from everyday particles like electrons. Anyons are quasiparticles — or particle-like “excitations” — that can act as fractions of an electron. In future work with their experimental system, the researchers hope to discover an even more exotic version of this type of quasiparticle: “non-Abelian” anyons, which could be used as topological qubits. Wrapping — or “braiding” — the non-Abelian anyons around each other can generate an entangled quantum state. In this quantum state, information is essentially “spread out” over the entire system and resistant to local disturbances — forming the basis of topological qubits and a major advancement over the capabilities of current quantum computers.
    “This type of topological qubit would be fundamentally different from those that can be created now,” said UW physics doctoral student Eric Anderson, who is lead author of the Science paper and co-lead author of the Nature paper. “The strange behavior of non-Abelian anyons would make them much more robust as a quantum computing platform.”
    Three key properties, all of which existed simultaneously in the researchers’ experimental setup, allowed FQAH states to emerge:
    • Magnetism: Though MoTe2 is not a magnetic material, when the researchers loaded the system with positive charges, a “spontaneous spin order” — a form of magnetism called ferromagnetism — emerged.
    • Topology: Electrical charges within the system have “twisted bands,” similar to a Möbius strip, which helps make the system topological.
    • Interactions: The charges within the experimental system interact strongly enough to stabilize the FQAH state.
    The team hopes that, using their approach, non-Abelian anyons await discovery.
    “The observed signatures of the fractional quantum anomalous Hall effect are inspiring,” said UW physics doctoral student Jiaqi Cai, co-lead author on the Nature paper and co-author of the Science paper. “The fruitful quantum states in the system can be a laboratory-on-a-chip for discovering new physics in two dimensions, and also new devices for quantum applications.”
    “Our work provides clear evidence of the long-sought FQAH states,” said Xu, who is also a member of the Molecular Engineering and Sciences Institute, the Institute for Nano-Engineered Systems and the Clean Energy Institute, all at UW. “We are currently working on electrical transport measurements, which could provide direct and unambiguous evidence of fractional excitations at zero magnetic field.”
    The team believes that, with their approach, investigating and manipulating these unusual FQAH states can become commonplace — accelerating the quantum computing journey.
    Additional co-authors on the papers are William Holtzmann and Yinong Zhang in the UW Department of Physics; Di Xiao, Chong Wang, Xiaowei Zhang, Xiaoyu Liu and Ting Cao in the UW Department of Materials Science & Engineering; Feng-Ren Fan and Wang Yao at the University of Hong Kong and the Joint Institute of Theoretical and Computational Physics at Hong Kong; Takashi Taniguchi and Kenji Watanabe from the National Institute for Materials Science in Japan; Ying Ran of Boston College; and Liang Fu at MIT. The research was funded by the U.S. Department of Energy, the Air Force Office of Scientific Research, the National Science Foundation, the Research Grants Council of Hong Kong, the Croucher Foundation, the Tencent Foundation, the Japan Society for the Promotion of Science and the University of Washington.

  •

    How secure are voice authentication systems really?

    Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.
    Voice authentication — which allows companies to verify the identity of their clients via a supposedly unique “voiceprint” — has increasingly been used in remote banking, call centers and other security-critical scenarios.
    “When enrolling in voice authentication, you are asked to repeat a certain phrase in your own voice. The system then extracts a unique vocal signature (voiceprint) from this provided phrase and stores it on a server,” said Andre Kassis, a Computer Security and Privacy PhD candidate and the lead author of a study detailing the research.
    “For future authentication attempts, you are asked to repeat a different phrase and the features extracted from it are compared to the voiceprint you have saved in the system to determine whether access should be granted.”
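    The enroll-and-compare flow Kassis describes can be sketched in a few lines, assuming some speaker-embedding model that maps audio to a fixed-length vector and a similarity threshold. The embed function, the threshold and the dimensions below are illustrative stand-ins, not details of any deployed system.

    ```python
    import numpy as np

    def embed(audio: np.ndarray) -> np.ndarray:
        """Stand-in for a speaker-embedding model; returns a unit vector."""
        v = np.fft.rfft(audio)[:192].real  # toy spectral feature
        return v / np.linalg.norm(v)

    def verify(voiceprint: np.ndarray, new_audio: np.ndarray,
               threshold: float = 0.7) -> bool:
        """Grant access if the new sample's embedding is close enough
        (cosine similarity) to the stored voiceprint."""
        similarity = float(voiceprint @ embed(new_audio))
        return similarity >= threshold

    rng = np.random.default_rng(1)
    enrollment_audio = rng.normal(size=16000)   # the enrollment phrase
    voiceprint = embed(enrollment_audio)        # stored on the server
    attempt = enrollment_audio + 0.01 * rng.normal(size=16000)
    print(verify(voiceprint, attempt))          # True: same speaker
    ```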
    After the concept of voiceprints was introduced, malicious actors quickly realized they could use machine learning-enabled “deepfake” software to generate convincing copies of a victim’s voice using as little as five minutes of recorded audio.
    In response, developers introduced “spoofing countermeasures” — checks that could examine a speech sample and determine whether it was created by a human or a machine.
    The Waterloo researchers have developed a method that evades spoofing countermeasures and can fool most voice authentication systems within six attempts. They identified the markers in deepfake audio that betray that it is computer-generated, and wrote a program that removes these markers, making the audio indistinguishable from an authentic recording.
    In a recent test against Amazon Connect’s voice authentication system, they achieved a 10% success rate in one four-second attack, with this rate rising to over 40% in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99% success rate after six attempts.
    Kassis contends that while voice authentication is obviously better than no additional security, the existing spoofing countermeasures are critically flawed.
    “The only way to create a secure system is to think like an attacker. If you don’t, then you’re just waiting to be attacked,” Kassis said.
    Kassis’ supervisor, computer science professor Urs Hengartner, added: “By demonstrating the insecurity of voice authentication, we hope that companies relying on voice authentication as their only authentication factor will consider deploying additional or stronger authentication measures.”

  •

    What math can teach us about standing up to bullies

    In a time of income inequality and ruthless politics, people with outsized power or an unrelenting willingness to browbeat others often seem to come out ahead.
    New research from Dartmouth, however, shows that being uncooperative can help people on the weaker side of the power dynamic achieve a more equal outcome — and even inflict some loss on their abusive counterpart.
    The findings provide a tool based in game theory — the field of mathematics focused on optimizing competitive strategies — that could be applied to help equalize the balance of power in labor negotiations or international relations and could even be used to integrate cooperation into interconnected artificial intelligence systems such as driverless cars.
    Published in the latest issue of the journal PNAS Nexus, the study takes a fresh look at what are known in game theory as “zero-determinant strategies” developed by renowned scientists William Press, now at the University of Texas at Austin, and the late Freeman Dyson at the Institute for Advanced Study in Princeton, New Jersey.
    Zero-determinant strategies dictate that “extortionists” control situations to their advantage by becoming less and less cooperative — though just cooperative enough to keep the other party engaged — and by never being the first to concede when there’s a stalemate. Theoretically, they will always outperform their opponent by demanding and receiving a larger share of what’s at stake.
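    The mechanics can be made concrete with a minimal Python sketch of a Press-Dyson extortionate strategy in the iterated prisoner’s dilemma (the classic setting for zero-determinant strategies; the payoff values and the parameters chi and phi are illustrative assumptions, not numbers from the Dartmouth paper).

    ```python
    import numpy as np

    # Standard prisoner's dilemma payoffs and an extortion factor chi = 3:
    # the extortioner demands three times the opponent's surplus over P.
    T, R, P, S = 5.0, 3.0, 1.0, 0.0
    chi, phi = 3.0, 1 / 26  # phi chosen so all probabilities stay in [0, 1]

    sx = np.array([R, S, T, P])  # extortioner's payoff after CC, CD, DC, DD
    sy = np.array([R, T, S, P])  # opponent's payoff after the same outcomes

    # Press-Dyson extortionate strategy: cooperation probabilities
    # conditioned on the previous round's outcome.
    p = np.array([1, 1, 0, 0]) + phi * ((sx - P) - chi * (sy - P))

    def payoffs(p, q):
        """Long-run payoffs via the stationary distribution of the
        Markov chain over the outcomes CC, CD, DC, DD."""
        q = q[[0, 2, 1, 3]]  # the opponent sees CD and DC swapped
        M = np.array([[a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]
                      for a, b in zip(p, q)])
        w, v = np.linalg.eig(M.T)
        stat = np.real(v[:, np.argmax(np.real(w))])
        stat /= stat.sum()
        return stat @ sx, stat @ sy

    q = np.array([0.9, 0.9, 0.9, 0.9])  # a mostly cooperative opponent
    sx_bar, sy_bar = payoffs(p, q)
    print(np.round(p, 3))               # [0.846 0.5   0.269 0.   ]
    print(round(sx_bar - P, 3), round(chi * (sy_bar - P), 3))  # equal values
    ```

    Whatever strategy q the opponent plays, this p pins the extortioner’s surplus over P at chi times the opponent’s; an unbending player attacks exactly that leverage by withholding cooperation until the extortioner’s own payoff suffers.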
    But the Dartmouth paper uses mathematical models of interactions to uncover an “Achilles heel” to these seemingly uncrackable scenarios, said senior author Feng Fu, an associate professor of mathematics. Fu and first author Xingru Chen, who received her Ph.D. in mathematics from Dartmouth in 2021, discovered an “unbending strategy” in which resistance to being steamrolled not only causes an extortionist to ultimately lose more than their opponent but can result in a more equal outcome as the overbearing party compromises in a scramble to get the best payoff.

    “Unbending players who choose not to be extorted can resist by refusing to fully cooperate. They also give up part of their own payoff, but the extortioner loses even more,” said Chen, who is now an assistant professor at the Beijing University of Posts and Telecommunications.
    “Our work shows that when an extortioner is faced with an unbending player, their best response is to offer a fair split, thereby guaranteeing an equal payoff for both parties,” she said. “In other words, fairness and cooperation can be cultivated and enforced by unbending players.”
    These scenarios frequently play out in the real world, Fu said. Labor relations provide a poignant model. A large corporation can strong-arm suppliers and producers such as farmworkers to accept lower prices for their effort by threatening to replace them and cut them off from a lucrative market. But a strike or protest can turn the balance of power back toward the workers’ favor and result in more fairness and cooperation, such as when a labor union wins some concessions from an employer.
    While the power dynamic in these scenarios is never equal, Fu said, his and Chen’s work shows that unbending players can reap benefits by defecting from time to time and sabotaging what extortioners are truly after — the highest payoff for themselves.
    “The practical insight from our work is for weaker parties to be unbending and resist being the first to compromise, thereby transforming the interaction into an ultimatum game in which extortioners are incentivized to be fairer and more cooperative to avoid ‘lose-lose’ situations,” Fu said.

    “Consider the dynamics of power between dominant entities such as Donald Trump and the lack of unbending from the Republican Party, or, on the other hand, the military and political resistance to Russia’s invasion of Ukraine that has helped counteract incredible asymmetry,” he said. “These results can be applied to real-world situations, from social equity and fair pay to developing systems that promote cooperation among AI agents, such as autonomous driving.”
    Chen and Fu’s paper expands the theoretical understanding of zero-determinant interactions while also outlining how the outsized power of extortioners can be checked, said mathematician Christian Hilbe, leader of the Dynamics of Social Behavior research group at the Max Planck Institute for Evolutionary Biology in Germany.
    “Among the technical contributions, they stress that even extortioners can be outperformed in some games. I don’t think that has been fully appreciated by the community before,” said Hilbe, who was not involved in the study but is familiar with it. “Among the conceptual insights, I like the idea of unbending strategies, behaviors that encourage an extortionate player to eventually settle at a fairer outcome.”
    Behavioral research involving human participants has shown that extortioners may constitute a significant portion of our everyday interactions, said Hilbe, who published a 2016 paper in the journal PLOS ONE reporting just that. He also co-authored a 2014 study in Nature Communications that found people playing against a computerized opponent strongly resisted when the computer engaged in threatening conduct, even when it reduced their own payout.
    “The empirical evidence to date suggests that people do engage in these extortionate behaviors, especially in asymmetric situations, and that the extorted party often tries to resist it, which is then costly to both parties,” Hilbe said.

  •

    Mathematicians solve long-known problem

    Making history with 42 digits: Scientists at Paderborn University and KU Leuven have unlocked a decades-old mystery of mathematics with the so-called ninth Dedekind number. Experts worldwide had been searching for the value since 1991. The Paderborn scientists arrived at the exact sequence of digits with the help of the Noctua supercomputer located there. The results will be presented in September at the International Workshop on Boolean Functions and their Applications (BFA) in Norway.
    What started as a master’s thesis project by Lennart Van Hirtum, then a computer science student at KU Leuven and now a research associate at the University of Paderborn, has become a huge success. With their work, the scientists join an illustrious group: earlier numbers in the series were found by the mathematician Richard Dedekind himself when he defined the problem in 1897, and later by greats of early computer science such as Randolph Church and Morgan Ward. “For 32 years, the calculation of D(9) was an open challenge, and it was questionable whether it would ever be possible to calculate this number at all,” Van Hirtum says.
    The previous number in the Dedekind sequence, the 8th Dedekind number, was found in 1991 using a Cray-2, the most powerful supercomputer at the time. “It therefore seemed conceivable to us that it should be possible by now to calculate the 9th number on a large supercomputer,” says Van Hirtum, describing the motivation for the ambitious project, which he initially pursued jointly with the supervisors of his master’s thesis at KU Leuven.
    Grains of sand, chess and supercomputers
    The main subject of Dedekind numbers is so-called monotone Boolean functions. Van Hirtum explains: “Basically, you can think of a monotone Boolean function in two, three, and infinitely many dimensions as a game with an n-dimensional cube. You balance the cube on one corner and then color each of the remaining corners either white or red. There is only one rule: you must never place a white corner above a red one. This creates a kind of vertical red-white intersection. The object of the game is to count how many different cuts there are. Their number is what is defined as the Dedekind number. Even if it doesn’t seem like it, the numbers quickly become gigantic in the process: the 8th Dedekind number already has 23 digits.”
    Comparably large – but incomparably easier to calculate – numbers are known from a legend concerning the invention of the game of chess. “According to this legend, the inventor of the chess game asked the king for only a few grains of rice on each square of the chess board as a reward: one grain on the first square, two grains on the second, four on the third, and twice as many on each of the following squares. The king quickly realized that this request was impossible to fulfill, because so much rice does not exist in the whole world. The number of grains of rice on the complete board would have 20 digits – an unimaginable amount, but still less than D(8). When you realize these orders of magnitude, it is obvious that both an efficient computational method and a very fast computer would be needed to find D(9),” Van Hirtum said.
    Milestone: Years become months
    To calculate D(9), the scientists used a technique developed by master’s thesis advisor Patrick De Causmaecker, known as the P-coefficient formula. It provides a way to calculate Dedekind numbers not by counting, but by evaluating a very large sum. This allows D(8) to be decoded in just eight minutes on a normal laptop. But, “What takes eight minutes for D(8) becomes hundreds of thousands of years for D(9). Even if you used a large supercomputer exclusively for this task, it would still take many years to complete the calculation,” Van Hirtum points out. The main problem is that the number of terms in this formula grows incredibly fast. “In our case, by exploiting symmetries in the formula, we were able to reduce the number of terms to ‘only’ 5.5*10^18 – an enormous amount. By comparison, the number of grains of sand on Earth is about 7.5*10^18, which is nothing to sneeze at, but for a modern supercomputer, 5.5*10^18 operations are quite manageable,” the computer scientist said. The problem: the calculation of these terms on normal processors is slow, and GPUs, currently the fastest hardware accelerator technology for many AI applications, are not efficient for this algorithm.
    The solution: application-specific hardware using highly specialized and parallel arithmetic units – so-called FPGAs (field programmable gate arrays). Van Hirtum developed an initial prototype for the hardware accelerator and began looking for a supercomputer that had the necessary FPGA cards. In the process, he became aware of the Noctua 2 computer at the Paderborn Center for Parallel Computing (PC2) at the University of Paderborn, which has one of the world’s most powerful FPGA systems.
    Prof. Dr. Christian Plessl, head of PC2, explains: “When Lennart Van Hirtum and Patrick De Causmaecker contacted us, it was immediately clear to us that we wanted to support this moonshot project. Solving hard combinatorial problems with FPGAs is a promising field of application, and Noctua 2 is one of the few supercomputers worldwide with which the experiment is feasible at all. The extreme reliability and stability requirements also pose a challenge and test for our infrastructure. The FPGA expert consulting team worked closely with Lennart to adapt and optimize the application for our environment.”
    After several years of development, the program ran on the supercomputer for about five months. And then the time had come: on March 8, the scientists found the 9th Dedekind number: 286386577668298411128469151667598498812366.
    Today, three years after the start of the Dedekind project, Van Hirtum is working as a fellow of the NHR Graduate School at the Paderborn Center for Parallel Computing, developing the next generation of hardware tools as part of his PhD. The NHR (National High Performance Computing) Graduate School is the joint graduate school of the NHR centers. He will report on his extraordinary success together with Patrick De Causmaecker on June 27 at 2 p.m. in Lecture Hall O2 of the University of Paderborn. The interested public is cordially invited.
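    The object being counted is easy to state in code: D(n) is the number of monotone Boolean functions on n inputs. The brute-force Python sketch below is for illustration only; it is feasible only for tiny n and has none of the P-coefficient formula’s sophistication.

    ```python
    from itertools import product

    def is_monotone(table, n):
        """table[x] is f(x) for input bitmask x; monotone means turning
        any input bit on can never flip f from 1 to 0."""
        for x in range(2 ** n):
            for bit in range(n):
                if not x & (1 << bit) and table[x] > table[x | (1 << bit)]:
                    return False
        return True

    def dedekind(n):
        return sum(is_monotone(table, n)
                   for table in product((0, 1), repeat=2 ** n))

    print([dedekind(n) for n in range(5)])  # [2, 3, 6, 20, 168]
    ```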

  •

    Act now to prevent uncontrolled rise in carbon footprint of computational science

    Cambridge scientists have set out principles for how computational science — which powers discoveries from unveiling the mysteries of the universe to developing treatments to fight cancer to improving our understanding of the human genome, but can have a substantial carbon footprint — can be made more environmentally sustainable.
    Writing in Nature Computational Science, researchers from the Department of Public Health and Primary Care at the University of Cambridge argue that the scientific community needs to act now if it is to prevent a potentially uncontrolled rise in the carbon footprint of computational science as data science and algorithms increase in usage.
    Dr Loïc Lannelongue, who is a research associate in biomedical data science and a postdoctoral associate at Jesus College, Cambridge, said: “Science has transformed our understanding of the world around us and has led to great benefits to society. But this has come with a not-insignificant — and not always well understood — impact on the environment. As scientists — as with people working in every sector — it’s important that we do what we can to reduce the carbon footprint of our work to ensure that the benefits of our discoveries are not outweighed by their environmental costs.”
    Recent studies have begun to explore the environmental impacts of scientific research, with an initial focus on scientific conferences and experimental laboratories. For example, the 2019 Fall Meeting of the American Geophysical Union was estimated to emit 80,000 tons of CO2e* (tCO2e), equivalent to the average weekly emissions of the city of Edinburgh, UK. The annual carbon footprint of a typical life science laboratory has been estimated to be around 20 tCO2e.
    But there is one aspect of research that often gets overlooked — and which can have a substantial environmental impact: high performance and cloud computing.
    In 2020, the Information and Communication Technologies sector was estimated to have made up between 1.8% and 2.8% of global greenhouse gas emissions — more than aviation (1.9%). In addition to the environmental effects of electricity usage, manufacturing and disposal of hardware, there are also concerns around data centres’ water usage and land footprint.

    Professor Michael Inouye said: “While the environmental impact of experimental ‘wet’ labs is more immediately obvious, the impact of algorithms is less clear and often underestimated. While new hardware, lower-energy data centres and more efficient high performance computing systems can help reduce their impact, the increasing ubiquity of artificial intelligence and data science more generally means their carbon footprint could grow exponentially in coming years if we don’t act now.”
    To help address this issue, the team has developed GREENER (Governance, Responsibility, Estimation, Energy and embodied impacts, New collaborations, Education and Research), a set of principles to allow the computational science community to lead the way in sustainable research practices, maximising computational science’s benefit to both humanity and the environment.
    Governance and Responsibility — Everyone involved in computational science has a role to play in making the field more sustainable: individual and institutional responsibility is a necessary step to ensure transparency and the reduction of greenhouse gas emissions.
    For example, institutions themselves can be key to managing and expanding centralised data infrastructures, and in ensuring that procurement decisions take into account both the manufacturing and operational footprint of hardware purchases. IT teams in high performance computing (HPC) centres can play a key role, both in terms of training and helping scientists monitor the carbon footprint of their work. Principal Investigators can encourage their teams to think about this issue and give access to suitable training. Funding bodies can influence researchers by requiring estimates of carbon footprints to be included in funding applications.
    Estimate and report the energy consumption of algorithms — Estimating and monitoring the carbon footprint of computations identifies inefficiencies and opportunities for improvement.

    User-level metrics are crucial to understanding environmental impacts and promoting personal responsibility. The financial cost of running computations is often negligible, particularly in academia, and scientists may have the impression of unlimited and inconsequential computing capacity. Quantifying the carbon footprint of individual projects helps raise awareness of the true costs of research.
    Tackling Energy and embodied impacts through New collaborations — Minimising carbon intensity — that is, the carbon footprint of producing electricity — is one of the most immediately impactful ways to reduce greenhouse gas emissions. This could involve relocating computations to low-carbon settings and countries, but this needs to be done with equity in mind. Carbon intensities can differ by as much as three orders of magnitude between the top and bottom performing high-income countries (from 0.10 gCO2e/kWh in Iceland to 770 gCO2e/kWh in Australia).
    The footprint of user devices is also a factor: one estimate found that almost three-quarters (72%) of the energy footprint of streaming a video to a laptop is from the laptop, with 23% used in transmission and a mere 5% at the data centre.
    Another key consideration is data storage. The carbon footprint of storing data depends on numerous factors, but the life cycle footprint of storing one terabyte of data for a year is of the order of 10 kg CO2e. This issue is exacerbated by the duplication of such datasets in order for each institution, and sometimes each research group, to have a copy. Large (hyperscale) data centres are expected to be more energy efficient, but they may also encourage unnecessary increases in the scale of computing (the ‘rebound effect’).
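    As a rough illustration of how such estimates are made, the sketch below applies the standard energy-times-carbon-intensity approach, reusing the carbon intensities and the ~10 kg CO2e per terabyte-year storage figure quoted above; the job parameters and the PUE value are assumptions for the example, not numbers from the paper.

    ```python
    def job_footprint_kg(runtime_h, power_w, pue, intensity_g_per_kwh):
        """CO2e (kg) of a compute job: energy drawn (kWh), scaled by the
        data centre's power usage effectiveness (PUE), times the grid's
        carbon intensity (gCO2e/kWh)."""
        energy_kwh = runtime_h * (power_w / 1000) * pue
        return energy_kwh * intensity_g_per_kwh / 1000

    def storage_footprint_kg(terabytes, years, kg_per_tb_year=10):
        """Life-cycle CO2e of storing data, using the order-of-magnitude
        figure of ~10 kg CO2e per terabyte-year."""
        return terabytes * years * kg_per_tb_year

    # A hypothetical week-long run on a 300 W node in a PUE-1.5 data centre:
    print(job_footprint_kg(168, 300, 1.5, 770))   # ~58 kg CO2e in Australia
    print(job_footprint_kg(168, 300, 1.5, 0.10))  # ~0.008 kg CO2e in Iceland
    print(storage_footprint_kg(5, 3))             # 5 TB kept 3 years: 150 kg
    ```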
    Education and Research — Education is essential to raise awareness of the issues with different stakeholders. Integrating sustainability into computational training courses is a tangible first step toward reducing carbon footprints. Investing in research that will catalyse innovation in the field of environmentally sustainable computational science is a crucial role for funders and institutions to play.
    Recent studies found that the most widely-used programming languages in research, such as R and Python, tend to be the least energy efficient ones, highlighting the importance of having trained Research Software Engineers within research groups to ensure that the algorithms used are efficiently implemented. There is also scope to use current tools more efficiently by better understanding and monitoring how coding choices impact carbon footprints.
    Dr Lannelongue said: “Computational scientists have a real opportunity to lead the way in sustainability, but this is going to involve a change in our culture and the ways we work. There will need to be more transparency, more awareness, better training and resources, and improved policies.
    “Cooperation, open science, and equitable access to low-carbon computing facilities will also be crucial. We need to make sure that sustainable solutions work for everyone, as they frequently offer the least benefit to the populations, often in low- and middle-income countries, who suffer the most from climate change.”
    Professor Inouye added: “Everyone in the field — from funders to journals to institutions down to individuals — plays an important role and can, themselves, make a positive impact. We have an immense opportunity to make a change, but the clock is ticking.”
    The research was a collaboration with major stakeholders including Health Data Research UK, EMBL-EBI, Wellcome and UK Research and Innovation (UKRI).
    *CO2e, or CO2-equivalent, summarises the global warming impacts of a range of greenhouse gases and is the standard metric for carbon footprints, although its accuracy is sometimes debated.

  •

    ‘Toggle switch’ can help quantum computers cut through the noise

    What good is a powerful computer if you can’t read its output? Or readily reprogram it to do different jobs? People who design quantum computers face these challenges, and a new device may make them easier to solve.
    The device, introduced by a team of scientists at the National Institute of Standards and Technology (NIST), includes two superconducting quantum bits, or qubits, which are a quantum computer’s analogue to the logic bits in a classical computer’s processing chip. At the heart of the new strategy is a “toggle switch” device that connects the qubits to a circuit called a “readout resonator” that can read the output of the qubits’ calculations.
    This toggle switch can be flipped into different states to adjust the strength of the connections between the qubits and the readout resonator. When toggled off, all three elements are isolated from each other. When the switch is toggled on to connect the two qubits, they can interact and perform calculations. Once the calculations are complete, the toggle switch can connect either of the qubits to the readout resonator to retrieve the results.
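    As a toy model of this control logic (an illustration of the switch settings described above, not NIST’s device physics), the couplings enabled in each state can be tabulated:

    ```python
    from enum import Enum

    class Switch(Enum):
        OFF = "all elements isolated"
        QUBIT_QUBIT = "qubits coupled for computation"
        Q1_READOUT = "qubit 1 coupled to the readout resonator"
        Q2_READOUT = "qubit 2 coupled to the readout resonator"
        BOTH_READOUT = "both qubits measured at once"

    def couplings(state: Switch) -> list[tuple[str, str]]:
        """Pairs of elements that interact in each switch setting."""
        return {
            Switch.OFF: [],
            Switch.QUBIT_QUBIT: [("q1", "q2")],
            Switch.Q1_READOUT: [("q1", "resonator")],
            Switch.Q2_READOUT: [("q2", "resonator")],
            Switch.BOTH_READOUT: [("q1", "resonator"), ("q2", "resonator")],
        }[state]

    for state in Switch:
        print(f"{state.name:12s} -> {couplings(state)}")
    ```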
    Having a programmable toggle switch goes a long way toward reducing noise, a common problem in quantum computer circuits that makes it difficult for qubits to make calculations and show their results clearly.
    “The goal is to keep the qubits happy so that they can calculate without distractions, while still being able to read them out when we want to,” said Ray Simmonds, a NIST physicist and one of the paper’s authors. “This device architecture helps protect the qubits and promises to improve our ability to make the high-fidelity measurements required to build quantum information processors out of qubits.”
    The team, which also includes scientists from the University of Massachusetts Lowell, the University of Colorado Boulder and Raytheon BBN Technologies, describes its results in a paper published today in Nature Physics.
    Quantum computers, which are still at a nascent stage of development, would harness the bizarre properties of quantum mechanics to do jobs that even our most powerful classical computers find intractable, such as aiding in the development of new drugs by performing sophisticated simulations of chemical interactions.

    However, quantum computer designers still confront many problems. One of these is that quantum circuits are kicked around by external or even internal noise, which arises from defects in the materials used to make the computers. This noise is essentially random behavior that can create errors in qubit calculations.
    Present-day qubits are inherently noisy by themselves, but that’s not the only problem. Many quantum computer designs have what is called a static architecture, where each qubit in the processor is physically connected to its neighbors and to its readout resonator. The fabricated wiring that connects qubits together and to their readout can expose them to even more noise.
    Such static architectures have another disadvantage: They cannot be reprogrammed easily. A static architecture’s qubits could do a few related jobs, but for the computer to perform a wider range of tasks, it would need to swap in a different processor design with a different qubit organization or layout. (Imagine changing the chip in your laptop every time you needed to use a different piece of software, and then consider that the chip needs to be kept a smidgen above absolute zero, and you get why this might prove inconvenient.)
    The team’s programmable toggle switch sidesteps both of these problems. First, it prevents circuit noise from creeping into the system through the readout resonator and prevents the qubits from having a conversation with each other when they are supposed to be quiet.
    “This cuts down on a key source of noise in a quantum computer,” Simmonds said.

    Second, the opening and closing of the switches between elements are controlled with a train of microwave pulses sent from a distance, rather than through a static architecture’s physical connections. Integrating more of these toggle switches could be the basis of a more easily programmable quantum computer. The microwave pulses can also set the order and sequence of logic operations, meaning a chip built with many of the team’s toggle switches could be instructed to perform any number of tasks.
    “This makes the chip programmable,” Simmonds said. “Rather than having a completely fixed architecture on the chip, you can make changes via software.”
    One last benefit is that the toggle switch can also turn on the measurement of both qubits at the same time. This ability to ask both qubits to reveal themselves as a couple is important for tracking down quantum computational errors.
    The qubits in this demonstration, as well as the toggle switch and the readout circuit, were all made of superconducting components that conduct electricity without resistance and must be operated at very cold temperatures. The toggle switch itself is made from a superconducting quantum interference device, or “SQUID,” which is very sensitive to magnetic fields passing through its loop. Driving a microwave current through a nearby antenna loop can induce interactions between the qubits and the readout resonator when needed.
    At this point, the team has only worked with two qubits and a single readout resonator, but Simmonds said they are preparing a design with three qubits and a readout resonator, and they have plans to add more qubits and resonators as well. Further research could offer insights into how to string many of these devices together, potentially offering a way to construct a powerful quantum computer with enough qubits to solve the kinds of problems that, for now, are insurmountable.