More stories

  • New super-pure silicon chip opens path to powerful quantum computers

    Researchers at the Universities of Melbourne and Manchester have invented a breakthrough technique for manufacturing highly purified silicon that brings powerful quantum computers a big step closer.
    The new technique for engineering ultra-pure silicon makes it an ideal material for building quantum computers at scale and with high accuracy, the researchers say.
    Project co-supervisor Professor David Jamieson, from the University of Melbourne, said the innovation – published today in Communications Materials, a Nature journal – uses qubits of phosphorus atoms implanted into crystals of pure stable silicon and could overcome a critical barrier to quantum computing by extending the duration of notoriously fragile quantum coherence.
    “Fragile quantum coherence means computing errors build up rapidly. With robust coherence provided by our new technique, quantum computers could solve in hours or minutes some problems that would take conventional or ‘classical’ computers – even supercomputers – centuries,” Professor Jamieson said.
    Quantum bits or qubits* – the building blocks of quantum computers – are susceptible to tiny changes in their environment, including temperature fluctuations. Even when operated in tranquil refrigerators near absolute zero (minus 273 degrees Celsius), current quantum computers can maintain error-free coherence for only a tiny fraction of a second.  
    University of Manchester co-supervisor Professor Richard Curry said ultra-pure silicon allowed construction of high-performance qubit devices – a critical component required to pave the way towards scalable quantum computers.  
    “What we’ve been able to do is effectively create a critical ‘brick’ needed to construct a silicon-based quantum computer. It’s a crucial step to making a technology that has the potential to be transformative for humankind,” Professor Curry said. 
    Lead author Ravi Acharya, a joint University of Manchester/University of Melbourne Cookson Scholar, said the great advantage of silicon chip quantum computing was that it relies on the same essential techniques used to make the chips in today’s computers.

    “Electronic chips currently within an everyday computer consist of billions of transistors — these can also be used to create qubits for silicon-based quantum devices. The ability to create high quality silicon qubits has in part been limited to date by the purity of the silicon starting material used. The breakthrough purity we show here solves this problem.” 
    Professor Jamieson said the new highly purified silicon computer chips house and protect the qubits so they can sustain quantum coherence much longer, enabling complex calculations with greatly reduced need for error correction.
    “Our technique opens the path to reliable quantum computers that promise step changes across society, including in artificial intelligence, secure data and communications, vaccine and drug design, and energy use, logistics and manufacturing,” he said.
    Silicon – made from beach sand – is the key material for today’s information technology industry because it is an abundant and versatile semiconductor: it can act as a conductor or an insulator of electrical current, depending on which other chemical elements are added to it.
    “Others are experimenting with alternatives, but we believe silicon is the leading candidate for quantum computer chips that will enable the enduring coherence required for reliable quantum calculations,” Professor Jamieson said.
    “The problem is that while naturally occurring silicon is mostly the desirable isotope silicon-28, there’s also about 4.5 percent silicon-29. Silicon-29 has an extra neutron in each atom’s nucleus that acts like a tiny rogue magnet, destroying quantum coherence and creating computing errors,” he said.

    The researchers directed a focused, high-speed beam of pure silicon-28 at a silicon chip so the silicon-28 gradually replaced the silicon-29 atoms in the chip, reducing silicon-29 from 4.5 per cent to two parts per million (0.0002 per cent). 
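    As a quick back-of-the-envelope check on those figures, the reported purification corresponds to reducing the silicon-29 fraction by more than four orders of magnitude:

    $$\frac{4.5\times 10^{-2}}{2\times 10^{-6}} = 22{,}500$$

    That is, roughly a 22,500-fold depletion of the spin-carrying isotope relative to natural silicon.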
    “The great news is to purify silicon to this level, we can now use a standard machine – an ion implanter – that you would find in any semiconductor fabrication lab, tuned to a specific configuration that we designed,” Professor Jamieson said.
    In previously published research with the ARC Centre of Excellence for Quantum Computation and Communication Technology, the University of Melbourne set – and still holds – the world record for single-qubit coherence of 30 seconds using less-purified silicon. Thirty seconds is ample time to complete error-free, complex quantum calculations.
    Professor Jamieson said the largest existing quantum computers had more than 1,000 qubits, but errors occurred within milliseconds due to lost coherence.
    “Now that we can produce extremely pure silicon-28, our next step will be to demonstrate that we can sustain quantum coherence for many qubits simultaneously. A reliable quantum computer with just 30 qubits would exceed the power of today’s supercomputers for some applications,” he said.
    This latest work was supported by research grants from the Australian and UK governments.  Professor Jamieson’s collaboration with the University of Manchester is supported by a Royal Society Wolfson Visiting Fellowship.
    A 2020 report from Australia’s CSIRO estimated that quantum computing in Australia has potential to create 10,000 jobs and $2.5 billion in annual revenue by 2040.
    “Our research takes us significantly closer to realising this potential,” Professor Jamieson said.
    *A qubit – such as an atomic nucleus, electron, or photon – is a quantum object when it is in a quantum superposition of multiple states. Coherence is lost when the qubit reverts to a single state and becomes a classical object like a conventional computer bit, which is only ever one or zero and never in superposition.
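    In standard notation (a textbook illustration rather than anything specific to the phosphorus qubits described above), a qubit state is a superposition of the two basis states,

    $$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,$$

    and measuring it yields 0 with probability $|\alpha|^{2}$ or 1 with probability $|\beta|^{2}$. Decoherence is the uncontrolled loss of the definite relationship between $\alpha$ and $\beta$ – the “reverting to a single state” the footnote describes.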

  • Engineers develop innovative microbiome analysis software tools

    Since the first microbial genome was sequenced in 1995, scientists have reconstructed the genomic makeup of hundreds of thousands of microorganisms and have even devised methods to take a census of bacterial communities on the skin, in the gut, or in soil, water and elsewhere based on bulk samples, leading to the emergence of a relatively new field of study known as metagenomics.
    Parsing through metagenomic data can be a daunting task, much like trying to assemble several massive jigsaw puzzles with all of the pieces jumbled together. Taking on this unique computational challenge, Rice University graph-artificial intelligence (AI) expert Santiago Segarra and computational biologist Todd Treangen paired up to explore how AI-powered data analysis could help craft new tools to supercharge metagenomics research.
    The scientist duo zeroed in on two types of data that make metagenomic analysis particularly challenging — repeats and structural variants — and developed tools for handling these data types that outperform current methods.
    Repeats are identical DNA sequences occurring repeatedly both throughout the genome of single organisms and across multiple genomes in a community of organisms.
    “The DNA in a metagenomic sample from multiple organisms can be represented as a graph,” said Segarra, assistant professor of electrical and computer engineering. “Essentially, one of the tools we developed leverages the structure of this graph in order to determine which pieces of DNA appear repeatedly either across microbes or within the same microorganism.”
    Dubbed GraSSRep, the method combines self-supervised learning, a machine learning approach in which a model generates its own training signal by predicting hidden parts of the input from the parts it can see, and graph neural networks, systems that process data representing objects and their interconnections as graphs. The peer-reviewed paper was presented at RECOMB 2024, the 28th edition of a leading annual international conference on research in computational molecular biology. The project was led by Rice graduate student and research assistant Ali Azizpour. Advait Balaji, a Rice doctoral alumnus, is also an author on the study.
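    To make the idea concrete, the sketch below is a toy illustration, not the GraSSRep implementation: the graph, features, thresholds and classifier are simplified placeholders. It builds a small assembly-style graph, derives pseudo-labels from obvious structural cues, and lets a model generalise from graph features without any human-curated ground truth:

```python
# Minimal sketch of graph-based repeat detection with self-generated labels.
# NOT the GraSSRep implementation: features, thresholds and the classifier
# are simplified placeholders chosen only to illustrate the workflow.
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Toy assembly graph: nodes are contigs, edges are overlaps between them.
G = nx.Graph()
G.add_edges_from([
    ("c1", "c2"), ("c2", "c3"), ("c3", "c4"),   # a linear stretch of contigs
    ("r1", "c1"), ("r1", "c3"), ("r1", "c5"),   # "r1" appears in many contexts
    ("r1", "c6"), ("c5", "c6"),
])
coverage = {"c1": 30, "c2": 28, "c3": 31, "c4": 29,
            "c5": 27, "c6": 30, "r1": 95}       # repeats show inflated coverage

def features(node):
    # Structural features from the graph plus read coverage.
    return [G.degree(node), nx.clustering(G, node), coverage[node]]

# Pseudo-labels from an obvious heuristic (high degree AND high coverage);
# the model then generalises from graph features -- no curated ground truth.
nodes = list(G.nodes)
X = [features(n) for n in nodes]
pseudo_y = [1 if G.degree(n) >= 3 and coverage[n] > 60 else 0 for n in nodes]

clf = LogisticRegression(max_iter=1000).fit(X, pseudo_y)
for node, prob in zip(nodes, clf.predict_proba(X)[:, 1]):
    print(f"{node}: P(repeat) = {prob:.2f}")
```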
    Repeats are of interest because they play a significant role in biological processes such as bacterial response to changes in their environment or microbiomes’ interaction with host organisms. A specific example of a phenomenon where repeats can play a role is antibiotic resistance. Generally speaking, tracking repeats’ history or dynamics in a bacterial genome can shed light on microorganisms’ strategies for adaptation or evolution. What’s more, repeats can sometimes actually be viruses in disguise, or bacteriophages. From the Greek word for “devour,” phages are sometimes used to kill bacteria.

    “These phages actually show up looking like repeats, so you can track bacteria-phage dynamics based off the repeats contained in the genomes,” said Treangen, associate professor of computer science. “This could provide clues on how to get rid of hard-to-kill bacteria, or paint a clearer picture of how these viruses are interacting with a bacterial community.”
    Previously when a graph-based approach was used to carry out repeat detection, researchers used predefined specifications for what to look for in the graph data. What sets GraSSRep apart from these prior approaches is the lack of any such predefined parameters or references informing how the data is processed.
    “Our method learns how to better use the graph structure in order to detect repeats as opposed to relying on initial input,” Segarra said. “Self-supervised learning allows this tool to train itself in the absence of any ground truth establishing what is a repeat and what is not a repeat. When you’re handling a metagenomic sample, you don’t need to know anything about what’s in there to analyze it.”
    The same is true in the case of another metagenomic analysis method co-developed by Segarra and Treangen — reference-free structural variant detection in microbiomes via long-read coassembly graphs, or rhea. Their peer-reviewed paper on rhea will be presented at the International Society for Computational Biology’s annual conference, which will take place July 12-16 in Montreal. The lead author on the paper is Rice computer science doctoral alumna Kristen Curry, who will be joining the lab of Rayan Chikhi — also a co-author on the paper — at the Institut Pasteur in Paris as a postdoctoral scientist.
    While GraSSRep is designed to deal with repeats, rhea handles structural variants, which are genomic alterations of 10 base pairs or more that are relevant to medicine and molecular biology due to their role in various diseases, gene expression regulation, evolutionary dynamics and promoting genetic diversity within populations and among species.
    “Identifying structural variants in isolated genomes is relatively straightforward, but it’s harder to do so in metagenomes where there’s no clear reference genome to help categorize the data,” Treangen said.

    One of the most widely used approaches to processing metagenomic data today is the construction of metagenome-assembled genomes, or MAGs.
    “These de novo or reference-guided assemblers are pretty well-established tools that entail a whole operational pipeline with repeat detection or structural variants’ identification being just some of their functionalities,” Segarra said. “One thing that we’re looking into is replacing existing algorithms with ours and seeing how that can improve the performance of these very widely used metagenomic assemblers.”
    Rhea does not need reference genomes or MAGs to detect structural variants, and it outperformed methods relying on such prespecified parameters when tested against two mock metagenomes.
    “This was particularly noticeable because we got a much more granular read of the data than we did using reference genomes,” Segarra said. “The other thing that we’re currently looking into is applying the tool to real-world datasets and seeing how the results relate back to biological processes and what insights this might give us.”
    Treangen said GraSSRep and rhea combined — building on previous contributions in the area — have the potential “to unlock the underlying rules of life governing microbial evolution.”
    The projects are the result of a yearslong collaboration between the Segarra and Treangen labs.

  • Researchers use foundation models to discover new cancer imaging biomarkers

    Researchers at Mass General Brigham have harnessed the technology behind foundation models, which power tools like ChatGPT, to discover new cancer imaging biomarkers that could transform how patterns are identified from radiological images. Improved identification of such patterns can greatly impact the early detection and treatment of cancer.
    The research team developed their foundation model using a comprehensive dataset consisting of 11,467 images of abnormal radiologic scans. Using these images, the model was able to identify patterns that predict anatomical site, malignancy, and prognosis across three different use cases in four cohorts. Compared to existing methods in the field, their approach remained powerful when applied to specialized tasks where only limited data are available. Results are published in Nature Machine Intelligence.
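    As a generic illustration of how a foundation model can be reused when task-specific data are scarce (this is not the Mass General Brigham model or dataset; the architecture, input size and label count below are placeholders), a common recipe is to freeze a pretrained encoder and fit only a small task-specific head:

```python
# Sketch: reuse a frozen pretrained encoder and train only a small head.
# Placeholder architecture and data -- not the published foundation model.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(weights=None)   # stand-in for a pretrained encoder
encoder.fc = nn.Identity()                # expose 512-d features
for p in encoder.parameters():
    p.requires_grad = False               # freeze: limited data tunes only the head

head = nn.Linear(512, 2)                  # e.g. benign vs. malignant
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny fake batch standing in for a small, task-specific labelled cohort.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

with torch.no_grad():
    feats = encoder(images)               # reusable representation
logits = head(feats)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print("training loss on the toy batch:", float(loss))
```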
    “Given that image biomarker studies are tailored to answer increasingly specific research questions, we believe that our work will enable more accurate and efficient investigations,” said first author Suraj Pai from the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham.
    Despite the improved efficacy of AI methods, a key question remains their reliability and explainability (the concept that an AI’s answers can be explained in a way that “makes sense” to humans). The researchers demonstrated that their methods remained stable across inter-reader variations and differences in acquisition. Patterns identified by the foundation model also demonstrated strong associations with underlying biology, mainly correlating with immune-related pathways.
    “Our findings demonstrate the efficacy of foundation models in medicine when only limited data might be available for training deep learning networks, especially when applied to identifying reliable imaging biomarkers for cancer-associated use cases,” said senior author Hugo Aerts, PhD, director of the AIM Program.

  • Why getting in touch with our ‘gerbil brain’ could help machines listen better

    Macquarie University researchers have debunked a 75-year-old theory about how humans determine where sounds are coming from, and it could unlock the secret to creating a next generation of more adaptable and efficient hearing devices ranging from hearing aids to smartphones.
    In the 1940s, an engineering model was developed to explain how humans can locate a sound source based on differences of just a few tens of millionths of a second in when the sound reaches each ear.
    This model worked on the theory that we must have a set of specialised detectors whose only function was to determine where a sound was coming from, with location in space represented by a dedicated neuron.
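    The timing differences the model was built to explain follow from simple geometry. With an ear-to-ear distance of roughly 0.18 m and sound travelling at about 343 m/s (illustrative figures, not values from the study), a source only a couple of degrees off centre delays the sound at the far ear by a few tens of microseconds:

    $$\mathrm{ITD} \approx \frac{d\,\sin\theta}{c} = \frac{0.18\ \text{m}\times\sin 2^{\circ}}{343\ \text{m/s}} \approx 18\ \mu\text{s}$$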
    Its assumptions have been guiding and influencing research — and the design of audio technologies — ever since.
    But a new research paper published in Current Biology by Macquarie University Hearing researchers has finally revealed that the idea of a neural network dedicated to spatial hearing does not hold.
    Lead author, Macquarie University Distinguished Professor of Hearing, David McAlpine, has spent the past 25 years proving that one animal after another was actually using a much sparser neural network, with neurons on both sides of the brain performing this function in addition to others.
    Showing this in action in humans was more difficult.

    Now through the combination of a specialised hearing test, advanced brain imaging, and comparisons with the brains of other mammals including rhesus monkeys, he and his team have shown for the first time that humans also use these simpler networks.
    “We like to think that our brains must be far more advanced than other animals in every way, but that is just hubris,” Professor McAlpine says.
    “We’ve been able to show that gerbils are like guinea pigs, guinea pigs are like rhesus monkeys, and rhesus monkeys are like humans in this regard.
    “A sparse, energy efficient form of neural circuitry performs this function — our gerbil brain, if you like.”
    The research team also proved that the same neural network separates speech from background sounds — a finding that is significant for the design of both hearing devices and the electronic assistants in our phones.
    All types of machine hearing struggle with the challenge of hearing in noise, known as the ‘cocktail party problem’. It makes it difficult for people with hearing devices to pick out one voice in a crowded space, and for our smart devices to understand when we talk to them.

    Professor McAlpine says his team’s latest findings suggest that rather than focusing on the large language models (LLMs) that are currently used, we should be taking a far simpler approach.
    “LLMs are brilliant at predicting the next word in a sentence, but they’re trying to do too much,” he says.
    “Being able to locate the source of a sound is the important thing here, and to do that, we don’t need a ‘deep mind’ language brain. Other animals can do it, and they don’t have language.
    “When we are listening, our brains don’t keep tracking sound the whole time, which the large language processors are trying to do.
    “Instead, we, and other animals, use our ‘shallow brain’ to pick out very small snippets of sound, including speech, and use these snippets to tag the location and maybe even the identity of the source.
    “We don’t have to reconstruct a high-fidelity signal to do this, but instead understand how our brain represents that signal neurally, well before it reaches a language centre in the cortex.
    “This shows us that a machine doesn’t have to be trained for language like a human brain to be able to listen effectively.
    “We only need that gerbil brain.”
    The next step for the team is to identify the minimum amount of information that can be conveyed in a sound while still achieving the maximum amount of spatial listening.

  • Cybersecurity education varies widely in US

    Cybersecurity programs vary dramatically across the country, a review has found. The authors argue that program leaders should work with professional societies to make sure graduates are well trained to meet industry needs in a fast-changing field.
    In the review, published in the Proceedings of the Association for Computing Machinery’s Technical Symposium on Computer Science Education, a Washington State University-led research team found a shortage of research in evaluating the instructional approaches being used to teach cybersecurity. The authors also contend that programs could benefit from increasing their use of educational and instructional tools and theories.
    “There is a huge variation from school to school on how much cybersecurity content is required for students to take,” said co-author Assefaw Gebremedhin, associate professor in the WSU School of Electrical Engineering and Computer Science and leader of the U.S. Department of Defense-funded VICEROY Northwest Institute for Cybersecurity Education and Research (CySER). “We found that programs could benefit from using ideas from other fields, such as educational psychology, in which there would be a little more rigorous evaluation.”
    Cybersecurity is an increasingly important field of study because compromised data or network infrastructure can directly impact people’s privacy, livelihoods and safety. The researchers also noted that adversaries change their tactics frequently, and cybersecurity professionals must be able to respond effectively.
    As part of the study, the researchers analyzed programs at 100 institutions throughout the U.S. that are designated by the National Security Agency (NSA) as National Centers of Academic Excellence in Cybersecurity. To hold the designation, programs must meet the NSA’s requirements for educational content and quality.
    The researchers assessed factors such as the number and type of programs offered, the number of credits focused on cybersecurity courses, listed learning outcomes and lists of professional jobs available for graduates.
    They found that while the NSA designation provides requirements for the amount of cybersecurity content included in curricula, the center of excellence institutions vary widely in the types of programs they offer and how many cybersecurity-specific courses they provide. Half of the programs offered bachelor’s degrees, while other programs offered certificates, associate degrees, minors or concentration tracks.

    The most common type of program offered was a certificate, and most of the programs were housed within engineering, computer science, or technology schools or departments. The researchers also found that industry professionals expect different skill levels from those that graduates of the programs actually have.
    The researchers hope the work will serve as a benchmark to compare programs across the U.S. and as a roadmap toward better meeting industry needs.
    With funding from the state of Washington, WSU began offering a cybersecurity degree last year. The oldest cybersecurity programs are only about 25 years old, said Gebremedhin, but programs have traditionally been training students to become information technology professionals or system administrators.
    “In terms of maturity, in being a discipline as a separate degree program, cybersecurity is relatively new, even for computer science,” said Gebremedhin.
    The field is also constantly changing.
    “In cyber operations, you want to be on offense,” he said. “If you are to defend, then you need to stay ahead of your attacker, and if they keep changing, you have to be changing at a faster rate.”

  • Caterbot? Robatapillar? It crawls with ease through loops and bends

    Engineers at Princeton and North Carolina State University have combined the ancient art of paper folding and modern materials science to create a soft robot that bends and twists through mazes with ease.
    Soft robots can be challenging to guide because steering equipment often increases the robot’s rigidity and cuts its flexibility. The new design overcomes those problems by building the steering system directly into the robot’s body, said Tuo Zhao, a postdoctoral researcher at Princeton.
    In an article published May 6 in the journal PNAS, the researchers describe how they created the robot out of modular, cylindrical segments. The segments, which can operate independently or join to form a longer unit, all contribute to the robot’s ability to move and steer. The new system allows the flexible robot to crawl forward and reverse, pick up cargo and assemble into longer formations.
    “The concept of modular soft robots can provide insight into future soft robots that can grow, repair, and develop new functions,” the authors write in their article.
    Zhao said the robot’s ability to assemble and split up on the move allows the system to work as a single robot or a swarm.
    “Each segment can be an individual unit, and they can communicate with each other and assemble on command,” he said. “They can separate easily, and we use magnets to connect them.”
    Zhao works in Glaucio Paulino’s lab in the Department of Civil and Environmental Engineering and the Princeton Materials Institute. Paulino, the Margareta Engman Augustine Professor of Engineering, has created a body of research that applies origami to a wide array of engineering applications from medical devices to aerospace and construction.

    “We have created a bio-inspired plug-and-play soft modular origami robot enabled by electrothermal actuation with highly bendable and adaptable heaters,” Paulino said. “This is a very promising technology with potential translation to robots that can grow, heal, and adapt on demand.”
    In this case, the researchers began by building their robot out of cylindrical segments featuring an origami form called a Kresling pattern. The pattern allows each segment to twist into a flattened disk and expand back into a cylinder. This twisting, expanding motion is the basis for the robot’s ability to crawl and change direction. By partially folding a section of the cylinder, the researchers can introduce a lateral bend in a robot segment. By combining small bends, the robot changes direction as it moves forward.
    One of the most challenging aspects of the work involved developing a mechanism to control the bending and folding motions used to drive and steer the robot. Researchers at North Carolina State University developed the solution. They used two materials that shrink or expand differently when heated (liquid crystal elastomer and polyimide) and combined them into thin strips along the creases of the Kresling pattern. The researchers also installed a thin stretchable heater made of a silver nanowire network along each fold. Electrical current on the nanowire heater heats the control strips, and the two materials’ different expansion introduces a fold in the strip. By calibrating the current and the materials used in the control strips, the researchers can precisely control the folding and bending to drive the robot’s movement and steering.
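    The folding action is a version of the classic bimorph effect. As a rough textbook estimate (not the authors’ model; equal layer thicknesses and similar stiffnesses are assumed), two bonded layers heated by $\Delta T$ whose expansion coefficients differ by $\Delta\alpha$ curl with curvature

    $$\kappa \approx \frac{3\,\Delta\alpha\,\Delta T}{2\,h},$$

    where $h$ is the total strip thickness, so a larger expansion mismatch or a thinner strip produces a sharper bend at the crease.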
    “Silver nanowire is an excellent material to fabricate stretchable conductors. Stretchable conductors are building blocks for a variety of stretchable electronic devices, including stretchable heaters. Here we used the stretchable heater as the actuation mechanism for the bending and folding motions,” said Yong Zhu, the Andrew A. Adams Distinguished Professor in the Department of Mechanical and Aerospace Engineering at N.C. State and one of the lead researchers.
    Shuang Wu, a postdoctoral researcher in Zhu’s lab, said the lab’s previous work used the stretchable heater for continuously bending a bilayer structure. “In this work we achieved localized, sharp folding to actuate the origami pattern. This effective actuation method can be generally applied to origami structures (with creases) for soft robotics,” Wu said.
    The researchers said that the current version of the robot has limited speed, and they are working to increase the locomotion in later generations.
    Zhao said the researchers also plan to experiment with different shapes, patterns, and instabilities to improve both the speed and the steering. Support for the research was provided in part by the National Science Foundation and the National Institutes of Health.

  • Simulated chemistry: New AI platform designs tomorrow’s cancer drugs

    Scientists at UC San Diego have developed a machine learning algorithm to simulate the time-consuming chemistry involved in the earliest phases of drug discovery, which could significantly streamline the process and open doors for never-before-seen treatments. Identifying candidate drugs for further optimization typically involves thousands of individual experiments, but the new artificial intelligence (AI) platform could potentially give the same results in a fraction of the time. The researchers used the new tool, described in Nature Communications, to synthesize 32 new drug candidates for cancer.
    The technology is part of a new but growing trend in pharmaceutical science of using AI to improve drug discovery and development.
    “A few years ago, AI was a dirty word in the pharmaceutical industry, but now the trend is definitely the opposite, with biotech startups finding it difficult to raise funds without addressing AI in their business plan,” said senior author Trey Ideker, professor in the Department of Medicine at UC San Diego School of Medicine and adjunct professor of bioengineering and computer science at the UC San Diego Jacobs School of Engineering. “AI-guided drug discovery has become a very active area in industry, but unlike the methods being developed in companies, we’re making our technology open source and accessible to anybody who wants to use it.”
    The new platform, called POLYGON, is unique among AI tools for drug discovery in that it can identify molecules with multiple targets, while existing drug discovery protocols currently prioritize single target therapies. Multi-target drugs are of major interest to doctors and scientists because of their potential to deliver the same benefits as combination therapy, in which several different drugs are used together to treat cancer, but with fewer side effects.
    “It takes many years and millions of dollars to find and develop a new drug, especially if we’re talking about one with multiple targets,” said Ideker. “The rare few multi-target drugs we do have were discovered largely by chance, but this new technology could help take chance out of the equation and kickstart a new generation of precision medicine.”
    The researchers trained POLYGON on a database of over a million known bioactive molecules containing detailed information about their chemical properties and known interactions with protein targets. By learning from patterns found in the database, POLYGON is able to generate original chemical formulas for new candidate drugs that are likely to have certain properties, such as the ability to inhibit specific proteins.
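    To give a flavour of what scoring candidates against desired chemical properties can look like in practice (a toy screen, not POLYGON; the molecules and descriptors below are illustrative placeholders), the sketch computes a few standard properties for hand-picked molecules with RDKit:

```python
# Toy property screen standing in for multi-objective candidate scoring.
# NOT the POLYGON model: it only computes simple descriptors for a few
# hand-picked SMILES strings and prints them for comparison.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

candidate_smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",         # aspirin, a familiar small molecule
    "CCN(CC)CCNC(=O)c1ccc(N)cc1",    # procainamide
    "c1ccc2cc3ccccc3cc2c1",          # anthracene (poor drug-likeness)
]

for smi in candidate_smiles:
    mol = Chem.MolFromSmiles(smi)
    score = QED.qed(mol)                 # 0..1 drug-likeness proxy
    mw = Descriptors.MolWt(mol)          # molecular weight
    logp = Descriptors.MolLogP(mol)      # lipophilicity estimate
    print(f"{smi}: QED={score:.2f}, MW={mw:.1f}, logP={logp:.2f}")
```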
    “Just like AI is now very good at generating original drawings and pictures, such as creating pictures of human faces based off desired properties like age or sex, POLYGON is able to generate original molecular compounds based off of desired chemical properties,” said Ideker. “In this case, instead of telling the AI how old we want our face to look, we’re telling it how we want our future drug to interact with disease proteins.”
    To put POLYGON to the test, the researchers used it to generate hundreds of candidate drugs that target various pairs of cancer-related proteins. Of these, the researchers synthesized 32 molecules that had the strongest predicted interactions with the MEK1 and mTOR proteins, a pair of cellular signaling proteins that are a promising target for cancer combination therapy. These two proteins are what scientists call synthetically lethal, which means that inhibiting both together is enough to kill cancer cells even if inhibiting one alone is not.

    The researchers found that the drugs they synthesized had significant activity against MEK1 and mTOR, but had few off-target reactions with other proteins. This suggests that one or more of the drugs identified by POLYGON could be able to target both proteins as a cancer treatment, providing a list of choices for fine-tuning by human chemists.
    “Once you have the candidate drugs, you still need to do all the other chemistry it takes to refine those options into a single, effective treatment,” said Ideker. “We can’t and shouldn’t try to eliminate human expertise from the drug discovery pipeline, but what we can do is shorten a few steps of the process.”
    Despite this caution, the researchers are optimistic that the possibilities of AI for drug discovery are only just being explored.
    “Seeing how this concept plays out over the next decade, both in academia and in the private sector, is going to be very exciting,” said Ideker. “The possibilities are virtually endless.”
    This study was funded, in part, by the National Institutes of Health (Grants CA274502, GM103504, ES014811, CA243885, CA212456).

  • Experiment opens door for millions of qubits on one chip

    Researchers from the University of Basel and the NCCR SPIN have achieved the first controllable interaction between two hole spin qubits in a conventional silicon transistor. The breakthrough opens up the possibility of integrating millions of these qubits on a single chip using mature manufacturing processes.
    The race to build a practical quantum computer is well underway. Researchers around the world are working on a huge variety of qubit technologies. So far, there is no consensus on what type of qubit is most suitable for maximizing the potential of quantum information science.
    Qubits are the foundation of a quantum computer: they handle the processing, transfer and storage of data. To work correctly, they have to both reliably store and rapidly process information. The basis for rapid information processing is stable and fast interactions between a large number of qubits whose states can be reliably controlled from the outside.
    For a quantum computer to be practical, millions of qubits must be accommodated on a single chip. The most advanced quantum computers today have only a few hundred qubits, meaning they can only perform calculations that conventional computers can already handle, often more efficiently.
    Electrons and holes
    To solve the problem of arranging and linking thousands of qubits, researchers at the University of Basel and the NCCR SPIN rely on a type of qubit that uses the spin (intrinsic angular momentum) of an electron or a hole. A hole is essentially a missing electron in a semiconductor. Both holes and electrons possess spin, which can adopt one of two states: up or down, analogous to 0 and 1 in classical bits. Compared to an electron spin, a hole spin has the advantage that it can be entirely electrically controlled without needing additional components like micromagnets on the chip.
    As early as 2022, Basel physicists were able to show that the hole spins in an existing electronic device can be trapped and used as qubits. These “FinFETs” (fin field-effect transistors) are built into modern smartphones and are produced in widespread industrial processes. Now, a team led by Dr. Andreas Kuhlmann has succeeded for the first time in achieving a controllable interaction between two qubits within this setup.

    Fast and precise controlled spin-flip
    A quantum computer needs “quantum gates” to perform calculations. These represent operations that manipulate the qubits and couple them to each other. As the researchers report in the journal Nature Physics, they were able to couple two qubits and bring about a controlled flip of one of their spins, depending on the state of the other’s spin — known as a controlled spin-flip. “Hole spins allow us to create two-qubit gates that are both fast and high-fidelity. This principle now also makes it possible to couple a larger number of qubit pairs,” says Kuhlmann.
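    Written as an idealised textbook gate (not a model of the FinFET device physics, and mapping spin-down to |1⟩ purely by convention), a controlled spin-flip is the CNOT operation: the target qubit flips only when the control qubit is in |1⟩.

```python
# Idealised controlled spin-flip (CNOT) acting on two-qubit basis states.
# A textbook illustration only -- not a model of the hole-spin device.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

basis = {"|00>": 0, "|01>": 1, "|10>": 2, "|11>": 3}
for label, idx in basis.items():
    state = np.zeros(4, dtype=complex)
    state[idx] = 1.0
    out = CNOT @ state
    result = list(basis)[int(np.argmax(np.abs(out)))]
    print(f"{label} -> {result}")   # the target flips only when the control is 1
```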
    The coupling of two spin qubits is based on their exchange interaction, which occurs between two indistinguishable particles that interact with each other electrostatically. Surprisingly, the exchange energy of holes is not only electrically controllable but also strongly anisotropic. This is a consequence of spin-orbit coupling, which means that the spin state of a hole is influenced by its motion through space.
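    In textbook form (a generic illustration, not the specific Hamiltonian fitted in the paper), an isotropic exchange interaction couples two spins as $H = J\,\mathbf{S}_1\cdot\mathbf{S}_2$, whereas an anisotropic exchange allows a different coupling strength for each pair of spin components:

    $$H = \sum_{a,b\,\in\,\{x,y,z\}} J_{ab}\, S_1^{a}\, S_2^{b}$$

    It is this directional dependence, inherited from spin-orbit coupling, that the researchers observed in the hole-spin system.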
    To describe this observation in a model, experimental and theoretical physicists at the University of Basel and the NCCR SPIN combined forces. “The anisotropy makes two-qubit gates possible without the usual trade-off between speed and fidelity,” Dr. Kuhlmann says in summary.
    “Qubits based on hole spins not only leverage the tried-and-tested fabrication of silicon chips, they are also highly scalable and have proven to be fast and robust in experiments.” The study underscores that this approach has a strong chance in the race to develop a large-scale quantum computer.