More stories

  • Optimizing SWAP networks for quantum computing

    A research partnership between the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (Berkeley Lab) and Chicago-based Super.tech (acquired by ColdQuanta in May 2022) demonstrated how to optimize execution of the ZZ SWAP network protocol, an important primitive in quantum computing. The team also introduced a new technique for quantum error mitigation that will improve the protocol’s implementation on quantum processors. The experimental data, published this July in Physical Review Research, adds near-term pathways for implementing quantum algorithms with gate-based quantum computing.
    A Smart Compiler for Superconducting Quantum Hardware
    Quantum processors with two- or three-dimensional architectures have restricted connectivity: each qubit interacts directly with only a few other qubits. Furthermore, each qubit’s information survives only so long before noise and errors cause decoherence, limiting the runtime and fidelity of quantum algorithms. Therefore, when designing and executing a quantum circuit, researchers must optimize the translation of a circuit made up of abstract (logical) gates into physical instructions based on the native gates available on a given quantum processor. Efficient circuit decompositions minimize operating time because they account for the number of gates and operations the hardware natively supports when performing the desired logical operations.
    SWAP gates — which exchange information between qubits — are often introduced into quantum circuits to facilitate interactions between information held in non-adjacent qubits. If a quantum device only allows gates between adjacent qubits, SWAPs are used to move information from one qubit to a non-adjacent one.
    In noisy intermediate-scale quantum (NISQ) hardware, introducing SWAP gates can carry a large experimental overhead, because each SWAP must typically be decomposed into native gates such as controlled-NOT (CNOT) gates. Therefore, when designing quantum circuits for hardware with limited qubit connectivity, it is important to use a smart compiler that can search for, decompose, and cancel redundant quantum gates to improve the runtime of a quantum algorithm or application.
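    To make that overhead concrete, the standard textbook identity decomposes one SWAP into three back-to-back CNOTs. The following minimal NumPy sketch (an illustration of that identity, not the team’s compiler) verifies it:
    ```python
    import numpy as np

    # Two-qubit gates in the basis |q1 q0>: 00, 01, 10, 11
    SWAP = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]])

    cnot_a = np.array([[1, 0, 0, 0],   # CNOT: control q0, target q1
                       [0, 0, 0, 1],
                       [0, 0, 1, 0],
                       [0, 1, 0, 0]])

    cnot_b = np.array([[1, 0, 0, 0],   # CNOT: control q1, target q0
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]])

    # One SWAP costs three native CNOTs -- the kind of overhead a
    # smart compiler tries to shrink by cancelling redundant gates.
    assert np.array_equal(cnot_a @ cnot_b @ cnot_a, SWAP)
    print("SWAP == CNOT-CNOT-CNOT verified")
    ```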
    The research partnership used Super.tech’s SuperstaQ software, which enables scientists to finely tailor their applications and automate the compilation of circuits for AQT’s superconducting hardware, particularly for a native high-fidelity controlled-S gate that is not available on most hardware systems. This smart compiling approach on four transmon qubits allowed the SWAP networks to be decomposed more efficiently than with standard decomposition methods.

  • The Windchime experiment could use gravity to hunt for dark matter ‘wind’

    The secret to directly detecting dark matter might be blowin’ in the wind.

    The mysterious substance continues to elude scientists even though it outweighs visible matter in the universe by about 8 to 1. All laboratory attempts to directly detect dark matter — seen only indirectly through the effect its gravity has on the motions of stars and galaxies — have so far come up empty.

    Those attempts have relied on the hope that dark matter has at least some other interaction with ordinary matter in addition to gravity (SN: 10/25/16). But a proposed experiment called Windchime, though decades from being realized, will try something new: It will search for dark matter using the only force it is guaranteed to feel — gravity.

    “The core idea is extremely simple,” says theoretical physicist Daniel Carney, who described the scheme in May at a meeting of the American Physical Society’s Division of Atomic, Molecular and Optical Physics in Orlando, Fla. Like a wind chime on a porch rattling in a breeze, the Windchime detector would try to sense a dark matter “wind” blowing past Earth as the solar system whips around the galaxy.

    If the Milky Way is mostly a cloud of dark matter, as astronomical measurements suggest, then we should be sailing through it at about 200 kilometers per second. This creates a dark matter wind, for the same reason you feel a wind when you stick your hand out the window of a moving car.

    The Windchime detector is based on the notion that a collection of pendulums will swing in a breeze. In the case of backyard wind chimes, it might be metal rods or dangling bells that jingle in moving air. For the dark matter detector, the pendulums are arrays of minute, ultrasensitive detectors that will be jostled by the gravitational forces they feel from passing bits of dark matter. Instead of air molecules bouncing off metal chimes, the gravitational attraction of the particles that make up the dark matter wind would cause distinctive ripples as it blows through a billion or so sensors in a box measuring about a meter per side.

    Within the Windchime detector (illustrated as an array of small pendulums), a passing dark matter particle (red dot) would gravitationally tug on sensors (blue squares) and cause a detectable ripple, much like wind blowing through a backyard wind chime. (Image: D. Carney et al/Physical Review D 2020)

    While it may seem logical to search for dark matter using gravity, no one has tried it in the nearly 40 years that scientists have been pursuing dark matter in the lab. That’s because gravity is, comparatively, a very weak force and difficult to isolate in experiments. 

    “You’re looking for dark matter to [cause] a gravitational signal in the sensor,” says Carney, of Lawrence Berkeley National Laboratory in California. “And you just ask . . . could I possibly see this gravitational signal? When you first make the estimate, the answer is no. It’s actually going to be infeasibly difficult.”
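
    A rough version of that first estimate takes only a few lines (a back-of-the-envelope illustration with assumed numbers, not Carney’s published calculation): the gravitational acceleration a ≈ Gm/b² from a particle of mass m at closest approach b acts for a flyby time of order b/v, giving a velocity kick Δv ≈ Gm/(bv).

    ```python
    # Back-of-the-envelope velocity kick on one sensor from a passing
    # dark matter particle. All numbers are illustrative assumptions,
    # not the Windchime collaboration's sensitivity analysis.
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    m = 2e-8        # particle mass, kg (~ a fine speck of dust)
    v = 200e3       # dark matter "wind" speed, m/s
    b = 1e-3        # closest approach to the sensor, m

    delta_v = G * m / (b * v)   # acceleration G*m/b^2 over time b/v
    print(f"velocity kick ~ {delta_v:.1e} m/s")   # ~ 6.7e-21 m/s
    ```

    A kick this small, roughly twenty orders of magnitude below everyday velocities, is why the measurement at first looks infeasible and why decades of sensor development are expected.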

    That didn’t stop Carney and a small group of colleagues from exploring the idea anyway in 2020. “Thirty years ago, this would have been totally nuts to propose,” he says. “It’s still kind of nuts, but it’s like borderline insanity.”

    The Windchime Project collaboration has since grown to include 20 physicists. They have a prototype Windchime built of commercial accelerometers and are using it to develop the software and analysis that will lead to the final version of the detector, but it’s a far cry from the ultimate design. Carney estimates that it could take another few decades to develop sensors good enough to measure gravity even from heavy dark matter.

    Carney bases the timeline on the development of the Laser Interferometer Gravitational-Wave Observatory, or LIGO, which was designed to look for gravitational ripples coming from black holes colliding (SN: 2/11/16). When LIGO was first conceived, he says, it was clear that the technology would need to be improved by a hundred million times. Decades of development resulted in an observatory that views the sky in gravitational waves. With Windchime, “we’re in the exact same boat,” he says.

    Even in its final form, Windchime will be sensitive only to dark matter bits that are roughly the mass of a fine speck of dust. That’s enormous on the spectrum of known particles — more than a million trillion times the mass of a proton.

    “There is a variety of very interesting dark matter candidates at [that scale] that are definitely worth looking for … including primordial black holes from the early universe,” says Katherine Freese, a physicist at the University of Michigan in Ann Arbor who is not part of the Windchime collaboration. Black holes slowly evaporate, leaking mass back into space, she notes, which could leave many relics formed shortly after the Big Bang at the mass Windchime could detect.

    But if it never detects anything at all, the experiment still stands out from other dark matter detection schemes, says Dan Hooper, a physicist at Fermilab in Batavia, Ill., also not affiliated with the project. That’s because it would be the first experiment that could entirely rule out some types of dark matter.

    Even if the experiment turns up nothing, Hooper says, “the amazing thing about [Windchime] … is that, independent of anything else you know about dark matter particles, they aren’t in this mass range.” With existing experiments, a failure to detect anything could instead be due to flawed guesses about the forces that affect dark matter (SN: 7/7/22).  

    Windchime will be the only experiment yet imagined where seeing nothing would definitively tell researchers what dark matter isn’t. With a little luck, though, it could uncover a wind of tiny black holes, or even more exotic dark matter bits, blowing past as we careen around the Milky Way.

  • New chip-based beam steering device lays groundwork for smaller, cheaper lidar

    Researchers have developed a new chip-based beam steering technology that provides a promising route to small, cost-effective and high-performance lidar (or light detection and ranging) systems. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is used in a wide range of applications such as autonomous driving, free-space optical communications, 3D holography, biomedical sensing and virtual reality.
    “Optical beam steering is a key technology for lidar systems, but conventional mechanical-based beam steering systems are bulky, expensive, sensitive to vibration and limited in speed,” said research team leader Hao Hu from the Technical University of Denmark. “Although devices known as chip-based optical phased arrays (OPAs) can quickly and precisely steer light in a non-mechanical way, so far, these devices have had poor beam quality and a field of view typically below 100 degrees.”
    In Optica, Optica Publishing Group’s journal for high-impact research, Hu and co-author Yong Liu describe their new chip-based OPA that solves many of the problems that have plagued OPAs. They show that the device can eliminate a key optical artifact known as aliasing, achieving beam steering over a large field of view while maintaining high beam quality, a combination that could greatly improve lidar systems.
    “We believe our results are groundbreaking in the field of optical beam steering,” said Hu. “This development lays the groundwork for OPA-based lidar that is low cost and compact, which would allow lidar to be widely used for a variety of applications such as high-level advanced driver-assistance systems that can assist in driving and parking and increase safety.”
    A new OPA design
    OPAs perform beam steering by electronically controlling light’s phase profile to form specific light patterns. Most OPAs use an array of waveguides to emit many beams of light, which then interfere in the far field (away from the emitter) to form the pattern. Because these waveguide emitters are typically spaced far apart, the interference produces multiple copies of the beam in the far field, an optical artifact known as aliasing. To avoid aliasing and achieve a 180° field of view, the emitters need to be close together, but this causes strong crosstalk between adjacent emitters and degrades the beam quality. Thus, until now, there has been a trade-off between an OPA’s field of view and its beam quality.
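    The spacing condition behind this trade-off follows from the standard phased-array grating equation, sin θ_m = sin θ_0 + mλ/d: extra lobes appear whenever the emitter pitch d exceeds about half the wavelength. A short Python sketch with assumed, illustrative numbers (not the authors’ device parameters):
    ```python
    import numpy as np

    def grating_lobes(pitch_um, wavelength_um=1.55, steer_deg=0.0):
        """Far-field lobe angles of a uniform emitter array:
        sin(theta_m) = sin(theta_0) + m * wavelength / pitch."""
        s0 = np.sin(np.radians(steer_deg))
        angles = [np.degrees(np.arcsin(s0 + m * wavelength_um / pitch_um))
                  for m in range(-5, 6)
                  if abs(s0 + m * wavelength_um / pitch_um) <= 1]
        return sorted(angles)

    # Widely spaced emitters (3 um pitch): aliased lobes at +/-31 deg.
    print(grating_lobes(pitch_um=3.0))
    # Half-wavelength pitch: a single lobe across the full +/-90 deg,
    # i.e., aliasing-free -- but packing waveguides this closely is
    # what normally causes the crosstalk that degrades beam quality.
    print(grating_lobes(pitch_um=0.775))
    ```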
    To overcome this trade-off, the researchers designed a new type of OPA that replaces the multiple emitters of traditional OPAs with a slab grating to create a single emitter. This setup eliminates the aliasing error because the adjacent channels in the slab grating can be very close to each other. The coupling between the adjacent channels is not detrimental in the slab grating because it enables the interference and beam formation in the near field (close to the single emitter). The light can then be emitted to the far field with the desired angle. The researchers also applied additional optical techniques to lower the background noise and reduce other optical artifacts such as side lobes.
    High quality and wide field of view
    To test their new device, the researchers built a special imaging system to measure the average far-field optical power along the horizontal direction over a 180° field of view. They demonstrated aliasing-free beam steering in this direction, including steering beyond ±70°, although some beam degradation was seen.
    They then characterized beam steering in the vertical direction by tuning the wavelength from 1480 nm to 1580 nm, achieving a 13.5° tuning range. Finally, they showed the versatility of the OPA by using it to form 2D images of the letters “D,” “T” and “U” centered at the angles of -60°, 0° and 60° by tuning both the wavelength and the phase shifters. The experiments were performed with a beam width of 2.1°, which the researchers are now working to decrease to achieve beam steering with a higher resolution and a longer range.
    “Our new chip-based OPA shows an unprecedented performance and overcomes the long-standing issues of OPAs by simultaneously achieving aliasing-free 2D beam steering over the entire 180° field of view and high beam quality with a low side lobe level,” said Hu.
    This work is funded by VILLUM FONDEN and Innovationsfonden Denmark.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Pairing imaging, AI may improve colon cancer screening, diagnosis

    A research team from the lab of Quing Zhu, the Edwin H. Murty Professor of Engineering in the Department of Biomedical Engineering at the McKelvey School of Engineering at Washington University in St. Louis, has combined optical coherence tomography (OCT) and machine learning to develop a colorectal cancer imaging tool that may one day improve the traditional endoscopy currently used by doctors.
    The results were published in the June issue of the Journal of Biophotonics.
    Screening for colon cancer now relies on human visual inspection of tissue during a colonoscopy procedure. This technique, however, cannot detect or diagnose subsurface lesions.
    Endoscopic OCT essentially shines a light into the colon, helping a clinician see deeper to visualize and diagnose abnormalities. Collaborating with physicians at Washington University School of Medicine and with Chao Zhou, associate professor of biomedical engineering, the team developed a small OCT catheter that uses a longer wavelength of light to penetrate 1-2 mm into tissue samples.
    Hongbo Luo, a PhD student in Zhu’s lab, led the work.
    The technique provided more information about an abnormality than surface-level, white-light images currently used by physicians. Shuying Li, a biomedical engineering PhD student, used the imaging data to train a machine learning algorithm to differentiate between “normal” and “cancerous” tissue. The combined system allowed them to detect and classify cancerous tissue samples with a 93% diagnostic accuracy.
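    A hedged sketch of this kind of binary classification pipeline follows (the features, labels, and model here are stand-ins; the paper’s actual algorithm and training data are not reproduced):
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stand-in data: flattened OCT image patches as feature vectors,
    # labeled 0 = "normal", 1 = "cancerous". Real inputs would be
    # catheter OCT scans; random data just makes the sketch runnable.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 256))     # 500 patches, 256 features
    y = rng.integers(0, 2, size=500)    # placeholder labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)

    # The study reports 93% diagnostic accuracy on real OCT data;
    # placeholder data will naturally score near chance (~50%).
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    ```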
    Zhu is also a professor of radiology at the School of Medicine. Working with Vladimir Kushnir and Vladimir Lamm at the School of Medicine, Zhu’s team of PhD students, including Tiger Nie, started a trial in patients in July 2022.
    Story Source:
    Materials provided by Washington University in St. Louis. Original written by Brandie Jefferson. Note: Content may be edited for style and length.

  • Proteins and natural language: Artificial intelligence enables the design of novel proteins

    Artificial intelligence (AI) has created new possibilities for designing tailor-made proteins to solve everything from medical to ecological problems. A research team at the University of Bayreuth led by Prof. Dr. Birte Höcker has now successfully applied a computer-based natural language processing model to protein research. Working completely independently, the ProtGPT2 model designs new proteins that are capable of stable folding and could take on defined functions in larger molecular contexts. The model and its potential are described in Nature Communications.
    Natural languages and proteins are actually similar in structure. Amino acids arrange themselves in a multitude of combinations to form structures that have specific functions in the living organism — similar to the way words form sentences in different combinations that express certain facts. In recent years, numerous approaches have therefore been developed to use principles and processes that control the computer-assisted processing of natural language in protein research. “Natural language processing has made extraordinary progress thanks to new AI technologies. Today, models of language processing enable machines not only to understand meaningful sentences but also to generate them themselves. Such a model was the starting point of our research. With detailed information concerning about 50 million sequences of natural proteins, my colleague Noelia Ferruz trained the model and enabled it to generate protein sequences on its own. It now understands the language of proteins and can use it creatively. We have found that these creative designs follow the basic principles of natural proteins,” says Prof. Dr. Birte Höcker, Head of the Protein Design Group at the University of Bayreuth.
    The language processing model transferred to protein evolution is called “ProtGPT2.” It can now be used to design proteins that adopt stable structures through folding and remain permanently functional in this state. In addition, the Bayreuth biochemists have found, through extensive investigations, that the model can even create proteins that do not occur in nature and may never have existed in the history of evolution. These findings shed light on the immeasurable world of possible proteins and open a door to designing them in novel and unexplored ways. There is a further advantage: most proteins designed de novo so far have idealised structures. Before such structures can find practical application, they usually must pass through an elaborate functionalization process — for example, by inserting extensions and cavities — so that they can interact with their environment and take on precisely defined functions in larger system contexts. ProtGPT2, on the other hand, generates proteins that have such differentiated structures innately and are thus already operational in their respective environments.
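    ProtGPT2 has been released publicly on Hugging Face, so sampling candidate sequences takes only a few lines with the transformers library. A hedged sketch (the model identifier follows the public release; the sampling parameters are illustrative assumptions, not the paper’s exact settings):
    ```python
    # Sampling novel protein sequences from ProtGPT2. The model id
    # "nferruz/ProtGPT2" is the public Hugging Face release; the
    # sampling parameters below are illustrative, not the paper's.
    from transformers import pipeline

    protgpt2 = pipeline("text-generation", model="nferruz/ProtGPT2")

    sequences = protgpt2(
        "<|endoftext|>",         # start-of-sequence prompt token
        max_length=100,          # length in model tokens
        do_sample=True,          # stochastic sampling -> novel designs
        top_k=950,
        repetition_penalty=1.2,
        num_return_sequences=5,
        eos_token_id=0,
    )
    for s in sequences:
        print(s["generated_text"])
    ```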
    “Our new model is another impressive demonstration of the systemic affinity of protein design and natural language processing. Artificial intelligence opens up highly interesting and promising possibilities to use methods of language processing for the production of customised proteins. At the University of Bayreuth, we hope to contribute in this way to developing innovative solutions for biomedical, pharmaceutical, and ecological problems,” says Prof. Dr. Birte Höcker.
    Story Source:
    Materials provided by Universität Bayreuth. Note: Content may be edited for style and length.

  • Gesture-based communication techniques may ease video meeting challenges

    Researchers have developed and demonstrated the potential benefit of a simple set of physical gestures that participants in online group video meetings can use to improve their meeting experience. Paul D. Hills of University College London, U.K., and colleagues from University College London and the University of Exeter, U.K., present the technique, which they call Video Meeting Signals (VMS™), in the open-access journal PLOS ONE on August 3, 2022.
    During the COVID-19 pandemic, online video conferencing has been a useful tool for industry, education, and social interactions. However, it has also been associated with poor mental well-being, poor communication, and fatigue.
    To help overcome the challenges of online video meetings, Hills developed VMS, a set of simple physical gestures that can be used alongside verbal communication during a video meeting. The gestures — including two thumbs up to signal agreement or a hand over the heart to show sympathy — are meant to improve experiences by serving a function similar to subtle face-to-face signals, such as raised eyebrows, while being more visible in a small video window.
    To investigate the potential of VMS, Hills and colleagues first tested it among more than 100 undergraduate students. After half were trained on the technique, the students participated in two video-based seminars in groups of about 10 students each, before answering a survey about their experience.
    Analysis of the survey results showed that, compared to students without VMS training, those with VMS training reported a better personal experience, better feelings about their seminar group, and better learning outcomes. Analysis of seminar transcripts also suggested that students with VMS training were more likely to use positive language.
    Similar results were seen in a follow-up experiment with participants who were not students. This experiment also suggested that participants trained to use emojis instead of VMS gestures did not experience the same improved experience as participants with VMS training.
    These findings suggest that VMS may be an effective technique to help overcome the challenges of video conferencing. In the future, the researchers plan to continue to study VMS, for instance by investigating the mechanisms that may underlie its effects and how to apply it for maximum benefit.
    Paul D. Hills adds: “Our research indicates that there’s something about the use of gestures specifically which appears to help online interactions and help people connect and engage with each other. This can improve team performance, make meetings more inclusive and help with psychological wellbeing.”
    Daniel C. Richardson adds: “Because you can’t make eye contact or pick up on subtle nods, gestures and murmurs of agreement or dissent in video conferences, it can be hard to know if people are engaged with what you’re saying. We found strong evidence that encouraging people to use more natural hand gestures had a much better effect on their experience.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Machine learning enables optimal design of anti-biofouling polymer brush films

    Polymer brush films consist of monomer chains grown in close proximity on a substrate. The monomers, which look like “bristles” at the nanoscale, form a highly functional and versatile coating that can selectively adsorb or repel a variety of chemicals or biological molecules. For instance, polymer brush films have been used as scaffolds to grow biological cells and as protective anti-biofouling coatings that repel unwanted biological organisms.
    As anti-biofouling coatings, polymer brushes have been designed based primarily on the interaction between monomers and water molecules. While this makes for simple design, quantitative prediction of the adsorption of biomolecules, such as proteins, onto monomers has proved challenging owing to the complex interactions involved.
    Now, in a recent study published in ACS Biomaterials Science & Engineering, a research group led by Associate Professor Tomohiro Hayashi from Tokyo Institute of Technology (Tokyo Tech), Japan, has used machine learning to predict these interactions and identify the film characteristics that have a significant impact on protein adsorption.
    In their study, the team fabricated 51 different polymer brush films of different thicknesses and densities with five different monomers to train the machine learning algorithm. They then tested several of these algorithms to see how well their predictions matched up against the measured protein adsorption. “We tested several supervised regression algorithms, namely gradient boosting regression, support vector regression, linear regression, and random forest regression, to select the most reliable and suitable model in terms of the prediction accuracy,” says Dr. Hayashi.
    Out of these models, the random forest (RF) regression model showed the best agreement with the measured protein adsorption values. Accordingly, the researchers used the RF model to correlate the physical and chemical properties of the polymer brush with its ability to adsorb serum protein and allow for cell adhesion.
    “Our analyses showed that the hydrophobicity index, or the relative hydrophobicity, was the most critical parameter. Next in line were the thickness and density of the polymer brush films, the number of C-H bonds, and the net charge on the monomer. Monomer molecular weight and the number of O-H bonds, on the other hand, were ranked low in importance,” highlights Dr. Hayashi.
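    A hedged sketch of this ranking step follows (the descriptor names mirror those discussed above, but the data and model settings are placeholder assumptions, not the study’s):
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder design matrix: one row per polymer brush film, with
    # columns mirroring the descriptors discussed above. Values are
    # random stand-ins, not the study's measurements.
    features = ["hydrophobicity_index", "film_thickness", "film_density",
                "n_CH_bonds", "monomer_net_charge",
                "monomer_mol_weight", "n_OH_bonds"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(51, len(features)))  # 51 films, as in the study
    y = rng.normal(size=51)                   # protein adsorption values

    model = RandomForestRegressor(n_estimators=500, random_state=1)
    model.fit(X, y)

    # Rank descriptors by importance -- the step that, on real data,
    # identified the hydrophobicity index as the most critical factor.
    for name, imp in sorted(zip(features, model.feature_importances_),
                            key=lambda p: -p[1]):
        print(f"{name:22s} {imp:.3f}")
    ```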
    Given the highly varied nature of polymer brush films and the multiple factors that affect the monomer-protein interactions, adoption of machine learning as a way to optimize polymer brush film properties can provide a good starting point for the efficient design of anti-biofouling materials and functional biomaterials.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Augmented reality could be the future of paper books, according to new research

    Augmented reality might allow printed books to make a comeback against the e-book trend, according to researchers from the University of Surrey.
    Surrey has introduced the third-generation (3G) version of its Next Generation Paper (NGP) project, allowing readers to consume information on printed paper and on a screen side by side.
    Dr Radu Sporea, Senior lecturer at the Advanced Technology Institute (ATI), comments:
    “The way we consume literature has changed over time with so many more options than just paper books. Multiple electronic solutions currently exist, including e-readers and smart devices, but no hybrid solution which is sustainable on a commercial scale.
    “Augmented books, or a-books, can be the future of many book genres, from travel and tourism to education. This technology exists to assist the reader in a deeper understanding of the written topic and get more through digital means without ruining the experience of reading a paper book.”
    Power efficiency and pre-printed conductive paper are some of the new features which allow Surrey’s augmented books to now be manufactured on a semi-industrial scale. With no wiring visible to the reader, Surrey’s augmented reality books allow users to trigger digital content with a simple gesture (such as a swipe of a finger or turn of a page), which will then be displayed on a nearby device.
    George Bairaktaris, Postgraduate researcher at the University of Surrey and part of the Next Generation Paper project team, said:
    “The original research was carried out to enrich travel experiences by creating augmented travel guides. This upgraded 3G model allows for the possibility of using augmented books for different areas such as education. In addition, the new model disturbs the reader less by automatically recognising the open page and triggering the multimedia content.”
    “What started as an augmented book project, evolved further into scalable user interfaces. The techniques and knowledge from the project led us into exploring organic materials and printing techniques to fabricate scalable sensors for interfaces beyond the a-book.”
    Story Source:
    Materials provided by University of Surrey. Note: Content may be edited for style and length.