More stories

  •

    Experiments reveal formation of a new state of matter: electron quadruplets

    The central principle of superconductivity is that electrons form pairs. But can they also condense into foursomes? Recent findings have suggested they can, and a physicist at KTH Royal Institute of Technology today published the first experimental evidence of this quadrupling effect and the mechanism by which this state of matter occurs.
    Reporting today in Nature Physics, Professor Egor Babaev and collaborators presented evidence of fermion quadrupling in a series of experimental measurements on the iron-based material Ba1−xKxFe2As2. The results follow nearly 20 years after Babaev first predicted this kind of phenomenon, and eight years after he published a paper predicting that it could occur in the material.
    The pairing of electrons enables the quantum state of superconductivity, a zero-resistance state of conductivity used in MRI scanners and quantum computing. It occurs within a material when two electrons bond rather than repel each other, as they would in a vacuum. The phenomenon was first described in a theory by Leon Cooper, John Bardeen and John Schrieffer, whose work was awarded the Nobel Prize in 1972.
    So-called Cooper pairs are basically “opposites that attract.” Normally two electrons, which are negatively charged subatomic particles, would strongly repel each other. But at low temperatures in a crystal they become loosely bound in pairs, giving rise to a robust long-range order. Currents of electron pairs no longer scatter from defects and obstacles, and a conductor can lose all electrical resistance, becoming a new state of matter: a superconductor.
    Only in recent years has the theoretical idea of four-fermion condensates become broadly accepted.
    For a fermion quadrupling state to occur, there has to be something that prevents condensation of pairs and prevents their flow without resistance, while allowing condensation of four-electron composites, Babaev says.

  •

    Cutting through the noise: AI enables high-fidelity quantum computing

    Researchers led by the Institute of Scientific and Industrial Research (SANKEN) at Osaka University have trained a deep neural network to correctly determine the output state of quantum bits, despite environmental noise. The team’s novel approach may allow quantum computers to become much more widely used.
    Modern computers are based on binary logic, in which each bit is constrained to be either a 1 or a 0. But thanks to the weird rules of quantum mechanics, new experimental systems can achieve increased computing power by allowing quantum bits, also called qubits, to be in “superpositions” of 1 and 0. For example, the spins of electrons confined to tiny islands called quantum dots can be oriented both up and down simultaneously. However, when the final state of a bit is read out, it reverts to the classical behavior of being one orientation or the other. To make quantum computing reliable enough for consumer use, new systems will need to be created that can accurately record the output of each qubit even if there is a lot of noise in the signal.
    Now, a team of scientists led by SANKEN used a machine learning method called a deep neural network to discern the signal created by the spin orientation of electrons on quantum dots. “We developed a classifier based on a deep neural network to precisely measure a qubit state even with noisy signals,” co-author Takafumi Fujita explains.
    In the experimental system, only electrons with a particular spin orientation can leave a quantum dot. When this happens, a temporary “blip” of increased voltage is created. The team trained the machine learning algorithm to pick out these signals from among the noise. The deep neural network they used had a convolutional neural network to identify the important signal features, combined with a recurrent neural network to monitor the time-series data.
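    The readout task the classifier solves can be illustrated with a deliberately simple stand-in: smooth a voltage trace, then threshold it to flag a “blip.” This is not the team’s CNN/RNN architecture, just a minimal sketch in plain Python; the signal shapes, threshold, and window size are all invented for the example.

```python
import random

def moving_average(signal, window=5):
    """Smooth a voltage trace to suppress high-frequency noise."""
    half = window // 2
    return [
        sum(signal[max(0, i - half): i + half + 1])
        / len(signal[max(0, i - half): i + half + 1])
        for i in range(len(signal))
    ]

def detect_blip(trace, threshold=0.5, window=5):
    """Return True if the smoothed trace crosses the threshold,
    i.e. a spin-dependent tunnelling 'blip' occurred."""
    return any(v > threshold for v in moving_average(trace, window))

# Illustrative traces: a blip buried in noise vs. pure noise.
random.seed(0)
noise = [random.gauss(0, 0.1) for _ in range(50)]
blip = noise[:20] + [0.9 + random.gauss(0, 0.1) for _ in range(10)] + noise[30:]
print(detect_blip(blip))   # blip present: classified as one spin state
print(detect_blip(noise))  # no blip: classified as the other
```

    A real classifier must cope with drift and interference that defeat a fixed threshold, which is what motivates the learned CNN/RNN approach described above.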
    “Our approach simplified the learning process for adapting to strong interference that could vary based on the situation,” senior author Akira Oiwa says. The team first tested the robustness of the classifier by adding simulated noise and drift. Then, they trained the algorithm to work with actual data from an array of quantum dots, and achieved accuracy rates over 95%. The results of this research may allow for the high-fidelity measurement of large-scale arrays of qubits in future quantum computers.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  •

    AI predicts extensive material properties to break down a previously insurmountable wall

    If the properties of materials can be reliably predicted, then the process of developing new products for a huge range of industries can be streamlined and accelerated. In a study published in Advanced Intelligent Systems, researchers from The University of Tokyo Institute of Industrial Science used machine learning on core-loss spectroscopy data to determine the properties of organic molecules.
    The spectroscopy techniques energy-loss near-edge structure (ELNES) and X-ray absorption near-edge structure (XANES) are used to determine information about the electrons, and through that the atoms, in materials. They offer high sensitivity and high resolution and have been used to investigate materials ranging from electronic devices to drug delivery systems.
    However, connecting spectral data to the properties of a material — things like optical properties, electron conductivity, density, and stability — remains ambiguous. Machine learning (ML) approaches have been used to extract information from large, complex sets of data. Such approaches use artificial neural networks, which are based on how our brains work, to constantly learn to solve problems. Although the group previously used ELNES/XANES spectra and ML to find out information about materials, what they found did not relate to the properties of the material itself. Therefore, the information could not be easily translated into developments.
    Now the team has used ML to reveal information hidden in the simulated ELNES/XANES spectra of 22,155 organic molecules. “The ELNES/XANES spectra of the molecules, or their ‘descriptors’ in this scenario, were then input into the system,” explains lead author Kakeru Kikumasa. “This descriptor is something that can be directly measured in experiments and can therefore be determined with high sensitivity and resolution. This method is highly beneficial for materials development because it has the potential to reveal where, when, and how certain material properties arise.”
    A model created from the spectra alone was able to successfully predict intensive properties, which do not depend on molecular size. However, it was unable to predict extensive properties, which do depend on molecular size. Therefore, to improve the prediction, the new model was constructed to include the ratios of three elements in relation to carbon (which is present in all organic molecules) as extra parameters, allowing extensive properties such as molecular weight to be predicted correctly.
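    The idea can be sketched as feature engineering: append element-to-carbon ratios to a size-normalized spectrum so a model has access to size information. The binning, formula encoding, and function name below are illustrative, not the paper’s actual pipeline.

```python
def build_descriptor(spectrum, formula):
    """Combine a size-normalized core-loss spectrum with element-to-carbon
    ratios so that extensive (size-dependent) properties become learnable.
    `formula` maps element symbols to atom counts; carbon is assumed
    present, as it is in all organic molecules."""
    total = sum(spectrum)
    normalized = [x / total for x in spectrum]  # shape information only
    n_c = formula.get("C", 0)
    ratios = [formula.get(el, 0) / n_c for el in ("H", "O", "N")]
    return normalized + ratios

# Ethanol (C2H6O): a toy 4-bin "spectrum" plus H/C, O/C, N/C ratios.
desc = build_descriptor([1.0, 3.0, 4.0, 2.0], {"C": 2, "H": 6, "O": 1})
print(desc)  # [0.1, 0.3, 0.4, 0.2, 3.0, 0.5, 0.0]
```

    The normalized spectrum alone is scale-free, which is why a model trained on it can only capture intensive properties; the appended ratios restore the size information that extensive properties require.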
    “Our ML treatment of core-loss spectra provides accurate prediction of extensive material properties, such as internal energy and molecular weight. The link between core-loss spectra and extensive properties had never been made before; however, artificial intelligence was able to unveil the hidden connections. Our approach might also be applied to predict the properties of new materials and functions,” says senior author Teruyasu Mizoguchi. “We believe that our model will be a very useful tool for the high-throughput development of materials in a wide range of industries.”
    The study, “Quantification of the Properties of Organic Molecules Using Core-Loss Spectra as Neural Network Descriptors,” was published in Advanced Intelligent Systems.
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  •

    How to program DNA robots to poke and prod cell membranes

    Scientists have worked out how to best get DNA to communicate with membranes in our body, paving the way for the creation of ‘mini biological computers’ in droplets that have potential uses in biosensing and mRNA vaccines.
    UNSW’s Dr Matthew Baker and the University of Sydney’s Dr Shelley Wickham co-led the study, published recently in Nucleic Acids Research.
    The study identified the best way to design and build DNA ‘nanostructures’ to effectively manipulate synthetic liposomes — tiny bubbles which have traditionally been used to deliver drugs for cancer and other diseases.
    But by modifying the shape, porosity and reactivity of liposomes, far greater applications open up, such as building small molecular systems that sense their environment and respond to a signal by releasing a cargo, such as a drug molecule, when they near their target.
    Lead author Dr Baker, from UNSW’s School of Biotechnology and Biomolecular Sciences, says the study discovered how to build “little blocks” out of DNA and worked out how best to label these blocks with cholesterol to get them to stick to lipids, the main constituents of plant and animal cell membranes.
    “One major application of our study is biosensing: you could stick some droplets in a person or patient, and as they move through the body they record the local environment, process this and deliver a result so you can ‘read out’ the local environment,” Dr Baker says.

  •

    Using quantum Parrondo’s random walks for encryption

    Assistant Professor Kang Hao Cheong and his research team from the Singapore University of Technology and Design (SUTD) have set out to apply concepts from quantum Parrondo’s paradox in search of a working protocol for semiclassical encryption. In a recent Letter in Physical Review Research, ‘Chaotic switching for quantum coin Parrondo’s games with application to encryption’, the team showed that chaotic switching for quantum coin Parrondo’s games shares underlying ideas and working dynamics with encryption.
    Parrondo’s paradox is a phenomenon where the switching of two losing games results in a winning outcome. In the two-sided quantum coin tossing game introduced by the authors, they showed in a previous work that random and certain periodic tossing of two quantum coins can turn a quantum walker’s expected position from a losing position into a fair and winning position, respectively. In such a game, the quantum walker is given a set of instructions on how to move depending on the outcome of the quantum coin toss.
    Inspired by the underlying principles of this quantum game, Joel Lai, the lead author of the study from SUTD, explained, “Suppose I present to you the outcome of the quantum walker at the end of 100 coin tosses, knowing the initial position, can you tell me the sequence of tosses that lead to this final outcome?” As it turns out, this task can either be very difficult or very easy. Lai added: “In the case of random switching, it is almost impossible to determine the sequence of tosses that lead to the end result. However, for periodic tossing, we could get the sequence of tosses rather easily, because a periodic sequence has structure and is deterministic.”
    Random sequences have too much uncertainty; periodic sequences, on the other hand, are fully deterministic. This led to the idea of using chaotic sequences to perform the switching. The authors discovered that chaotic switching driven by a pre-generated chaotic sequence significantly strengthens the scheme. For an observer who does not know parts of the information required to generate the chaotic sequence, deciphering the sequence of tosses is as hard as for a random sequence. However, for an agent with information on how to generate the chaotic sequence, it is as easy as for a periodic one. According to the authors, the information needed to generate the chaotic sequence is akin to a key in encryption. Knowing the key and the final outcome (i.e. the encrypted message), the outcome can be inverted to recover the original state of the quantum walker (i.e. the original message).
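    The role of the chaotic sequence as a key can be sketched with a logistic map, a standard chaotic map used here purely for illustration; the paper’s scheme operates on quantum coin tosses, and the parameter values below are invented for the example.

```python
def chaotic_switch_sequence(x0, r, n, burn_in=100):
    """Derive a coin-switching sequence from the logistic map
    x -> r*x*(1-x). The pair (x0, r) acts like a secret key:
    whoever holds it regenerates the exact sequence; to anyone
    else the sequence looks effectively random."""
    x = x0
    for _ in range(burn_in):             # discard the transient
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append("A" if x < 0.5 else "B")  # which coin to toss
    return seq

key = (0.613, 3.99)                      # illustrative secret key
alice = chaotic_switch_sequence(*key, 16)
bob = chaotic_switch_sequence(*key, 16)
eve = chaotic_switch_sequence(0.6130001, 3.99, 16)  # near-miss guess
print(alice == bob)   # True: the exact key reproduces the sequence
print(alice == eve)   # almost surely False: tiny key errors diverge
```

    Sensitive dependence on initial conditions is what makes the near-miss guess useless: the burn-in iterations amplify even a 1e-7 error in the key until the derived sequences decorrelate completely.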
    Assistant Professor Cheong, the senior author of the study, remarked, “The introduction of chaotic switching, when combined with Parrondo’s paradox, extends the application of Parrondo’s paradox from simply a mathematical tool used in quantum information for classification or identification of the initial state and final outcome to one that has real-world engineering applications.” Cheong added, “The development of a fully implementable quantum chaotic Parrondo’s game may also improve on our semiclassical framework and provide advances to bridge some of the problems still faced in quantum encryption.”
    Story Source:
    Materials provided by Singapore University of Technology and Design. Note: Content may be edited for style and length.

  •

    Intelligent optical chip to improve telecommunications

    From the internet, to fibre or satellite communications and medical diagnostics, our everyday life relies on optical technologies. These technologies use optical pulsed sources to transfer, retrieve or compute information. Gaining control over optical pulse shapes thus paves the way for further advances.
    PhD student Bennet Fischer and postdoctoral researcher Mario Chemnitz, in the team of Professor Roberto Morandotti of the Institut national de la recherche scientifique (INRS), developed a smart pulse-shaper integrated on a chip. The device output can autonomously adjust to a user-defined target waveform with strikingly low technical and computational requirements.
    An Innovative Design
    Ideally, an optical waveform generator should autonomously output a target waveform for user-friendliness, and should minimize the experimental requirements for driving the system and reading out the waveform, to ease online monitoring. It should also feature long-term reliability, low losses, fibre connectivity, and maximal functionality.
    In practice, imperfections such as limited individual device fidelities degrade the achievable performance below what is initially designed or simulated. “We find that evolutionary optimization can help in overcoming the inherent design limitations of on-chip systems and hence elevate their performance and reconfigurability to a new level,” says the postdoctoral researcher.
    Machine Learning for Smart Photonics
    The team built on the recent emergence of machine-learning concepts in photonics, which promises unprecedented capabilities and system performance. “The optics community is eager to learn about new methods and smart device implementations. In our work, we present an interlinked bundle of machine-learning enabling methods of high relevance, for both the technical and academic optical communities.”
    The researchers used evolutionary optimization algorithms as a key tool for repurposing a programmable photonic chip beyond its original use. Evolutionary algorithms are nature-inspired computer programs that can efficiently optimize many-parameter systems with significantly reduced computational resources.
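    As a rough illustration of the approach, the sketch below uses a (1+1) evolution strategy to tune a toy “pulse shaper” (a few sine harmonics) toward a target waveform. The real device, its parameters, and its figure of merit are far more complex; every name and number here is invented for the example.

```python
import math
import random

def waveform(params, n=64):
    """Toy pulse shaper: params are amplitudes of sine harmonics."""
    return [sum(a * math.sin(2 * math.pi * (k + 1) * t / n)
                for k, a in enumerate(params)) for t in range(n)]

def error(params, target):
    """Squared deviation between generated and target waveforms."""
    return sum((y - t) ** 2 for y, t in zip(waveform(params), target))

def evolve(target, n_params=3, generations=300, sigma=0.1, seed=1):
    """(1+1) evolution strategy: mutate the parameters, keep the
    child only if it is closer to the target, mimicking iterative
    tuning of a programmable chip against a measured output."""
    rng = random.Random(seed)
    parent = [0.0] * n_params
    best = error(parent, target)
    for _ in range(generations):
        child = [p + rng.gauss(0, sigma) for p in parent]
        e = error(child, target)
        if e < best:
            parent, best = child, e
    return parent, best

target = waveform([1.0, 0.0, 0.5])   # hypothetical user-defined shape
params, final_err = evolve(target)
print(final_err < error([0.0, 0.0, 0.0], target))  # expected: improved
```

    The appeal of this black-box strategy is that it needs only a scalar error signal from the device output, not a differentiable model of the chip, which matches the low computational requirements highlighted by the authors.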
    This innovative research was published in the journal Optica. “For us young researchers, PhDs and postdocs, it is of paramount importance for our careers that our research is visible and shared. Thus, we are truly grateful and overwhelmed with the news that our work is published in such an outstanding and interdisciplinary journal. It heats up our ambitions to continue our work and search for even better implementations and breakthrough applications. It endorses our efforts and it is simply a great honour,” says Mario Chemnitz.
    The team’s next steps include the investigation of more complex chip designs. The aim is to improve device performance, as well as to integrate the optical sampling (detection scheme) on-chip. Eventually, this could provide a single compact, ready-to-use device.
    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  •

    Disease outbreak simulations reveal influence of 'seeding' by multiple infected people

    A new computational analysis suggests that, beyond the initial effect of one infected person arriving and spreading disease to a previously uninfected population, the continuous arrival of more infected individuals has a significant influence on the evolution and severity of the local outbreak. Mattia Mazzoli, Jose Javier Ramasco, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    In light of the ongoing COVID-19 pandemic, much research has investigated the dynamics of local outbreaks caused by the first detected cases in a population, which are linked to travel. However, few studies have explored whether and how the arrival of multiple infected individuals might impact the development of a local outbreak — a situation termed “multi-seeding.”
    To examine the impact of multi-seeding, Mazzoli and colleagues first simulated local outbreaks in Europe using a computational modeling approach. To capture travel and seeding events, the simulations incorporated real-world location data from mobile phones during March 2020, when the COVID-19 pandemic began.
    These simulations suggested that there is indeed an association between the number of “seed” arrivals per local population and the speed of spread, the final number of people infected, and the peak incidence rate experienced by the population. This relationship appears to be complex and non-linear, and it depends on the details of the social contact network within the affected population, including the effects of lockdowns.
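    The qualitative effect can be reproduced with a toy discrete-time SIR model that adds a constant influx of infected arrivals. This is a minimal sketch, not the paper’s data-driven model; all parameter values are illustrative.

```python
def sir_with_seeding(seed_rate, days=60, n=1_000_000, beta=0.25, gamma=0.1):
    """Discrete-time SIR model with `seed_rate` infected travellers
    arriving each day, a stand-in for 'multi-seeding'. Returns
    (cumulative local infections, peak number infectious)."""
    s, i = float(n), 1.0
    cumulative, peak = 0.0, i
    for _ in range(days):
        new_inf = beta * s * i / n        # local transmission
        s = max(s - new_inf, 0.0)
        i = i + new_inf - gamma * i + seed_rate  # arrivals join infected
        cumulative += new_inf
        peak = max(peak, i)
    return cumulative, peak

low_total, low_peak = sir_with_seeding(seed_rate=1)
high_total, high_peak = sir_with_seeding(seed_rate=50)
# More seeding: a larger and sharper local outbreak over the same window.
print(high_total > low_total, high_peak > low_peak)
```

    Even this crude model shows why the relationship is non-linear: early seeds multiply through local transmission, so the seeding rate shifts both the timing and the height of the epidemic curve rather than simply adding cases.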
    To test whether the simulations accurately reflect real-world outbreaks, the researchers looked for similar associations between mobility data and COVID-19 incidence and mortality during the first wave of COVID-19 infection in England, France, Germany, Italy, and Spain. This analysis revealed strong signs of real-world multi-seeding effects similar to those observed in the simulations.
    Based on these findings, the researchers propose a method to understand and reconstruct the spatial spreading patterns of the main outbreak-producing events in every country.
    “Now that the relevance of multi-seeding is understood, it is crucial to develop containment measures that take it into account,” Ramasco says. Next, the researchers hope to incorporate the effects of vaccinations and antibodies acquired through infection into their simulations.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  •

    Many US adults worry about facial image data in healthcare settings

    Uses of facial images and facial recognition technologies — to unlock a phone or in airport security — are becoming increasingly common in everyday life. But how do people feel about using such data in healthcare and biomedical research?
    Through surveying over 4,000 US adults, researchers found that a significant proportion of respondents (15 to 25 percent, depending on the scenario) considered the use of facial image data in healthcare unacceptable across eight varying scenarios. Taken together with those who were unsure whether the uses were acceptable, roughly 30-50 percent of respondents indicated some degree of concern about uses of facial recognition technologies in healthcare. Whereas using facial image data in some cases, such as to avoid medical errors, for diagnosis and screening, or for security, was acceptable to the majority, more than half of respondents did not accept, or were uncertain about, healthcare providers using this data to monitor patients’ emotions or symptoms, or for health research.
    In the biomedical research setting, most respondents were equally worried about the use of medical records, DNA data and facial image data in a study.
    While respondents were a diverse group in terms of age, geographic region, gender, racial and ethnic background, educational attainment, household income, and political views, their perspectives on these issues did not differ by demographics. Findings were published in the journal PLOS ONE.
    “Our results show that a large segment of the public perceives a potential privacy threat when it comes to using facial image data in healthcare,” said lead author Sara Katsanis, who heads the Genetics and Justice Laboratory at Ann & Robert H. Lurie Children’s Hospital of Chicago and is a Research Assistant Professor of Pediatrics at Northwestern University Feinberg School of Medicine. “To ensure public trust, we need to consider greater protections for personal information in healthcare settings, whether it relates to medical records, DNA data, or facial images. As facial recognition technologies become more common, we need to be prepared to explain how patient and participant data will be kept confidential and secure.”
    Senior author Jennifer K. Wagner, Assistant Professor of Law, Policy and Engineering in Penn State’s School of Engineering Design, Technology, and Professional Programs adds: “Our study offers an important opportunity for those pursuing possible use of facial analytics in healthcare settings and biomedical research to think about human-centeredness in a more meaningful way. The research that we are doing hopefully will help decisionmakers find ways to facilitate biomedical innovation in a thoughtful, responsible way that does not undermine public trust.”
    The research team, which includes co-authors with expertise in bioethics, law, genomics, facial analytics, and bioinformatics, hopes to conduct further research to understand the nuances where public trust is lacking.
    Story Source:
    Materials provided by Ann & Robert H. Lurie Children’s Hospital of Chicago. Note: Content may be edited for style and length.