More stories

  • Transparent ultrasound chip improves cell stimulation and imaging

    Ultrasound scans — best known for monitoring pregnancies or imaging organs — can also be used to stimulate cells and direct cell function. A team of Penn State researchers has developed an easier, more effective way to harness the technology for biomedical applications.
    The team created a transparent, biocompatible ultrasound transducer chip that resembles a microscope glass slide and can be inserted into any optical microscope for easy viewing. Cells can be cultured and stimulated directly on top of the transducer chip and the cells’ resulting changes can be imaged with optical microscopy techniques.
    Published in the Royal Society of Chemistry’s journal Lab on a Chip, the paper was selected as the cover article for the December 2021 issue. Future applications of the technology could impact stem cell, cancer and neuroscience research.
    “In the conventional ultrasound stimulation experiments, a cell culture dish is placed in a water bath, and a bulky ultrasound transducer directs the ultrasound waves to the cells through the water medium,” said Sri-Rajasekhar “Raj” Kothapalli, principal investigator and assistant professor of biomedical engineering at Penn State. “This was a complex setup that didn’t provide reproducible results: The results that one group saw another did not, even while using the same parameters, because there are several things that could affect the cells’ survival and stimulation while they are in water, as well as how we visualize them.”
    Kothapalli and his collaborators miniaturized the ultrasound stimulation setup by creating a transparent transducer platform made of a piezoelectric lithium niobate material. Piezoelectric materials generate mechanical energy when electric voltage is applied. The chip’s biocompatible surface allows the cells to be cultured directly on the transducer and used for repeated stimulation experiments over several weeks.
    When connected to a power supply, the transducer emits ultrasound waves, which pulse the cells and trigger ion influx and outflux.
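
    As a rough illustration of how the operating frequency of such a plate transducer is set (the acoustic velocity and plate thickness below are assumed, illustrative values, not the device's specifications), the fundamental thickness-mode resonance of a piezoelectric plate follows f0 = v / (2 * d):

      # Rough sketch with assumed values; not the specifications of the Penn State device.
      v_linbo3 = 7300.0      # assumed longitudinal acoustic velocity in lithium niobate, m/s
      thickness = 100e-6     # assumed plate thickness: 100 micrometres

      f0 = v_linbo3 / (2 * thickness)                       # fundamental thickness-mode resonance
      print(f"fundamental resonance: {f0 / 1e6:.1f} MHz")   # ~36.5 MHz for these assumed values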

  • A new platform for customizable quantum devices

    A ground-up approach to qubit design leads to a new framework for creating versatile, highly tailored quantum devices.
    Advances in quantum science have the potential to revolutionize the way we live. Quantum computers hold promise for solving problems that are intractable today, and we may one day use quantum networks as hackerproof information highways.
    The realization of such forward-looking technologies hinges in large part on the qubit — the fundamental component of quantum systems. A major challenge of qubit research is designing them to be customizable, tailored to work with all kinds of sensing, communication and computational devices.
    Scientists have taken a major step in the development of tailored qubits. In a paper published in the Journal of the American Chemical Society, the team, which includes researchers at MIT, the University of Chicago and Columbia University, demonstrates how a particular molecular family of qubits can be finely tuned over a broad spectrum, like turning a sensitive dial on a wideband radio.
    The team also outlines the underlying design features that enable exquisite control over these quantum bits.
    “This is a new platform for qubit design. We can use our predictable, controllable, tunable design strategy to create a new quantum system,” said Danna Freedman, MIT professor of chemistry and a co-author of the study. “We’ve demonstrated the broad range of tunability over which these design principles work.”
    The work is partially supported by Q-NEXT, a U.S. Department of Energy (DOE) National Quantum Information Science Research Center led by Argonne National Laboratory.

  • More sensitive X-ray imaging

    Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.
    Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.
    Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.
    The findings are described in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.
    While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.
    To make what they coined “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).
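
    A back-of-the-envelope sketch of why surface patterning pays off (using an assumed refractive index for a generic scintillator, not a figure from the paper): in a flat, unpatterned slab, total internal reflection traps most of the emitted light, so only the small "escape cone" reaches the detector through any one face, and even modest improvements in out-coupling matter.

      import math

      n = 1.8                                        # assumed refractive index of a generic scintillator
      theta_c = math.asin(1.0 / n)                   # critical angle for total internal reflection
      escape_fraction = (1 - math.cos(theta_c)) / 2  # solid-angle fraction of the escape cone (one face)

      print(f"critical angle: {math.degrees(theta_c):.1f} deg")          # ~33.7 deg
      print(f"light escaping through one face: {escape_fraction:.1%}")   # ~8% for n = 1.8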

  • Largest ever human family tree: 27 million ancestors

    Researchers from the University of Oxford’s Big Data Institute have taken a major step towards mapping the entirety of genetic relationships among humans: a single genealogy that traces the ancestry of all of us. The study has been published today in Science.
    The past two decades have seen extraordinary advancements in human genetic research, generating genomic data for hundreds of thousands of individuals, including from thousands of prehistoric people. This raises the exciting possibility of tracing the origins of human genetic diversity to produce a complete map of how individuals across the world are related to each other.
    Until now, the main challenges to this vision were working out a way to combine genome sequences from many different databases and developing algorithms to handle data of this size. However, a new method published today by researchers from the University of Oxford’s Big Data Institute can easily combine data from multiple sources and scale to accommodate millions of genome sequences.
    Dr Yan Wong, an evolutionary geneticist at the Big Data Institute, and one of the principal authors, explained: “We have basically built a huge family tree, a genealogy for all of humanity that models as exactly as we can the history that generated all the genetic variation we find in humans today. This genealogy allows us to see how every person’s genetic sequence relates to every other, along all the points of the genome.”
    Since each individual genomic region is inherited from only one parent, either the mother or the father, the ancestry of each point on the genome can be thought of as a tree. The set of trees, known as a “tree sequence” or “ancestral recombination graph,” links genetic regions back through time to the ancestors in whom the genetic variation first appeared.
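    The toy Python structure below is only a conceptual sketch of that idea, not the authors' software (production tools such as the open-source tskit library store tree sequences far more compactly): each genomic interval carries its own local tree, and walking up the local tree covering a position recovers the ancestors of a sample there.

      from dataclasses import dataclass

      @dataclass
      class LocalTree:
          left: int      # genomic interval covered by this tree, in base pairs
          right: int
          parent: dict   # child node -> parent (ancestral) node

      # Two adjacent intervals, each with its own (hypothetical) local tree.
      tree_sequence = [
          LocalTree(0, 50_000, {"sample_A": "anc_1", "sample_B": "anc_1", "anc_1": "root"}),
          LocalTree(50_000, 120_000, {"sample_A": "anc_2", "sample_B": "anc_1",
                                      "anc_1": "root", "anc_2": "root"}),
      ]

      def ancestors_at(position, sample):
          """Walk up the local tree covering a genomic position."""
          for t in tree_sequence:
              if t.left <= position < t.right:
                  chain, node = [], sample
                  while node in t.parent:
                      node = t.parent[node]
                      chain.append(node)
                  return chain
          return []

      print(ancestors_at(60_000, "sample_A"))   # ['anc_2', 'root']
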
    Lead author Dr Anthony Wilder Wohns, who undertook the research as part of his PhD at the Big Data Institute and is now a postdoctoral researcher at the Broad Institute of MIT and Harvard, said: “Essentially, we are reconstructing the genomes of our ancestors and using them to form a vast network of relationships. We can then estimate when and where these ancestors lived. The power of our approach is that it makes very few assumptions about the underlying data and can also include both modern and ancient DNA samples.”
    The study integrated data on modern and ancient human genomes from eight different databases and included a total of 3,609 individual genome sequences from 215 populations. The ancient genomes included samples found across the world, with ages ranging from thousands of years to more than 100,000 years. The algorithms predicted where common ancestors must be present in the evolutionary trees to explain the patterns of genetic variation. The resulting network contained almost 27 million ancestors.
    After adding location data on these sample genomes, the authors used the network to estimate where the predicted common ancestors had lived. The results successfully recaptured key events in human evolutionary history, including the migration out of Africa.
    Although the genealogical map is already an extremely rich resource, the research team plans to make it even more comprehensive by continuing to incorporate genetic data as it becomes available. Because tree sequences store data in a highly efficient way, the dataset could easily accommodate millions of additional genomes.
    Dr Wong said: “This study is laying the groundwork for the next generation of DNA sequencing. As the quality of genome sequences from modern and ancient DNA samples improves, the trees will become even more accurate and we will eventually be able to generate a single, unified map that explains the descent of all the human genetic variation we see today.”
    Dr Wohns added: “While humans are the focus of this study, the method is valid for most living things, from orangutans to bacteria. It could be particularly beneficial in medical genetics, in separating out true associations between genetic regions and diseases from spurious connections arising from our shared ancestral history.”
    Story Source:
    Materials provided by University of Oxford.

  • Entanglement unlocks scaling for quantum machine learning

    The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.
    “Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”
    The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
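    A small numerical illustration of the classical statement (a toy construction, not taken from the paper): averaged over every possible labeling of a small input space, any fixed learner gets exactly half of the unseen points wrong, no matter how it predicts.

      from itertools import product

      N = 6                    # size of a toy input space
      train_idx = [0, 1, 2]    # points the learner is allowed to see
      test_idx = [i for i in range(N) if i not in train_idx]

      def learner(train_labels, i):
          # An arbitrary deterministic rule: predict the majority training label.
          return int(sum(train_labels) * 2 >= len(train_labels))

      errors = []
      for labeling in product([0, 1], repeat=N):   # all 2^N possible target functions
          train_labels = [labeling[i] for i in train_idx]
          wrong = sum(learner(train_labels, i) != labeling[i] for i in test_idx)
          errors.append(wrong / len(test_idx))

      print(sum(errors) / len(errors))   # 0.5, independent of the learner chosen
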
    The new Los Alamos No-Free-Lunch theorem shows that in the quantum regime entanglement is also a currency, and one that can be exchanged for data to reduce data requirements.
    Using a Rigetti quantum computer, the team entangled the quantum data set with a reference system to verify the new theorem.
    “We demonstrated on quantum hardware that we could effectively violate the standard No-Free-Lunch theorem using entanglement, while our new formulation of the theorem held up under experimental test,” said Kunal Sharma, the first author on the article.
    “Our theorem suggests that entanglement should be considered a valuable resource in quantum machine learning, along with big data,” said Patrick Coles, a physicist at Los Alamos and senior author on the article. “Classical neural networks depend only on big data.”
    Entanglement describes the state of a system of atomic-scale particles that cannot be fully described independently or individually. Entanglement is a key component of quantum computing.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.

  • Error mitigation approach helps quantum computers level up

    A collaboration between Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Applied Mathematics and Computational Research Division (AMCRD) and Physics Division has yielded a new approach to error mitigation that could help make quantum computing’s theoretical potential a reality.
    The research team describes this work in a paper published in Physical Review Letters, “Mitigating Depolarizing Noise on Quantum Computers with Noise-Estimation Circuits.”
    “Quantum computers have the potential to solve more complex problems way faster than classical computers,” said Bert de Jong, one of the lead authors of the study and the director of the AIDE-QC and QAT4Chem quantum computing projects. De Jong also leads the AMCRD’s Applied Computing for Scientific Discovery Group. “But the real challenge is quantum computers are relatively new. And there’s still a lot of work that has to be done to make them reliable.”
    For now, one of the problems is that quantum computers are still too error-prone to be consistently useful. This is due in large part to something known as “noise” (errors).
    There are different types of noise, including readout noise and gate noise. The former has to do with reading out the result of a run on a quantum computer; the more noise, the higher the chance a qubit — the quantum equivalent of a bit on a classical computer — will be measured in the wrong state. The latter relates to the actual operations performed; noise here means the probability of applying the wrong operation. And the prevalence of noise dramatically increases the more operations one tries to perform with a quantum computer, which makes it harder to tease out the right answer and severely limits quantum computers’ usability as they’re scaled up.
    “So noise here just basically means: It’s stuff you don’t want, and it obscures the result you do want,” said Ben Nachman, a Berkeley Lab physicist and co-author on the study who also leads the cross-cutting Machine Learning for Fundamental Physics group.
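
    The sketch below is a rough illustration of the general idea behind such noise-estimation circuits, under the simplifying assumption of purely depolarizing noise (the numbers are hypothetical, and this is not the exact procedure in the paper): depolarizing noise shrinks a traceless observable's expectation value by a constant factor (1 - p), so a companion circuit whose noiseless answer is known reveals that factor, and the result of the circuit of interest can be rescaled by it.

      # Hypothetical values; under pure depolarizing noise, <O>_noisy = (1 - p) * <O>_ideal.
      ideal_estimation_value = 1.0       # noiseless answer of the noise-estimation circuit, known by construction
      measured_estimation_value = 0.62   # hypothetical hardware measurement of that circuit

      survival = measured_estimation_value / ideal_estimation_value   # estimate of (1 - p)

      measured_target_value = 0.31       # hypothetical hardware measurement of the circuit of interest
      mitigated = measured_target_value / survival                    # rescaled (mitigated) expectation value

      print(f"estimated (1 - p) = {survival:.2f}, mitigated value = {mitigated:.2f}")   # 0.62 and 0.50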

  • California's push for computer science education examined

    New studies of computer science education at California high schools found that a greater emphasis on computer science education did not produce the anticipated spillover effects, neither improving nor harming students’ math or English language arts skills, according to school-level test scores.
    However, one trade-off of increased enrollments in computing courses may be that students are taking fewer humanities courses such as the arts and social studies, researchers at the University of Illinois Urbana-Champaign found.
    Paul Bruno and Colleen M. Lewis examined the implications of California’s recent state policies promoting computer science education and the proliferation of these courses in the state’s high schools. Bruno is a professor of education policy, organization and leadership, and Lewis is a professor of computer science in the Grainger College of Engineering, both at Illinois.
    Using data that schools reported to the California Department of Education from 2003-2019, the researchers explored the effects on student test scores and the curricular trade-offs of student enrollments in computer science courses. That study was published in the journal Educational Administration Quarterly.
    In a related project, the couple — who are marital as well as research partners — explored equity and diversity among California’s computer science teachers and their students. That study was published in Policy Futures in Education.
    The Google Computer Science Education Research Program supported both projects.

  • Development of a diamond transistor with high hole mobility

    Using a new fabrication technique, NIMS has developed a diamond field-effect transistor (FET) with high hole mobility, which allows reduced conduction loss and higher operational speed. This new FET also exhibits normally-off behavior (i.e., electric current flow through the transistor ceases when no gate voltage is applied, a feature that makes electronic devices safer). These results may facilitate the development of low-loss power conversion and high-speed communications devices.
    Diamond has excellent wide bandgap semiconductor properties: its bandgap is larger than those of silicon carbide and gallium nitride, which are already in practical use. Diamond therefore could potentially be used to create power electronics and communications devices capable of operating more energy efficiently at higher speeds, voltages and temperatures. A number of R&D projects have previously been carried out with the aim of creating FETs using hydrogen-terminated diamonds (i.e., diamonds whose surface carbon atoms are covalently bonded to hydrogen atoms). However, these efforts failed to fully exploit diamond’s excellent wide bandgap semiconductor properties: the hole mobility (a measure of how quickly holes can move) of these diamond-based transistors was only 1-10% of that of the diamond material before device fabrication.
    The NIMS research team succeeded in developing a high-performance FET by using hexagonal boron nitride (h-BN) as a gate insulator instead of conventionally used oxides (e.g., alumina), and by employing a new fabrication technique capable of preventing the surface of hydrogen-terminated diamond from being exposed to air. At high hole densities, the hole mobility of this FET was five times that of conventional FETs with oxide gate insulators. FETs with high hole mobility can operate with lower electrical resistance, thereby reducing conduction loss, and can be used to develop higher speed and smaller electronic devices. The team also demonstrated normally-off operation of the FET, an important feature for power electronics applications. The new fabrication technique enabled removal of electron acceptors from the surface of the hydrogen-terminated diamond. This was the key to the team’s success in developing the high-performance FET, although these acceptors had generally been thought to be necessary in inducing electrical conductivity in hydrogen-terminated diamonds.
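    As a back-of-the-envelope illustration of why higher hole mobility lowers conduction loss (the hole density and mobility values below are assumed for illustration, not the paper's measurements), the sheet resistance of the channel scales as R_sheet = 1 / (q * mu * p_s), so five times the mobility at the same hole density means one fifth the resistance:

      q = 1.602e-19          # elementary charge, C
      p_s = 1e13 * 1e4       # assumed sheet hole density: 1e13 per cm^2, converted to per m^2
      mu_oxide = 50e-4       # assumed hole mobility with an oxide gate: 50 cm^2/(V s), in m^2/(V s)
      mu_hbn = 5 * mu_oxide  # five-fold higher mobility reported for the h-BN-gated FET

      for label, mu in [("oxide gate", mu_oxide), ("h-BN gate", mu_hbn)]:
          r_sheet = 1.0 / (q * mu * p_s)   # sheet resistance of the hole channel, ohm per square
          print(f"{label}: R_sheet = {r_sheet:,.0f} ohm/sq")
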
    These results are new mileposts in the development of efficient diamond transistors for high-performance power electronics and communications devices. The team hopes to further improve the physical properties of the diamond FET and to make it more suitable for practical use.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan.