More stories

  • Largest ever human family tree: 27 million ancestors

    Researchers from the University of Oxford’s Big Data Institute have taken a major step towards mapping the entirety of genetic relationships among humans: a single genealogy that traces the ancestry of all of us. The study has been published today in Science.
    The past two decades have seen extraordinary advancements in human genetic research, generating genomic data for hundreds of thousands of individuals, including from thousands of prehistoric people. This raises the exciting possibility of tracing the origins of human genetic diversity to produce a complete map of how individuals across the world are related to each other.
    Until now, the main challenges to this vision were working out a way to combine genome sequences from many different databases and developing algorithms to handle data of this size. However, a new method published today by researchers from the University of Oxford’s Big Data Institute can easily combine data from multiple sources and scale to accommodate millions of genome sequences.
    Dr Yan Wong, an evolutionary geneticist at the Big Data Institute, and one of the principal authors, explained: “We have basically built a huge family tree, a genealogy for all of humanity that models as exactly as we can the history that generated all the genetic variation we find in humans today. This genealogy allows us to see how every person’s genetic sequence relates to every other, along all the points of the genome.”
    Since individual genomic regions are only inherited from one parent, either the mother or the father, the ancestry of each point on the genome can be thought of as a tree. The set of trees, known as a “tree sequence” or “ancestral recombination graph,” links genetic regions back through time to ancestors where the genetic variation first appeared.
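    For readers who want a concrete sense of what a tree sequence is, the short sketch below simulates a tiny one and walks along the genome, printing the local tree for each interval. It uses the open-source msprime and tskit libraries, which are not named in this article and stand in here purely as an illustration of the data structure.
    ```python
    # Minimal tree-sequence sketch (illustrative only; assumes the msprime/tskit libraries).
    import msprime

    ts = msprime.sim_ancestry(
        samples=5,                 # 5 diploid individuals
        sequence_length=50_000,    # 50 kb of genome
        recombination_rate=1e-8,
        population_size=10_000,
        random_seed=42,
    )

    # Each genomic interval has its own genealogical tree; adjacent trees share
    # most of their ancestral nodes, which is what makes the format so compact.
    for tree in ts.trees():
        left, right = tree.interval.left, tree.interval.right
        root_time = ts.node(tree.root).time
        print(f"{left:>8.0f}-{right:<8.0f} root node {tree.root}, "
              f"common ancestor ~{root_time:.0f} generations ago")
    ```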
    Lead author Dr Anthony Wilder Wohns, who undertook the research as part of his PhD at the Big Data Institute and is now a postdoctoral researcher at the Broad Institute of MIT and Harvard, said: “Essentially, we are reconstructing the genomes of our ancestors and using them to form a vast network of relationships. We can then estimate when and where these ancestors lived. The power of our approach is that it makes very few assumptions about the underlying data and can also include both modern and ancient DNA samples.”
    The study integrated data on modern and ancient human genomes from eight different databases and included a total of 3,609 individual genome sequences from 215 populations. The ancient genomes included samples found across the world with ages ranging from thousands of years to over 100,000 years. The algorithms predicted where common ancestors must be present in the evolutionary trees to explain the patterns of genetic variation. The resulting network contained almost 27 million ancestors.
    After adding location data on these sample genomes, the authors used the network to estimate where the predicted common ancestors had lived. The results successfully recaptured key events in human evolutionary history, including the migration out of Africa.
    Although the genealogical map is already an extremely rich resource, the research team plans to make it even more comprehensive by continuing to incorporate genetic data as it becomes available. Because tree sequences store data in a highly efficient way, the dataset could easily accommodate millions of additional genomes.
    Dr Wong said: “This study is laying the groundwork for the next generation of DNA sequencing. As the quality of genome sequences from modern and ancient DNA samples improves, the trees will become even more accurate and we will eventually be able to generate a single, unified map that explains the descent of all the human genetic variation we see today.”
    Dr Wohns added: “While humans are the focus of this study, the method is valid for most living things; from orangutans to bacteria. It could be particularly beneficial in medical genetics, in separating out true associations between genetic regions and diseases from spurious connections arising from our shared ancestral history.”
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.

  • Entanglement unlocks scaling for quantum machine learning

    The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.
    “Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”
    The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
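    A concrete toy illustration of that classical statement (not taken from the paper): the short script below averages a learner's accuracy on unseen points over every possible labeling of a four-point domain and lands on exactly 50%, no matter how the learner extrapolates.
    ```python
    # Toy demonstration of the classical No-Free-Lunch theorem: averaged over all
    # possible target functions, any fixed learner scores 50% on unseen inputs.
    from itertools import product

    domain = [0, 1, 2, 3]          # four possible inputs
    train_x = [0, 1]               # the learner sees labels for these two
    test_x = [2, 3]                # ...and is scored on these two

    def learner(train_labels, x):
        """A deliberately naive rule: predict the majority training label."""
        return int(sum(train_labels) >= len(train_labels) / 2)

    total, count = 0.0, 0
    for labels in product([0, 1], repeat=len(domain)):   # all 2^4 target functions
        train_labels = [labels[x] for x in train_x]
        correct = sum(learner(train_labels, x) == labels[x] for x in test_x)
        total += correct / len(test_x)
        count += 1

    print(f"average off-training-set accuracy: {total / count:.2f}")  # -> 0.50
    ```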
    The new Los Alamos No-Free-Lunch theorem shows that in the quantum regime entanglement is also a currency, and one that can be exchanged for data to reduce data requirements.
    Using a Rigetti quantum computer, the team entangled the quantum data set with a reference system to verify the new theorem.
    “We demonstrated on quantum hardware that we could effectively violate the standard No-Free-Lunch theorem using entanglement, while our new formulation of the theorem held up under experimental test,” said Kunal Sharma, the first author on the article.
    “Our theorem suggests that entanglement should be considered a valuable resource in quantum machine learning, along with big data,” said Patrick Coles, a physicist at Los Alamos and senior author on the article. “Classical neural networks depend only on big data.”
    Entanglement describes the state of a system of atomic-scale particles that cannot be fully described independently or individually. Entanglement is a key component of quantum computing.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Error mitigation approach helps quantum computers level up

    A collaboration between Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Applied Mathematics and Computational Research Division (AMCRD) and Physics Division has yielded a new approach to error mitigation that could help make quantum computing’s theoretical potential a reality.
    The research team describes this work in a paper published in Physical Review Letters, “Mitigating Depolarizing Noise on Quantum Computers with Noise-Estimation Circuits.”
    “Quantum computers have the potential to solve more complex problems way faster than classical computers,” said Bert de Jong, one of the lead authors of the study and the director of the AIDE-QC and QAT4Chem quantum computing projects. De Jong also leads the AMCRD’s Applied Computing for Scientific Discovery Group. “But the real challenge is quantum computers are relatively new. And there’s still a lot of work that has to be done to make them reliable.”
    For now, one of the problems is that quantum computers are still too error-prone to be consistently useful. This is due in large part to something known as “noise” (errors).
    There are different types of noise, including readout noise and gate noise. The former has to do with reading out the result of a run on a quantum computer; the more noise, the higher the chance a qubit — the quantum equivalent of a bit on a classical computer — will be measured in the wrong state. The latter relates to the actual operations performed; noise here means the probability of applying the wrong operation. And the prevalence of noise dramatically increases the more operations one tries to perform with a quantum computer, which makes it harder to tease out the right answer and severely limits quantum computers’ usability as they’re scaled up.
    “So noise here just basically means: It’s stuff you don’t want, and it obscures the result you do want,” said Ben Nachman, a Berkeley Lab physicist and co-author on the study who also leads the cross-cutting Machine Learning for Fundamental Physics group.
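    The paper's title points to the mitigation strategy: estimate how much a depolarizing noise channel shrinks measured expectation values, then rescale. The sketch below is a purely schematic illustration of that rescaling idea under an assumed global depolarizing model, with made-up numbers rather than anything from the study.
    ```python
    # Schematic sketch of noise-estimation mitigation under a global depolarizing
    # model (an assumption; all numbers are invented for illustration).
    # For a traceless observable, <O>_noisy = f * <O>_ideal, where f is an unknown
    # fidelity factor. A reference circuit with a known ideal value estimates f.
    import numpy as np

    rng = np.random.default_rng(0)
    f_true = 0.62                    # unknown circuit fidelity we want to cancel out

    # Reference ("noise-estimation") circuit: ideal expectation value known to be 1.0
    ref_ideal = 1.0
    ref_noisy = f_true * ref_ideal + rng.normal(0, 0.01)   # measured with shot noise

    # Circuit of interest: ideal value unknown to us (here 0.37 for the demo)
    target_ideal = 0.37
    target_noisy = f_true * target_ideal + rng.normal(0, 0.01)

    f_est = ref_noisy / ref_ideal                 # estimate the depolarizing factor
    mitigated = target_noisy / f_est              # rescale the noisy measurement

    print(f"noisy = {target_noisy:.3f}, mitigated = {mitigated:.3f}, ideal = {target_ideal}")
    ```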

  • California's push for computer science education examined

    New studies of computer science education at California high schools found that a greater emphasis on computer science education did not produce the anticipated spillover effects, neither improving nor harming students’ math or English language arts skills, according to school-level test scores.
    However, one trade-off of increased enrollments in computing courses may be that students are taking fewer humanities courses such as the arts and social studies, researchers at the University of Illinois Urbana-Champaign found.
    Paul Bruno and Colleen M. Lewis examined the implications of California’s recent state policies promoting computer science education and the proliferation of these courses in the state’s high schools. Bruno is a professor of education policy, organization and leadership, and Lewis is a professor of computer science in the Grainger College of Engineering, both at Illinois.
    Using data that schools reported to the California Department of Education from 2003-2019, the researchers explored the effects on student test scores and the curricular trade-offs of student enrollments in computer science courses. That study was published in the journal Educational Administration Quarterly.
    In a related project, the couple — who are marital as well as research partners — explored equity and diversity among California’s computer science teachers and their students. That study was published in Policy Futures in Education.
    The Google Computer Science Education Research Program supported both projects.

  • Development of a diamond transistor with high hole mobility

    Using a new fabrication technique, NIMS has developed a diamond field-effect transistor (FET) with high hole mobility, which allows reduced conduction loss and higher operational speed. This new FET also exhibits normally-off behavior (i.e., electric current flow through the transistor ceases when no gate voltage is applied, a feature that makes electronic devices safer). These results may facilitate the development of low-loss power conversion and high-speed communications devices.
    Diamond has excellent wide bandgap semiconductor properties: its bandgap is larger than those of silicon carbide and gallium nitride, which are already in practical use. Diamond therefore could potentially be used to create power electronics and communications devices capable of operating more energy efficiently at higher speeds, voltages and temperatures. A number of R&D projects have previously been carried out with the aim of creating FETs using hydrogen-terminated diamonds (i.e., diamonds with their superficial carbon atoms covalently bonded with hydrogen atoms). However, these efforts have failed to fully exploit diamonds’ excellent wide bandgap semiconductor properties: the hole mobility (a measure of how quickly holes can move) of these diamond-integrated transistors was only 1-10% that of the diamonds before integration.
    The NIMS research team succeeded in developing a high-performance FET by using hexagonal boron nitride (h-BN) as a gate insulator instead of conventionally used oxides (e.g., alumina), and by employing a new fabrication technique capable of preventing the surface of hydrogen-terminated diamond from being exposed to air. At high hole densities, the hole mobility of this FET was five times that of conventional FETs with oxide gate insulators. FETs with high hole mobility can operate with lower electrical resistance, thereby reducing conduction loss, and can be used to develop higher speed and smaller electronic devices. The team also demonstrated normally-off operation of the FET, an important feature for power electronics applications. The new fabrication technique enabled removal of electron acceptors from the surface of the hydrogen-terminated diamond. This removal was the key to the team’s success in developing the high-performance FET, even though these acceptors had generally been thought to be necessary for inducing electrical conductivity in hydrogen-terminated diamonds.
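    To make the link between mobility and conduction loss concrete, the rough calculation below applies the standard sheet-resistance relation R = 1/(q · p · μ); the carrier density and mobility values are illustrative assumptions, not figures from the study.
    ```python
    # Back-of-the-envelope sketch (illustrative numbers, not from the study):
    # channel sheet resistance scales as 1 / (q * p_s * mu), so a 5x higher hole
    # mobility at the same sheet hole density gives ~5x lower resistance and
    # correspondingly lower conduction (I^2 * R) loss.
    q = 1.602e-19           # elementary charge, C
    p_s = 5e12              # sheet hole density, cm^-2 (assumed, typical order of magnitude)
    mu_oxide = 50           # hole mobility with an oxide gate insulator, cm^2/(V s) (assumed)
    mu_hbn = 5 * mu_oxide   # five times higher with the h-BN gate insulator

    for label, mu in [("oxide gate", mu_oxide), ("h-BN gate", mu_hbn)]:
        r_sheet = 1.0 / (q * p_s * mu)   # ohms per square
        print(f"{label}: mobility {mu} cm^2/Vs -> sheet resistance {r_sheet:,.0f} ohm/sq")
    ```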
    These results are new mileposts in the development of efficient diamond transistors for high-performance power electronics and communications devices. The team hopes to further improve the physical properties of the diamond FET and to make it more suitable for practical use.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.

  • Rebooting evolution

    The building blocks of life-saving therapeutics could be developed in days instead of years thanks to new software that simulates evolution.
    Proseeker is the name of a new computational tool that mimics the processes of natural selection, producing proteins that can be used for a range of medicinal and household uses.
    The enzymes in your laundry detergent, the insulin in your diabetes medication or the antibodies used in cancer therapy are currently made in the laboratory using a painstaking process called directed evolution.
    Laboratory evolution mimics natural evolution by making mutations in naturally sourced proteins and selecting the best mutants, which are then mutated and selected again; this time-intensive and laborious process eventually yields useful proteins.
    Scientists at the ARC Centre of Excellence in Synthetic Biology have now discovered a way to perform the entire process of directed evolution using a computer. It can reduce the time required from many months or even years to just days.
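    As a toy picture of what such an in-silico directed-evolution loop does (an illustration only, not the Proseeker algorithm), the sketch below mutates a population of sequences, scores them with a stand-in fitness function and carries the fittest variants into the next round.
    ```python
    # Toy in-silico directed-evolution loop (illustrative; not the Proseeker method).
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    TARGET = "MKTAYIAKQR"                      # stand-in for a desired property

    def fitness(seq):
        # Proxy score: fraction of positions matching the target motif
        return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

    def mutate(seq, rate=0.1):
        return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else a
                       for a in seq)

    random.seed(1)
    population = ["".join(random.choice(AMINO_ACIDS) for _ in TARGET) for _ in range(50)]

    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        best = population[:10]                                            # selection
        population = best + [mutate(random.choice(best)) for _ in range(40)]  # mutation

    best_seq = max(population, key=fitness)
    print("best sequence:", best_seq, "fitness:", round(fitness(best_seq), 2))
    ```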
    The team was led by Professor Oliver Rackham, Curtin University, in collaboration with Professor Aleksandra Filipovska, the University of Western Australia, and is based at the Harry Perkins Institute of Medical Research in Perth, Western Australia.
    To prove how useful this process could be, they took a protein with no function at all and gave it the ability to bind DNA.
    ‘Proteins that bind DNA are currently revolutionising the field of gene therapy where scientists are using them to reverse disease-causing mutations,’ says Professor Rackham. ‘So this could be of great use in the future.
    ‘Reconstituting the entire process of directed evolution represents a radical advance for the field.’
    Story Source:
    Materials provided by Curtin University. Original written by Lucien Wilkinson. Note: Content may be edited for style and length.

  • New methods for network visualizations enable change of perspectives and views

    When visualizing data using networks, the type of representation is crucial for extracting hidden information and relationships. The research group of Jörg Menche, Adjunct Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences, Professor at the University of Vienna, and Group leader at Max Perutz Labs, developed a new method for generating network layouts that allow for visualizing different information of a network in two- and three-dimensional virtual space and exploring different perspectives. The results could also facilitate future research on rare diseases by providing more versatile, comprehensible representations of complex protein interactions.
    Network visualizations allow for exploring connections between individual data points. However, the larger and more complex the networks, the more difficult it becomes to find the information you are looking for. For lack of suitable layouts, so-called “hairball” visualizations emerge that often obscure network structure rather than elucidate it. Scientists from Jörg Menche’s research group at CeMM and Max Perutz Labs (a joint venture of the University of Vienna and the Medical University of Vienna) developed a method that makes it possible to specify in advance which network properties and information should be visually represented in order to explore them interactively. The results have now been published in Nature Computational Science.
    Reducing complexity
    For the study, first author Christiane V. R. Hütter, a PhD student in Jörg Menche’s research group, used the latest dimensionality reduction techniques that allow visualizations for networks with thousands of points to be computed within a very short time on a standard laptop. “The key idea behind our research was to develop different views for large networks to capture the complexity and get a more comprehensive view and present it in a visually understandable way — similar to looking at maps of the same region with different information content, detailed views and perspectives.” Menche Lab scientists developed four different network layouts, which they termed cartographs, as well as two- and three-dimensional visualizations, each following different rules to open up new perspectives on a given dataset. Any network information can be encoded and visualized in this fashion, for example, the structural significance of a particular point, but also functional features. Users can switch between different layouts to get a comprehensive picture. Study leader Jörg Menche explains: “Using the new layouts, we can now specify in advance that we want to see, for example, the number of connections of a point within the network represented, or a particular functional characteristic. In a biological network, for instance, I can explore connections between genes that are associated with a particular disease and what they might have in common.”
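    The general recipe can be sketched in a few lines: build a feature matrix of whatever node properties should shape the map, then reduce it to 2D or 3D coordinates. The example below is a generic illustration using the networkx and scikit-learn libraries with random stand-in data; it is not the published cartographs code.
    ```python
    # Minimal sketch of a feature-driven network layout (generic illustration).
    import networkx as nx
    import numpy as np
    from sklearn.decomposition import PCA

    G = nx.barabasi_albert_graph(n=300, m=3, seed=7)   # stand-in for an interactome

    # Per-node features to be reflected in the layout: degree, clustering, and a
    # "disease association" flag (random here, purely for illustration).
    rng = np.random.default_rng(7)
    features = np.column_stack([
        [G.degree(n) for n in G.nodes],
        list(nx.clustering(G).values()),
        rng.integers(0, 2, G.number_of_nodes()),
    ])

    coords = PCA(n_components=2).fit_transform(features)  # 2-D layout positions
    pos = {node: coords[i] for i, node in enumerate(G.nodes)}
    print("layout computed for", len(pos), "nodes; example position:", pos[0])
    ```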
    The interplay of genes
    The scientists performed a proof-of-concept on both simple model networks and the complex interactome network, which maps all the proteins of the human body and their interactions. This consists of more than 16,000 points and over 300,000 connections. Christiane V.R. Hütter explains: “Using our new layouts, we are now able to visually represent different features of proteins and their connections, such as the close relationship between the biological importance of a protein and its centrality within the network. We can also visualize connection patterns between a group of proteins associated with the same disease that are difficult to decipher using conventional methods.”
    Tailored solutions
    The flexibility of the new framework allows users to tailor network visualizations for a specific application. For example, the study authors were able to develop 3D interactome layouts specifically for studying the biological functions of certain genes whose mutations are suspected to cause rare diseases. Jörg Menche adds, “To facilitate the visual representation and also analysis of large networks such as the interactome, our layouts can also be integrated into a virtual reality platform.”

  • Visualization of the origin of magnetic forces by atomic resolution electron microscopy

    The joint development team of Professor Shibata (the University of Tokyo), JEOL Ltd. and Monash University succeeded in directly observing an atomic magnetic field, the origin of magnets (magnetic force), for the first time in the world. The observation was conducted using the newly developed Magnetic-field-free Atomic-Resolution STEM (MARS) (1). This team had already succeeded in observing the electric field inside atoms for the first time in 2012. However, since the magnetic fields in atoms are extremely weak compared with electric fields, the technology to observe them had remained out of reach ever since electron microscopes were first developed. This is an epoch-making achievement that will rewrite the history of microscope development.
    Electron microscopes have the highest spatial resolution among all currently used microscopes. However, achieving ultra-high resolution so that atoms can be observed directly requires placing the sample in an extremely strong lens magnetic field. As a result, atomic-resolution observation of magnetic materials that are strongly affected by this field, such as magnets and steels, had been impossible for many years. To address this problem, the team developed a lens with a completely new structure in 2019. Using this new lens, the team achieved atomic-resolution observation of magnetic materials free from the influence of the lens magnetic field. The team’s next goal was to observe the magnetic fields of atoms themselves, the origin of magnetic force, and they continued to develop the technology needed to achieve it.
    This time, the joint development team took on the challenge of observing the magnetic fields of iron (Fe) atoms in a hematite crystal (α-Fe2O3) by loading MARS with a newly developed high-sensitivity high-speed detector, and further using computer image processing technology. To observe the magnetic fields, they used the Differential Phase Contrast (DPC) method (2) at atomic resolution, which is an ultrahigh-resolution local electromagnetic field measurement method using a scanning transmission electron microscope (STEM) (3), developed by Professor Shibata et al. The results directly demonstrated that iron atoms themselves are small magnets (atomic magnet). The results also clarified the origin of magnetism (antiferromagnetism (4)) exhibited by hematite at the atomic level.
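    Schematically, segmented-detector DPC works by comparing the intensities collected on opposite detector segments: a local electromagnetic field deflects the focused beam, so the intensity difference at each scan position maps the in-plane field. The toy sketch below illustrates only this bookkeeping with made-up numbers; it is not the MARS analysis pipeline.
    ```python
    # Toy sketch of the differential phase contrast (DPC) idea (not the MARS code):
    # with a four-segment detector (A = +x, B = +y, C = -x, D = -y), the difference
    # signals approximate the beam deflection, which is proportional to the
    # in-plane electromagnetic field at each scan position.
    import numpy as np

    # Hypothetical segment intensities on a 4x4 scan grid (made-up numbers).
    rng = np.random.default_rng(3)
    I_A, I_B, I_C, I_D = (rng.uniform(0.9, 1.1, (4, 4)) for _ in range(4))

    I_total = I_A + I_B + I_C + I_D
    deflection_x = (I_A - I_C) / I_total     # tracks the x component of the field
    deflection_y = (I_B - I_D) / I_total     # tracks the y component of the field

    field_magnitude = np.hypot(deflection_x, deflection_y)
    print("relative field magnitude map:\n", np.round(field_magnitude, 3))
    ```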
    These results demonstrate that atomic magnetic fields can be observed directly and establish a method for doing so. The method is expected to become a new measurement technique that will drive the research and development of various magnetic materials and devices such as magnets, steels, magnetic devices, magnetic memory, magnetic semiconductors, spintronics and topological materials.
    This research was conducted by the joint development team of Professor Naoya Shibata (Director of the Institute of Engineering Innovation, School of Engineering, the University of Tokyo) and Dr. Yuji Kohno et al. (Specialists of JEOL Ltd.) in collaboration with Monash University, Australia, under the Advanced Measurement and Analysis Systems Development (SENTAN), Japan Science and Technology Agency (JST).
    Terms
    (1) Magnetic-field-free Atomic-Resolution STEM (MARS)