More stories

  • Machine learning algorithm revolutionizes how scientists study behavior

    To Eric Yttri, assistant professor of biological sciences and Neuroscience Institute faculty at Carnegie Mellon University, the best way to understand the brain is to watch how organisms interact with the world.
    “Behavior drives everything we do,” Yttri said.
    As a behavioral neuroscientist, Yttri studies what happens in the brain when animals walk, eat, sniff or do any action. This kind of research could help answer questions about neurological diseases or disorders like Parkinson’s disease or stroke. But identifying and predicting animal behavior is extremely difficult.
    Now, a new unsupervised machine learning algorithm developed by Yttri and Alex Hsu, a biological sciences Ph.D. candidate in his lab, makes studying behavior much easier and more accurate. The researchers published a paper on the new tool, B-SOiD (Behavioral segmentation of open field in DeepLabCut), in Nature Communications.
    Previously, the standard method to capture animal behavior was to track very simple actions, like whether a trained mouse pressed a lever or whether an animal was eating food or not. Alternatively, the experimenter could spend hours and hours manually identifying behavior, usually frame by frame on a video, a process prone to human error and bias.
    Hsu realized he could let an unsupervised learning algorithm do the time-consuming work. B-SOiD discovers behaviors by identifying patterns in the position of an animal’s body. The algorithm works with computer vision software and can tell researchers what behavior is happening at every frame in a video.
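    The details of the published pipeline are in the paper, but the general idea of pose-based, unsupervised behavior discovery can be sketched in a few lines of Python. The sketch below is illustrative only: it builds simple distance-and-speed features from tracked keypoints and clusters them with DBSCAN as a stand-in for the kind of embedding-plus-clustering used in tools like B-SOiD; the input file name, feature choices, and parameter values are assumptions, not the published method.

    ```python
    # Minimal sketch: discover behavior clusters from pose-tracking output.
    # Illustrative only, not the published B-SOiD pipeline. Assumes `keypoints`
    # is an array of shape (n_frames, n_bodyparts, 2), as pose trackers such as
    # DeepLabCut produce.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN

    def frame_features(keypoints):
        """Per-frame features: pairwise limb distances plus per-bodypart speeds."""
        n_frames, n_parts, _ = keypoints.shape
        diffs = keypoints[:, :, None, :] - keypoints[:, None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)            # all pairwise distances
        iu = np.triu_indices(n_parts, k=1)
        dist_feats = dists[:, iu[0], iu[1]]
        speeds = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)
        speeds = np.vstack([speeds, speeds[-1:]])         # pad back to n_frames
        return np.hstack([dist_feats, speeds])

    keypoints = np.load("pose_estimates.npy")             # hypothetical tracker output
    X = StandardScaler().fit_transform(frame_features(keypoints))
    labels = DBSCAN(eps=0.8, min_samples=50).fit_predict(X)   # one label per frame
    print(dict(zip(*np.unique(labels, return_counts=True))))
    ```

    Each frame ends up with a cluster label; runs of frames that share a label are the candidate behaviors a researcher would then inspect and name.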

  • Exploring the past: Computational models shed new light on the evolution of prehistoric languages

    A new linguistic study sheds light on the nature of languages spoken before the written period, using computational modeling to reconstruct the grammar of the 6500-7000 year-old Proto-Indo-European language, which is the ancestor of most languages of Eurasia, including English and Hindi. The model employed makes it possible to observe evolutionary trends in language over the millennia. The article, “Reconstructing the evolution of Indo-European grammar,” authored by Gerd Carling (Lund University) and Chundra Cathcart (University of Zurich) will be published in the September 2021 issue of the scholarly journal Language.
    In the article, Carling & Cathcart use a database of features from 125 different languages of the Indo-European family, including extinct languages such as Sanskrit and Latin. Features include most of the differences that make the languages difficult to learn, such as differentiation in word order (the girl throws the stone in English or caitheann an cailín an chloch “throws the girl the stone” in Irish), gender (the apple in English or der Apfel in German), number of cases, number of forms of the verb, or whether languages have prepositions or postpositions (to the house in English but ghar ko “house-to” in Hindi). With the aid of methods adopted from computational biology, the authors use known grammars to reconstruct grammars of unknown prehistorical periods.
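    The “methods adopted from computational biology” are phylogenetic: grammatical features of attested languages are placed on a family tree and ancestral values are inferred. As a toy illustration of that idea only, the sketch below runs Fitch parsimony, a far simpler stand-in for the Bayesian phylogenetic models used in the paper, on one binary feature over a hypothetical four-language tree.

    ```python
    # Minimal sketch of ancestral-state reconstruction for one grammatical
    # feature (e.g. "has prepositions" = 1, "has postpositions" = 0) on a toy
    # tree. Fitch parsimony is used here as a simple stand-in for the Bayesian
    # phylogenetic models of the paper; the tree and values are illustrative.

    def fitch(node, states):
        """Return the candidate state set at `node` (a leaf name or a (left, right) pair)."""
        if isinstance(node, str):                  # leaf: observed state of an attested language
            return {states[node]}
        left, right = (fitch(child, states) for child in node)
        common = left & right
        return common if common else left | right  # intersect when possible, else union

    # Toy tree: ((English, German), (Hindi, Sanskrit))
    tree = (("English", "German"), ("Hindi", "Sanskrit"))
    observed = {"English": 1, "German": 1, "Hindi": 0, "Sanskrit": 0}

    print("most parsimonious root state(s):", fitch(tree, observed))   # {0, 1}: ambiguous
    ```

    The models in the paper are probabilistic rather than parsimony-based, but the underlying idea of inferring unobserved ancestral grammar from attested daughter languages is the same.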
    The reconstruction of Indo-European grammar has been the subject of lengthy discussion for over a century. In the 19th century, scholars held the view that the ancient written languages, such as Classical Greek, were most similar to the reconstructed Proto-Indo-European language. The discovery of the archaic but highly dissimilar Hittite language in the early 20th century shifted the focus. Instead, scholars believed that Proto-Indo-European was a language with a structure more similar to non-Indo-European languages of Eurasia such as Basque or languages of the Caucasus region.
    The study confirms that Proto-Indo-European was similar to Classical Greek and Sanskrit, supporting the theory of the 19th century scholars. However, the study also provides new insights into the mechanisms of language change. Some features of the proto-language were very stable and dominant over time. Moreover, features of higher prominence and frequency were less likely to change.
    Though this study focused on one single family (Indo-European), the methods used in the paper can be applied to many other language families to reconstruct earlier states of languages and to observe how language evolves over time. The model also forms a basis for predicting future changes in language evolution.
    Story Source:
    Materials provided by the Linguistic Society of America.

  • Revealing the hidden structure of quantum entangled states

    Quantum states that are entangled in many dimensions are key to our emerging quantum technologies: more dimensions mean a higher quantum bandwidth (faster communication) and better resilience to noise (greater security), both crucial for fast, secure communication and for speeding up error-free quantum computing. Now researchers at the University of the Witwatersrand in Johannesburg, South Africa, together with collaborators from Scotland, have invented a new approach to probing these “high-dimensional” quantum states, reducing the measurement time from decades to minutes.
    The study was published in the scientific journal Nature Communications on Friday, 27 August 2021. Wits PhD student Isaac Nape worked with Distinguished Professor Andrew Forbes, lead investigator on this study and Director of the Structured Light Laboratory in the School of Physics at Wits University, as well as postdoctoral fellow Dr Valeria Rodriguez-Fajardo, visiting Taiwanese researcher Dr Hsiao-Chih Huang, and Dr Jonathan Leach and Dr Feng Zhu from Heriot-Watt University in Scotland.
    In their paper, titled “Measuring dimensionality and purity of high-dimensional entangled states,” the team outlined a new approach to quantum measurement, testing it on a 100-dimensional quantum entangled state. With traditional approaches, the measurement time increases unfavourably with dimension, so that unravelling a 100-dimensional state by a full ‘quantum state tomography’ would take decades. Instead, the team showed that the salient information of the quantum system — how many dimensions are entangled, and to what level of purity? — could be deduced in just minutes. The new approach requires only simple ‘projections’ that could easily be done in most laboratories with conventional tools. Using light as an example, the team used an all-digital approach to perform the measurements.
    The problem, explains Forbes, is that while high-dimensional states are easily made, particularly with entangled particles of light (photons), they are not easy to measure — our toolbox for measuring and controlling them is almost empty. You can think of a high-dimensional quantum state like the faces of a dice. A conventional dice has 6 faces, numbered 1 through 6, for a six-dimensional alphabet that can be used for computing, or for transferring information in communication. To make a “high-dimensional dice” means a dice with many more faces: 100 dimensions equals 100 faces — a rather complicated polyhedron. In our everyday world it would be easy to count the faces to know what sort of resource we had available to us, but not so in the quantum world. In the quantum world, you can never see the whole dice, so counting the faces is very difficult. The way we get around this is to do a tomography, like they do in the medical world, building up a picture from many, many slices of the object. But the information in quantum objects can be enormous, so the time for this process is prohibitive. A faster approach is a ‘Bell measurement’, a famous test to tell if what you have in front of you is entangled, like asking it “are you quantum or not?” But while this confirms quantum correlations of the dice, it doesn’t say much about the number of faces it has.
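    A rough back-of-the-envelope comparison, using assumed numbers, shows why this matters: a two-photon state in d dimensions is described by roughly d^4 real parameters, so a full tomography needs on that order of measurement settings, while a probe built from a handful of projections per dimension scales only linearly in d. The per-setting acquisition time below is a made-up value chosen purely to illustrate the decades-versus-minutes contrast.

    ```python
    # Rough scaling illustration with assumed numbers (not from the paper):
    # full state tomography of a two-photon qudit state needs on the order of
    # d**4 settings, a projection-based probe on the order of d.
    d = 100                       # dimensions per photon
    seconds_per_setting = 10.0    # hypothetical acquisition time per setting

    tomography_settings = d**4    # ~ number of real parameters in the state
    projection_settings = 2 * d   # illustrative linear-in-d count

    for name, n in [("full tomography", tomography_settings),
                    ("projection probe", projection_settings)]:
        t = n * seconds_per_setting
        print(f"{name:>16}: {n:>12,} settings  ~ {t / (3600 * 24 * 365):.1f} years "
              f"({t / 60:.0f} minutes)")
    ```

    With these made-up numbers the tomography comes out at decades and the projection probe at about half an hour, which is the flavour of the speed-up the team reports.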
    “Our work circumvented the problem by a chance discovery, that there is a set of measurements that is not a tomography and not a Bell measurement, but that holds important information of both,” says Isaac Nape, the PhD student who executed the research. “In technical parlance, we blended these two measurement approaches, doing multiple projections that look like a tomography but measuring the visibilities of the outcomes, as if they were Bell measurements. This revealed the hidden information that could be extracted from the strength of the quantum correlations across many dimensions.” The combination of speed from the Bell-like approach and information from the tomography-like approach meant that key quantum parameters such as dimensionality and the purity of the quantum state could be determined quickly and quantitatively, the first approach to do so.
    “We are not suggesting that our approach replace other techniques,” says Forbes. “Rather, we see it as a fast probe to reveal what you are dealing with, and then use this information to make an informed decision on what to do next. A case of horses-for-courses.” For example, the team see their approach as changing the game in real-world quantum communication links, where a fast measurement of how noisy that quantum state has become and what this has done to the useful dimensions is crucial.
    Story Source:
    Materials provided by the University of the Witwatersrand.

  • New artificial intelligence tech set to transform heart imaging

    A new artificial-intelligence technology for heart imaging can potentially improve care for patients, allowing doctors to examine patients’ hearts for scar tissue while eliminating the need for the contrast injections required for traditional cardiovascular magnetic resonance imaging (CMR).
    A team of researchers who developed the technology, including doctors at UVA Health, reports the success of the approach in a new article in the scientific journal Circulation. The team compared its AI approach, known as Virtual Native Enhancement (VNE), with contrast-enhanced CMR scans now used to monitor hypertrophic cardiomyopathy, the most common genetic heart condition. The researchers found that VNE produced higher-quality images and better captured evidence of scar in the heart, all without the need for injecting the standard contrast agent required for CMR.
    “This is a potentially important advance, especially if it can be expanded to other patient groups,” said researcher Christopher Kramer, MD, the chief of the Division of Cardiovascular Medicine at UVA Health, Virginia’s only Center of Excellence designated by the Hypertrophic Cardiomyopathy Association. “Being able to identify scar in the heart, an important contributor to progression to heart failure and sudden cardiac death, without contrast, would be highly significant. CMR scans would be done without contrast, saving cost and any risk, albeit low, from the contrast agent.”
    Imaging Hypertrophic Cardiomyopathy
    Hypertrophic cardiomyopathy is the most common inheritable heart disease, and the most common cause of sudden cardiac death in young athletes. It causes the heart muscle to thicken and stiffen, reducing its ability to pump blood and requiring close monitoring by doctors.
    The new VNE technology will allow doctors to image the heart more often and more quickly, the researchers say. It also may help doctors detect subtle changes in the heart earlier, though more testing is needed to confirm that.
    The technology also would benefit patients who are allergic to the contrast agent injected for CMR, as well as patients with severely failing kidneys, a group that avoids the use of the agent.
    The new approach works by using artificial intelligence to enhance “T1-maps” of the heart tissue created by magnetic resonance imaging (MRI). These maps are combined with enhanced MRI “cines,” which are like movies of moving tissue — in this case, the beating heart. Overlaying the two types of images creates the artificial VNE image.
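    In rough code terms, that description amounts to a model with two image inputs (a native T1 map and a stack of cine frames) and one image output. The sketch below, written with PyTorch, is an illustrative stand-in rather than the architecture published in Circulation; the layer choices, sizes, and number of cine frames are assumptions.

    ```python
    # Illustrative two-input image-synthesis model, not the published VNE network.
    import torch
    import torch.nn as nn

    class TwoStreamSynthesizer(nn.Module):
        def __init__(self, n_cine_frames: int = 8):
            super().__init__()
            self.t1_stream = nn.Sequential(            # encodes the native T1 map (1 channel)
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.cine_stream = nn.Sequential(          # encodes the stacked cine frames
                nn.Conv2d(n_cine_frames, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.fuse = nn.Sequential(                 # fuses both streams into one output image
                nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1))

        def forward(self, t1_map, cine):
            feats = torch.cat([self.t1_stream(t1_map), self.cine_stream(cine)], dim=1)
            return self.fuse(feats)                    # synthetic "virtual enhancement" image

    model = TwoStreamSynthesizer()
    t1_map = torch.randn(1, 1, 256, 256)               # one native T1 map
    cine = torch.randn(1, 8, 256, 256)                 # eight cine frames of the same slice
    print(model(t1_map, cine).shape)                   # torch.Size([1, 1, 256, 256])
    ```

    In a real system the model would be trained against contrast-enhanced scans so that, at inference time, the contrast injection itself is no longer needed.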
    Based on these inputs, the technology can produce something virtually identical to the traditional contrast-enhanced CMR heart scans doctors are accustomed to reading — only better, the researchers conclude. “Avoiding the use of contrast and improving image quality in CMR would only help both patients and physicians down the line,” Kramer said.
    While the new research examined VNE’s potential in patients with hypertrophic cardiomyopathy, the technology’s creators envision it being used for many other heart conditions as well.
    “While currently validated in the HCM population, there is a clear pathway to extend the technology to a wider range of myocardial pathologies,” they write. “VNE has enormous potential to significantly improve clinical practice, reduce scan time and costs, and expand the reach of CMR in the near future.”

  • Quantum networks in our future

    Large-scale quantum networks have been proposed, but so far, they do not exist. Some components of what would make up such networks are being studied, but the control mechanism for such a large-scale network has not been developed. In AVS Quantum Science, by AIP Publishing, investigators outline how a time-sensitive network control plane could be a key component of a workable quantum network.
    Quantum networks are similar to classical networks. Information travels through them, providing a means of communication between devices and over distances. Quantum networks move quantum bits of information, called qubits, through the network.
    These qubits are usually photons. Through the quantum phenomena of superposition and entanglement, they can carry much more information than classical bits, which are limited to the logical states 0 and 1. Successful long-distance transmission of a qubit requires precise control and timing.
    In addition to the well-understood requirements of transmission distance and data rate, at least two other industry requirements need to be considered for quantum networks to be useful in a real-world setting.
    One is real-time network control, specifically time-sensitive networking. This control method, which takes network traffic into account, has been used successfully in other types of networks, such as Ethernet, to ensure messages are transmitted and received at precise times. This is precisely what is required to control quantum networks.
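    A concrete, if simplified, picture of time-sensitive networking: a repeating cycle is divided into slots, and each traffic flow is assigned reserved slots, so its messages depart at predictable times. The sketch below illustrates only that generic idea; the flow names, cycle length, and slot assignments are hypothetical and are not taken from the paper.

    ```python
    # Toy illustration of time-sensitive (slot-reserved) scheduling. Generic
    # TSN-style idea only, not the control plane proposed in the paper.
    CYCLE_US = 1000          # one scheduling cycle, in microseconds
    SLOT_US = 125            # slot length, in microseconds

    # flow name -> slots reserved for it in every cycle (hypothetical values)
    schedule = {
        "entanglement-distribution": [0, 1],    # earliest, highest-priority slots
        "heralding-signals": [2, 3],
        "classical-control": [4, 5, 6, 7],
    }

    def departure_times(flow, n_cycles=2):
        """Microsecond timestamps at which `flow` may transmit over n_cycles."""
        return [cycle * CYCLE_US + slot * SLOT_US
                for cycle in range(n_cycles)
                for slot in schedule[flow]]

    print(departure_times("entanglement-distribution"))   # [0, 125, 1000, 1125]
    ```

    Because every flow's departure times are fixed in advance, the timing precision that qubit transmission demands can be guaranteed rather than left to best-effort delivery.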
    The second requirement is cost. Large-scale adoption of an industrial quantum network will only happen if costs can be significantly reduced. One way to accomplish cost reduction is with photonic integrated circuits.

  • Standards for studies using machine learning

    Researchers in the life sciences who use machine learning for their studies should adopt standards that allow other researchers to reproduce their results, according to a comment article published today in the journal Nature Methods.
    The authors explain that the standards are key to enabling scientific breakthroughs, advancing knowledge, and ensuring research findings are reproducible from one group of scientists to the next. The standards would allow other groups of scientists to focus on the next breakthrough rather than spending time recreating the wheel built by the authors of the original study.
    Casey S. Greene, PhD, director of the University of Colorado School of Medicine’s Center for Health AI, is a corresponding author of the article, which he co-authored with first author Benjamin J. Heil, a member of Greene’s research team, and researchers from the United States, Canada, and Europe.
    “Ultimately all science requires trust — no scientist can reproduce the results from every paper they read,” Greene and his co-authors write. “The question, then, is how to ensure that machine-learning analyses in the life sciences can be trusted.”
    Greene and his co-authors outline standards to qualify for one of three levels of accessibility: bronze, silver, and gold. These standards each set minimum levels for sharing study materials so that other life science researchers can trust the work and, if warranted, validate the work and build on it.
    To qualify for a bronze standard, life science researchers would need to make their data, code, and models publicly available. In machine learning, computers learn from training data and having access to that data enables scientists to look for problems that can confound the process. The code tells future researchers how the computer was told to carry out the steps of the work.
    In machine learning, the resulting model is critically important. For future researchers, knowing the original research team’s model is critical for understanding how it relates to the data it is supposed to analyze. Without access to the model, other researchers cannot determine biases that might influence the work. For example, it can be difficult to determine whether an algorithm favors one group of people over another.
    “Being unable to examine a model also makes trusting it difficult,” the authors write.
    The silver standard calls for the data, models, and code provided at the bronze level, and adds more information about the system in which to run the code. For the next scientists, that information makes it theoretically possible to duplicate the training process.
    To qualify for the gold standard, researchers must add an “easy button” to their work to make it possible for future researchers to reproduce the previous analysis with a single command. The original researchers must automate all steps of their analysis so that “the burden of reproducing their work is as small as possible.” For the next scientists, this information makes it practically possible to duplicate the training process and either adapt or extend it.
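    In practice, a gold-standard “easy button” is often just a single top-level script that fetches the archived data, verifies it, and re-runs every stage of the analysis. The sketch below shows one way such a script might look; the file names, URL, checksum, and pipeline stages are hypothetical and are not taken from the article.

    ```python
    # reproduce.py: hypothetical single-command "easy button" for a study release.
    # Running `python reproduce.py` re-executes every step of the analysis.
    import hashlib
    import subprocess
    import urllib.request
    from pathlib import Path

    DATA_URL = "https://example.org/training_data.csv"   # hypothetical archived dataset
    DATA_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def fetch_data(path=Path("data/training_data.csv")):
        """Download the archived training data and verify its checksum."""
        path.parent.mkdir(parents=True, exist_ok=True)
        if not path.exists():
            urllib.request.urlretrieve(DATA_URL, path)
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != DATA_SHA256:
            raise SystemExit(f"checksum mismatch for {path}: {digest}")

    def run(step):
        """Run one scripted pipeline stage and stop on any failure."""
        print(f"--> {step}")
        subprocess.run(["python", step], check=True)

    if __name__ == "__main__":
        fetch_data()
        for step in ["src/preprocess.py", "src/train.py", "src/evaluate.py"]:
            run(step)
        print("done: figures and metrics written to results/")
    ```

    The bronze and silver levels are then just weaker versions of the same release: the data, code, and model are archived (bronze), plus a pinned description of the software environment needed to run them (silver).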
    Greene and his co-authors also offer recommendations for documenting the steps and sharing them.
    The Nature Methods article is an important contribution to the continuing refinement of the use of machine learning and other data-analysis methods in health sciences and other fields where trust is particularly important. Greene is one of several leaders recently recruited by the CU School of Medicine to establish a program in developing and applying robust data science methodologies to advance biomedical research, education, and clinical care.

  • New mathematical solutions to an old problem in astronomy

    For millennia, humanity has observed the changing phases of the Moon. The rise and fall of sunlight reflected off the Moon, as it presents its different faces to us, is known as a “phase curve.” Measuring phase curves of the Moon and Solar System planets is an ancient branch of astronomy that goes back at least a century. The shapes of these phase curves encode information on the surfaces and atmospheres of these celestial bodies. In modern times, astronomers have measured the phase curves of exoplanets using space telescopes such as Hubble, Spitzer, TESS and CHEOPS. These observations are compared with theoretical predictions. In order to do so, one needs a way of calculating these phase curves. It involves seeking a solution to a difficult mathematical problem concerning the physics of radiation.
    Approaches for the calculation of phase curves have existed since the 18th century. The oldest of these solutions goes back to the Swiss mathematician, physicist and astronomer, Johann Heinrich Lambert, who lived in the 18th century. “Lambert’s law of reflection” is attributed to him. The problem of calculating reflected light from Solar System planets was posed by the American astronomer Henry Norris Russell in an influential 1916 paper. Another well-known 1981 solution is attributed to the American lunar scientist Bruce Hapke, who built on the classic work of the Indian-American Nobel laureate Subrahmanyan Chandrasekhar in 1960. Hapke pioneered the study of the Moon using mathematical solutions of phase curves. The Soviet physicist Viktor Sobolev also made important contributions to the study of reflected light from celestial bodies in his influential 1975 textbook. Inspired by the work of these scientists, theoretical astrophysicist Kevin Heng of the Center for Space and Habitability CSH at the University of Bern has discovered an entire family of new mathematical solutions for calculating phase curves. The paper, authored by Kevin Heng in collaboration with Brett Morris from the National Center of Competence in Research NCCR PlanetS — which the University of Bern manages together with the University of Geneva — and Daniel Kitzmann from the CSH, has just been published in Nature Astronomy.
    Generally applicable solutions
    “I was fortunate that this rich body of work had already been done by these great scientists. Hapke had discovered a simpler way to write down the classic solution of Chandrasekhar, who famously solved the radiative transfer equation for isotropic scattering. Sobolev had realised that one can study the problem in at least two mathematical coordinate systems,” says Heng. The problem was brought to his attention by Sara Seager’s summary of it in her 2010 textbook.
    By combining these insights, Heng was able to write down mathematical solutions for the strength of reflection (the albedo) and the shape of the phase curve, both completely on paper and without resorting to a computer. “The ground-breaking aspect of these solutions is that they are valid for any law of reflection, which means they can be used in very general ways. The defining moment came for me when I compared these pen-and-paper calculations to what other researchers had done using computer calculations. I was blown away by how well they matched,” said Heng.
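    For a feel of what a pen-and-paper phase-curve solution looks like, the snippet below evaluates the classic closed-form phase function of an idealised Lambert sphere, Φ(α) = (sin α + (π − α) cos α) / π. This is textbook material rather than the new family of solutions reported in Nature Astronomy, but it shows the kind of closed-form expression involved.

    ```python
    # Classic Lambert-sphere phase curve (textbook example, not the new solutions).
    import numpy as np

    def lambert_phase_curve(alpha_rad):
        """Integral phase function of a Lambert-scattering sphere, with Phi(0) = 1."""
        return (np.sin(alpha_rad) + (np.pi - alpha_rad) * np.cos(alpha_rad)) / np.pi

    for alpha_deg in (0, 45, 90, 135, 180):
        a = np.radians(alpha_deg)
        print(f"alpha = {alpha_deg:3d} deg  ->  Phi = {lambert_phase_curve(a):.3f}")

    # A Lambert sphere has geometric albedo 2/3; multiplying the albedo by
    # Phi(alpha) and by the (planet radius / orbital distance)**2 dilution
    # factor gives the reflected-light signal that telescopes measure.
    ```

    The new solutions generalise exactly this kind of closed form to arbitrary reflection laws, which is why they can be evaluated in seconds rather than requiring expensive computer calculations.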
    Successful analysis of the phase curve of Jupiter
    “What excites me is not just the discovery of new theory, but also its major implications for interpreting data,” says Heng. For example, the Cassini spacecraft measured phase curves of Jupiter in the early 2000s, but an in-depth analysis of the data had not previously been done, probably because the calculations were too computationally expensive. With this new family of solutions, Heng was able to analyze the Cassini phase curves and infer that the atmosphere of Jupiter is filled with clouds made up of large, irregular particles of different sizes. This parallel study has just been published in the Astrophysical Journal Letters, in collaboration with Cassini data expert and planetary scientist Liming Li of the University of Houston in Texas, U.S.A.
    New possibilities for the analysis of data from space telescopes
    “The ability to write down mathematical solutions for phase curves of reflected light on paper means that one can use them to analyze data in seconds,” said Heng. It opens up new ways of interpreting data that were previously infeasible. Heng is collaborating with Pierre Auclair-Desrotour (formerly CSH, currently at Paris Observatory) to further generalize these mathematical solutions. “Pierre Auclair-Desrotour is a more talented applied mathematician than I am, and we promise exciting results in the near future,” said Heng.
    In the Nature Astronomy paper, Heng and his co-authors demonstrated a novel way of analyzing the phase curve of the exoplanet Kepler-7b from the Kepler space telescope. Brett Morris led the data analysis part of the paper. “Brett Morris leads the data analysis for the CHEOPS mission in my research group, and his modern data science approach was critical for successfully applying the mathematical solutions to real data,” explained Heng. They are currently collaborating with scientists from the American-led TESS space telescope to analyze TESS phase curve data. Heng envisions that these new solutions will lead to novel ways of analyzing phase curve data from the upcoming, 10-billion-dollar James Webb Space Telescope, which is due to launch later in 2021. “What excites me most of all is that these mathematical solutions will remain valid long after I am gone, and will probably make their way into standard textbooks,” said Heng.
    Story Source:
    Materials provided by the University of Bern.

  • 'Charging room' system powers lights, phones, laptops without wires

    In a move that could one day free the world’s countertops from their snarl of charging cords, researchers at the University of Michigan and University of Tokyo have developed a system to safely deliver electricity over the air, potentially turning entire buildings into wireless charging zones.
    Detailed in a new study published in Nature Electronics, the technology can deliver 50 watts of power using magnetic fields.
    Study author Alanson Sample, U-M professor of computer science and engineering, says that in addition to untethering phones and laptops, the technology could also power implanted medical devices and open new possibilities for mobile robotics in homes and manufacturing facilities. The team is also working on implementing the system in spaces that are smaller than room-size, for example a toolbox that charges tools placed inside it.
    “This really ups the power of the ubiquitous computing world — you could put a computer in anything without ever having to worry about charging or plugging in,” Sample said. “There are a lot of clinical applications as well; today’s heart implants, for example, require a wire that runs from the pump through the body to an external power supply. This could eliminate that, reducing the risk of infection and improving patients’ quality of life.”
    The team, led by researchers at the University of Tokyo, demonstrated the technology in a purpose-built aluminum test room measuring approximately 10 feet by 10 feet. They wirelessly powered lamps, fans and cell phones that could draw current from anywhere in the room regardless of the placement of people and furniture.
    The system is a major improvement over previous attempts at wireless charging systems, which used potentially harmful microwave radiation or required devices to be placed on dedicated charging pads, the researchers say. Instead, it uses a conductive surface on room walls and a conductive pole to generate magnetic fields.