More stories

  • Machine learning platform identifies activated neurons in real time

    Biomedical engineers at Duke University have developed an automatic process that uses streamlined artificial intelligence (AI) to identify active neurons in videos faster and more accurately than current techniques.
    The technology should allow researchers to watch an animal’s brain activity in real time as the animal behaves.
    The work appears May 20 in Nature Machine Intelligence.
    One of the ways researchers study the activity of neurons in living animals is through a process known as two-photon calcium imaging, which makes active neurons appear as flashes of light. Analyzing these videos, however, typically requires a human to circle every burst of intensity they see, a process called segmentation. While this may seem straightforward, the bursts often overlap in recordings where thousands of neurons are imaged simultaneously, and analyzing just a five-minute video this way could take weeks or even months.
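    To give a flavor of the segmentation step being automated, here is a minimal, hypothetical sketch that thresholds a single fluorescence frame and labels connected bright regions. It is illustrative only, not the Duke team’s deep-learning pipeline; the function and parameter names are invented for the example.

    ```python
    # Minimal sketch of intensity-based segmentation of one calcium-imaging
    # frame. Illustrative only -- NOT the Duke team's deep-learning method.
    # All names and thresholds here are invented for the example.
    import numpy as np
    from scipy import ndimage

    def segment_active_neurons(frame, n_sigma=3.0, min_area=20):
        """Label bright blobs in a single 2D fluorescence frame."""
        background = np.median(frame)            # estimate background level
        noise = frame.std()                      # crude noise estimate
        mask = frame > background + n_sigma * noise
        labels, n = ndimage.label(mask)          # connected components
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        for i, size in enumerate(sizes, start=1):
            if size < min_area:                  # drop specks of shot noise
                labels[labels == i] = 0
        return labels

    # Demo on a synthetic 512x512 frame with one bright "neuron".
    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, (512, 512))
    frame[200:215, 300:315] += 60.0
    n_blobs = np.unique(segment_active_neurons(frame)).size - 1
    print(n_blobs)                               # expected: 1 surviving blob
    ```

    A real pipeline must also track blobs across thousands of frames and separate overlapping cells, which is where the learned model earns its keep.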
    “People try to figure out how the brain works by recording the activity of neurons as an animal does a behavior to study the relationship between the two,” said Yiyang Gong, the primary author on the paper. “But manual segmentation creates a big bottleneck and doesn’t allow researchers to see the activation of the neurons in real-time.”
    Gong, an assistant professor of biomedical engineering, and Sina Farsiu, a professor of biomedical engineering, previously addressed this bottleneck in a 2019 paper describing a deep-learning platform that maps active neurons as accurately as humans in a fraction of the time. But because videos can be tens of gigabytes, researchers still have to wait hours or days for them to process.

  • AI spots neurons better than human experts

    A new combination of optical coherence tomography (OCT), adaptive optics and deep neural networks should enable better diagnosis and monitoring for neuron-damaging eye and brain diseases like glaucoma.
    Biomedical engineers at Duke University led a multi-institution consortium to develop the process, which easily and precisely tracks changes in the number and shape of retinal ganglion cells in the eye.
    This work appears in a paper published on May 3 in the journal Optica.
    The retina of the eye is an extension of the central nervous system. Ganglion cells are one of the primary neurons in the eye that process and send visual information to the brain. In many neurodegenerative diseases like glaucoma, ganglion cells degenerate and disappear, leading to irreversible blindness. Traditionally, researchers use OCT, an imaging technology similar to ultrasound that uses light instead of sound, to peer beneath layers of eye tissue to diagnose and track the progression of glaucoma and other eye diseases.
    Although OCT allows researchers to efficiently view the ganglion cell layer in the retina, the technique is only sensitive enough to show the thickness of the layer; it can’t reveal individual ganglion cells. This hinders early diagnosis and rapid tracking of disease progression, as large numbers of ganglion cells must disappear before physicians can see the change in thickness.
    A recent technology called adaptive optics OCT (AO-OCT) remedies this by enabling imaging sensitive enough to view individual ganglion cells. Adaptive optics minimizes the optical aberrations that occur when examining the eye, which are a major limiting factor in achieving high resolution in OCT imaging.

  • Quantum sensing: Odd angles make for strong spin-spin coupling

    Sometimes things are a little out of whack, and it turns out to be exactly what you need.
    That was the case when orthoferrite crystals turned up at a Rice University laboratory slightly misaligned. Those crystals inadvertently became the basis of a discovery that should resonate with researchers studying spintronics-based quantum technology.
    Rice physicist Junichiro Kono, alumnus Takuma Makihara and their collaborators found that an orthoferrite, in this case yttrium iron oxide, placed in a high magnetic field exhibits uniquely tunable, ultrastrong interactions between magnons in the crystal.
    Orthoferrites are iron oxide crystals with the addition of one or more rare-earth elements.
    Magnons are quasiparticles, ghostly constructs that represent the collective excitation of electron spin in a crystal lattice.
    What one has to do with the other is the basis of a study that appears in Nature Communications, in which Kono and his team describe an unusual coupling between two magnons dominated by antiresonance, through which both magnons gain or lose energy simultaneously.
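    For readers who want to see what antiresonant coupling looks like on paper, a generic two-mode Hamiltonian (a textbook sketch, not necessarily the exact model used in the paper) separates the resonant and antiresonant terms:

    ```latex
    % Two coupled magnon modes a and b. The g term swaps excitations between
    % the modes (resonant); the g' term creates or destroys an excitation in
    % BOTH modes at once (antiresonant). Textbook form, not the paper's model.
    \begin{equation}
      H/\hbar = \omega_a \hat{a}^\dagger \hat{a}
              + \omega_b \hat{b}^\dagger \hat{b}
              + g \bigl( \hat{a}^\dagger \hat{b} + \hat{a} \hat{b}^\dagger \bigr)
              + g' \bigl( \hat{a}^\dagger \hat{b}^\dagger + \hat{a} \hat{b} \bigr)
    \end{equation}
    ```

    The counter-rotating g′ terms are negligible at weak coupling but become important in the ultrastrong regime, which is the sense in which both magnons gain or lose energy simultaneously.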

  • Young teens should limit recreational internet and video game use to one hour daily

    Middle-school-aged children who use the internet, social media or video games recreationally for more than an hour each day during the school week have significantly lower grades and test scores, according to a study from the Center for Gambling Studies at Rutgers University-New Brunswick.
    The findings appear in the journal Computers in Human Behavior.
    Researchers say the findings give parents and children a moderate threshold for using entertainment-related technology — no more than one hour daily on school days and four hours a day on weekends.
    “Interactive technology is widely used to promote children’s educational access and achievement,” said lead author Vivien (Wen Li) Anthony, an assistant professor at the School of Social Work and research associate at the Rutgers Center for Gambling Studies. “During the COVID-19 pandemic, technology has been essential to facilitating remote learning. At the same time, there is a growing concern that excessive technology use, particularly for entertainment, may adversely affect children’s educational development by facilitating undesirable study habits and detracting from time spent on learning activities.”
    The researchers, who include Professor Lia Nower of the Rutgers Center for Gambling Studies and a researcher from Renmin University of China, analyzed data from the China Education Panel Survey, a national survey of the educational needs and outcomes of children in China. Approximately 10,000 first-year middle school students, with an average age of 13.5 years, were surveyed and followed.
    The results showed that children who used the internet, social media or video games for entertainment four or more hours daily were four times more likely to skip school than those who did not. Boys used interactive technology for entertainment significantly more than girls. Boys also performed worse and showed lower school engagement levels than girls.
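    As a concrete illustration of what “four times more likely” means, the sketch below computes a relative risk from a two-by-two table. The counts are invented for the example and are not the study’s data.

    ```python
    # Relative risk of skipping school, heavy users vs. others.
    # The counts are HYPOTHETICAL, chosen only to illustrate the arithmetic;
    # they are not taken from the Rutgers study.
    def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
        """Risk ratio: P(event | exposed) / P(event | unexposed)."""
        risk_exposed = events_exposed / n_exposed
        risk_unexposed = events_unexposed / n_unexposed
        return risk_exposed / risk_unexposed

    # e.g. 200 of 1,000 heavy users skipped school vs. 450 of 9,000 others.
    rr = relative_risk(200, 1_000, 450, 9_000)
    print(f"relative risk = {rr:.1f}")  # -> 4.0, i.e. four times more likely
    ```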
    “Such findings are critical, particularly in light of the recent movement toward online learning in countries throughout the world,” said Anthony. “In a learning environment that integrates the internet, it is easy for children to move across educational and entertainment platforms during learning without alerting teachers or adults to alternate activities.”
    Anthony said children in the study who used technology in moderation (i.e., less than one hour per day on weekends) experienced less boredom at school, potentially due to the positive effects of participation in social media, video games and video streaming such as peer bonding and relationship building. Using interactive technology for entertainment in moderation advanced children’s cognitive development.
    The findings suggest that parents should place time limits on their children’s interactive technology use, and that parents and teachers should help children develop effective time management and self-regulation skills to reduce their reliance on technology.
    Story Source:
    Materials provided by Rutgers University.

  • Pristine quantum criticality found

    U.S. and Austrian physicists searching for evidence of quantum criticality in topological materials have found one of the most pristine examples yet observed.
    In an open access paper published online in Science Advances, researchers from Rice University, Johns Hopkins University, the Vienna University of Technology (TU Wien) and the National Institute of Standards and Technology (NIST) present the first experimental evidence to suggest that quantum criticality — a disordered state in which electrons waver between competing states of order — may give rise to topological phases, “protected” quantum states that are of growing interest for quantum computation.
    “The thought that underlies this work is, ‘Why not quantum criticality?’” said study co-author Qimiao Si, a theoretical physicist from Rice who has spent two decades studying the interplay between quantum criticality and one of the most mysterious processes in modern physics, high-temperature superconductivity.
    “Maybe quantum criticality is not the only mechanism that can nucleate topological phases of matter, but we know quantum criticality provides a setting in which things are fluctuating and from which new states of matter can emerge,” said Si, director of the Rice Center for Quantum Materials (RCQM).
    In the study, Si and colleagues, including experimentalist Silke Bühler-Paschen, a longtime collaborator at TU Wien, and Collin Broholm of both NIST and Johns Hopkins, studied a semimetal made from one part cerium, four parts ruthenium and six parts tin. Topological phases have not been observed in CeRu4Sn6, but it is similar to a number of other materials in which they have been observed. And it is known to host the Kondo effect, a strong interaction between the magnetic moments of electrons attached to atoms in a metal and the spins of passing conduction electrons.
    In typical metals and semiconductors, interactions between electrons are weak enough that engineers and physicists need not take them into account when designing a computer chip or other electronic device. Not so in “strongly correlated” materials, like Kondo semimetals. In these, the overall behavior of the material — and of any device built from it — depends on electron-electron interactions. And these are the interactions that give rise to quantum criticality.
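    As a reference point, the Kondo interaction described above is conventionally written as an exchange coupling between local moments and the conduction-electron spin density (the textbook form, not necessarily the specific Hamiltonian analyzed in this paper):

    ```latex
    % Kondo exchange: local moments S_i couple with strength J > 0
    % (antiferromagnetic) to the spin density s(r_i) of conduction
    % electrons c. Textbook form; the paper's full model may differ.
    \begin{equation}
      H_{\mathrm{K}} = J \sum_i \mathbf{S}_i \cdot \mathbf{s}(\mathbf{r}_i),
      \qquad
      \mathbf{s}(\mathbf{r}_i) = \tfrac{1}{2}\,
        c_{i\alpha}^{\dagger}\, \boldsymbol{\sigma}_{\alpha\beta}\, c_{i\beta}
    \end{equation}
    ```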

  • Accurate evaluation of CRISPR genome editing

    CRISPR technology allows researchers to edit genomes by altering DNA sequences, thereby modifying gene function. Its many potential applications include correcting genetic defects, treating and preventing the spread of diseases, and improving crops.
    Genome editing tools such as CRISPR-Cas9 can be engineered to make extremely well-defined alterations at the intended target on a chromosome where a particular gene or functional element is located. One potential complication, however, is that CRISPR editing may lead to other, unintended genomic changes, known as off-target activity. When several different sites in the genome are targeted, off-target activity can lead to translocations (unusual rearrangements of chromosomes) as well as to other unintended genomic modifications.
    Controlling off-target editing activity is one of the central challenges in making CRISPR-Cas9 technology accurate and applicable in medical practice. Current measurement assays and data analysis methods for quantifying off-target activity do not provide statistical evaluation, are not sufficiently sensitive in separating signal from noise in experiments with low editing rates, and require cumbersome efforts to address the detection of translocations.
    A multidisciplinary team of researchers from the Interdisciplinary Center Herzliya and Bar-Ilan University report in the May 24th issue of the journal Nature Communications the development of a new software tool to detect, evaluate and quantify off-target editing activity, including adverse translocation events that can cause cancer. The software is based on input taken from a standard measurement assay, involving multiplexed PCR amplification and Next-Generation Sequencing (NGS).
    Known as CRISPECTOR, the tool analyzes next-generation sequencing data obtained from CRISPR-Cas9 experiments and applies statistical modeling to determine and quantify editing activity. CRISPECTOR accurately measures off-target activity at every interrogated locus, and achieves lower false-negative rates at sites with weak, yet significant, off-target activity. Importantly, one of its novel features is the ability to detect adverse translocation events occurring in an editing experiment.
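    As a toy illustration of the statistical problem CRISPECTOR addresses (separating a weak editing signal from sequencing noise), the sketch below tests whether the edited-read fraction at one locus in a treated sample exceeds the background rate seen in a mock control. It is a deliberately simplified stand-in for the real method, and the read counts are invented.

    ```python
    # Toy test for off-target editing activity at one locus.
    # Simplified stand-in -- NOT the CRISPECTOR algorithm -- with invented
    # read counts, shown only to illustrate signal-vs-noise calling.
    from scipy.stats import binomtest

    def off_target_pvalue(edited_tx, total_tx, edited_mock, total_mock):
        """P-value that the treated sample's edited-read fraction exceeds
        the background rate estimated from the mock (untreated) control."""
        background = max(edited_mock / total_mock, 1e-6)   # noise floor
        return binomtest(edited_tx, total_tx, background,
                         alternative="greater").pvalue

    # 18 edited reads out of 50,000 vs. 4 out of 48,000 in the mock control.
    p = off_target_pvalue(18, 50_000, 4, 48_000)
    print(f"p = {p:.2g}")   # a small p suggests activity above the noise
    ```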
    “In genome editing, especially for clinical applications, it is critical to identify low-level off-target activity and adverse translocation events. Even a small number of cells with carcinogenic potential, when transplanted into a patient in the context of gene therapy, can have detrimental consequences in terms of cancer pathogenesis. As part of treatment protocols, it is therefore important to detect these potential events in advance,” says Dr. Ayal Hendel, of Bar-Ilan University’s Mina and Everard Goodman Faculty of Life Sciences. Dr. Hendel led the study together with Prof. Zohar Yakhini, of the Arazi School of Computer Science at Interdisciplinary Center (IDC) Herzliya. “CRISPECTOR provides an effective method to characterize and quantify potential CRISPR-induced errors, thereby significantly improving the safety of future clinical use of genome editing.”
    Hendel’s team utilized CRISPR-Cas9 technology to edit genes in stem cells relevant to disorders of the blood and the immune system. In the process of analyzing the data, they became aware of the shortcomings of existing tools for quantifying off-target activity and of gaps that needed to be bridged to improve applicability. This experience led to the collaboration with Prof. Yakhini’s leading computational biology and bioinformatics group.
    Prof. Zohar Yakhini, of IDC Herzliya and the Technion, adds: “In experiments utilizing deep sequencing techniques that have significant levels of background noise, low levels of true off-target activity can get lost under the noise. The need for a measurement approach and related data analysis capable of seeing beyond the noise, and of detecting adverse translocation events occurring in an editing experiment, is evident to genome editing scientists and practitioners. CRISPECTOR is a tool that can sift through the background noise to identify and quantify the true off-target signal. Moreover, using statistical modelling and careful analysis of the data, CRISPECTOR can also identify a wider spectrum of genomic aberrations. By characterizing and quantifying potential CRISPR-induced errors, our methods will support the safer clinical use of genome editing therapeutic approaches.”
    The Hendel Lab and the Yakhini Research Group plan to apply the tool towards the study of potential therapies for genetic disorders of the immune system and of immunotherapy approaches in cancer.
    The study is a collaboration between the Hendel Lab at Bar-Ilan University (BIU) and the Yakhini Research Group (IDC Herzliya and the Technion). The project was led by Ido Amit (IDC) and Ortal Iancu (BIU). Also participating in this research were Daniel Allen, Dor Breier and Nimrod Ben Haim (BIU); Alona Levy-Jurgenson (Technion); Leon Anavy (Technion and IDC); Gavin Kurgan, Matthew S. McNeil, Garrett R. Rettig and Yu Wang (Integrated DNA Technologies, Inc. (IDT, US)). Additional contributors included Chihong Choi (IDC) and Mark Behlke (IDT, US).
    This study was supported by a grant from the European Research Council (ERC) under the Horizon 2020 research and innovation program, and the Adams Fellowships Program of the Israel Academy of Sciences and Humanities.
    Story Source:
    Materials provided by Bar-Ilan University.

  • Will COVID-19 eventually become just a seasonal nuisance?

    Within the next decade, the novel coronavirus responsible for COVID-19 could become little more than a nuisance, causing no more than common cold-like coughs and sniffles. That possible future is predicted by mathematical models that incorporate lessons learned from the current pandemic on how our body’s immunity changes over time. Scientists at the University of Utah carried out the research, now published in the journal Viruses.
    “This shows a possible future that has not yet been fully addressed,” says Fred Adler, PhD, professor of mathematics and biological sciences at the U. “Over the next decade, the severity of COVID-19 may decrease as populations collectively develop immunity.”
    The findings suggest that changes in the disease could be driven by adaptations of our immune response rather than by changes in the virus itself. Adler was senior author on the publication with Alexander Beams, first author and graduate student in the Department of Mathematics and the Division of Epidemiology at University of Utah Health, and undergraduate co-author Rebecca Bateman.
    Although SARS-CoV-2 (the sometimes-deadly coronavirus causing COVID-19) is the best-known member of that virus family, other seasonal coronaviruses circulate in the human population — and they are much more benign. Some evidence indicates that one of these cold-causing relatives might have once been severe, giving rise to the “Russian flu” pandemic in the late 19th century. The parallels led the U of U scientists to wonder whether the severity of SARS-CoV-2 could similarly lessen over time.
    To test the idea, they built mathematical models incorporating evidence on the body’s immune response to SARS-CoV-2, based on the following observations from the current pandemic:
    • There is likely a dose response between virus exposure and disease severity: a person exposed to a small dose of virus is more likely to get a mild case of COVID-19 and to shed small amounts of virus, while adults exposed to a large dose are more likely to have severe disease and shed more virus.
    • Masking and social distancing decrease the viral dose.
    • Children are unlikely to develop severe disease.
    • Adults who have had COVID-19 or have been vaccinated are protected against severe disease.
    Running several versions of these scenarios showed that the three mechanisms in combination set up a situation in which an increasing proportion of the population becomes predisposed to mild disease over the long term. The scientists felt the transformation was significant enough to need a new term. In this scenario, SARS-CoV-2 would become “Just Another Seasonal Coronavirus,” or JASC for short.
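    To make the dynamics concrete, here is a minimal toy simulation in the spirit of this scenario. It is not the authors’ model: the compartment structure is simplified and every parameter value is an arbitrary illustration. First infections of naive hosts can be severe; reinfections of partially immune hosts are mild.

    ```python
    # Toy two-severity transmission model: first infections of naive hosts (S)
    # are severe; reinfections of partially immune hosts (P) are mild.
    # All parameters are arbitrary illustrations, NOT fitted values from the
    # University of Utah study.
    import numpy as np

    beta_s, beta_m = 0.20, 0.30   # transmission rates from severe / mild cases
    gamma = 0.10                  # recovery rate

    S, I_s, I_m, P = 0.99, 0.01, 0.0, 0.0   # population fractions
    dt = 0.5                                 # half-day time step
    severe_share = []

    for _ in np.arange(0, 3650, dt):         # simulate ~10 years
        foi = beta_s * I_s + beta_m * I_m    # force of infection
        new_severe = foi * S                 # naive hosts -> severe disease
        new_mild = foi * P                   # partially immune -> mild disease
        S += dt * (-new_severe)
        I_s += dt * (new_severe - gamma * I_s)
        I_m += dt * (new_mild - gamma * I_m)
        P += dt * (gamma * (I_s + I_m) - new_mild)  # recovered stay partially immune
        severe_share.append(I_s / max(I_s + I_m, 1e-12))

    print(f"severe share of infections: year 0 ~ {severe_share[0]:.2f}, "
          f"year 10 ~ {severe_share[-1]:.2f}")
    ```

    In this toy run the severe share of circulating infections starts near one and falls toward zero as the naive pool is depleted, mirroring the paper’s qualitative prediction.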
    “In the beginning of the pandemic, no one had seen the virus before,” Adler explains. “Our immune system was not prepared.” The models show that as more adults become partially immune, whether through prior infection or vaccination, severe infections all but disappear over the next decade. Eventually, the only people who will be exposed to the virus for the first time will be children — and they’re naturally less prone to severe disease.
    “The novel approach here is to recognize the competition taking place between mild and severe COVID-19 infections and ask which type will get to persist in the long run,” Beams says. “We’ve shown that mild infections will win, as long as they train our immune systems to fight against severe infections.”
    The models do not account for every potential influence on disease trajectory. For example, if new virus variants overcome partial immunity, COVID-19 could take a turn for the worse. In addition, the predictions rely on the key assumptions of the model holding up.
    “Our next step is comparing our model predictions with the most current disease data to assess which way the pandemic is going as it is happening,” Adler says. “Do things look like they’re heading in a bad or good direction? Is the proportion of mild cases increasing? Knowing that might affect decisions we make as a society.”
    The research, published as “Will SARS-CoV-2 Become Just Another Seasonal Coronavirus?,” was supported by COVID MIND 2020 and the University of Utah.
    Story Source:
    Materials provided by University of Utah Health.

  • Robotic 'Third Thumb' use can alter brain representation of the hand

    Using a robotic ‘Third Thumb’ can impact how the hand is represented in the brain, finds a new study led by UCL researchers.
    The team trained people to use a robotic extra thumb and found they could effectively carry out dextrous tasks, like building a tower of blocks, with one hand (now with two thumbs). The researchers report in the journal Science Robotics that participants trained to use the thumb also increasingly felt like it was a part of their body.
    Designer Dani Clode began developing the device, called the Third Thumb, as part of an award-winning graduate project at the Royal College of Art, seeking to reframe the way we view prosthetics: from replacing a lost function to extending the human body. She was later invited to join Professor Tamar Makin’s team of neuroscientists at UCL, who were investigating how the brain can adapt to body augmentation.
    Professor Makin (UCL Institute of Cognitive Neuroscience), lead author of the study, said: “Body augmentation is a growing field aimed at extending our physical abilities, yet we lack a clear understanding of how our brains can adapt to it. By studying people using Dani’s cleverly-designed Third Thumb, we sought to answer key questions around whether the human brain can support an extra body part, and how the technology might impact our brain.”
    The Third Thumb is 3D-printed, making it easy to customise, and is worn on the side of the hand opposite the user’s actual thumb, near the little (pinky) finger. The wearer controls it with pressure sensors attached to their feet, on the underside of the big toes. Wirelessly connected to the Thumb, both toe sensors control different movements of the Thumb by immediately responding to subtle changes of pressure from the wearer.
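    As a sketch of the kind of sensor-to-actuator mapping described above, the example below turns two normalized toe-pressure readings into two movement commands. The function names, ranges and scaling are hypothetical, not Dani Clode’s actual control firmware.

    ```python
    # Hypothetical toe-pressure -> thumb-movement mapping, for illustration
    # only. Names, ranges and scaling are invented; the real device's
    # firmware is not described at this level in the article.
    def pressure_to_angle(pressure, p_min=0.05, p_max=1.0,
                          angle_min=0.0, angle_max=90.0):
        """Map a normalized toe-pressure reading in [0, 1] to a servo angle."""
        p = min(max(pressure, p_min), p_max)          # clamp to working range
        frac = (p - p_min) / (p_max - p_min)
        return angle_min + frac * (angle_max - angle_min)

    def thumb_command(left_toe, right_toe):
        """Two independent sensors drive two independent movement axes."""
        return {
            "flexion": pressure_to_angle(left_toe),    # curl toward the palm
            "adduction": pressure_to_angle(right_toe), # sweep across the hand
        }

    print(thumb_command(left_toe=0.6, right_toe=0.2))
    ```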
    For the study, 20 participants were trained to use the Thumb over five days, during which they were also encouraged to take the Thumb home each day after training to use it in daily life scenarios, totalling two to six hours of wear time per day. These participants were compared with an additional group of 10 control participants who wore a static version of the Thumb while completing the same training.