More stories

  • Accurate evaluation of CRISPR genome editing

    CRISPR technology allows researchers to edit genomes by altering DNA sequences and thus modifying gene function. Its many potential applications include correcting genetic defects, treating and preventing the spread of diseases, and improving crops.
    Genome editing tools such as CRISPR-Cas9 can be engineered to make extremely well-defined alterations at the intended target on a chromosome, where a particular gene or functional element is located. However, one potential complication is that CRISPR editing may lead to other, unintended genomic changes, known as off-target activity. When several different sites in the genome are targeted, off-target activity can lead to translocations (abnormal rearrangements of chromosomes) as well as to other unintended genomic modifications.
    Controlling off-target editing activity is one of the central challenges in making CRISPR-Cas9 technology accurate and applicable in medical practice. Current measurement assays and data analysis methods for quantifying off-target activity do not provide statistical evaluation, are not sufficiently sensitive in separating signal from noise in experiments with low editing rates, and require cumbersome efforts to address the detection of translocations.
    A multidisciplinary team of researchers from the Interdisciplinary Center Herzliya and Bar-Ilan University report in the May 24th issue of the journal Nature Communications the development of a new software tool to detect, evaluate and quantify off-target editing activity, including adverse translocation events that can cause cancer. The software is based on input taken from a standard measurement assay, involving multiplexed PCR amplification and Next-Generation Sequencing (NGS).
    Known as CRISPECTOR, the tool analyzes next-generation sequencing data obtained from CRISPR-Cas9 experiments and applies statistical modeling to determine and quantify editing activity. CRISPECTOR accurately measures off-target activity at every interrogated locus and reduces false-negative rates at sites with weak, yet significant, off-target activity. Importantly, one of the novel features of CRISPECTOR is its ability to detect adverse translocation events occurring in an editing experiment.
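    To make the idea concrete, here is a minimal sketch of calling off-target activity against background noise by comparing a treated sample with a mock control. The function, the read counts and the use of Fisher's exact test are illustrative assumptions, not the published CRISPECTOR algorithm:

    ```python
    # Illustrative sketch only: calling off-target editing at one locus by
    # comparing a treatment sample against a mock (unedited) control.
    # Not CRISPECTOR's actual model; all read counts are hypothetical.
    from scipy.stats import fisher_exact

    def call_off_target(edited_tx, total_tx, edited_mock, total_mock, alpha=0.05):
        """Is the editing rate in treatment significantly above background?"""
        table = [[edited_tx, total_tx - edited_tx],
                 [edited_mock, total_mock - edited_mock]]
        _, p_value = fisher_exact(table, alternative="greater")
        rate = edited_tx / total_tx - edited_mock / total_mock
        return {"editing_rate": max(rate, 0.0),  # background-corrected estimate
                "p_value": p_value,
                "active": p_value < alpha}

    # A weak but genuine off-target site: 30 edited reads out of 20,000 in the
    # treatment vs. 5 apparent "edits" (background noise) in the mock control.
    print(call_off_target(30, 20_000, 5, 20_000))
    ```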
    “In genome editing, especially for clinical applications, it is critical to identify low-level off-target activity and adverse translocation events. Even a small number of cells with carcinogenic potential, when transplanted into a patient in the context of gene therapy, can have detrimental consequences in terms of cancer pathogenesis. As part of treatment protocols, it is therefore important to detect these potential events in advance,” says Dr. Ayal Hendel, of Bar-Ilan University’s Mina and Everard Goodman Faculty of Life Sciences. Dr. Hendel led the study together with Prof. Zohar Yakhini, of the Arazi School of Computer Science at Interdisciplinary Center (IDC) Herzliya. “CRISPECTOR provides an effective method to characterize and quantify potential CRISPR-induced errors, thereby significantly improving the safety of future clinical use of genome editing.” Hendel’s team utilized CRISPR-Cas9 technology to edit genes in stem cells relevant to disorders of the blood and the immune system. In the process of analyzing the data, they became aware of the shortcomings of the existing tools for quantifying off-target activity and of gaps that should be bridged to improve applicability. This experience led to the collaboration with Prof. Yakhini’s leading computational biology and bioinformatics group.
    Prof. Zohar Yakhini, of IDC Herzliya and the Technion, adds that “in experiments utilizing deep sequencing techniques that have significant levels of background noise, low levels of true off-target activity can get lost under the noise. The need for a measurement approach and related data analysis that are capable of seeing beyond the noise, as well as of detecting adverse translocation events occurring in an editing experiment, is evident to genome editing scientists and practitioners. CRISPECTOR is a tool that can sift through the background noise to identify and quantify true off-target signal. Moreover, using statistical modelling and careful analysis of the data, CRISPECTOR can also identify a wider spectrum of genomic aberrations. By characterizing and quantifying potential CRISPR-induced errors, our methods will support the safer clinical use of genome editing therapeutic approaches.”
    The Hendel Lab and the Yakhini Research Group plan to apply the tool towards the study of potential therapies for genetic disorders of the immune system and of immunotherapy approaches in cancer.
    The study is a collaboration between the Hendel Lab at Bar-Ilan University (BIU) and the Yakhini Research Group (IDC Herzliya and the Technion). The project was led by Ido Amit (IDC) and Ortal Iancu (BIU). Also participating in this research were Daniel Allen, Dor Breier and Nimrod Ben Haim (BIU); Alona Levy-Jurgenson (Technion); Leon Anavy (Technion and IDC); Gavin Kurgan, Matthew S. McNeil, Garrett R. Rettig and Yu Wang (Integrated DNA Technologies, Inc. (IDT, US)). Additional contributors included Chihong Choi (IDC) and Mark Behlke (IDT, US).
    This study was supported by a grant from the European Research Council (ERC) under the Horizon 2020 research and innovation program, and the Adams Fellowships Program of the Israel Academy of Sciences and Humanities.
    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

  • Will COVID-19 eventually become just a seasonal nuisance?

    Within the next decade, the novel coronavirus responsible for COVID-19 could become little more than a nuisance, causing no more than common cold-like coughs and sniffles. That possible future is predicted by mathematical models that incorporate lessons learned from the current pandemic on how our body’s immunity changes over time. Scientists at the University of Utah carried out the research, now published in the journal Viruses.
    “This shows a possible future that has not yet been fully addressed,” says Fred Adler, PhD, professor of mathematics and biological sciences at the U. “Over the next decade, the severity of COVID-19 may decrease as populations collectively develop immunity.”
    The findings suggest that changes in the disease could be driven by adaptations of our immune response rather than by changes in the virus itself. Adler was senior author on the publication with Alexander Beams, first author and graduate student in the Department of Mathematics and the Division of Epidemiology at University of Utah Health, and undergraduate co-author Rebecca Bateman.
    Although SARS-CoV-2 (the sometimes-deadly coronavirus causing COVID-19) is the best-known member of that virus family, other seasonal coronaviruses circulate in the human population — and they are much more benign. Some evidence indicates that one of these cold-causing relatives might have once been severe, giving rise to the “Russian flu” pandemic in the late 19th century. The parallels led the U of U scientists to wonder whether the severity of SARS-CoV-2 could similarly lessen over time.
    To test the idea, they built mathematical models incorporating evidence on the body’s immune response to SARS-CoV-2, based on the following observations from the current pandemic: there is likely a dose response between virus exposure and disease severity, so a person exposed to a small dose of virus is more likely to get a mild case of COVID-19 and shed small amounts of virus, while adults exposed to a large dose are more likely to have severe disease and shed more virus; masking and social distancing decrease the viral dose; children are unlikely to develop severe disease; and adults who have had COVID-19 or have been vaccinated are protected against severe disease. Running several versions of these scenarios showed that the mechanisms in combination set up a situation in which an increasing proportion of the population becomes predisposed to mild disease over the long term. The scientists felt the transformation was significant enough to warrant a new term: in this scenario, SARS-CoV-2 would become “Just Another Seasonal Coronavirus,” or JASC for short.
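    As a rough illustration of the modeling logic (not the authors’ actual equations), a toy simulation in which accumulating immunity shifts new infections toward mildness might look like this; every parameter value is invented:

    ```python
    # Toy sketch of the qualitative argument above: as the immune fraction of
    # the population grows, a larger share of new infections is mild. This is
    # an invented caricature, not the authors' model; all parameters are
    # illustrative.
    def simulate(years=10, attack_rate=0.4, p_severe_naive=0.20,
                 p_severe_immune=0.01):
        immune = 0.0  # fraction with immunity from prior infection or vaccination
        for year in range(1, years + 1):
            p_severe = (1 - immune) * p_severe_naive + immune * p_severe_immune
            print(f"year {year:2d}: immune={immune:.2f}  "
                  f"severe share of new cases={p_severe:.3f}")
            immune = min(1.0, immune + attack_rate * (1 - immune))  # pool grows

    simulate()
    ```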
    “In the beginning of the pandemic, no one had seen the virus before,” Adler explains. “Our immune system was not prepared.” The models show that as more adults become partially immune, whether through prior infection or vaccination, severe infections all but disappear over the next decade. Eventually, the only people who will be exposed to the virus for the first time will be children — and they’re naturally less prone to severe disease.
    “The novel approach here is to recognize the competition taking place between mild and severe COVID-19 infections and ask which type will get to persist in the long run,” Beams says. “We’ve shown that mild infections will win, as long as they train our immune systems to fight against severe infections.”
    The models do not account for every potential influence on disease trajectory. For example, if new virus variants overcome partial immunity, COVID-19 could take a turn for the worse. In addition, the predictions rely on the key assumptions of the model holding up.
    “Our next step is comparing our model predictions with the most current disease data to assess which way the pandemic is going as it is happening,” Adler says. “Do things look like they’re heading in a bad or good direction? Is the proportion of mild cases increasing? Knowing that might affect decisions we make as a society.”
    The research, published as “Will SARS-CoV-2 Become Just Another Seasonal Coronavirus?,” was supported by COVID MIND 2020 and the University of Utah.
    Story Source:
    Materials provided by University of Utah Health. Note: Content may be edited for style and length.

  • Robotic 'Third Thumb' use can alter brain representation of the hand

    Using a robotic ‘Third Thumb’ can impact how the hand is represented in the brain, finds a new study led by UCL researchers.
    The team trained people to use a robotic extra thumb and found they could effectively carry out dextrous tasks, like building a tower of blocks, with one hand (now with two thumbs). The researchers report in the journal Science Robotics that participants trained to use the thumb also increasingly felt like it was a part of their body.
    Designer Dani Clode began developing the device, called the Third Thumb, as part of an award-winning graduate project at the Royal College of Art, seeking to reframe the way we view prosthetics, from replacing a lost function to extending the human body. She was later invited to join Professor Tamar Makin’s team of neuroscientists at UCL, who were investigating how the brain can adapt to body augmentation.
    Professor Makin (UCL Institute of Cognitive Neuroscience), lead author of the study, said: “Body augmentation is a growing field aimed at extending our physical abilities, yet we lack a clear understanding of how our brains can adapt to it. By studying people using Dani’s cleverly-designed Third Thumb, we sought to answer key questions around whether the human brain can support an extra body part, and how the technology might impact our brain.”
    The Third Thumb is 3D-printed, making it easy to customise, and is worn on the side of the hand opposite the user’s actual thumb, near the little (pinky) finger. The wearer controls it with pressure sensors attached to their feet, on the undersides of the big toes. The sensors connect wirelessly to the Thumb, and each controls a different movement by responding immediately to subtle changes in pressure from the wearer.
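    As a purely hypothetical illustration of this control scheme, the sketch below maps two normalized toe-pressure readings onto two movements of the Thumb; the sensor interface, deadband and command names are all invented:

    ```python
    # Hypothetical illustration of the two-sensor control mapping described
    # above. The normalized readings, deadband and command format are invented;
    # the actual Third Thumb interface is not documented in the article.
    def toe_pressure_to_thumb(p_left: float, p_right: float,
                              deadband: float = 0.05) -> dict:
        """Map normalized toe pressures (0..1) to two Thumb movements (0..1)."""
        flexion = 0.0 if p_left < deadband else p_left      # e.g. curling the Thumb
        adduction = 0.0 if p_right < deadband else p_right  # e.g. sweeping sideways
        return {"flexion": flexion, "adduction": adduction}

    print(toe_pressure_to_thumb(0.6, 0.1))  # a firm left-toe press mostly curls it
    ```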
    For the study, 20 participants were trained to use the Thumb over five days, during which they were also encouraged to take the Thumb home each day after training to use it in daily life scenarios, totalling two to six hours of wear time per day. Those participants were compared to an additional group of 10 control participants who wore a static version of the Thumb while completing the same training.

  • AI-enabled EKGs find difference between numerical age and biological age significantly affects health

    You might be older — or younger — than you think. A new study found that differences between a person’s age in years and his or her biological age, as predicted by an artificial intelligence (AI)-enabled EKG, can provide measurable insights into health and longevity.
    The AI model accurately predicted the age of most subjects, with a mean age gap of 0.88 years between EKG age and actual age. However, a number of subjects had a gap that was much larger, appearing either much older or much younger by EKG age.
    The likelihood of dying during follow-up was much higher among those seemingly older by EKG age than among those whose EKG age matched their chronologic, or actual, age. The association was even stronger when predicting death caused by heart disease. Conversely, those who had a lesser age gap (considered younger by EKG) had decreased risk.
    “Our results validate and expand on our prior observations that EKG age using AI may detect accelerated aging by proving that those with older-than-expected age by EKG die sooner, particularly from heart disease. We know that mortality rate is one of the best ways to measure biological age, and our model proved that,” says Francisco Lopez-Jimenez, M.D., chair of the Division of Preventive Cardiology at Mayo Clinic. Dr. Lopez-Jimenez is senior author of the study.
    When researchers adjusted these data to consider multiple standard risk factors, the association between the age gap and cardiovascular mortality was even more pronounced. Subjects who were found to be oldest by EKG compared to their actual age had the greatest risk, even after accounting for medical conditions that would predict their survival, while those found the youngest compared to their actual age had lower cardiovascular risks.
    Mayo Clinic researchers evaluated the 12-lead EKG data of more than 25,000 subjects with an AI algorithm previously trained and validated to provide a biologic age prediction. Subjects with a positive age gap — an EKG age higher than their chronological or actual age — showed a clear connection to all-cause and cardiovascular mortality over time. The findings are published in European Heart Journal — Digital Health.
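    The article does not detail the survival analysis, but the kind of relationship it describes (a positive age gap predicting mortality) is commonly quantified with a Cox proportional-hazards model. The sketch below does this on synthetic data with the open-source lifelines library; the column names, effect sizes and data are all invented:

    ```python
    # Hedged sketch on synthetic data: relating the EKG "age gap" to mortality
    # with a Cox proportional-hazards model (lifelines). This mirrors the kind
    # of association described, not Mayo Clinic's pipeline; every number below
    # is invented.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 5_000
    age_gap = rng.normal(0.88, 5.0, n)      # EKG age minus actual age, in years
    actual_age = rng.normal(54.0, 10.0, n)
    hazard = 0.02 * np.exp(0.06 * age_gap)  # invented: risk grows with the gap
    time_to_death = rng.exponential(1.0 / hazard)
    df = pd.DataFrame({
        "age_gap": age_gap,
        "actual_age": actual_age,
        "T": np.minimum(time_to_death, 12.5),      # ~12.5 years of follow-up
        "E": (time_to_death <= 12.5).astype(int),  # 1 = died during follow-up
    })

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    cph.print_summary()  # hazard ratio per extra year of "EKG age"
    ```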
    Study subjects were selected through the Rochester Epidemiology Project, an index of health-related information from medical providers in Olmsted County, Minnesota. The subjects had a mean age around 54 and were followed for approximately 12.5 years. The study excluded those with a baseline history of heart attacks, bypass surgery or stents, stroke or atrial fibrillation.
    “Our findings open up a number of opportunities to help identify those who may benefit from preventive strategies the most. Now that the concept has been proven that EKG age relates to survival, it is time to think how we can incorporate this in clinical practice. More research will be needed to find the best ways to do it,” says Dr. Lopez-Jimenez.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Terri Malloy. Note: Content may be edited for style and length.

  • Brain stimulation evoking sense of touch improves control of robotic arm

    Most able-bodied people take their ability to perform simple daily tasks for granted — when they reach for a warm mug of coffee, they can feel its weight and temperature and adjust their grip accordingly so that no liquid is spilled. People with full sensory and motor control of their arms and hands can feel that they’ve made contact with an object the instant they touch or grasp it, allowing them to start moving or lifting it with confidence.
    But those tasks become much more difficult when a person operates a prosthetic arm, let alone a mind-controlled one.
    In a paper published today in Science, a team of bioengineers from the University of Pittsburgh Rehab Neural Engineering Labs describe how adding brain stimulation that evokes tactile sensations makes it easier for the operator to manipulate a brain-controlled robotic arm. In the experiment, supplementing vision with artificial tactile perception cut the time spent grasping and transferring objects in half, from a median time of 20.9 to 10.2 seconds.
    “In a sense, this is what we hoped would happen — but perhaps not to the degree that we observed,” said co-senior author Jennifer Collinger, Ph.D., associate professor in the Pitt Department of Physical Medicine and Rehabilitation. “Sensory feedback from limbs and hands is hugely important for doing normal things in our daily lives, and when that feedback is lacking, people’s performance is impaired.”
    Study participant Nathan Copeland, whose progress was described in the paper, is the first person in the world to be implanted with tiny electrode arrays not just in his brain’s motor cortex but in his somatosensory cortex as well — a region of the brain that processes sensory information from the body. The arrays allow him not only to control the robotic arm with his mind, but also to receive tactile sensory feedback, similar to how neural circuits operate when a person’s spinal cord is intact.
    “I was already extremely familiar with both the sensations generated by stimulation and performing the task without stimulation. Even though the sensation isn’t ‘natural’ — it feels like pressure and gentle tingle — that never bothered me,” said Copeland. “There wasn’t really any point where I felt like stimulation was something I had to get used to. Doing the task while receiving the stimulation just went together like PB&J.”
    After a car crash that left him with limited use of his arms, Copeland enrolled in a clinical trial testing the sensorimotor microelectrode brain-computer interface (BCI) and was implanted with four microelectrode arrays developed by Blackrock Microsystems (also commonly referred to as Utah arrays).
    This paper is a step forward from an earlier study that described for the first time how stimulating sensory regions of the brain with tiny electrical pulses can evoke sensation in distinct regions of a person’s hand, even in someone who has lost feeling in their limbs due to spinal cord injury. In this new study, the researchers combined reading information from the brain to control the movement of the robotic arm with writing information back in to provide sensory feedback.
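    Schematically, the “read” and “write” paths form a single closed loop. The sketch below is a stand-in built from invented placeholder classes; the study’s actual decoding and stimulation software is not described in the article:

    ```python
    # Schematic, invented sketch of the bidirectional loop described above:
    # decode motor intent ("read"), move the arm, then encode contact force as
    # stimulation ("write"). Every class is a placeholder, not the actual
    # Pitt software stack.
    class Decoder:
        def predict(self, features):
            return [0.1 * f for f in features]  # stand-in: features -> velocities

    class RoboticArm:
        def move(self, velocity):
            print("arm velocity:", velocity)
        def fingertip_force(self):
            return 4.0  # newtons; an invented contact reading

    class Stimulator:
        def pulse(self, amplitude):
            print("stim amplitude:", amplitude)

    def loop_step(features, decoder, arm, stim):
        arm.move(decoder.predict(features))  # read path: motor cortex -> arm
        force = arm.fingertip_force()        # contact sensed at the gripper
        stim.pulse(min(1.0, force / 10.0))   # write path: back to somatosensory cortex

    loop_step([1.0, -0.5, 0.2], Decoder(), RoboticArm(), Stimulator())
    ```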
    In a series of tests, where the BCI operator was asked to pick up and transfer various objects from a table to a raised platform, providing tactile feedback through electrical stimulation allowed the participant to complete tasks twice as fast compared to tests without stimulation.
    In the new paper, the researchers wanted to test the effect of sensory feedback in conditions that would resemble the real world as closely as possible.
    “We didn’t want to constrain the task by removing the visual component of perception,” said co-senior author Robert Gaunt, Ph.D., associate professor in the Pitt Department of Physical Medicine and Rehabilitation. “When even limited and imperfect sensation is restored, the person’s performance improved in a pretty significant way. We still have a long way to go in terms of making the sensations more realistic and bringing this technology to people’s homes, but the closer we can get to recreating the normal inputs to the brain, the better off we will be.”
    This work was supported by the Defense Advanced Research Projects Agency (DARPA) and Space and Naval Warfare Systems Center Pacific (SSC Pacific) under Contract No. N66001-16-C-4051 and the Revolutionizing Prosthetics program (Contract No. N66001-10-C-4056).
    Story Source:
    Materials provided by University of Pittsburgh. Note: Content may be edited for style and length.

  • A new form of carbon opens door to nanosized wires

    Carbon exists in various forms. In addition to diamond and graphite, there are recently discovered forms with astonishing properties. For example, graphene, with a thickness of just one atomic layer, is the thinnest known material, and its unusual properties make it an extremely exciting candidate for applications like future electronics and high-tech engineering. In graphene, each carbon atom is linked to three neighbours, forming hexagons arranged in a honeycomb network. Theoretical studies have shown that carbon atoms can also arrange in other flat network patterns while still binding to three neighbours, but none of these predicted networks had been realized until now.
    Researchers at the University of Marburg in Germany and Aalto University in Finland have now discovered a new carbon network, which is atomically thin like graphene, but is made up of squares, hexagons, and octagons forming an ordered lattice. They confirmed the unique structure of the network using high-resolution scanning probe microscopy and interestingly found that its electronic properties are very different from those of graphene.
    In contrast to graphene and other forms of carbon, the new biphenylene network — as the new material is named — has metallic properties. Narrow stripes of the network, only 21 atoms wide, already behave like a metal, while graphene is a semiconductor at this size. “These stripes could be used as conducting wires in future carbon-based electronic devices,” said Professor Michael Gottfried of the University of Marburg, who leads the team that developed the idea. The lead author of the study, Qitang Fan from Marburg, continues, “This novel carbon network may also serve as a superior anode material in lithium-ion batteries, with a larger lithium storage capacity compared to that of the current graphene-based materials.”
    The team at Aalto University helped image the material and decipher its properties. The group of Professor Peter Liljeroth carried out the high-resolution microscopy that showed the structure of the material, while researchers led by Professor Adam Foster used computer simulations and analysis to understand the exciting electrical properties of the material.
    The new material is made by assembling carbon-containing molecules on an extremely smooth gold surface. These molecules first form chains consisting of linked hexagons, and a subsequent reaction connects these chains together to form the squares and octagons. An important feature of the chains is that they are chiral, meaning they exist in two mirror-image forms, like left and right hands. Only chains of the same type aggregate on the gold surface, forming well-ordered assemblies before they connect. This is critical for the formation of the new carbon material, because the reaction between two different types of chains leads only to graphene. “The new idea is to use molecular precursors that are tweaked to yield biphenylene instead of graphene,” explains Linghao Yan, who carried out the high-resolution microscopy experiments at Aalto University.
    For now, the teams are working to produce larger sheets of the material so that its application potential can be further explored. However, “we are confident that this new synthesis method will lead to the discovery of other novel carbon networks,” said Professor Liljeroth.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Silicon chips combine light and ultrasound for better signal processing

    The continued growth of wireless and cellular data traffic relies heavily on light waves. Microwave photonics is the field of technology that is dedicated to the distribution and processing of electrical information signals using optical means. Compared with traditional solutions based on electronics alone, microwave photonic systems can handle massive amounts of data. Therefore, microwave photonics has become increasingly important as part of 5G cellular networks and beyond. A primary task of microwave photonics is the realization of narrowband filters: the selection of specific data, at specific frequencies, out of immense volumes that are carried over light.
    Many microwave photonic systems are built of discrete, separate components and long optical fiber paths. However, the cost, size, power consumption and production volume requirements of advanced networks call for a new generation of microwave photonic systems that are realized on a chip. Integrated microwave photonic filters, particularly in silicon, are highly sought after. There is, however, a fundamental challenge: Narrowband filters require that signals are delayed for comparatively long durations as part of their processing.
    “Since the speed of light is so fast,” says Prof. Avi Zadok from Bar-Ilan University, Israel, “we run out of chip space before the necessary delays are accommodated. The required delays may reach over 100 nanoseconds. Such delays may appear to be short considering daily experience, however the optical paths that support them are over ten meters long! We cannot possibly fit such long paths as part of a silicon chip. Even if we could somehow fold over that many meters in a certain layout, the extent of optical power losses to go along with it would be prohibitive.”
    These long delays require a different type of wave, one that travels much more slowly. In a study recently published in the journal Optica, Zadok and his team from the Faculty of Engineering and Institute of Nanotechnology and Advanced Materials at Bar-Ilan University, with collaborators from the Hebrew University of Jerusalem and Tower Semiconductors, suggest a solution. They brought together light and ultrasonic waves to realize ultra-narrow filters of microwave signals in silicon integrated circuits. The concept allows great freedom in filter design.
    Bar-Ilan University doctoral student Moshe Katzman explains: “We’ve learned how to convert the information of interest from the form of light waves to ultrasonic, surface acoustic waves, and then back to optics. The surface acoustic waves travel at a speed that is 100,000 times slower. We can accommodate the delays that we need as part of our silicon chip, within less than a millimeter, and with losses that are very reasonable.”
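    The quoted figures check out on the back of an envelope, assuming a guided-light speed of roughly 2×10^8 m/s and a surface-acoustic-wave speed of a few kilometers per second (typical textbook values, not numbers from the paper):

    ```python
    # Back-of-the-envelope check of the numbers quoted above. Both velocities
    # are typical textbook values (assumptions, not figures from the paper).
    delay = 100e-9        # required delay: 100 ns
    v_light = 2.0e8       # m/s, light guided in fiber or waveguide (assumed)
    v_saw = 3.5e3         # m/s, surface acoustic wave (assumed)

    print(f"optical path:  {v_light * delay:.1f} m")       # ~20 m: too long on-chip
    print(f"acoustic path: {v_saw * delay * 1e3:.2f} mm")  # ~0.35 mm: fits easily
    print(f"light is ~{v_light / v_saw:,.0f}x faster")     # same order as quoted
    ```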
    Acoustic waves have been used to process information for sixty years, but their chip-level integration alongside light waves has proven tricky. Moshe Katzman continues: “Over the last decade we have seen landmark demonstrations of how light and ultrasound waves can be brought together on a chip device, to make up excellent microwave photonic filters. However, the platforms used were more specialized. Part of the appeal of the solution is in its simplicity. The fabrication of devices is based on routine protocols of silicon waveguides. We are not doing anything fancy here.” The realized filters are very narrowband: the spectral width of the filters’ passbands is only 5 MHz.
    In order to realize narrowband filters, the information-carrying surface acoustic wave is imprinted upon the output light wave multiple times. Doctoral student Maayan Priel elaborates: “The acoustic signal crosses the light path up to 12 times, depending on the choice of layout. Each such event imprints a replica of our signal of interest on the optical wave. Due to the slow acoustic speed, these events are separated by long delays. Their overall summation is what makes the filters work.” As part of their research, the team reports complete control over each replica, towards the realization of arbitrary filter responses. Maayan Priel concludes: “The freedom to design the response of the filters is making the most out of the integrated, microwave-photonic platform.”
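    The reason summing delayed replicas produces a narrowband filter can be seen from the frequency response of a uniform tapped delay line. In the sketch below, the 12 taps match the article, while the tap delay is an invented illustrative value:

    ```python
    # Sketch of why summing delayed replicas yields a narrowband filter: the
    # magnitude response of N equally weighted, equally delayed taps (a uniform
    # FIR comb). N matches the article's "up to 12"; the tap delay tau is an
    # invented illustrative value.
    import numpy as np

    N = 12                            # replicas imprinted on the optical wave
    tau = 20e-9                       # assumed delay between replicas: 20 ns
    f = np.linspace(1.0, 49e6, 4801)  # frequencies within the first period (Hz)
    H = np.abs(sum(np.exp(-2j * np.pi * f * k * tau) for k in range(N))) / N

    # Width of the passband centered at DC (where the response stays above -3 dB)
    half_width = f[np.argmax(H < 1 / np.sqrt(2))]
    print(f"-3 dB passband width ~ {2 * half_width / 1e6:.1f} MHz"
          "  (narrows as 1/(N*tau))")
    ```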
    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

  • Ultra-sensitive light detector gives self-driving tech a jolt

    Realizing the potential of self-driving cars hinges on technology that can quickly sense and react to obstacles and other vehicles in real time. Engineers from The University of Texas at Austin and the University of Virginia have created a first-of-its-kind light-detecting device that can amplify weak signals bouncing off faraway objects more accurately than current technology allows, giving autonomous vehicles a fuller picture of what’s happening on the road.
    The new device is more sensitive than other light detectors in that it also eliminates inconsistency, or noise, associated with the detection process. Such noise can cause systems to miss signals and put autonomous vehicle passengers at risk.
    “Autonomous vehicles send out laser signals that bounce off objects to tell you how far away you are. Not much light comes back, so if your detector is putting out more noise than the signal coming in you get nothing,” said Joe Campbell, professor of electrical and computer engineering at the University of Virginia School of Engineering.
    Researchers around the globe are working on devices, known as avalanche photodiodes, to meet these needs. But what makes this new device stand out is its staircase-like alignment. It includes physical steps in energy that electrons roll down, multiplying along the way and creating a stronger electrical current for light detection as they go.
    In 2015, the researchers created a single-step staircase device. In this new discovery, detailed in Nature Photonics, they’ve shown, for the first time, a staircase avalanche photodiode with multiple steps.
    “The electron is like a marble rolling down a flight of stairs,” said Seth Bank, professor in the Cockrell School’s Department of Electrical and Computer Engineering who led the research with Campbell, a former professor in the Cockrell School from 1989 to 2006 and UT Austin alumnus (B.S., Physics, 1969). “Each time the marble rolls off a step, it drops and crashes into the next one. In our case, the electron does the same thing, but each collision releases enough energy to actually free another electron. We may start with one electron, but falling off each step doubles the number of electrons: 1, 2, 4, 8, and so on.”
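    Bank’s marble analogy is simply repeated doubling: an ideal N-step staircase turns one electron into 2^N. A one-line check of that arithmetic:

    ```python
    # The "marble down the stairs" arithmetic: if every step doubles the
    # electron count, an ideal N-step staircase has gain 2**N. Real devices
    # fall somewhat short of perfect doubling per step.
    for n_steps in range(1, 7):
        print(f"{n_steps} step(s): 1 electron -> {2 ** n_steps} electrons")
    ```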
    The new pixel-sized device is ideal for Light Detection and Ranging (lidar) receivers, which require high-resolution sensors that detect optical signals reflected from distant objects. Lidar is an important part of self-driving car technology, and it also has applications in robotics, surveillance and terrain mapping.