More stories


    Researchers use artificial intelligence language tools to decode molecular movements

    By applying natural language processing tools to the movements of protein molecules, University of Maryland scientists created an abstract language that describes the multiple shapes a protein molecule can take and how and when it transitions from one shape to another.
    A protein molecule’s function is often determined by its shape and structure, so understanding the dynamics that control shape and structure can open a door to understanding everything from how a protein works to the causes of disease and the best way to design targeted drug therapies. This is the first time a machine learning algorithm has been applied to biomolecular dynamics in this way, and the method’s success provides insights that can also help advance artificial intelligence (AI). A research paper on this work was published on October 9, 2020, in the journal Nature Communications.
    “Here we show the same AI architectures used to complete sentences when writing emails can be used to uncover a language spoken by the molecules of life,” said the paper’s senior author, Pratyush Tiwary, an assistant professor in UMD’s Department of Chemistry and Biochemistry and Institute for Physical Science and Technology. “We show that the movement of these molecules can be mapped into an abstract language, and that AI techniques can be used to generate biologically truthful stories out of the resulting abstract words.”
    Biological molecules are constantly in motion, jiggling around in their environment. Their shape is determined by how they are folded and twisted. They may remain in a given shape for seconds or days before suddenly springing open and refolding into a different shape or structure. The transition from one shape to another occurs much like the stretching of a tangled coil that opens in stages. As different parts of the coil release and unfold, the molecule assumes different intermediary conformations.
    But the transition from one form to another occurs in picoseconds (trillionths of a second) or faster, which makes it difficult for experimental methods such as high-powered microscopes and spectroscopy to capture exactly how the unfolding happens, what parameters affect the unfolding and what different shapes are possible. The answers to those questions form the biological story that Tiwary’s new method can reveal.
    Tiwary and his team combined Newton’s laws of motion — which can predict the movement of atoms within a molecule — with powerful supercomputers, including UMD’s Deepthought2, to develop statistical physics models that simulate the shape, movement and trajectory of individual molecules.
    Then they fed those models into a machine learning algorithm, like the one Gmail uses to automatically complete sentences as you type. The algorithm approached the simulations as a language in which each molecular movement forms a letter that can be strung together with other movements to make words and sentences. By learning the rules of syntax and grammar that determine which shapes and movements follow one another and which don’t, the algorithm predicts how the protein untangles as it changes shape and the variety of forms it takes along the way.
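    The idea can be illustrated with a toy model. The sketch below uses hypothetical states and transition rules, not the study’s actual simulation data: it discretizes a simulated trajectory into three conformational “letters” and learns which letter tends to follow which. The published work uses a long short-term memory (LSTM) network; the simple next-letter counts here are a deliberately minimal stand-in for that idea.

```python
import random
from collections import defaultdict, Counter

# Toy "molecular language": discretize a simulated 1-D trajectory into
# letters (coarse-grained conformations), then learn which states follow
# which. This bigram model is a simplified stand-in for the LSTM used in
# the actual study; the states and rules below are invented.
random.seed(0)

STATES = "ABC"  # three hypothetical conformations

# Hypothetical transition rules of the underlying dynamics:
# A mostly stays A or moves to B; B can go anywhere; C mostly stays C.
TRUE_RULES = {"A": "AAAB", "B": "ABBC", "C": "BCCC"}

def simulate(n):
    """Generate a state sequence (a 'sentence') from the true dynamics."""
    seq = ["A"]
    for _ in range(n - 1):
        seq.append(random.choice(TRUE_RULES[seq[-1]]))
    return "".join(seq)

trajectory = simulate(10_000)

# "Learn the grammar": count which letter follows which.
counts = defaultdict(Counter)
for a, b in zip(trajectory, trajectory[1:]):
    counts[a][b] += 1

learned = {s: {t: c / sum(counts[s].values()) for t, c in counts[s].items()}
           for s in counts}

for s in STATES:
    probs = " ".join(f"{t}:{learned[s].get(t, 0):.2f}" for t in STATES)
    print(f"after {s} -> {probs}")
```

    The learned probabilities recover the underlying rules: the model discovers, for example, that state A never jumps straight to state C, which is the kind of “syntax” constraint the real network learns from molecular trajectories.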
    To demonstrate that their method works, the team applied it to a small biomolecule called a riboswitch, which had previously been analyzed using spectroscopy. The results, which revealed the various forms the riboswitch could take as it was stretched, matched the results of the spectroscopy studies.
    “One of the most important uses of this, I hope, is to develop drugs that are very targeted,” Tiwary said. “You want to have potent drugs that bind very strongly, but only to the thing that you want them to bind to. We can achieve that if we can understand the different forms that a given biomolecule of interest can take, because we can make drugs that bind only to one of those specific forms at the appropriate time and only for as long as we want.”
    An equally important part of this research is the knowledge gained about the language processing system Tiwary and his team used, which is generally called a recurrent neural network, and in this specific instance a long short-term memory network. The researchers analyzed the mathematics underpinning the network as it learned the language of molecular motion. They found that the network used a kind of logic that was similar to an important concept from statistical physics called path entropy. Understanding this opens opportunities for improving recurrent neural networks in the future.
    “It is natural to ask if there are governing physical principles making AI tools successful,” Tiwary said. “Here we discover that, indeed, it is because the AI is learning path entropy. Now that we know this, it opens up more knobs and gears we can tune to do better AI for biology and perhaps, ambitiously, even improve AI itself. Anytime you understand a complex system such as AI, it becomes less of a black-box and gives you new tools for using it more effectively and reliably.”


    New model may explain rarity of certain malaria-blocking mutations

    A new computational model suggests that certain mutations that block infection by the most dangerous species of malaria have not become widespread in people because of the parasite’s effects on the immune system. Bridget Penman of the University of Warwick, U.K., and Sylvain Gandon of the CNRS and Montpellier University, France, present these findings in the open-access journal PLOS Computational Biology.
    Malaria is a potentially lethal, mosquito-borne disease caused by parasites of the Plasmodium genus. Several protective adaptations to malaria have spread widely among humans, such as the sickle-cell mutation. Laboratory experiments suggest that certain other mutations could be highly protective against the most dangerous human-infecting malaria species, Plasmodium falciparum. However, despite being otherwise benign, these mutations have not become widespread.
    To help clarify why some protective mutations may remain rare, Penman and colleagues developed a computational model that simulates the epidemiology of malaria infection, as well as the evolution of protective mutations. Importantly, the model also incorporates mechanisms of adaptive immunity, in which the immune system “learns” to recognize and attack specific pathogens, such as P. falciparum.
    Analysis of the model’s predictions suggests that if people rapidly gain adaptive immunity to the severe effects of P. falciparum malaria, mutations capable of blocking P. falciparum infection are unlikely to spread among the population. The fewer the number of infections it takes for people to become immune to the severe effects of malaria, the less likely it is that malaria infection-blocking mutations will arise.
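    A back-of-the-envelope sketch captures that logic (a hypothetical calculation, not Penman and Gandon’s model): if severe disease is only a risk during a person’s first k infections, after which adaptive immunity protects them, then a mutation that blocks infection outright prevents at most those k risky episodes, so its selective advantage shrinks as k shrinks.

```python
# Toy illustration (not the authors' model): the selective advantage of an
# infection-blocking mutation, assuming severe disease is only possible
# during the first k infections, after which adaptive immunity protects.
# The fitness cost per severe episode is a hypothetical number.

def relative_fitness(k, cost_per_severe_episode=0.02):
    """Fitness of a non-carrier relative to a carrier of the blocking
    mutation: each of the first k infections risks a severe episode."""
    return (1 - cost_per_severe_episode) ** k

for k in (1, 2, 5, 10):
    advantage = 1 - relative_fitness(k)
    print(f"infections to immunity k={k:2d}: "
          f"carrier advantage ~ {advantage:.1%}")
```

    The smaller k is (i.e. the faster people become immune to severe disease), the smaller the advantage the blocking mutation confers, and the less likely it is to spread — the qualitative pattern the model’s analysis suggests.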
    “Understanding why a potential human malaria adaptation has not succeeded could be just as important as understanding those which have succeeded,” Penman says. “Our results highlight the need for further detailed genetic studies of populations living in regions impacted by malaria in order to better understand malaria-human interactions.”
    Ultimately, understanding how humans have adapted to malaria could help open up new avenues for treatment.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.


    Engineering team develops novel miniaturized organic semiconductor

    Field Effect Transistors (FETs) are the core building blocks of modern electronics such as integrated circuits, computer CPUs and display backplanes. Organic Field Effect Transistors (OFETs), which use an organic semiconductor as the channel for current flow, have the advantage of being flexible compared with their inorganic counterparts such as silicon.
    OFETs, given their high sensitivity, mechanical flexibility, biocompatibility, property tunability and low-cost fabrication, are considered to have great potential for new applications in wearable electronics, conformal health-monitoring sensors and bendable displays. Imagine TV screens that can be rolled up; smart wearable electronic devices and clothing worn close to the body to collect vital signs for instant biofeedback; or mini-robots made of harmless organic materials working inside the body for disease diagnosis, targeted drug delivery, minor surgeries and other treatments.
    Until now, the main obstacle to enhancing the performance and mass production of OFETs has been the difficulty of miniaturising them. Products on the market that use OFETs remain primitive in terms of flexibility and durability.
    An engineering team led by Dr Paddy Chan Kwok Leung at the Department of Mechanical Engineering of the University of Hong Kong (HKU) has made an important breakthrough in developing staggered structure monolayer Organic Field Effect Transistors, a major step towards reducing the size of OFETs. The result has been published in the academic journal Advanced Materials. A US patent has been filed for the innovation.
    The major problem confronting scientists in reducing the size of OFETs is that a transistor’s performance drops significantly as it shrinks, partly due to contact resistance, i.e. resistance at the interfaces that impedes current flow. When the device gets smaller, its contact resistance becomes the dominant factor degrading the device’s performance.
    The staggered structure monolayer OFETs created by Dr Chan’s team demonstrate a record-low normalized contact resistance of 40 Ω-cm. Compared with conventional devices with a contact resistance of 1,000 Ω-cm, the new device can save 96% of the power dissipated at the contact when running at the same current level. More importantly, apart from energy saving, the excessive heat generated in the system, a common cause of semiconductor failure, can be greatly reduced.
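    The 96% figure follows directly from the ratio of the two contact resistances: at a fixed drive current, power dissipated at the contact scales as P = I^2 R, so the saving depends only on the resistance ratio.

```python
# Quick check of the quoted power saving: at a fixed drive current I,
# power dissipated at the contact scales as P = I^2 * R, so the fractional
# saving depends only on the ratio of contact resistances.

R_new = 40      # ohm-cm, normalized contact resistance of the new OFET
R_old = 1000    # ohm-cm, a typical conventional device

saving = 1 - R_new / R_old
print(f"power saved at contact: {saving:.0%}")  # -> power saved at contact: 96%
```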
    “On the basis of our achievement, we can further reduce the dimensions of OFETs and push them to a sub-micrometer scale, a level compatible with their inorganic counterparts, while still functioning effectively and exhibiting their unique organic properties. This is critical for meeting the requirements for commercialisation of related research,” Dr Chan said.
    “If flexible OFETs work, many traditional rigid electronics such as display panels, computers and cell phones would transform to become flexible and foldable. These future devices would be much lighter in weight and have a lower production cost.”
    “Moreover, given their organic nature, they are more likely to be biocompatible for advanced medical applications such as sensors in tracking brain activities or neural spike sensing, and in precision diagnosis of brain related illness such as epilepsy.” Dr Chan added.
    Dr Chan’s team is currently working with researchers at the HKU Faculty of Medicine and biomedical engineering experts at CityU to integrate the miniaturised OFETs into a flexible circuit on a polymer microprobe for in-vivo neural spike detection in a mouse brain under different external stimulations. They also plan to integrate the OFETs onto surgical tools such as catheter tubes, which can then be placed inside animals’ brains to sense brain activity directly and locate abnormal activation.
    “Our OFETs provide a much better signal to noise ratio. Therefore, we expect we can pick up some weak signals which cannot be detected before using the conventional bare electrode for sensing.”
    “It has been our goal to connect applied research with fundamental science. Our research achievement would hopefully open a blue ocean for OFETs research and applications. We believe that the setting and achievement on OFETs are now ready for applications in large area display backplane and surgical tools.” Dr Chan concluded.


    Large-scale changes in Earth’s climate may originate in the Pacific

    The retreat of North America’s ice sheets in the latter years of the last ice age may have begun with “catastrophic” losses of ice into the North Pacific Ocean along the coast of modern-day British Columbia and Alaska, scientists say. 
    In a new study published October 1 in Science, researchers find that these pulses of rapid ice loss from what’s known as the western Cordilleran ice sheet contributed to, and perhaps triggered, the massive calving of the Laurentide ice sheet into the North Atlantic Ocean thousands of years ago. That collapse of the Laurentide ice sheet, which at one point covered large swaths of Canada and parts of the United States, ultimately led to major disturbances in the global climate (SN: 11/5/12).
    The new findings cast doubt on the long-held assumption that hemispheric-scale changes in Earth’s climate originate in the North Atlantic (SN: 1/31/19). The study suggests that the melting of Alaska’s remaining glaciers into the North Pacific, though less extreme than purges of the past, could have far-ranging effects on global ocean circulation and the climate in coming centuries.
    “People typically think that the Atlantic is where all the action is, and everything else follows,” says Alan Mix, a paleoclimatologist at Oregon State University in Corvallis. “We’re saying it’s the other way around.” The Cordilleran ice sheet fails earlier in the chain of events, “and then that signal is transmitted [from the Pacific] around the world like falling dominoes.”


    In 2013, Mix and colleagues pulled sediment cores from the seafloor of the Gulf of Alaska in the hope of figuring out how exactly the Cordilleran ice sheet had changed prior to the end of the last ice age. These cores contained distinct layers of sand and silt deposited by the ice sheet’s calved icebergs on four separate occasions over the last 42,000 years. The team then used radiocarbon dating to determine the chronology of events, finding that the Cordilleran’s ice purges “surprisingly” preceded the Laurentide’s periods of abrupt ice loss, known as “Heinrich events,” by 1,000 to 1,500 years every single time.
    “We’ve long known that these Heinrich events are a big deal,” says coauthor Maureen Walczak, a paleoceanographer also at Oregon State University. “They have global climate consequences associated with increases in atmospheric CO2, warming in Antarctica … and the weakening of the Asian monsoon in the Pacific. But we’ve not known why they happened.”  
    Though scientists can now point the finger at the North Pacific, the exact mechanism remains unclear. Mix proposes several theories for how Cordilleran ice loss ultimately translated to mass calving of ice along North America’s east coast. It’s possible, he says, that the freshwater deposited in the North Pacific traveled northward through the Bering Strait, across the Arctic and down into the North Atlantic. There, the buoyant freshwater served as a “cap” on the ocean’s denser saltwater, preventing it from overturning. This could have allowed the deeper water to warm, destabilizing the adjacent ice sheet.
    Another theory posits that the lower elevation of the diminished Cordilleran ice sheet altered how surface winds entered North America. Normally, the ice sheet would act like a fence, diverting winds and their water vapor southward as they entered North America. Without this barrier, the transport of heat and freshwater between the Pacific and Atlantic Ocean basins is disrupted, changing the salinity of the Atlantic waters and ultimately delivering more heat to the ice there.
    Today, Alaska’s glaciers serve as the last remnants of the Cordilleran ice sheet. Many are in a state of rapid retreat due to climate change. This melting ice, too, drains into the Pacific and Arctic oceans, raising sea levels and interfering with normal ocean mixing processes. “Knowing the failure of ice in the North Pacific seemed to presage really rapid ice loss in the North Atlantic, that’s kind of concerning,” Walczak says.
    If the ice melt into the North Pacific follows similar patterns to the past, it could yield significant global climate events, the researchers suggest. But Mix cautions that the amount of freshwater runoff needed to trigger changes elsewhere in the global ocean, and climate, is unknown. “We know enough to say that such things happened in the past, ergo, they are real and could happen again.”
    It’s not clear, though, what the timing of such global changes would be. If the ice losses in the Atlantic occurred in the past due to a change in deep ocean dynamics triggered by Pacific melting, that signal would likely take hundreds of years to reach the other remaining ice sheets. If, however, those losses were triggered by a change in sea levels or winds, other ice sheets could be affected a bit faster, though still not this century.
    The Laurentide ice sheet is, of course, long gone. But two others remain, in Greenland and Antarctica (SN: 9/30/20, 9/23/20). Both have numerous glaciers that terminate in the ocean and drain the interior of the ice sheets. This makes the ice sheets susceptible to both warmer ocean water and sea level rise.
    Alaska’s melting glaciers have already fueled about 30 percent of global sea level rise. “One of the hypotheses we have is that sea level rise is going to destabilize the ice shelves at the mouths of those glaciers, which will break off like champagne corks,” Walczak explains. When that happens, the idea goes, the ice sheets will start collapsing faster and faster.
    Records of climate change in the Pacific, like the one Walczak and colleagues have compiled, have been hard to come by, says Richard Alley, a glaciologist at Pennsylvania State University who wasn’t involved with the study. “These new data may raise more questions than they answer,” he says. “But by linking North Pacific Ocean circulation … to the global template of climate oscillations, the new paper gives us a real advance in understanding all of this.”


    Study uses mathematical modeling to identify an optimal school return approach

    In a recent study, NYU Abu Dhabi Professor of Practice in Mathematics Alberto Gandolfi has developed a mathematical model to identify the number of days students could attend school to give them a better learning experience while mitigating the spread of COVID-19.
    Published in the journal Physica D, the study shows that blended models, with almost-periodic alternations of in-class and remote teaching days or weeks, would be ideal. In a prototypical example, the optimal strategy has the school open for 90 days out of 200, with the number of COVID-19 cases among individuals related to the school increasing by about 66 percent, instead of the almost 250 percent increase predicted should schools fully reopen.
    The study’s model divides students into five groups: those susceptible to infection, those exposed to infection, those displaying symptoms, those who are asymptomatic, and those who have recovered. In addition, Gandolfi’s study models other factors, including a seven-hour school day as the window for transmission and the risk of students getting infected outside of school.
    Speaking on the development of this model, Gandolfi commented: “The research comes as over one billion students around the world are using remote learning models in the face of the global pandemic, and educators are in need of plans for the upcoming 2020 — 2021 academic year. Given that children come in very close contact within the classrooms, and that the incubation period lasts several days, the study shows that full re-opening of the classrooms is not a viable possibility in most areas. On the other hand, with the development of a vaccine still in its formative stages, studies have placed the potential impact of COVID-19 on children as losing 30 percent of usual progress in reading and 50 percent or more in math.”
    He added: “The approach aims to provide a viable solution for schools that are planning activities ahead of the 2020 — 2021 academic year. Each school, or group thereof, can adapt the study to its current situation in terms of local COVID-19 diffusion and relative importance assigned to COVID-19 containment versus in-class teaching; it can then compute an optimal opening strategy. As these are mixed solutions in most cases, other aspects of socio-economic life in the area could then be built around the schools’ calendar. This way, children can benefit as much as possible from a direct, in class experience, while ensuring that the spread of infection is kept under control.”
    Using the prevalence of active COVID-19 cases in a region as a proxy for the chance of getting infected, the study gives a first indication, for each country, of the possibilities for school reopening: schools can fully reopen in a few countries, while in most others blended solutions can be attempted, with strict physical distancing, and frequent, generalized, even if not necessarily extremely reliable, testing.
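    The flavor of such a calculation can be conveyed with a deliberately simplified sketch (hypothetical parameters, not Gandolfi’s actual model): a discrete-day compartmental simulation in which transmission inside the school happens only on in-class days, plus a small fixed community infection risk, compared across fully open, fully closed and alternating-week schedules.

```python
# Minimal sketch (not Gandolfi's model): a discrete-day SIR simulation of
# one school, with in-school transmission only on open days plus a small
# community infection risk. All parameters below are hypothetical.

def simulate(open_days, total_days=200, n=1000, beta=0.25,
             gamma=0.1, outside_rate=2e-4):
    """Return cumulative infections for a given set of open-day indices."""
    s, i = n - 1.0, 1.0  # susceptible, infectious (one seed case)
    for day in range(total_days):
        in_school = beta * s * i / n if day in open_days else 0.0
        new_inf = in_school + outside_rate * s  # school + community
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
    return n - s  # everyone ever infected

full = simulate(set(range(200)))                              # always open
blended = simulate({d for d in range(200) if (d // 7) % 2 == 0})  # alt. weeks
closed = simulate(set())                                      # fully remote

print(f"closed: {closed:.0f}, blended: {blended:.0f}, open: {full:.0f}")
```

    As in the study, the blended schedule lands between the extremes: it interrupts chains of in-school transmission often enough to avoid the full-reopening outbreak while still providing in-class days.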

    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.


    Biochip innovation combines AI and nanoparticle printing for cancer cell analysis

    Electrical engineers, computer scientists and biomedical engineers at the University of California, Irvine have created a new lab-on-a-chip that can help study tumor heterogeneity to reduce resistance to cancer therapies.
    In a paper published today in Advanced Biosystems, the researchers describe how they combined artificial intelligence, microfluidics and nanoparticle inkjet printing in a device that enables the examination and differentiation of cancers and healthy tissues at the single-cell level.
    “Cancer cell and tumor heterogeneity can lead to increased therapeutic resistance and inconsistent outcomes for different patients,” said lead author Kushal Joshi, a former UCI graduate student in biomedical engineering. The team’s novel biochip addresses this problem by allowing precise characterization of a variety of cancer cells from a sample.
    “Single-cell analysis is essential to identify and classify cancer types and study cellular heterogeneity. It’s necessary to understand tumor initiation, progression and metastasis in order to design better cancer treatment drugs,” said co-author Rahim Esfandyarpour, UCI assistant professor of electrical engineering & computer science as well as biomedical engineering. “Most of the techniques and technologies traditionally used to study cancer are sophisticated, bulky, expensive, and require highly trained operators and long preparation times.”
    He said his group overcame these challenges by combining machine learning techniques with accessible inkjet printing and microfluidics technology to develop low-cost, miniaturized biochips that are simple to prototype and capable of classifying various cell types.
    In the apparatus, samples travel through microfluidic channels with carefully placed electrodes that monitor differences in the electrical properties of diseased versus healthy cells in a single pass. The UCI researchers’ innovation was to devise a way to prototype key parts of the biochip in about 20 minutes with an inkjet printer, allowing for easy manufacturing in diverse settings. Most of the materials involved are reusable or, if disposable, inexpensive.
    Another aspect of the invention is the incorporation of machine learning to manage the large amount of data the tiny system produces. This branch of AI accelerates the processing and analysis of large datasets, finding patterns and associations, predicting precise outcomes, and aiding in rapid and efficient decision-making.
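    As a rough illustration of the classification step (the feature values and class centers below are synthetic inventions, not data from the UCI device), even a simple nearest-centroid classifier can separate two cell populations by impedance-style features measured as cells pass the electrodes.

```python
import random

# Sketch of single-cell classification on synthetic data (not the UCI
# dataset): each cell yields an impedance-style feature vector, e.g.
# (magnitude, phase), and a nearest-centroid rule separates populations.
random.seed(1)

def make_cells(center, n, spread=0.3):
    """Synthetic feature vectors scattered around a class center."""
    return [[random.gauss(c, spread) for c in center] for _ in range(n)]

healthy = make_cells([1.0, 0.2], 200)   # hypothetical class centers
cancer = make_cells([1.8, 0.9], 200)

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

centroids = {"healthy": centroid(healthy), "cancer": centroid(cancer)}

def classify(x):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda k: dist2(centroids[k], x))

test_cells = make_cells([1.8, 0.9], 50)   # unseen "cancer" cells
accuracy = sum(classify(c) == "cancer" for c in test_cells) / 50
print(f"held-out accuracy on synthetic cancer cells: {accuracy:.0%}")
```

    The real system uses more sophisticated machine learning on far richer signals, but the principle is the same: learn each population’s electrical signature, then assign new cells to the nearest one.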
    By including machine learning in the biochip’s workflow, the team has improved the accuracy of analysis and reduced the dependency on skilled analysts, which can also make the technology appealing to medical professionals in the developing world, Esfandyarpour said.
    “The World Health Organization says that nearly 60 percent of deaths from breast cancer happen because of a lack of early detection programs in countries with meager resources,” he said. “Our work has potential applications in single-cell studies, in tumor heterogeneity studies and, perhaps, in point-of-care cancer diagnostics — especially in developing nations where cost, constrained infrastructure and limited access to medical technologies are of the utmost importance.”

    Story Source:
    Materials provided by University of California – Irvine. Note: Content may be edited for style and length.


    Diamonds are a quantum scientist's best friend

    Diamonds have a firm foothold in our lexicon. Their many properties often serve as superlatives for quality, clarity and hardness. Aside from the popularity of this rare material in ornamental and decorative use, these precious stones are also highly valued in industry, where they are used to cut and polish other hard materials and to build radiation detectors.
    More than a decade ago, a new property was uncovered in diamond: when high concentrations of boron are introduced into it, it becomes superconducting. Superconductivity occurs when two electrons with opposite spin form a pair (called a Cooper pair), resulting in zero electrical resistance in the material. This means a large supercurrent can flow in the material, bringing with it the potential for advanced technological applications. Yet little work has been done since to investigate and characterise the nature of diamond’s superconductivity and therefore its potential applications.
    New research led by Professor Somnath Bhattacharyya in the Nano-Scale Transport Physics Laboratory (NSTPL) in the School of Physics at the University of the Witwatersrand in Johannesburg, South Africa, details the phenomenon of what is called “triplet superconductivity” in diamond. Triplet superconductivity occurs when electrons move in a composite spin state rather than as a single pair. This is an extremely rare, yet efficient form of superconductivity that until now has only been known to occur in one or two other materials, and only theoretically in diamonds.
    “In a conventional superconducting material such as aluminium, superconductivity is destroyed by magnetic fields and magnetic impurities, however triplet superconductivity in a diamond can exist even when combined with magnetic materials. This leads to more efficient and multifunctional operation of the material,” explains Bhattacharyya.
    The team’s work has recently been published in the New Journal of Physics, in an article titled “Effects of Rashba-spin-orbit coupling on superconducting boron-doped nanocrystalline diamond films: evidence of interfacial triplet superconductivity.” This research was done in collaboration with Oxford University (UK) and Diamond Light Source (UK). Through these collaborations, the atomic arrangement of diamond crystals and interfaces could be visualised in unprecedented detail, supporting the first claims of ‘triplet’ superconductivity.
    Practical proof of triplet superconductivity in diamonds came with much excitement for Bhattacharyya and his team. “We were even working on Christmas day, we were so excited,” says Davie Mtsuko. “This is something that has never before been claimed in diamond,” adds Christopher Coleman. Both Mtsuko and Coleman are co-authors of the paper.
    Despite diamonds’ reputation as a highly rare and expensive resource, they can be manufactured in a laboratory using a specialised piece of equipment called a vapour deposition chamber. The Wits NSTPL has developed its own plasma deposition chamber, which allows the team to grow diamonds of higher-than-normal quality, making them ideal for this kind of advanced research.
    This finding expands the potential uses of diamond, which is already well-regarded as a quantum material. “All conventional technology is based on semiconductors associated with electron charge. Thus far, we have a decent understanding of how they interact, and how to control them. But when we have control over quantum states such as superconductivity and entanglement, there is a lot more physics to the charge and spin of electrons, and this also comes with new properties,” says Bhattacharyya. “With the new surge of superconducting materials such as diamond, traditional silicon technology can be replaced by cost effective and low power consumption solutions.”
    The induction of triplet superconductivity in diamond is important for more than just its potential applications. It speaks to our fundamental understanding of physics. “Thus far, triplet superconductivity exists mostly in theory, and our study gives us an opportunity to test these models in a practical way,” says Bhattacharyya.

    Story Source:
    Materials provided by University of the Witwatersrand. Note: Content may be edited for style and length.


    Faster COVID-19 testing with simple algebraic equations

    A mathematician from Cardiff University has developed a new method for processing large volumes of COVID-19 tests which he believes could lead to significantly more tests being performed at once and results being returned much quicker.
    Dr Usama Kadri, from the University’s School of Mathematics, believes the new technique could allow many more patients to be tested using the same number of test tubes, with a lower possibility of false negatives occurring.
    Dr Kadri’s technique, which has been published in the journal Health Systems, uses simple algebraic equations to identify positive samples in tests and takes advantage of a testing technique known as ‘pooling’.
    Pooling involves grouping a large number of samples from different patients into one test tube and performing a single test on that tube.
    If the tube comes back negative, then everybody in that group is clear of the virus.
    Pooling can be applied by laboratories to test more samples in a shorter space of time, and works well when the overall infection rate in a certain population is expected to be low. If a tube is returned positive then each person within that group needs to be tested once again, this time individually, to determine who has the virus.


    In this instance, and particularly when it is known that infection rates in the population are high, the savings from the pooling technique in terms of time and cost become less significant.
    However, Dr Kadri’s new technique removes the need to perform a second round of tests once a batch is returned positive and can identify the individuals who have the virus using simple equations.
    The technique works with a fixed number of individuals and test tubes, for example 200 individuals and 10 test tubes, and starts by taking a fixed number of samples from a single individual, for example 5, and distributing these into 5 of the 10 test tubes.
    Another 5 samples are taken from the second individual and these are distributed into a different combination of 5 of the 10 tubes.
    This is then repeated for each of the 200 individuals in the group so that no individual shares the same combination of tubes.


    Each of the 10 test tubes is then sent for testing and any tube that returns negative indicates that all patients that have samples in that tube must be negative.
    If only one individual has the virus, then the combinations of the tubes that return positive, which is unique to the individual, will directly indicate that individual.
    However, if the number of positive tubes is larger than the number of samples from each individual, in this example 5, then there should be at least two individuals with the virus.
    The individuals that have all of their test tubes return positive are then selected.
    The method assumes that each individual that is positive should have the same quantity of virus in each tube, and that each of the individuals testing positive will have a unique quantity of virus in their sample which is different to the others.
    From this, the method then assumes that there are exactly two individuals with the virus and, for every two suspected individuals, a computer is used to calculate any combination of virus quantity that would return the actual overall quantity of virus that was measured in the tests.
    If the right combination is found then the selected two individuals have to be positive and no one else. Otherwise, the procedure is repeated but with an additional suspected individual, and so on until the right combination is found.
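    The assignment and single-positive decoding steps described above can be sketched as follows (a simplified illustration; the quantitative viral-load reasoning used for multiple positives is omitted). With 10 tubes and 5 samples per person there are C(10,5) = 252 distinct tube combinations, enough to give each of 200 individuals a unique one.

```python
from itertools import combinations

# Sketch of the combinatorial pooling described above: 10 tubes, each
# individual's 5 samples distributed into a unique 5-of-10 combination.
# C(10,5) = 252, so up to 252 people fit; the article's example uses 200.
TUBES, PER_PERSON, PEOPLE = 10, 5, 200
assignment = list(combinations(range(TUBES), PER_PERSON))[:PEOPLE]

def run_test(infected):
    """Return the set of tubes that come back positive."""
    positive = set()
    for person in infected:
        positive.update(assignment[person])
    return positive

def decode_single(positive):
    """If exactly one person is infected, their tube combination is
    unique, so the positive tubes identify them directly."""
    return [p for p, combo in enumerate(assignment)
            if set(combo) == positive]

infected = {137}                      # hypothetical single positive
positive_tubes = run_test(infected)
print("positive tubes:", sorted(positive_tubes))
print("decoded individual(s):", decode_single(positive_tubes))
```

    Any tube that comes back negative immediately clears everyone whose combination includes it; when more than 5 tubes are positive, the method falls back on the measured virus quantities, as described above, to work out which set of suspects accounts for the totals.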
    “Applying the proposed method allows testing many more patients using the same number of testing tubes, where all positives are identified with no false negatives, and no need for a second round of independent testing, with the effective testing time reduced drastically,” Dr Kadri said.
    So far, the method has been assessed using simulations of testing scenarios and Dr Kadri acknowledges that lab testing will need to be carried out to increase confidence in the proposed method.
    Moreover, for clinical use, additional factors need to be considered including sample types, viral load, prevalence, and inhibitor substances.