More stories

  • Researchers create 3D model for rare neuromuscular disorders, setting stage for clinical trial

    A scientific team supported by the National Institutes of Health has created a tiny, bioengineered 3D model that mimics the biology of chronic inflammatory demyelinating polyneuropathy and multifocal motor neuropathy, a pair of rare, devastating neuromuscular diseases. The researchers used the organ-on-a-chip, or “tissue chip,” model to show how a drug could potentially treat the diseases, and the work provided key preclinical data for a drug company to submit to the U.S. Food and Drug Administration to seek authorization for testing in a clinical trial.
    This work provides one of the first examples of scientists using primarily tissue chip data in an FDA Investigational New Drug application to test the efficacy of a candidate drug in people with rare diseases. The drug company Sanofi began recruiting participants into a Phase 2 clinical trial in April 2021. The drug had previously been tested for safety and approved by the FDA for a different indication.
    The tissue chip research was led by Hesperos, Inc., an Orlando-based company partially funded by a Small Business Innovation Research grant from NIH’s National Center for Advancing Translational Sciences (NCATS). The study could open the door to studying and developing new therapies for other rare diseases by establishing a new avenue for repurposing existing drugs. Most of the roughly 7,000 known rare diseases have no effective treatment, and researchers often lack animal models for studying rare disease biology and testing potential drugs.
    “This marks an important milestone in the evolution of the use of tissue chips,” said Lucie Low, Ph.D., scientific program manager for the NCATS Tissue Chip for Drug Screening initiative. “We know that pharmaceutical companies are using tissue chips internally. Submitting data to regulatory agencies generated from tissue chip platforms is a powerful indicator of their growing promise.”
    James Hickman, Ph.D., chief scientist at Hesperos, and his colleagues described the development of the model and their research results in Advanced Therapeutics. In these diseases, the immune system makes proteins called antibodies that damage nerve cells and slow down messages moving from the brain to the muscles. This can make it hard for people to move their arms, hands and legs. Current treatments can help, but their effects are often inconsistent.
    The researchers developed a tissue chip model consisting of two cell types: motoneurons and Schwann cells. Motoneurons transmit messages from the brain to muscles. Schwann cells help the signals move more quickly. The model could mimic functional characteristics of the diseases, allowing the scientists to see how a drug was working by determining whether the brain’s messages to muscles were slowing down or not.
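    The paper’s analysis pipeline is not reproduced here, but the readout the team describes amounts to comparing how quickly signals travel through the motoneuron-Schwann cell circuit under different conditions. The minimal Python sketch below illustrates that kind of comparison; the distances, latencies and resulting percentages are invented placeholders, not data from the study.

      # Minimal sketch (not Hesperos' actual analysis code): quantify whether
      # "messages to muscles" are slowing down by comparing conduction velocity
      # across conditions. All numbers are illustrative placeholders.

      def conduction_velocity(distance_mm: float, latency_ms: float) -> float:
          """Velocity in metres per second from path length and response latency."""
          return (distance_mm / 1000.0) / (latency_ms / 1000.0)

      baseline = conduction_velocity(distance_mm=2.0, latency_ms=0.04)   # healthy chip
      diseased = conduction_velocity(distance_mm=2.0, latency_ms=0.10)   # after patient serum
      treated  = conduction_velocity(distance_mm=2.0, latency_ms=0.045)  # after candidate drug

      for label, v in [("baseline", baseline), ("patient serum", diseased), ("treated", treated)]:
          print(f"{label:>13}: {v:5.1f} m/s ({100 * v / baseline:5.1f}% of baseline)")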
    The researchers showed that exposing the cells to blood serum from people with these rare diseases triggered an attack by immune system antibodies on the cells, which made the motoneuron signals move more slowly. After treatment with TNT005, a drug that blocks this immune system reaction, the cells and the message speed returned to normal.
    “We’re confident that our system can reproduce what happens to a patient, including the disease symptoms and disease progression,” said Hickman. “It’s important to create functionally relevant patient models that will mimic what is seen in clinical trials.”
    Approximately 90% of promising therapies fail in clinical trials because animal models used in preclinical testing are not good at predicting how people will respond. To improve that success rate and help get more treatments to people who have few options, scientists are exploring the uses of tissue chips. Designed to support living human tissues and cells, tissue chips mimic the structure and function of human organs and systems, such as the lungs, heart and liver. Researchers are studying their uses in many areas, including for testing the safety and effectiveness of candidate drugs and modeling diseases.
    The potential clinical uses of tissue chip data are growing. Recently, an NIH-supported research team at Harvard University’s Wyss Institute reported using a tissue chip model to generate data on the effectiveness of a repurposed drug for treating lung damage from COVID-19 infection. In the NCATS-funded Clinical Trials on a Chip program, several projects examine how tissue chip data can help researchers design more useful clinical trials. This might include using such data to predict which patients in a trial are most likely to respond to a therapy.
    “Creating a platform that can predict human responses to a drug in a rare disease could lead to exciting new opportunities in research,” said Low. “If tissue chip data can be generated that inform the decisions made before early human trials, this could reduce the risks to vulnerable populations.”
    Funding for this research was provided by True North Therapeutics (now Sanofi), NCATS (SBIR 2R44TR001326-03) and internal Hesperos development funds.

  • Researchers take step toward developing 'electric eye'

    Georgia State University researchers have designed a new type of artificial vision device that incorporates a novel vertical stacking architecture and allows for greater depth of color recognition and scalability at the micro level. The research is published in the journal ACS Nano.
    “This work is the first step toward our final destination: to develop a micro-scale camera for microrobots,” says assistant professor of physics Sidong Lei, who led the research. “We illustrate the fundamental principle and feasibility to construct this new type of image sensor with emphasis on miniaturization.”
    Using nanotechnology, Lei’s team laid the groundwork for the biomimetic artificial vision device, which uses synthetic methods to mimic biochemical processes.
    “It is well known that more than 80 percent of the information is captured by vision in research, industry, medicine, and our daily life,” he says. “The ultimate purpose of our research is to develop a micro-scale camera for microrobots that can enter narrow spaces that are inaccessible by current means, and open up new horizons in medical diagnosis, environmental study, manufacturing, archaeology, and more.”
    This biomimetic “electric eye” advances color recognition, the most critical vision function, which has been missing from current research because of the difficulty of downscaling prevailing color-sensing devices. Conventional color sensors typically adopt a lateral color-sensing channel layout, which consumes a large amount of physical space and offers less accurate color detection.
    The researchers developed a unique stacking technique that offers a novel approach to the hardware design. Lei says the van der Waals semiconductor-empowered vertical color-sensing structure offers precise color recognition and can simplify the design of the optical lens system, helping to downscale artificial vision systems.

  • Guiding a superconducting future with graphene quantum magic

    Superconductors are materials that conduct electrical current with practically no electrical resistance at all. This ability makes them extremely attractive for a plethora of applications such as loss-less power cables, electric motors and generators, as well as the powerful electromagnets used in MRI scanners and magnetically levitated trains. Now, researchers from Nagoya University have detailed the superconducting nature of a new class of superconducting material, magic-angle twisted bilayer graphene.
    For a material to behave as a superconductor, low temperatures are required. Most materials only enter the superconducting phase at extremely low temperatures, such as -270°C, lower than those measured in outer space! This severely limits their practical applications because such extensive cooling requires very expensive and specialized liquid helium cooling equipment. This is the main reason superconducting technologies are still in their infancy. High temperature superconductors (HTS), such as some iron and copper-based ones, enter the superconducting phase above -200°C, a temperature that is more readily achievable using liquid nitrogen which cools down a system to -195.8°C. However, the industrial and commercial applications of HTS have been thus far limited. Currently known and available HTS materials are brittle ceramic materials that are not malleable into useful shapes like wires. In addition, they are notoriously difficult and expensive to manufacture. This makes the search for new superconducting materials critical, and a strong focus of research for physicists like Prof. Hiroshi Kontani and Dr. Seiichiro Onari from the Department of Physics, Nagoya University.
    Recently, a new material called magic-angle twisted bilayer graphene (MATBG) has been proposed as a potential superconductor. In MATBG, two layers of graphene, essentially single two-dimensional layers of carbon arranged in a honeycomb lattice, are offset by a magic angle (about 1.1 degrees), which leads to the formation of a high-order symmetry known as SU(4). As temperature changes, the system experiences quantum fluctuations, like water ripples in the atomic structure, that lead to a novel spontaneous change in the electronic structure and a reduction in symmetry. This rotational symmetry breaking is known as the nematic state and has been closely associated with superconducting properties in other materials.
    In their work published recently in Physical Review Letters, Prof. Kontani and Dr. Onari use theoretical methods to better understand and shine light on the source of this nematic state in MATBG. “Since we know that high temperature superconductivity can be induced by nematic fluctuations in strongly correlated electron systems such as iron-based superconductors, clarifying the mechanism and origin of this nematic order can lead to the design and emergence of higher temperature superconductors,” explains Dr. Onari.
    The researchers found that nematic order in MATBG originates from the interference between fluctuations of a novel degree of freedom that combines the valley and spin degrees of freedom, something that has not been reported in conventional strongly correlated electron systems. The superconducting transition temperature of twisted bilayer graphene is very low, at about 1 K (-272°C), but the nematic state manages to increase it by several degrees. The results also show that although MATBG behaves in some ways like an iron-based high temperature superconductor, it also has some distinct and quite exciting properties: a net charge loop current gives rise to a magnetic field in the valley-polarized state, whereas in the nematic state the loop currents from the two valleys cancel out. In addition, the malleability of graphene could play an important role in broadening the practical applications of these superconductors. With a better understanding of the underlying mechanisms of superconductivity, science and technology inch closer to a conducting future that is indeed super.
    Story Source:
    Materials provided by Nagoya University.

  • How to print a robot from scratch: Combining liquids, solids could lead to faster, more flexible 3D creations

    Imagine a future in which you could 3D-print an entire robot or stretchy, electronic medical device with the press of a button — no tedious hours spent assembling parts by hand.
    That possibility may be closer than ever thanks to a recent advancement in 3D-printing technology led by engineers at the University of Colorado Boulder. In a new study, the team lays out a strategy for using currently available printers to create materials that meld solid and liquid components — a tricky feat if you don’t want your robot to collapse.
    “I think there’s a future where we could, for example, fabricate a complete system like a robot using this process,” said Robert MacCurdy, senior author of the study and assistant professor in the Paul M. Rady Department of Mechanical Engineering.
    MacCurdy, along with doctoral students Brandon Hayes and Travis Hainsworth, published their results April 14 in the journal Additive Manufacturing.
    3D printers have long been the province of hobbyists and researchers working in labs. They’re pretty good at making plastic dinosaurs or individual parts for machines, such as gears or joints. But MacCurdy believes they can do a lot more: By mixing solids and liquids, 3D printers could churn out devices that are more flexible, dynamic and potentially more useful. These include wearable electronic devices with wires made of liquid contained within solid substrates, or even models that mimic the squishiness of real human organs.
    MacCurdy compares the advancement to traditional printers gaining the ability to print in color, not just black and white.

  • AI reduces miss rate of precancerous polyps in colorectal cancer screening

    Artificial intelligence roughly halved the rate at which precancerous polyps were missed in colorectal cancer screening, an international team of researchers led by Mayo Clinic reported. The study is published in Gastroenterology.
    Most colon polyps are harmless, but some over time develop into colon or rectal cancer, which can be fatal if found in its later stages. Colorectal cancer is the second most deadly cancer in the world, with an estimated 1.9 million cases and 916,000 deaths worldwide in 2020, according to the World Health Organization. A colonoscopy is an exam used to detect changes or abnormalities in the large intestine (colon) and rectum.
    Between February 2020 and May 2021, 230 study participants each underwent two back-to-back colonoscopies on the same day at eight hospitals and community clinics in the U.S., U.K. and Italy. One colonoscopy used AI; the other, a standard colonoscopy, did not.
    The rate at which precancerous colorectal polyps are missed has been estimated to be 25%. In this study, the miss rate was 15.5% in the group that had the AI colonoscopy first and 32.4% in the group that had the standard colonoscopy first. The AI colonoscopy also detected more polyps that were smaller, flatter and located in the proximal and distal colon.
    “Colorectal cancer is almost entirely preventable with proper screening,” says senior author Michael B. Wallace, M.D., division chair of gastroenterology and hepatology at Sheikh Shakhbout Medical City in Abu Dhabi, United Arab Emirates and the Fred C. Andersen Professor of Medicine at Mayo Clinic in Jacksonville, Fla. “Using artificial intelligence to detect colon polyps and potentially save lives is welcome and promising news for patients and their families.”
    In addition, the false-negative rate was 6.8% in the group that had the AI colonoscopy first and 29.6% in the group that had the standard colonoscopy first. A false-negative result indicates that a condition is absent when in fact it is present.
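    The release does not spell out the arithmetic, but in tandem-design studies like this one the miss rate is conventionally computed as the share of all detected polyps that were found only on the second pass. The sketch below applies that conventional definition to made-up polyp counts chosen only so the results land near the reported rates; they are not the study’s underlying data.

      # Hedged sketch of the usual tandem-colonoscopy miss-rate arithmetic:
      # polyps found only on the second pass count as missed by the first pass.
      # The counts below are hypothetical, chosen only to illustrate the formula.

      def miss_rate(found_first_pass: int, found_second_pass_only: int) -> float:
          """Fraction of all detected polyps that the first examination missed."""
          total = found_first_pass + found_second_pass_only
          return found_second_pass_only / total

      ai_first = miss_rate(found_first_pass=109, found_second_pass_only=20)       # ~15.5%
      standard_first = miss_rate(found_first_pass=98, found_second_pass_only=47)  # ~32.4%

      print(f"AI-first arm miss rate:       {ai_first:.1%}")
      print(f"Standard-first arm miss rate: {standard_first:.1%}")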
    The study’s senior author and principal investigator is Michael B. Wallace, M.D., of Sheikh Shakhbout Medical City in Abu Dhabi, UAE, and Mayo Clinic in Jacksonville, Fla. Co-authors include Cesare Hassan, M.D., Ph.D., of Nuovo Regina Margherita Hospital in Rome, Italy; James East, M.D., of John Radcliffe Hospital in Oxford, U.K., and Mayo Clinic Healthcare in London; Frank Lukens, M.D., of Mayo Clinic in Jacksonville, Fla.; Genci Babameto, M.D., of Mayo Clinic Health System in La Crosse, Wis.; Daisy Batista, M.D., of Mayo Clinic Health System in La Crosse, Wis.; Davinder Singh, M.D., of Mayo Clinic Health System in La Crosse, Wis.; William Palmer, M.D., of Mayo Clinic in Jacksonville, Fla.; Francisco C. Ramirez, M.D., of Mayo Clinic in Scottsdale, Ariz.; Tisha Lunsford, M.D., of Mayo Clinic in Scottsdale, Ariz.; Kevin Ruff, M.D., of Mayo Clinic in Scottsdale, Ariz.; David Cangemi, M.D., of Mayo Clinic in Jacksonville, Fla.; and Gregory Derfus, M.D., of Mayo Clinic Health System in Eau Claire, Wis. Victor Ciofoaia, M.D., another co-author, was affiliated with Mayo during the study, but has since left Mayo.
    Cosmo Artificial Intelligence-AI Ltd. funded the study.
    Dr. Wallace has financial interests in Verily, Cosmo Pharmaceuticals, Fujifilm, Olympus and Virgo.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Rhoda Madson.

  • Study shows simple, computationally light model can simulate complex brain cell responses

    The brain is arguably the single most important organ in the human body. It controls how we move, react, think and feel, and enables us to have complex emotions and memories. The brain is composed of approximately 86 billion neurons that form a complex network. These neurons receive, process, and transfer information using chemical and electrical signals.
    Learning how neurons respond to different signals can further the understanding of cognition and development and improve the management of disorders of the brain. But experimentally studying neuronal networks is a complex and occasionally invasive procedure. Mathematical models provide a non-invasive means of understanding neuronal networks, but most current models are either too computationally intensive or unable to adequately simulate the different types of complex neuronal responses. In a recent study published in Nonlinear Theory and Its Applications, IEICE, a research team led by Prof. Tohru Ikeguchi of Tokyo University of Science analyzed some of the complex responses of neurons in a computationally simple neuron model, the Izhikevich neuron model. “My laboratory is engaged in research on neuroscience and this study analyzes the basic mathematical properties of a neuron model. While we analyzed a single neuron model in this study, this model is often used in computational neuroscience, and not all of its properties have been clarified. Our study fills that gap,” explains Prof. Ikeguchi. The research team also included Mr. Yota Tsukamoto and doctoral student Ms. Honami Tsushima, both of Tokyo University of Science.
    The responses of a neuron to a sinusoidal input (a signal shaped like a sine wave, which oscillates smoothly and periodically) have been clarified experimentally. These responses can be either periodic, quasi-periodic, or chaotic. Previous work on the Izhikevich neuron model has demonstrated that it can simulate the periodic responses of neurons. “In this work, we analyzed the dynamical behavior of the Izhikevich neuron model in response to a sinusoidal signal and found that it exhibited not only periodic responses, but non-periodic responses as well,” explains Prof. Ikeguchi.
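    The Izhikevich model itself is simple enough to state in a few lines. The Python sketch below integrates the standard 2003 formulation with a sinusoidal drive; the “regular spiking” parameters are the textbook values, while the input amplitude, frequency and bias are arbitrary illustrative choices, not the settings analyzed in the paper.

      import numpy as np

      # Standard Izhikevich (2003) neuron model driven by a sinusoidal current:
      #   dv/dt = 0.04*v^2 + 5*v + 140 - u + I,   du/dt = a*(b*v - u)
      #   if v >= 30 mV:  v <- c,  u <- u + d
      # Parameters are the textbook "regular spiking" set; the drive is illustrative.

      def simulate_izhikevich(amplitude, freq_hz, bias=10.0, t_max_ms=2000.0, dt=0.1,
                              a=0.02, b=0.2, c=-65.0, d=8.0):
          """Euler integration; returns spike times in milliseconds."""
          v, u = c, b * c
          spike_times = []
          for k in range(int(t_max_ms / dt)):
              t = k * dt
              I = bias + amplitude * np.sin(2.0 * np.pi * freq_hz * t / 1000.0)
              v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
              u += dt * a * (b * v - u)
              if v >= 30.0:                      # spike, then reset
                  spike_times.append(t)
                  v, u = c, u + d
          return np.array(spike_times)

      spikes = simulate_izhikevich(amplitude=5.0, freq_hz=10.0)
      print(f"{len(spikes)} spikes in 2 s of simulated time")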
    The research team then quantitatively analyzed how many distinct ‘inter-spike intervals’ appeared in the data and used this count to distinguish between periodic and non-periodic responses. When a neuron receives a sufficient amount of stimulus, it emits ‘spikes,’ thereby conducting a signal to the next neuron. The inter-spike interval is the time between two consecutive spikes.
    They found that neurons provided periodic responses to signals that had larger amplitudes than a certain threshold value and that signals below this value induced non-periodic responses. They also analyzed the response of the Izhikevich neuron model in detail using a technique called ‘stroboscopic observation points,’ which helped them identify that the non-periodic responses of the Izhikevich neuron model were actually quasi-periodic responses.
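    The paper’s exact criterion is not reproduced here, but the idea of counting distinct inter-spike intervals can be sketched directly on the output of the previous snippet: a periodic response cycles through a small, fixed set of intervals, while quasi-periodic or chaotic responses keep producing new ones. The tolerance and the cutoff below are arbitrary illustrative choices.

      import numpy as np

      # Rough sketch, continuing from the `spikes` array produced above: count
      # how many distinct inter-spike intervals occur, up to a tolerance.

      def distinct_isi_count(spike_times, tol_ms=0.5):
          """Number of inter-spike intervals that differ pairwise by more than tol_ms."""
          representatives = []
          for isi in np.diff(spike_times):
              if all(abs(isi - r) > tol_ms for r in representatives):
                  representatives.append(isi)
          return len(representatives)

      n = distinct_isi_count(spikes)
      print("distinct inter-spike intervals:", n)
      print("looks periodic" if n <= 4 else "looks non-periodic (quasi-periodic or chaotic)")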
    When asked about the future implications of this study, Prof. Ikeguchi says, “This study was limited to the model of a single neuron. In the future, we will prepare many such models and combine them to clarify how a neural network works. We will also prepare two types of neurons, excitatory and inhibitory neurons, and use them to mimic the actual brain, which will help us understand principles of information processing in our brain.”
    The use of a simple model for accurate simulations of neuronal response is a significant step forward in this exciting field of research and illuminates the way towards the future understanding of cognitive and developmental disorders.
    Story Source:
    Materials provided by Tokyo University of Science.

  • DIY digital archaeology: New methods for visualizing small objects and artifacts

    The ability to visually represent artefacts, whether inorganics like stone, ceramic and metal, or organics such as bone and plant material, has always been of great importance to the field of anthropology and archaeology. For researchers, educators, students and the public, the ability to see the past, not only read about it, offers invaluable insights into the production of cultural materials and the populations who made and used them.
    Digital photography is the most commonly used method of visual representation, but despite its speed and efficiency, it often fails to faithfully represent the artefact being studied. In recent years, 3-D scanning has emerged as an alternative source of high-quality visualizations, but the cost of the equipment and the time needed to produce a model are often prohibitive.
    Now, a paper published in PLOS ONE presents two new methods for producing high-resolution visualizations of small artefacts, each achievable with basic software and equipment. Using expertise from fields which include archaeological science, computer graphics and video game development, the methods are designed to allow anyone to produce high-quality images and models with minimal effort and cost.
    The first method, Small Object and Artefact Photography, or SOAP, deals with the photographic application of modern digital techniques. The protocol guides users through small object and artefact photography, from the initial setup of the equipment, to best practices for camera handling and functionality, to the application of post-processing software.
    The second method, High Resolution Photogrammetry, or HRP, is used for the photographic capture, digital reconstruction and three-dimensional modelling of small objects. It aims to give a comprehensive guide to developing high-resolution 3D models, merging well-known techniques from academic and computer-graphics fields and allowing anyone to independently produce high-resolution, quantifiable models.
    “These new protocols combine detailed, concise, and user-friendly workflows covering photographic acquisition and processing, thereby contributing to the replicability and reproducibility of high-quality visualizations,” says Jacopo Niccolò Cerasoni, lead author of the paper. “By clearly explaining every step of the process, including theoretical and practical considerations, these methods will allow users to produce high-quality, publishable two- and three-dimensional visualisations of their archaeological artefacts independently.”
    The SOAP and HRP protocols were developed using Adobe Camera Raw, Adobe Photoshop, RawDigger, DxO Photolab, and RealityCapture, and they take advantage of native functions and tools that make image capture and processing easier and faster. Although most of these programs are readily available in academic environments, SOAP and HRP can be applied with any other non-subscription-based software that has similar features. This enables researchers to use free or open-access software as well, albeit with minor changes to some of the presented steps.
    Both the SOAP protocol and the HRP protocol are published openly on protocols.io.
    “Because visual communication is so important to understanding past behavior, technology and culture, the ability to faithfully represent artefacts is vital for the field of archaeology,” says co-author Felipe do Nascimento Rodrigues, from the University of Exeter.
    Even as new technologies revolutionize the field of archaeology, practical instruction on archaeological photography and three-dimensional reconstruction is lacking. The authors of the new paper hope to fill this gap, providing researchers, educators and enthusiasts with step-by-step instructions for creating high-quality visualizations of artefacts.

  • A novel computing approach to recognizing chaos

    Chaos isn’t always harmful to technology; in fact, it can have several useful applications if it can be detected and identified.
    Chaotic dynamics are prevalent throughout nature and in manufactured devices and technology. Though chaos is usually considered a negative, something to be removed from systems to ensure their optimal operation, there are circumstances in which it can be a benefit and can even have important applications. Hence there is growing interest in the detection and classification of chaos in systems.
    A new paper published in EPJ B, authored by Dagobert Wenkack Liedji and Jimmi Hervé Talla Mbé of the Research Unit of Condensed Matter, Electronics and Signal Processing, Department of Physics, University of Dschang, Cameroon, and Godpromesse Kenné of the Laboratoire d’Automatique et d’Informatique Appliquée, Department of Electrical Engineering, IUT-FV Bandjoun, University of Dschang, Cameroon, proposes using a single nonlinear node delay-based reservoir computer to identify chaotic dynamics.
    In the paper, the authors show that the classification capabilities of this system are robust, with an accuracy of over 99 per cent. Examining the effect of the length of the time series on the method’s performance, they found that higher accuracy was achieved when the single nonlinear node delay-based reservoir computer was used with short time series.
    Several quantifiers have been developed in the past to distinguish chaotic dynamics, most prominently the largest Lyapunov exponent (LLE), which is highly reliable and provides numerical values that help determine the dynamical state of a system.
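    As a concrete illustration of what the LLE quantifies (not the method used in the EPJ B paper), the exponent of a simple one-dimensional map can be computed directly from its known derivative. The sketch below does this for the logistic map, where a positive value (about ln 2 at r = 4) signals chaos; for measured data one would instead need an estimator such as Rosenstein’s or Wolf’s algorithm.

      import math

      # Largest Lyapunov exponent of the logistic map x_{n+1} = r*x*(1-x),
      # averaged from the log of the map's derivative |r*(1 - 2x)| along an orbit.
      # A positive exponent means nearby trajectories diverge exponentially: chaos.

      def logistic_lle(r, x0=0.2, n_transient=1000, n_iter=100_000):
          x = x0
          for _ in range(n_transient):           # discard the transient
              x = r * x * (1.0 - x)
          acc = 0.0
          for _ in range(n_iter):
              x = r * x * (1.0 - x)
              acc += math.log(abs(r * (1.0 - 2.0 * x)))
          return acc / n_iter

      print(f"r=3.5 (periodic): LLE ~ {logistic_lle(3.5):+.3f}")   # negative
      print(f"r=4.0 (chaotic):  LLE ~ {logistic_lle(4.0):+.3f}")   # ~ +0.693 = ln 2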
    The LLE, however, is computationally expensive, requires a mathematical model of the system, and involves long processing times. To overcome these issues, the team studied several deep learning models, but found that most obtained poor classification rates. The exception was a large kernel size convolutional neural network (LKCNN), which could classify chaotic and nonchaotic time series with high accuracy.
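    The release does not describe the LKCNN architecture, so the PyTorch sketch below is only a guess at its general shape: a single wide (“large kernel”) 1-D convolution over the raw time series, pooled and fed to a two-class linear head. Layer sizes, kernel width and the random input are placeholders.

      import torch
      import torch.nn as nn

      # Hedged sketch of a large-kernel 1-D CNN for chaotic vs. non-chaotic
      # time-series classification. The architecture details are assumptions,
      # not those of the model referenced in the paper.

      class LKCNN(nn.Module):
          def __init__(self, kernel_size=128, channels=16):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, channels, kernel_size=kernel_size, padding=kernel_size // 2),
                  nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1),        # collapse the time axis
              )
              self.classifier = nn.Linear(channels, 2)

          def forward(self, x):                    # x: (batch, 1, series_length)
              h = self.features(x).squeeze(-1)     # (batch, channels)
              return self.classifier(h)            # logits: [non-chaotic, chaotic]

      model = LKCNN()
      dummy = torch.randn(4, 1, 1000)              # four random time series
      print(model(dummy).shape)                    # torch.Size([4, 2])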
    The authors therefore used the Mackey-Glass (MG) delay-based reservoir computer to classify nonchaotic and chaotic dynamical behaviours, and showed that the system can act as an efficient and robust quantifier for distinguishing nonchaotic from chaotic signals.
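    A single nonlinear node with a delay line can be emulated in software by time-multiplexing “virtual” nodes along the delay loop and training only a linear readout. The sketch below is a much-simplified emulation with a Mackey-Glass-type nonlinearity applied to a toy task (separating periodic from chaotic logistic-map series); every hyperparameter and the task itself are illustrative inventions, not the configuration studied in the paper.

      import numpy as np

      # Much-simplified emulation of a single-nonlinear-node, delay-based
      # reservoir computer: a Mackey-Glass-type nonlinearity, time-multiplexed
      # virtual nodes, and a ridge-regression readout. All settings and the toy
      # task below are illustrative, not the paper's configuration.

      rng = np.random.default_rng(42)
      N_NODES = 60                                    # virtual nodes along the delay loop
      MASK = rng.uniform(-1.0, 1.0, N_NODES)          # fixed random input mask

      def mg(z, eta=0.9, p=2.0):
          """Mackey-Glass-style saturating nonlinearity."""
          return eta * z / (1.0 + np.abs(z) ** p)

      def reservoir_state(series, gamma=0.8, eps=0.6):
          """Drive the node with the masked series; return time-averaged node states."""
          x = np.zeros(N_NODES)
          avg = np.zeros(N_NODES)
          for u in series:
              x_new = np.empty(N_NODES)
              for i in range(N_NODES):
                  prev = x_new[i - 1] if i > 0 else x[-1]      # coupling along the loop
                  x_new[i] = (1 - eps) * prev + eps * mg(x[i] + gamma * MASK[i] * u)
              x = x_new
              avg += x
          return avg / len(series)

      def logistic_series(r, n=300):
          x, out = rng.uniform(0.1, 0.9), []
          for _ in range(n):
              x = r * x * (1.0 - x)
              out.append(x)
          return np.array(out)

      # Toy dataset: label 0 = periodic (r = 3.5), label 1 = chaotic (r = 4.0).
      X = np.array([reservoir_state(logistic_series(r)) for r in [3.5] * 40 + [4.0] * 40])
      y = np.array([0] * 40 + [1] * 40)

      # Closed-form ridge-regression readout, thresholded at 0.5.
      A = np.hstack([X, np.ones((len(X), 1))])        # add a bias column
      w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y)
      predictions = (A @ w > 0.5).astype(int)
      print("training accuracy:", (predictions == y).mean())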
    Among the advantages of the system, they note, are that it does not require knowledge of the set of equations describing the dynamics of a system, only data from the system, and that a neuromorphic implementation using an analogue reservoir computer enables real-time detection of dynamical behaviours from a given oscillator.
    The team concludes that future research will be devoted to deep reservoir computers, exploring their performance in classifying more complex dynamics.
    Story Source:
    Materials provided by Springer.