More stories

  •

    How to print a robot from scratch: Combining liquids, solids could lead to faster, more flexible 3D creations

    Imagine a future in which you could 3D-print an entire robot or stretchy, electronic medical device with the press of a button — no tedious hours spent assembling parts by hand.
    That possibility may be closer than ever thanks to a recent advance in 3D-printing technology led by engineers at the University of Colorado Boulder. In a new study, the team lays out a strategy for using currently available printers to create materials that meld solid and liquid components — a tricky feat if you don’t want your robot to collapse.
    “I think there’s a future where we could, for example, fabricate a complete system like a robot using this process,” said Robert MacCurdy, senior author of the study and assistant professor in the Paul M. Rady Department of Mechanical Engineering.
    MacCurdy, along with doctoral students Brandon Hayes and Travis Hainsworth, published their results April 14 in the journal Additive Manufacturing.
    3D printers have long been the province of hobbyists and researchers working in labs. They’re pretty good at making plastic dinosaurs or individual parts for machines, such as gears or joints. But MacCurdy believes that they can do a lot more: By mixing solids and liquids, 3D printers could churn out devices that are more flexible, dynamic and potentially more useful. These include wearable electronic devices with wires made of liquid contained within solid substrates, or even models that mimic the squishiness of real human organs.
    MacCurdy compares the advance to the leap from printers that produce only black-and-white to printers that produce color.

  •

    AI reduces miss rate of precancerous polyps in colorectal cancer screening

    Artificial intelligence halved the rate at which precancerous polyps were missed in colorectal cancer screening, reported a team of international researchers led by Mayo Clinic. The study is published in Gastroenterology.
    Most colon polyps are harmless, but some over time develop into colon or rectal cancer, which can be fatal if found in its later stages. Colorectal cancer is the second most deadly cancer in the world, with an estimated 1.9 million cases and 916,000 deaths worldwide in 2020, according to the World Health Organization. A colonoscopy is an exam used to detect changes or abnormalities in the large intestine (colon) and rectum.
    Between February 2020 and May 2021, 230 study participants each underwent two back-to-back colonoscopies on the same day at eight hospitals and community clinics in the U.S., U.K. and Italy. One colonoscopy used AI; the other, a standard colonoscopy, did not.
    The rate at which precancerous colorectal polyps are missed has been estimated at 25%. In this study, the miss rate was 15.5% in the group that had the AI colonoscopy first and 32.4% in the group that had the standard colonoscopy first. The AI colonoscopy also detected more polyps that were smaller, flatter, and located in both the proximal and distal colon.
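To make the tandem-design numbers concrete, here is a minimal sketch of how a miss rate is computed in a back-to-back study: polyps found only on the second pass, divided by all polyps found across both passes. The counts below are invented purely for illustration and are not the study's data.

```python
def miss_rate(found_first_pass: int, found_second_pass: int) -> float:
    """Tandem-study miss rate: lesions detected only on the second pass,
    as a fraction of all lesions found across both passes."""
    total = found_first_pass + found_second_pass
    return found_second_pass / total

# Hypothetical counts, chosen only to illustrate the calculation:
# 84 polyps found on the first pass, 15 more found on the second pass.
rate = miss_rate(84, 15)
print(f"{rate:.1%}")  # → 15.2%
```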
    “Colorectal cancer is almost entirely preventable with proper screening,” says senior author Michael B. Wallace, M.D., division chair of gastroenterology and hepatology at Sheikh Shakhbout Medical City in Abu Dhabi, United Arab Emirates and the Fred C. Andersen Professor of Medicine at Mayo Clinic in Jacksonville, Fla. “Using artificial intelligence to detect colon polyps and potentially save lives is welcome and promising news for patients and their families.”
    In addition, the false-negative rate was 6.8% in the group that had the AI colonoscopy first, compared with 29.6% in the group that had the standard colonoscopy first. A false-negative result indicates that you do not have a particular condition when in fact you do.
    Dr. Wallace is also the study’s principal investigator. Co-authors include Cesare Hassan, M.D., Ph.D., of Nuovo Regina Margherita Hospital in Rome, Italy; James East, M.D., of John Radcliffe Hospital in Oxford, U.K., and Mayo Clinic Healthcare in London; Frank Lukens, M.D., of Mayo Clinic in Jacksonville, Fla.; Genci Babameto, M.D., Daisy Batista, M.D., and Davinder Singh, M.D., of Mayo Clinic Health System in La Crosse, Wis.; William Palmer, M.D., of Mayo Clinic in Jacksonville, Fla.; Francisco C. Ramirez, M.D., Tisha Lunsford, M.D., and Kevin Ruff, M.D., of Mayo Clinic in Scottsdale, Ariz.; David Cangemi, M.D., of Mayo Clinic in Jacksonville, Fla.; and Gregory Derfus, M.D., of Mayo Clinic Health System in Eau Claire, Wis. Victor Ciofoaia, M.D., another co-author, was affiliated with Mayo during the study but has since left Mayo.
    Cosmo Artificial Intelligence-AI Ltd. funded the study.
    Dr. Wallace has financial interests in Verily, Cosmo Pharmaceuticals, Fujifilm, Olympus and Virgo.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Rhoda Madson. Note: Content may be edited for style and length.

  •

    Study shows simple, computationally-light model can simulate complex brain cell responses

    The brain is arguably the single most important organ in the human body. It controls how we move, react, think and feel, and enables us to have complex emotions and memories. The brain is composed of approximately 86 billion neurons that form a complex network. These neurons receive, process, and transfer information using chemical and electrical signals.
    Learning how neurons respond to different signals can further the understanding of cognition and development and improve the management of brain disorders. But experimentally studying neuronal networks is a complex and occasionally invasive procedure. Mathematical models provide a non-invasive means of understanding neuronal networks, but most current models are either too computationally intensive or cannot adequately simulate the different types of complex neuronal responses. In a recent study, published in Nonlinear Theory and Its Applications, IEICE, a research team led by Prof. Tohru Ikeguchi of Tokyo University of Science analyzed some of the complex responses of neurons in a computationally simple neuron model, the Izhikevich neuron model. “My laboratory is engaged in research on neuroscience and this study analyzes the basic mathematical properties of a neuron model. While we analyzed a single neuron model in this study, this model is often used in computational neuroscience, and not all of its properties have been clarified. Our study fills that gap,” explains Prof. Ikeguchi. The research team also comprised Mr. Yota Tsukamoto and Ph.D. student Ms. Honami Tsushima, also of Tokyo University of Science.
    The responses of a neuron to a sinusoidal input (a signal shaped like a sine wave, which oscillates smoothly and periodically) have been clarified experimentally. These responses can be either periodic, quasi-periodic, or chaotic. Previous work on the Izhikevich neuron model has demonstrated that it can simulate the periodic responses of neurons. “In this work, we analyzed the dynamical behavior of the Izhikevich neuron model in response to a sinusoidal signal and found that it exhibited not only periodic responses, but non-periodic responses as well,” explains Prof. Ikeguchi.
    When a neuron receives a sufficient amount of stimulus, it emits ‘spikes,’ thereby conducting a signal to the next neuron. The inter-spike interval is the time between two consecutive spikes. The research team quantitatively analyzed how many different types of inter-spike intervals appeared in the dataset and used this to distinguish between periodic and non-periodic responses.
    They found that neurons provided periodic responses to signals that had larger amplitudes than a certain threshold value and that signals below this value induced non-periodic responses. They also analyzed the response of the Izhikevich neuron model in detail using a technique called ‘stroboscopic observation points,’ which helped them identify that the non-periodic responses of the Izhikevich neuron model were actually quasi-periodic responses.
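For readers curious what such a simulation looks like, the Izhikevich model is compact enough to sketch in a few lines: two coupled differential equations plus a reset rule. The snippet below uses the model's standard "regular spiking" parameters and a sinusoidal drive; the specific amplitude, frequency, and integration settings are illustrative choices, not values from the paper.

```python
import numpy as np

def izhikevich_spikes(amp, freq, t_max=1000.0, dt=0.1,
                      a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of the Izhikevich neuron driven by a sinusoidal
    current I(t) = amp * sin(2*pi*freq*t); time in ms, freq in kHz.
    Returns the spike times."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(t_max / dt)):
        t = step * dt
        I = amp * np.sin(2.0 * np.pi * freq * t)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike detected: record and reset
            spikes.append(t)
            v, u = c, u + d
    return np.array(spikes)

# Inter-spike intervals: the quantity the authors use to separate
# periodic from non-periodic (quasi-periodic) responses.
spike_times = izhikevich_spikes(amp=10.0, freq=0.01)  # 100 ms period
isi = np.diff(spike_times)
```

Sweeping `amp` across a threshold and inspecting how the set of inter-spike intervals changes is, in spirit, the kind of analysis the study performs.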
    When asked about the future implications of this study, Prof. Ikeguchi says, “This study was limited to the model of a single neuron. In the future, we will prepare many such models and combine them to clarify how a neural network works. We will also prepare two types of neurons, excitatory and inhibitory neurons, and use them to mimic the actual brain, which will help us understand principles of information processing in our brain.”
    The use of a simple model for accurate simulations of neuronal response is a significant step forward in this exciting field of research and illuminates the way towards the future understanding of cognitive and developmental disorders.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  •

    DIY digital archaeology: New methods for visualizing small objects and artifacts

    The ability to visually represent artefacts, whether inorganics like stone, ceramic and metal, or organics such as bone and plant material, has always been of great importance to the fields of anthropology and archaeology. For researchers, educators, students and the public, the ability to see the past, not only read about it, offers invaluable insights into the production of cultural materials and the populations who made and used them.
    Digital photography is the most commonly used method of visual representation, but despite its speed and efficiency, it often fails to faithfully represent the artefact being studied. In recent years, 3-D scanning has emerged as an alternative source of high-quality visualizations, but the cost of the equipment and the time needed to produce a model are often prohibitive.
    Now, a paper published in PLOS ONE presents two new methods for producing high-resolution visualizations of small artefacts, each achievable with basic software and equipment. Using expertise from fields which include archaeological science, computer graphics and video game development, the methods are designed to allow anyone to produce high-quality images and models with minimal effort and cost.
    The first method, Small Object and Artefact Photography, or SOAP, deals with the photographic application of modern digital techniques. The protocol guides users through small object and artefact photography, from the initial setup of the equipment to best practices for camera handling and the application of post-processing software.
    The second method, High Resolution Photogrammetry, or HRP, is used for the photographic capture, digital reconstruction and three-dimensional modelling of small objects. It aims to provide a comprehensive guide to developing high-resolution 3D models, merging well-known techniques from academia and computer graphics and allowing anyone to independently produce high-resolution, quantifiable models.
    “These new protocols combine detailed, concise, and user-friendly workflows covering photographic acquisition and processing, thereby contributing to the replicability and reproducibility of high-quality visualizations,” says Jacopo Niccolò Cerasoni, lead author of the paper. “By clearly explaining every step of the process, including theoretical and practical considerations, these methods will allow users to produce high-quality, publishable two- and three-dimensional visualisations of their archaeological artefacts independently.”
    The SOAP and HRP protocols were developed using Adobe Camera Raw, Adobe Photoshop, RawDigger, DxO PhotoLab, and RealityCapture, and take advantage of native functions and tools that make image capture and processing easier and faster. Although most of these programs are readily available in academic environments, SOAP and HRP can be applied with any other non-subscription-based software with similar features. This enables researchers to use free or open-access software as well, albeit with minor changes to some of the presented steps.
    Both the SOAP protocol and the HRP protocol are published openly on protocols.io.
    “Because visual communication is so important to understanding past behavior, technology and culture, the ability to faithfully represent artefacts is vital for the field of archaeology,” says co-author Felipe do Nascimento Rodrigues, from the University of Exeter.
    Even as new technologies revolutionize the field of archaeology, practical instruction on archaeological photography and three-dimensional reconstruction is lacking. The authors of the new paper hope to fill this gap, providing researchers, educators and enthusiasts with step-by-step instructions for creating high-quality visualizations of artefacts.

  •

    A novel computing approach to recognizing chaos

    Chaos isn’t always harmful to technology; in fact, it can have several useful applications if it can be detected and identified.
    Chaos and chaotic dynamics are prevalent throughout nature and in manufactured devices and technology. Though chaos is usually considered a negative, something to be removed from systems to ensure their optimal operation, there are circumstances in which it can be a benefit and can even have important applications. Hence there is growing interest in the detection and classification of chaos in systems.
    A new paper published in EPJ B, authored by Dagobert Wenkack Liedji and Jimmi Hervé Talla Mbé of the Research Unit of Condensed Matter, Electronics and Signal Processing, Department of Physics, University of Dschang, Cameroon, and Godpromesse Kenné of the Laboratoire d’Automatique et d’Informatique Appliquée, Department of Electrical Engineering, IUT-FV Bandjoun, University of Dschang, Cameroon, proposes using a single-nonlinear-node delay-based reservoir computer to identify chaotic dynamics.
    In the paper, the authors show that the classification capabilities of this system are robust, with an accuracy of over 99 per cent. Examining the effect of the length of the time series on the performance of the method, they found that higher accuracy was achieved when the single-nonlinear-node delay-based reservoir computer was used with short time series.
    Several quantifiers have been developed to distinguish chaotic dynamics in the past, most prominently the largest Lyapunov exponent (LLE), which is highly reliable and yields numerical values that help determine the dynamical state of a system.
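As a concrete illustration of the LLE (using the logistic map, which is not one of the systems studied in the paper): for a one-dimensional map x → f(x), the exponent can be estimated as the long-run trajectory average of ln|f′(x)|, with a positive value signalling chaos.

```python
import math

def lyapunov_logistic(r: float, n: int = 100_000, x0: float = 0.3) -> float:
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the trajectory average of ln|f'(x)| = ln|r*(1-2x)|."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

# A positive exponent indicates chaos. At r = 4 the exact value is ln 2.
print(lyapunov_logistic(4.0))   # ≈ 0.693 (chaotic)
print(lyapunov_logistic(3.2))   # negative (periodic, not chaotic)
```

This also hints at the LLE's main limitation mentioned below: computing f′ requires knowing the governing equations, which data-driven classifiers avoid.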
    The LLE, however, has drawbacks, including computational expense, the need for a mathematical model of the system, and long processing times. To sidestep these issues, the team studied several deep learning models, most of which achieved poor classification rates. The exception was a large-kernel-size convolutional neural network (LKCNN), which could classify chaotic and nonchaotic time series with high accuracy.
    Thus, using the Mackey-Glass (MG) delay-based reservoir computer to classify dynamical behaviours, the authors showed that the system can act as an efficient and robust quantifier for distinguishing non-chaotic from chaotic signals.
    Among the advantages of their approach, they note that it does not require knowledge of the set of equations describing a system’s dynamics, only data from the system, and that a neuromorphic implementation using an analogue reservoir computer enables the real-time detection of dynamical behaviours from a given oscillator.
    The team concludes that future research will be devoted to deep reservoir computers, exploring their performance in classifying more complex dynamics.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  •

    Exposure assessment for Deepwater Horizon oil spill: Health outcomes

    Nearly 12 years after the Deepwater Horizon oil spill, scientists are still examining the potential health effects on workers and volunteers who experienced oil-related exposures.
    To help shape future prevention efforts, one West Virginia University researcher — Caroline Groth, assistant professor in the School of Public Health’s Department of Epidemiology and Biostatistics — has developed novel statistical methods for assessing airborne exposure. Working with collaborators from multiple institutions, Groth has made it possible for researchers to characterize oil spill exposures in greater detail than has ever been done before.
    With very few Ph.D. biostatisticians working in occupational health, there were few appropriate statistical methodologies for assessing inhalation exposures for the GuLF STUDY, a study launched by the National Institute of Environmental Health Sciences shortly after the Deepwater Horizon oil spill. The study, the largest ever conducted following an oil spill, aims to examine the health of people involved in the response and clean-up efforts. Groth was part of the exposure assessment team, led by Patricia Stewart and Mark Stenzel, tasked with characterizing worker exposures.
    Groth’s statistical methods, which she began developing in 2012, laid the groundwork for a crucial step: determining whether there are associations between health outcomes and exposures from the oil spill and clean-up work, which involved over 9,000 vessels deployed in the Gulf of Mexico waters off Alabama, Florida, Louisiana and Mississippi, and tens of thousands of workers on the water and on land.
    The Deepwater Horizon oil spill is considered the largest marine oil spill in the history of the U.S.
    “Workers were exposed differently based on their activities, time of exposure, etc., and our research team’s goal was to develop exposure estimates for each of those scenarios and then link them to the participants’ work history through an ‘exposure matrix,'” Groth said.
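The "exposure matrix" idea described above can be sketched as a simple lookup table: each (activity, time period) scenario maps to an exposure estimate, and a participant's cumulative exposure is a time-weighted sum over their work history. All activity names and values below are hypothetical, invented only to illustrate the linkage; they are not GuLF STUDY estimates.

```python
# Hypothetical exposure matrix: (activity, period) -> estimated airborne
# exposure level. Scenario names and numbers are invented for illustration.
exposure_matrix = {
    ("vessel_cleanup", "2010-05"): 2.1,
    ("vessel_cleanup", "2010-06"): 1.4,
    ("shoreline_cleanup", "2010-06"): 0.6,
}

# One participant's work history: (activity, period, hours worked).
work_history = [
    ("vessel_cleanup", "2010-05", 120),
    ("shoreline_cleanup", "2010-06", 80),
]

# Time-weighted cumulative exposure for this participant.
cumulative = sum(hours * exposure_matrix[(activity, period)]
                 for activity, period, hours in work_history)
print(f"{cumulative:.1f}")  # → 300.0
```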

  •

    Predicting the most stable boron nitride structure with quantum simulations

    Boron nitride (BN) is a versatile material with applications in a variety of engineering and scientific fields. This is largely due to an interesting property of BN called “polymorphism,” characterized by the ability to crystallize into more than one type of structure. This generally occurs as a response to changes in temperature, pressure, or both. Furthermore, the different structures, called “polymorphs,” differ remarkably in their physical properties despite having the same chemical formula. As a result, polymorphs play an important role in material design, and a knowledge of how to selectively favor the formation of the desired polymorph is crucial in this regard.
    However, BN polymorphs pose a particular problem. Despite several experiments assessing the relative stabilities of BN polymorphs, no consensus has emerged on the topic. While computational methods are often the go-to approach for such problems, BN polymorphs have posed serious challenges to standard computational techniques because of the weak “van der Waals (vdW) interactions” between their layers, which are not accounted for in these computations. Moreover, the four stable BN polymorphs, namely rhombohedral (rBN), hexagonal (hBN), wurtzite (wBN), and zinc-blende (cBN), lie within a narrow energy range, making the capture of small energy differences together with vdW interactions even more challenging.
    Fortunately, an international research team led by Assistant Professor Kousuke Nakano from the Japan Advanced Institute of Science and Technology (JAIST) has now provided evidence to settle the debate. In their study, the team addressed the issue with a state-of-the-art first-principles calculation framework: fixed-node diffusion Monte Carlo (FNDMC) simulations. FNDMC is a stage in the popular quantum Monte Carlo method in which a parametrized many-body quantum “wavefunction” is first optimized to approximate the ground state and then supplied to the FNDMC calculation.
    Additionally, the team computed the Gibbs energy (the useful work obtainable from a system at constant pressure and temperature) of the BN polymorphs at different temperatures and pressures using density functional theory (DFT) and phonon calculations. The paper was made available online on March 24, 2022, and published in The Journal of Physical Chemistry C.
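For context, in the commonly used quasi-harmonic approximation (a standard form for such phonon-based calculations; the paper's exact formulation may differ), the Gibbs energy combines the electronic energy, the phonon free energy, and a pressure-volume term:

```latex
G(T,p) = \min_{V}\left[ E_{\mathrm{el}}(V) + F_{\mathrm{ph}}(T,V) + pV \right],
\qquad
F_{\mathrm{ph}}(T,V) = \sum_{\mathbf{q},s}\left[ \tfrac{1}{2}\hbar\omega_{\mathbf{q}s}(V)
+ k_{B}T \ln\!\left(1 - e^{-\hbar\omega_{\mathbf{q}s}(V)/k_{B}T}\right) \right]
```

Here the \(\omega_{\mathbf{q}s}\) are phonon frequencies; the tiny differences in these terms between polymorphs are exactly the small energy gaps the calculations must resolve.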
    According to the FNDMC results, hBN was the most stable structure, followed by rBN, cBN, and wBN. These results were consistent at both 0 K and 300 K (room temperature). However, the DFT estimations yielded conflicting results for two different approximations. Dr. Nakano explains these contradictory findings: “Our results reveal that the estimation of relative stabilities is greatly influenced by the exchange correlational functional, or the approximation used in the DFT calculation. As a result, a quantitative conclusion cannot be reached using DFT findings, and a more accurate approach, such as FNDMC, is required.”
    Notably, the FNDMC results were in agreement with those generated by other refined computational methods, such as “coupled cluster,” suggesting that FNDMC is an effective tool for dealing with polymorphs, especially those governed by vdW forces. The team also showed that the method can provide other important information, such as reliable reference energies, when experimental data are unavailable.
    Dr. Nakano is excited about the future prospects of the method in the area of materials science. “Our study demonstrates the ability of FNDMC to detect tiny energy changes involving vdW forces, which will stimulate the use of this method for other van der Waals materials,” he says. “Moreover, molecular simulations based on this accurate and reliable method could empower material designs, enabling the development of medicines and catalysts.”

  •

    Hybrid quantum bit based on topological insulators

    With their superior properties, topological qubits could help achieve a breakthrough in the development of a quantum computer designed for universal applications. So far, no one has yet succeeded in unambiguously demonstrating a quantum bit, or qubit for short, of this kind in a lab. However, scientists from Forschungszentrum Jülich have now gone some way to making this a reality. For the first time, they succeeded in integrating a topological insulator into a conventional superconducting qubit. Just in time for “World Quantum Day” on 14 April, their novel hybrid qubit made it to the cover of the latest issue of the journal Nano Letters.
    Quantum computers are regarded as the computers of the future. Using quantum effects, they promise to deliver solutions for highly complex problems that cannot be processed by conventional computers in a realistic time frame. However, the widespread use of such computers is still a long way off. Current quantum computers generally contain only a small number of qubits. The main problem is that they are highly prone to error. The bigger the system, the more difficult it is to fully isolate it from its environment.
    Many hopes are therefore pinned on a new type of quantum bit — the topological qubit. This approach is being pursued by several research groups as well as companies such as Microsoft. This type of qubit exhibits the special feature that it is topologically protected; the particular geometric structure of the superconductors as well as their special electronic material properties ensure that quantum information is retained. Topological qubits are therefore considered to be particularly robust and largely immune to external sources of decoherence. They also appear to enable fast switching times comparable to those achieved by the conventional superconducting qubits used by Google and IBM in current quantum processors.
    However, it is not yet clear whether we will ever succeed in actually producing topological qubits. This is because a suitable material basis is still lacking for unambiguously generating the special quasiparticles required, known as Majorana states. Until now, Majorana states have been clearly demonstrated only in theory, not in experiments. Hybrid qubits, as now constructed for the first time by the research group led by Dr. Peter Schüffelgen at the Peter Grünberg Institute (PGI-9) of Forschungszentrum Jülich, open up new possibilities in this area. They already contain topological materials at crucial points. This novel type of hybrid qubit therefore provides researchers with a new experimental platform for testing the behaviour of topological materials in highly sensitive quantum circuits.
    Story Source:
    Materials provided by Forschungszentrum Juelich. Note: Content may be edited for style and length.