More stories

  • Neural networks predict forces in jammed granular solids

    Granular matter is all around us. Examples include sand, rice, nuts, coffee and even snow. These materials are made of solid particles that are large enough not to experience thermal fluctuations. Instead, their state is determined by mechanical influences: shaking produces “granular gases,” while compression yields “granular solids.” An unusual feature of such solids is that forces within the material concentrate along essentially linear paths called force chains, whose shapes resemble lightning. Beyond granular solids, other complex materials such as dense emulsions, foams and even groups of cells can exhibit these force chains. Researchers led by the University of Göttingen used machine learning and computer simulations to predict the positions of force chains. The results were published in Nature Communications.
    The formation of force chains is highly sensitive to the way the individual grains interact, which makes it very difficult to predict where they will form. Combining computer simulations with tools from artificial intelligence, researchers at the Institute for Theoretical Physics, University of Göttingen, and at Ghent University tackled this challenge by developing a novel tool for predicting the formation of force chains in both frictionless and frictional granular matter. The approach uses a machine learning method known as a graph neural network (GNN). The researchers demonstrated that GNNs can be trained in a supervised fashion to predict the positions of force chains that arise when a granular system is deformed, given only the undeformed static structure.
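    For readers who want a concrete picture of the approach, a minimal sketch of this kind of supervised GNN setup is shown below. It uses plain PyTorch message passing over a particle contact graph; the feature choices, network sizes and the placeholder contact and label data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a graph neural network that takes an
# undeformed packing as a contact graph and predicts, per particle, the
# probability of belonging to a force chain after deformation.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of message passing over particle contacts."""
    def __init__(self, dim):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, edges):
        # x: (num_particles, dim) node features; edges: (2, num_contacts) index pairs
        src, dst = edges
        msgs = torch.relu(self.message(torch.cat([x[src], x[dst]], dim=-1)))
        agg = torch.zeros_like(x).index_add_(0, dst, msgs)  # sum incoming messages per particle
        return torch.relu(self.update(torch.cat([x, agg], dim=-1)))

class ForceChainGNN(nn.Module):
    def __init__(self, in_dim=4, hidden=64, layers=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.layers = nn.ModuleList([SimpleGNNLayer(hidden) for _ in range(layers)])
        self.readout = nn.Linear(hidden, 1)  # per-particle force-chain logit

    def forward(self, x, edges):
        h = torch.relu(self.embed(x))
        for layer in self.layers:
            h = layer(h, edges)
        return self.readout(h).squeeze(-1)

# Hypothetical training step: node features could hold particle radius, coordination
# number, local packing fraction, etc.; labels mark particles that end up in force chains.
model = ForceChainGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
node_feats = torch.randn(500, 4)              # placeholder static structure
edges = torch.randint(0, 500, (2, 2000))      # placeholder contact list
labels = torch.randint(0, 2, (500,)).float()  # placeholder force-chain labels
loss = nn.functional.binary_cross_entropy_with_logits(model(node_feats, edges), labels)
loss.backward()
optimizer.step()
```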
    “Understanding force chains is crucial in describing the mechanical and transport properties of granular solids and this applies in a wide range of circumstances — for example how sound propagates or how sand or a pack of coffee grains respond to mechanical deformation,” explains Dr Rituparno Mandal, Institute for Theoretical Physics, University of Göttingen. Mandal adds, “A recent study even suggests that living creatures such as ants exploit the effects of force chain networks when removing grains of soil for efficient tunnel excavation.”
    “We experimented with different machine learning-based tools and realised that a trained GNN can generalize remarkably well from training data, allowing it to predict force chains in new undeformed samples,” says Mandal. “We were fascinated by just how robust the method is: it works exceptionally well for many types of computer-generated granular materials. We are currently planning to extend this to experimental systems in the lab,” adds Corneel Casert, joint first author, from Ghent University. Senior author Professor Peter Sollich, Institute for Theoretical Physics, University of Göttingen, explains: “The efficiency of this new method is surprisingly high for different scenarios with varying system size, particle density, and composition of particle types. This means it will be useful in understanding force chains for many types of granular matter and systems.”
    Story Source:
    Materials provided by University of Göttingen.

  • Why 'erasure' could be key to practical quantum computing

    Researchers have discovered a new method for correcting errors in the calculations of quantum computers, potentially clearing a major obstacle to a powerful new realm of computing.
    In conventional computers, fixing errors is a well-developed field. Every cellphone requires checks and fixes to send and receive data over messy airwaves. Quantum computers offer enormous potential to solve certain complex problems that are impossible for conventional computers, but this power depends on harnessing extremely fleeting behaviors of subatomic particles. These computing behaviors are so ephemeral that even looking in on them to check for errors can cause the whole system to collapse.
    In a theoretical paper published Aug. 9 in Nature Communications, an interdisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, and collaborators Yue Wu and Shruti Puri at Yale University and Shimon Kolkowitz at the University of Wisconsin-Madison, showed that they could dramatically improve a quantum computer’s tolerance for faults, and reduce the amount of redundant information needed to isolate and fix errors. The new technique increases the acceptable error rate four-fold, from 1% to 4%, which is practical for quantum computers currently in development.
    “The fundamental challenge to quantum computers is that the operations you want to do are noisy,” said Thompson, meaning that calculations are prone to myriad modes of failure.
    In a conventional computer, an error can be as simple as a bit of memory accidentally flipping from a 1 to a 0, or as messy as one wireless router interfering with another. A common approach for handling such faults is to build in some redundancy, so that each piece of data is compared with duplicate copies. However, that approach increases the amount of data needed and creates more possibilities for errors. Therefore, it only works when the vast majority of information is already correct. Otherwise, checking wrong data against wrong data leads deeper into a pit of error.
    “If your baseline error rate is too high, redundancy is a bad strategy,” Thompson said. “Getting below that threshold is the main challenge.”
    Rather than focusing solely on reducing the number of errors, Thompson’s team essentially made errors more visible. The team delved deeply into the actual physical causes of error and engineered their system so that the most common source of error effectively erases the damaged data rather than merely corrupting it. Thompson said this behavior represents a particular kind of error known as an “erasure error,” which is fundamentally easier to weed out than data that is corrupted but still looks like all the other data.
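    A toy classical analogue helps show why known-location errors are easier to handle. The sketch below compares a simple three-copy repetition code under unknown bit flips versus flagged "erasures"; it is a deliberately simplified illustration of the principle, not the paper's qubit-level scheme.

```python
# Toy illustration (not the paper's scheme): known-location "erasures" are easier to
# correct than unknown bit flips, shown with a classical 3-copy repetition code.
import random

def decode_with_flips(bit, p, copies=3):
    """Unknown errors: each copy may silently flip; decode by majority vote."""
    received = [bit ^ (random.random() < p) for _ in range(copies)]
    return int(sum(received) * 2 > copies)

def decode_with_erasures(bit, p, copies=3):
    """Known-location errors: a flagged (erased) copy is simply ignored by the decoder."""
    received = [bit for _ in range(copies) if random.random() >= p]
    if not received:              # every copy erased: forced to guess at random
        return random.randint(0, 1)
    return int(sum(received) * 2 > len(received))

def logical_error_rate(decoder, p, trials=100_000):
    return sum(decoder(0, p) != 0 for _ in range(trials)) / trials

for p in (0.05, 0.10, 0.20):
    print(p, logical_error_rate(decode_with_flips, p),
          logical_error_rate(decode_with_erasures, p))
# The erasure decoder fails only when too many copies are lost at once, so it tolerates
# a noticeably higher physical error rate before the logical error rate grows.
```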

  • Push, pull or swirl: The many movements of cilia

    Cilia are tiny, hair-like structures on cells throughout our bodies that beat rhythmically to serve a variety of functions when they are working properly, including circulating cerebrospinal fluid in brains and transporting eggs in fallopian tubes.
    Defective cilia can lead to disorders including situs inversus — a condition where a person’s organs develop on the side opposite of where they usually are.
    Researchers know about many of cilia’s roles, but not exactly how they beat in the first place. This knowledge would be a step toward better understanding, and ultimately being able to treat, cilia-related diseases.
    A team of McKelvey School of Engineering researchers at Washington University in St. Louis, led by Louis Woodhams, senior lecturer, and Philip V. Bayly, the Lee Hunter Distinguished Professor and chair of the Department of Mechanical Engineering & Materials Science, has developed a mathematical model of the cilium in which beating arises from a mechanical instability caused by steady forces generated by the cilium’s motor protein, dynein.
    Results of the research appeared on the cover of the August issue of the Journal of the Royal Society Interface.
    Bayly’s lab has been working with cilia as a model to study vibration, wave motion and instability in mechanical and biomedical systems. As intricate nanomachines in their own right, cilia could inspire similarly propelled machines that can do useful tasks on the tiniest scales, maybe even for chemical sensing or drug delivery in the human body.
    The new model will allow the team to explore what happens when the motor protein exerts different forces, or when internal structures are more or less stiff, as a result of genetic or environmental factors.
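    The press release does not give the model's equations, but the underlying idea, that a steady (non-oscillating) force can drive sustained oscillation once it crosses a threshold, can be illustrated with a generic two-variable stability toy. The sketch below is purely illustrative and is not the authors' cilium model.

```python
# Generic toy (not the authors' model): a steady force parameter f pushes a pair of
# complex-conjugate eigenvalues across the imaginary axis, so a non-oscillating load
# produces spontaneous, growing oscillation (a flutter/Hopf-type mechanical instability).
import numpy as np

def eigenvalues(f, damping=1.0, omega=5.0):
    # Linearized dynamics dx/dt = A x for two coupled degrees of freedom.
    A = np.array([[f - damping, -omega],
                  [omega,        f - damping]])
    return np.linalg.eigvals(A)

for f in (0.5, 1.5, 2.5):
    lam = eigenvalues(f)
    growth = lam.real.max()
    print(f"f = {f:.1f}: eigenvalues {lam}, "
          f"{'oscillation grows' if growth > 0 else 'oscillation decays'}")
# Below f = damping the oscillatory mode decays; above it, the same steady force
# sustains growing oscillations.
```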
    Story Source:
    Materials provided by Washington University in St. Louis. Original written by Beth Miller.

  • CT-derived body composition with deep learning predicts cardiovascular events

    According to ARRS’ American Journal of Roentgenology (AJR), fully automated and normalized body composition analysis of abdominal CT has promise to augment traditional cardiovascular risk prediction models.
    “Visceral fat area from fully automated and normalized analysis of abdominal CT examinations predicts subsequent myocardial infarction or stroke in Black and White patients, independent of traditional weight metrics, and should be considered as an adjunct to BMI in risk models,” wrote first author Kirti Magudia, MD, PhD, currently from the department of radiology at Duke University School of Medicine.
    Dr. Magudia and colleagues’ retrospective study included 9,752 outpatients (5,519 women, 4,233 men; 890 self-reported Black, 8,862 self-reported White; mean age, 53.2 years) who underwent routine abdominal CT at Brigham and Women’s Hospital or Massachusetts General Hospital between January and December 2012 and had no major cardiovascular or oncologic diagnosis within 3 months of the examination. Fully automated deep learning body composition analysis was performed at the L3 vertebral level to determine three body composition areas: skeletal muscle area, visceral fat area, and subcutaneous fat area. Subsequent myocardial infarction or stroke was established via electronic health records.
    Ultimately, after normalization for age, sex, and race, visceral fat area derived from routine CT was associated with risk of myocardial infarction (HR 1.31 [1.03-1.67], p=.04 for overall effect) and stroke (HR 1.46 [1.07-2.00], p=.04 for overall effect) in multivariable models in Black and White patients; normalized weight, BMI, skeletal muscle area, and subcutaneous fat area were not.
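    As a rough sketch of this style of analysis (not the authors' code or data), one could z-score visceral fat area within age-, sex-, and race-specific reference groups and feed it into a Cox proportional-hazards model alongside traditional metrics; the file name and column names below are hypothetical.

```python
# Illustrative sketch (not the study's pipeline): normalize visceral fat area (VFA)
# within age/sex/race reference groups, then relate it to time-to-event outcomes
# with a multivariable Cox proportional-hazards model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("body_composition_cohort.csv")  # hypothetical file with the columns below

# Z-score VFA within each age-decade/sex/race reference group, mirroring the idea
# of normalized body composition metrics.
df["age_decade"] = (df["age"] // 10) * 10
grp = df.groupby(["age_decade", "sex", "race"])["visceral_fat_area"]
df["vfa_z"] = (df["visceral_fat_area"] - grp.transform("mean")) / grp.transform("std")

# Multivariable Cox model: normalized VFA plus a traditional weight metric as covariates.
cph = CoxPHFitter()
cph.fit(df[["time_to_mi_or_stroke", "event_observed", "vfa_z", "bmi"]],
        duration_col="time_to_mi_or_stroke", event_col="event_observed")
cph.print_summary()  # hazard ratios and confidence intervals per covariate
```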
    Noting that their large study demonstrates a pipeline for body composition analysis and age-, sex-, and race-specific reference values to add prognostic utility to clinical practice, “we anticipate that fully automated body composition analysis using machine learning could be widely adopted to harness latent value from routine imaging studies,” the authors of this AJR article concluded.
    Story Source:
    Materials provided by American Roentgen Ray Society.

  • Simple technique ushers in long-sought class of semiconductors

    Breakthroughs in modern microelectronics depend on understanding and manipulating the movement of electrons in metal. Reducing the thickness of metal sheets to the order of nanometers can enable exquisite control over how the metal’s electrons move. In so doing, one can impart properties that aren’t seen in bulk metals, such as ultrafast conduction of electricity. Now, researchers from Osaka University and collaborating partners have synthesized a novel class of nanostructured superlattices. This study enables an unusually high degree of control over the movement of electrons within metal semiconductors, which promises to enhance the functionality of everyday technologies.
    Precisely tuning the architecture of metal nanosheets, and thus facilitating advanced microelectronics functionalities, remains an ongoing line of work worldwide. In fact, several Nobel prizes have been awarded on this topic. Researchers conventionally synthesize nanostructured superlattices — regularly alternating layers of metals, sandwiched together — from materials of the same dimension; for example, sandwiched 2D sheets. A key aspect of the present researchers’ work is its facile fabrication of hetero-dimensional superlattices; for example, 1D nanoparticle chains sandwiched within 2D nanosheets.
    “Nanoscale hetero-dimensional superlattices are typically challenging to prepare, but can exhibit valuable physical properties, such as anisotropic electrical conductivity,” explains Yung-Chang Lin, senior author. “We developed a versatile means of preparing such structures, and in so doing we will inspire synthesis of a wide range of custom superstructures.”
    The researchers used chemical vapor deposition — a common nanofabrication technique in industry — to prepare vanadium-based superlattices. These magnetic semiconductors exhibit what is known as an anisotropic anomalous Hall effect (AHE), meaning directionally focused charge accumulation under in-plane magnetic-field conditions (in which the conventional Hall effect isn’t observed). Usually, the AHE is observed only at ultra-low temperatures. In the present research, the AHE was observed at room temperature and above, at least up to around the boiling point of water. Generation of the AHE at practical temperatures will facilitate its use in everyday technologies.
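    For readers unfamiliar with the effect, the standard empirical decomposition of the Hall resistivity into an ordinary, field-driven term and an anomalous, magnetization-driven term can be sketched as follows. This is a generic textbook relation with made-up parameters, not data or analysis from this study.

```python
# Generic illustration of the empirical Hall-resistivity decomposition
# rho_xy = R0 * B + Rs * M(B): the "anomalous" second term follows the sample's
# magnetization rather than the applied field, so it saturates once M saturates.
import numpy as np

def hall_resistivity(B, R0=1e-3, Rs=5e-2, M_sat=1.0, B_sat=0.2):
    M = M_sat * np.tanh(B / B_sat)  # simple saturating magnetization curve
    return R0 * B + Rs * M          # ordinary term + anomalous term

B = np.linspace(-1.0, 1.0, 11)      # applied field (arbitrary units)
for b, r in zip(B, hall_resistivity(B)):
    print(f"B = {b:+.1f}  rho_xy = {r:+.4f}")
# Near zero field the signal is dominated by the anomalous term, which is what makes
# a room-temperature anomalous Hall effect attractive for device readout.
```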
    “A key promise of nanotechnology is its provision of functionalities that you can’t get from bulk materials,” states Lin. “Our demonstration of an unconventional anomalous Hall effect at room temperature and above opens up a wealth of possibilities for future semiconductor technology, all accessible by conventional nanofabrication processes.”
    The present work will help improve the density of data storage, the efficiency of lighting, and the speed of electronic devices. By precisely controlling the nanoscale architecture of metals that are commonly used in industry, researchers will fabricate uniquely versatile technology that surpasses the functionality of natural materials.
    Story Source:
    Materials provided by Osaka University.

  • Artificial intelligence model outperforms clinicians in diagnosing pediatric ear infections

    An artificial-intelligence (AI) model built at Mass Eye and Ear was shown to be significantly more accurate than doctors at diagnosing pediatric ear infections in the first head-to-head evaluation of its kind, a research team working to develop the model for clinical use reported.
    According to a new study published August 16 in Otolaryngology-Head and Neck Surgery, the model, called OtoDX, was more than 95 percent accurate in diagnosing an ear infection in a set of 22 test images compared to 65 percent accuracy among a group of clinicians consisting of ENTs, pediatricians and primary care doctors, who reviewed the same images.
    When tested in a dataset of more than 600 inner ear images, the AI model had a diagnostic accuracy of more than 80 percent, representing a significant leap over the average accuracy of clinicians reported in medical literature.
    The model utilizes a type of AI called deep learning and was built from hundreds of photographs collected from children prior to undergoing surgery at Mass Eye and Ear for recurrent ear infections or fluid in the ears. The results signify a major step towards the development of a diagnostic tool that can one day be deployed to clinics to assist doctors during patient evaluations, according to the authors. An AI-based diagnostic tool can give providers, like pediatricians and urgent care clinics, an additional test to better inform their clinical decision-making.
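    The OtoDX code is not described in detail here, but a typical deep-learning setup for this kind of image classification task, fine-tuning a convolutional network on labeled otoscopic photographs, looks roughly like the sketch below; the folder layout and hyperparameters are placeholders, not details of the published model.

```python
# Generic sketch of an image-classification pipeline of the kind described
# (fine-tuning a CNN on labeled ear images); not the OtoDX implementation.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: train/infection/*.jpg, train/normal/*.jpg
train_ds = datasets.ImageFolder("train", transform=transform)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18()                      # in practice one would start from pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: infection vs. no infection

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```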
    “Ear infections are incredibly common in children yet frequently misdiagnosed, leading to delays in care or unnecessary antibiotic prescriptions,” said lead study author Matthew Crowson, MD, an otolaryngologist and artificial intelligence researcher at Mass Eye and Ear, and assistant professor of Otolaryngology-Head and Neck Surgery at Harvard Medical School. “This model won’t replace the judgment of clinicians but can serve to supplement their expertise and help them be more confident in their treatment decisions.”
    A common condition that is difficult to diagnose
    Ear infections are caused by a buildup of bacteria inside the middle ear. According to the National Institute on Deafness and Other Communication Disorders, at least five out of six children in the United States have had at least one ear infection before the age of three. When left untreated, ear infections can lead to hearing loss, developmental delays, complications such as meningitis, and, in some developing nations, death. Conversely, overtreating children when they don’t have an ear infection can lead to antibiotic resistance and render the medications ineffective against future infections. This latter problem is of significant public health importance.

  • New algorithm uncovers the secrets of cell factories

    Drug molecules and biofuels can be made to order by living cell factories, where biological enzymes do the job. Now researchers at Chalmers University of Technology have developed a computer model that can predict how fast enzymes work, making it possible to find the most efficient living factories, as well as to study difficult diseases.
    Enzymes are proteins found in all living cells. Their job is to act as catalysts that increase the rate of specific chemical reactions that take place in the cells. Enzymes thus play a crucial role in making life on Earth work and can be compared to nature’s small factories. They are also used in detergents and to manufacture, among other things, sweeteners, dyes and medicines. The potential uses are almost endless but are held back by the fact that studying enzymes is expensive and time-consuming.
    “To study every natural enzyme with experiments in a laboratory would be impossible; there are simply too many of them. But with our algorithm, we can predict which enzymes are most promising just by looking at the sequence of amino acids they are made up of,” says Eduard Kerkhoven, researcher in systems biology at Chalmers University of Technology and the study’s lead author.
    Only the most promising enzymes need to be tested
    The enzyme turnover number, or kcat value, describes how fast and efficiently an enzyme works and is essential for understanding a cell’s metabolism. In the new study, Chalmers researchers developed a computer model that can quickly calculate the kcat value. The only information needed is the order of the amino acids that build up the enzyme — something that is often widely available in open databases. After the model makes a first selection, only the most promising enzymes need to be tested in the lab.
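    The release does not detail the model itself, but the general recipe, turning an amino-acid sequence into numerical features and regressing against measured turnover numbers, can be sketched as follows. The sequences, kcat values and choice of regressor below are illustrative stand-ins, not the Chalmers model.

```python
# Simplified stand-in (not the published model): featurize enzymes by amino-acid
# composition and fit a regressor to predict log10(kcat) from sequence alone.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each of the 20 standard amino acids in the sequence."""
    seq = seq.upper()
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

# Hypothetical training data: sequences paired with measured kcat values (1/s).
train = [("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 120.0),
         ("MSLNFLDFEQPIAELEAKIDSLTAVSRQDEKLD", 3.5),
         ("MAGWNAYIDNLMADGTCQDAAIVGYKDSPSVWA", 45.0)]
X = np.stack([composition(s) for s, _ in train])
y = np.log10([k for _, k in train])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank a new candidate enzyme by predicted turnover number before any lab work.
candidate = "MTEYKLVVVGAGGVGKSALTIQLIQNHFVDEYD"
print(10 ** model.predict([composition(candidate)])[0])
```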
    Given the number of naturally occurring enzymes, the researchers believe that the new calculation model may be of great importance.

  • PetTrack lets owners know exactly where their dog is

    The pandemic gave people a lot more time with their dogs and cats, but the return to the office has disrupted that connection. Pet cameras can help, but they are needed in every room and don’t really tell owners what their furry friend has been up to unless all the footage is reviewed. Now, researchers at the Georgia Institute of Technology have created a new device that can put pet owners at ease.
    PetTrack uses a combination of sensors to give an accurate, real-time indoor location of an animal. Ultra-wideband (UWB) radio sensors locate the pet, and accelerometers determine whether it is sitting or moving, regardless of objects or walls in the way, giving owners more detail on what their pet is doing than a camera or GPS could. All of this is housed in a small sensor that can be attached to a collar for minimal invasiveness, and the data can be viewed via a compatible smartphone app.
    “PetTrack comprises two things: one is knowing the pet’s indoor location and second is trying to understand their activity,” said Ashutosh Dhekne, an assistant professor in the School of Computer Science (SCS).
    Dhekne and his students presented the research in the paper “PetTrack: Tracking Pet Location and Activity Indoors” at BodySys 2022 in July, a workshop on body-centric computing systems that was part of MobiSys 2022 in Portland, Oregon.
    How PetTrack Works
    PetTrack’s innovative combination of sensors makes it unique compared with other pet-monitoring devices. The UWB radio signal locates the pet in the home from up to 100 feet away, while the accelerometer acts as an inertial sensor that tracks the pet’s pose. This means owners can learn whether their pet is standing, sitting, or even lying down.
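    As a rough illustration of how UWB ranging can be turned into an indoor position (a generic least-squares trilateration sketch, not PetTrack's actual algorithm), distance measurements from the collar tag to a few fixed anchors can be combined as follows.

```python
# Generic trilateration sketch (not PetTrack's code): estimate a 2-D collar position
# from distance measurements to fixed UWB anchors via linear least squares.
import numpy as np

anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])  # anchor positions (m)
true_pos = np.array([3.0, 2.5])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.05, 4)  # noisy UWB ranges

# Linearize by subtracting the first anchor's range equation from the others.
x0, y0, d0 = anchors[0, 0], anchors[0, 1], ranges[0]
A, b = [], []
for (xi, yi), di in zip(anchors[1:], ranges[1:]):
    A.append([2 * (xi - x0), 2 * (yi - y0)])
    b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
estimate, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print("estimated position (m):", estimate)  # close to true_pos despite range noise
```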