More stories

  • Blocking the buzz: MXene composite could eliminate electromagnetic interference by absorbing it

    A recent discovery by materials science researchers in Drexel University’s College of Engineering might one day prevent electronic devices and components from going haywire when they’re too close to one another. A special coating that they developed, using a type of two-dimensional material called MXene, has been shown to be capable of absorbing and dissipating the electromagnetic fields that are the source of the problem.
    Buzzing, feedback or static are the noticeable manifestations of electromagnetic interference, a collision of the electromagnetic fields generated by electronic devices. Aside from the sounds, this phenomenon can also diminish the performance of the devices and lead to overheating and malfunctions if left unchecked.
    While researchers and technologists have progressively reduced this problem with each generation of devices, their strategy thus far has been to encase vital components with a shielding that deflects electromagnetic waves. But according to the Drexel team, this isn’t a sustainable solution.
    “Because the number of electronic devices will continue to grow, deflecting the electromagnetic waves they produce is really just a short-term solution,” said Yury Gogotsi, PhD, Distinguished University and Bach Professor in the College of Engineering, who led the research. “To truly solve this problem, we need to develop materials that will absorb and dissipate the interference. We believe we have found just such a material.”
    In a recent edition of Cell Reports Physical Science, Gogotsi’s team reported that combining MXene, a two-dimensional material they discovered more than a decade ago, with a conductive element called vanadium in a polymer solution produces a coating that can absorb electromagnetic waves.
    While researchers have previously demonstrated that MXenes are highly effective at warding off electromagnetic interference by reflecting it, adding vanadium carbide in a polymer matrix enhances two key characteristics of the material that improve its shielding performance.
    According to the researchers, adding vanadium, a material known for its durability and corrosion resistance that is used in steel alloys for space vehicles and nuclear reactors, to the MXene structure causes layers of the MXene to form a sort of electrochemical grid that is perfect for trapping ions. Using a microwave-transparent polymer also makes the material more permeable to electromagnetic waves.
    Combined, these properties produce a coating that can absorb, entrap and dissipate the energy of electromagnetic waves at greater than 90% efficiency, according to the research.
    “Remarkably, combining polyurethane, a common polymer used in wall paint, with a tiny amount of MXene filler — about one part MXene in 50 parts polyurethane — can absorb more than 90% of incident electromagnetic waves covering the entire band of radar frequencies — known as X-band frequencies,” said Meikang Han, PhD, who participated in the research as a post-doctoral researcher at Drexel. “Radio waves just disappear inside the MXene-polymer composite film — of course, nothing disappears completely; the energy of the waves is transformed into a very small amount of heat, which is easily dissipated by the material.”
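    For context, shielding performance is often quoted in decibels rather than percentages. A back-of-the-envelope conversion (an illustration, not a calculation from the paper) shows roughly what blocking a given share of incident power corresponds to:

    ```python
    import math

    def shielding_effectiveness_db(blocked_fraction: float) -> float:
        """Convert the fraction of incident power that is blocked (absorbed plus
        reflected) into shielding effectiveness: SE = 10 * log10(P_in / P_out)."""
        transmitted = 1.0 - blocked_fraction
        return 10.0 * math.log10(1.0 / transmitted)

    # Blocking more than 90% of incident power corresponds to roughly 10 dB;
    # 99% and 99.9% would correspond to about 20 dB and 30 dB.
    for fraction in (0.90, 0.99, 0.999):
        print(f"{fraction:.1%} blocked -> {shielding_effectiveness_db(fraction):.0f} dB")
    ```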
    A thin coating of the vanadium-based MXene material — less than the width of a human hair — could render a material impermeable to any electromagnetic waves in the X-band spectrum, which includes microwave radiation and is the frequency range most commonly produced by devices. Gogotsi predicts that this development could be important for high-stakes applications such as medical and military settings, where maintaining technological performance is crucial.
    “Our results show that vanadium-based MXenes could play a key role in the expansion of Internet of Things technology and 5G and 6G communications,” Gogotsi said. “This study provides a new direction for the development of thin, highly absorbent, MXene-based electromagnetic interference protection materials.”
    Story Source:
    Materials provided by Drexel University. Note: Content may be edited for style and length.

  • Artificial intelligence answers the call for quail information

    When states want to gauge quail populations, the process can be grueling, time-consuming and expensive.
    It means spending hours in the field listening for calls. Or leaving a recording device in the field to catch what sounds are made — only to spend hours later listening to that audio. Then, repeating this process until there’s enough information to start making population estimates.
    But a new model developed by researchers at the University of Georgia aims to streamline this process. By using artificial intelligence to analyze terabytes of recordings for quail calls, the process gives wildlife managers the ability to gather the data they need in a matter of minutes.
    “The model is very accurate, picking up between 80% and 100% of all calls even in the noisiest recordings. So, you could take a recording, put it through our model and it will tell you how many quail calls the recorder heard,” said James Martin, an associate professor at the UGA Warnell School of Forestry and Natural Resources who has been working on the project, in collaboration with the Georgia Department of Natural Resources, for about five years. “This new model allows you to analyze terabytes of data in seconds, and what that will allow us to do is scale up monitoring, so you can literally put hundreds of these devices out and cover a lot more area and do so with a lot less effort than in the past.”
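    The study’s model is a trained neural network, but the general workflow it automates (scan a long field recording in short windows, score each window for a call, and count detections) can be sketched in generic terms. In the sketch below, the score_window function is only a crude stand-in for the trained classifier, and the frequency band and thresholds are illustrative assumptions:

    ```python
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    def score_window(audio, sample_rate):
        """Crude stand-in for the trained call classifier: relative energy in a
        rough whistle band (assumed here to be ~1.5-4 kHz) within the window."""
        freqs, _, sxx = spectrogram(audio, fs=sample_rate)
        band = (freqs >= 1500) & (freqs <= 4000)
        return sxx[band].mean() / (sxx.mean() + 1e-12)

    def count_calls(path, window_s=3.0, hop_s=1.5, threshold=5.0):
        """Slide a short window over a long recording and count windows whose
        classifier score crosses a detection threshold."""
        sample_rate, audio = wavfile.read(path)
        if audio.ndim > 1:  # mix stereo down to mono
            audio = audio.mean(axis=1)
        win, hop = int(window_s * sample_rate), int(hop_s * sample_rate)
        detections = 0
        for start in range(0, len(audio) - win, hop):
            if score_window(audio[start:start + win], sample_rate) > threshold:
                detections += 1
        return detections

    # Example with a hypothetical file: print(count_calls("field_recording.wav"))
    ```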
    The software represents about five years of work by Martin, postdoctoral researcher Victoria Nolan and numerous key contributors who have worked with a code writer to create the model. It’s also part of a larger shift taking place in the field of wildlife research, where computer algorithms are now assisting with work that once took humans thousands of hours to complete.
    Increasingly, computers are getting smarter at, for example, identifying specific noises or certain traits in photos and sound recordings. For researchers such as Martin, it means hours once spent on tasks such as listening to audio or looking at game camera images can now be done by a computer, freeing up valuable time to focus on other aspects of a project.
    The new tool can also be a valuable resource for state and federal agencies looking for information on their quail populations, but with limited funds to spend on any one project. “So, I think this is something states might jump on as far as replacing their current monitoring with acoustic recording devices,” added Martin.
    The software’s success was recently documented in the journal Remote Sensing in Ecology and Conservation.
    As the software gets more use and is exposed to sounds from new geographic areas, Martin said, it gets even “smarter.” As it is, quail make several different kinds of calls. But when the software is exposed to a variety of sounds that aren’t quail, he said, it’s better able to distinguish the correct calls from the ambient noises of the grasses and trees around them.
    Over time, the software will grow more discerning.
    “So that’s why you have to keep giving it training data, and when you move geographies, you encounter new sounds that you didn’t train the model for,” he added. “It’s always about adaptation.”
    Story Source:
    Materials provided by University of Georgia. Original written by Kristen Morales. Note: Content may be edited for style and length.

  • AI takes guesswork out of lateral flow testing

    An artificial intelligence app to read COVID-19 lateral flow tests helped to reduce false results in a new trial out today.
    In a study published in Cell Reports Medicine, a team of researchers from the University of Birmingham, Durham University and Oxford University tested whether a machine learning algorithm could improve the accuracy of results from antigen lateral flow devices for COVID-19.
    The LFD AI Consortium team worked at UK Health Security Agency assisted test centres and with health care workers conducting self-testing to trial the AI app. More than 100,000 images were submitted as part of the study, and the team found that the algorithm was able to increase the sensitivity of results, meaning its ability to distinguish a true positive from a false negative, from 92% to 97.6%.
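    Sensitivity here is the share of genuinely positive samples that get called positive, so the reported jump from 92% to 97.6% means substantially fewer false negatives. A minimal illustration of the metric (the counts below are invented for the example, not taken from the study):

    ```python
    def sensitivity(true_positives: int, false_negatives: int) -> float:
        """Sensitivity (recall) = TP / (TP + FN): the share of genuinely positive
        samples that are correctly called positive."""
        return true_positives / (true_positives + false_negatives)

    # Hypothetical counts per 1,000 truly positive tests:
    print(sensitivity(920, 80))   # 0.92   (reading without the app, in this illustration)
    print(sensitivity(976, 24))   # 0.976  (AI-assisted reading, in this illustration)
    ```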
    Professor Andrew Beggs, Professor of Cancer Genetics & Surgery at the University of Birmingham and lead author of the study, said:
    “The widespread use of antigen lateral flow devices was significant not just during the pandemic; it has also introduced diagnostic testing to many more people in society. One of the drawbacks with LFD testing for Covid, pregnancy and any other future use is the ‘faint line’ question — where we can’t quite tell if it’s a positive or not.
    “The study looked at the feasibility of using machine learning to take the guesswork out of the faint line tests, and we’re pleased to see that the app saw an increase in the sensitivity of the tests, reducing the numbers of false negatives. This type of technology holds promise for lots of applications, both to reduce uncertainty about test results and to provide crucial support for visually impaired people.”
    Professor Camila Caiado, Professor of Statistics at Durham University and chief statistician on the project, said:
    “The increase in sensitivity and overall accuracy is significant and it shows the potential of this app by reducing the number of false negatives and future infections. Crucially, the method can also be easily adapted to the evaluation of other digital readers for lateral flow type devices.”
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Introducing FathomNet: New open-source image database unlocks the power of AI for ocean exploration

    A new collaborative effort between MBARI and other research institutions is leveraging the power of artificial intelligence and machine learning to accelerate efforts to study the ocean.
    In order to manage impacts from climate change and other threats, researchers urgently need to learn more about the ocean’s inhabitants, ecosystems, and processes. As scientists and engineers develop advanced robotics that can visualize marine life and environments to monitor changes in the ocean’s health, they face a fundamental problem: The collection of images, video, and other visual data vastly exceeds researchers’ capacity for analysis.
    FathomNet is an open-source image database that uses state-of-the-art data processing algorithms to help process the backlog of visual data. Using artificial intelligence and machine learning will alleviate the bottleneck for analyzing underwater imagery and accelerate important research around ocean health.
    “A big ocean needs big data. Researchers are collecting large quantities of visual data to observe life in the ocean. How can we possibly process all this information without automation? Machine learning provides a pathway forward; however, these approaches rely on massive datasets for training. FathomNet has been built to fill this gap,” said MBARI Principal Engineer Kakani Katija.
    Project co-founders Katija, Katy Croff Bell (Ocean Discovery League), and Ben Woodward (CVision AI), along with members of the extended FathomNet team, detailed the development of this new image database in a recent research publication in Scientific Reports.
    Recent advances in machine learning enable fast, sophisticated analysis of visual data, but the use of artificial intelligence in ocean research has been limited by the lack of a standard set of existing images that could be used to train the machines to recognize and catalog underwater objects and life. FathomNet addresses this need by aggregating images from multiple sources to create a publicly available, expertly curated underwater image training database.
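    FathomNet is a training resource rather than a model, and the team’s own pipelines are not reproduced here. As a generic sketch of how a curated, labeled image collection like this is typically used, a pretrained classifier can be fine-tuned on it with a few lines of PyTorch; the folder path and training settings below are invented for the example:

    ```python
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Hypothetical local folder of curated, labeled images (one sub-folder per
    # concept), standing in for data exported from an image database like FathomNet.
    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder("curated_images/", transform=tfm)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Fine-tune a pretrained backbone to recognize the database's concept labels.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # one pass over the curated images
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    ```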

  • A new AI model can accurately predict human response to novel drug compounds

    The journey between identifying a potential therapeutic compound and Food and Drug Administration approval of a new drug can take well over a decade and cost upwards of a billion dollars. A research team at the CUNY Graduate Center has created an artificial intelligence model that could significantly improve the accuracy and reduce the time and cost of the drug development process. Described in a newly published paper in Nature Machine Intelligence, the new model, called CODE-AE, can screen novel drug compounds to accurately predict efficacy in humans. In tests, it was also able to theoretically identify personalized drugs for over 9,000 patients that could better treat their conditions. Researchers expect the technique to significantly accelerate drug discovery and precision medicine.
    Accurate and robust prediction of patient-specific responses to a new chemical compound is critical to discover safe and effective therapeutics and select an existing drug for a specific patient. However, it is unethical and infeasible to do early efficacy testing of a drug in humans directly. Cell or tissue models are often used as a surrogate of the human body to evaluate the therapeutic effect of a drug molecule. Unfortunately, the drug effect in a disease model often does not correlate with the drug efficacy and toxicity in human patients. This knowledge gap is a major factor in the high costs and low productivity rates of drug discovery.
    “Our new machine learning model can address the translational challenge from disease models to humans,” said Lei Xie, a professor of computer science, biology and biochemistry at the CUNY Graduate Center and Hunter College and the paper’s senior author. “CODE-AE uses biology-inspired design and takes advantage of several recent advances in machine learning. For example, one of its components uses similar techniques in Deepfake image generation.”
    The new model can provide a workaround to the problem of insufficient patient data for training a generalized machine learning model, said You Wu, a CUNY Graduate Center Ph.D. student and co-author of the paper. “Although many methods have been developed to utilize cell-line screens for predicting clinical responses, their performances are unreliable due to data incongruity and discrepancies,” Wu said. “CODE-AE can extract intrinsic biological signals masked by noise and confounding factors and effectively alleviates the data-discrepancy problem.”
    As a result, CODE-AE significantly improves accuracy and robustness over state-of-the-art methods in predicting patient-specific drug responses purely from cell-line compound screens.
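    The published CODE-AE architecture has more pieces than can be shown here, but the core idea described above (learn a representation that separates shared biological signal from source-specific confounders, so a predictor trained on cell-line screens can transfer to patient profiles) can be sketched very loosely as follows. Module names, layer sizes and the loss terms are illustrative assumptions, not the paper’s specification:

    ```python
    import torch
    import torch.nn as nn

    class DisentanglingAutoencoder(nn.Module):
        """Toy sketch: a shared encoder captures biology common to cell lines and
        patients, a private encoder captures source-specific confounders, and a
        decoder reconstructs the expression profile from both parts."""
        def __init__(self, n_genes=2000, latent=64):
            super().__init__()
            self.shared_encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                                nn.Linear(256, latent))
            self.private_encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                                 nn.Linear(256, latent))
            self.decoder = nn.Sequential(nn.Linear(2 * latent, 256), nn.ReLU(),
                                         nn.Linear(256, n_genes))

        def forward(self, x):
            z_shared = self.shared_encoder(x)
            z_private = self.private_encoder(x)
            recon = self.decoder(torch.cat([z_shared, z_private], dim=1))
            return recon, z_shared, z_private

    model = DisentanglingAutoencoder()
    expression = torch.randn(8, 2000)  # fake batch of expression profiles
    recon, z_shared, _ = model(expression)
    # Only the reconstruction loss is shown; the real method adds alignment and
    # separation terms, then trains a drug-response predictor on z_shared.
    loss = nn.functional.mse_loss(recon, expression)
    print(loss.item())
    ```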
    The research team’s next challenge in advancing the technology’s use in drug discovery is developing a way for CODE-AE to reliably predict the effect of a new drug’s concentration and metabolization in human bodies. The researchers also noted that the AI model could potentially be tweaked to accurately predict human side effects to drugs.
    This work was supported by the National Institute of General Medical Sciences and the National Institute on Aging.
    Story Source:
    Materials provided by The Graduate Center, CUNY. Note: Content may be edited for style and length.

  • Deep learning tool identifies bacteria in micrographs

    Omnipose, a deep learning software tool, is helping to solve the challenge of identifying varied and minuscule bacteria in microscopy images. It has gone beyond this initial goal to identify several other types of tiny objects in micrographs.
    The UW Medicine microbiology lab of Joseph Mougous and the University of Washington physics and bioengineering lab of Paul A. Wiggins tested the tool. It was developed by University of Washington physics graduate student Kevin J. Cutler and his team.
    Mougous said that Cutler, as a physics student, “demonstrated an unusual interest in immersing himself in a biology environment so that he could learn first-hand about problems in need of solution in this field. He came over to my lab and quickly found one that he solved in spectacular fashion.”
    Their results are reported in the Oct. 17 edition of Nature Methods.
    The scientists found that Omnipose, trained on a large database of bacterial images, performed well in characterizing and quantifying the myriad of bacteria in mixed microbial cultures and eliminated some of the errors that can occur in its predecessor, Cellpose.
    Moreover, the software wasn’t easily fooled by extreme changes in a cell’s shape due to antibiotic treatment or antagonism by chemicals produced during interbacterial aggression. In fact, the program showed that it could even detect cell intoxication in a trial using E. coli.
    In addition, Omnipose did well in overcoming recognition problems due to differences in the optical characteristics across diverse bacteria.
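    Omnipose’s own interface is documented with the package and is not reproduced here; as a tool-agnostic sketch of the quantification step the study describes, counting cells and measuring their shapes once a labeled segmentation mask exists, one could use scikit-image as follows. The mask here is synthetic, standing in for Omnipose’s output on a real micrograph:

    ```python
    import numpy as np
    from skimage import measure

    # Synthetic labeled mask standing in for a segmentation result: 0 = background,
    # 1..N = individual cells (Omnipose would produce such a mask from a micrograph).
    mask = np.zeros((64, 64), dtype=int)
    mask[5:15, 5:40] = 1    # a long, rod-shaped "cell"
    mask[30:40, 30:40] = 2  # a rounder "cell"

    cells = measure.regionprops(mask)
    print(f"{len(cells)} cells detected")
    for cell in cells:
        elongation = cell.major_axis_length / max(cell.minor_axis_length, 1e-9)
        print(f"label {cell.label}: area={cell.area} px, elongation={elongation:.1f}")
    ```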

  • New approach would improve user access to electric vehicle charging stations

    Researchers from North Carolina State University have developed a dynamic computational tool to help improve user access to electric vehicle (EV) charging stations, with the goal of making EVs more attractive for drivers.
    “We already know that there is a need for EV charging networks that are flexible, in order to support the adoption of EVs,” says Leila Hajibabai, corresponding author of a paper on the work and an assistant professor in NC State’s Fitts Department of Industrial and Systems Engineering. “That’s because there is tremendous variability in when and where people want to charge their vehicles, how much time they can spend at a charging station, how long it takes to charge their vehicles, and so on.
    “The fundamental question we wanted to address with this work is: What is the best way to manage existing charging station infrastructure in order to best meet the demands of electric vehicle users?”
    To answer that question, the researchers wanted to take the user’s perspective, so they focused on questions that are important to EV drivers. How long will it take me to reach a charging station? What is the cost of using the charging station? How long might I have to wait to access a charging station? And what sort of fines are there if I stay at a charging station beyond the time limit?
    The researchers developed a technique that accounts for all of these factors in a complex computational model that makes use of a game theory framework.
    The technique does two things. First, it helps users find the nearest charging facility that meets their needs. Second, it has a dynamic system that charging station operators can use to determine how long vehicles can spend at a charging station before they need to make way for the next vehicle.
    “These outcomes are themselves dynamic — they evolve as additional data comes in about how users are making use of charging facilities,” Hajibabai says.
    For example, a user’s nearest available charging facility may change, depending on whether any spaces are available. And the amount of time users can spend at a charging station may change from day to day to reflect the reality of how people are using different charging facilities.
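    The model itself is formulated as a game-theoretic optimization, but the user-side trade-off it captures (travel time, price, expected wait and the risk of an overstay fine) can be illustrated with a much-simplified scoring sketch; the field names and weights below are invented for the example:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Station:
        name: str
        travel_min: float         # time to reach the station
        price_per_kwh: float      # charging price at the station
        expected_wait_min: float  # predicted wait for a free slot
        overstay_fine: float      # fine if the dwell-time limit is exceeded

    def user_cost(s: Station, kwh_needed=30.0, value_of_time=0.30, overstay_risk=0.1):
        """Weighted sum of the questions an EV driver asks: how far, how much,
        how long a wait, and what fine do I risk? (Weights are illustrative.)"""
        time_cost = value_of_time * (s.travel_min + s.expected_wait_min)
        return time_cost + kwh_needed * s.price_per_kwh + overstay_risk * s.overstay_fine

    stations = [
        Station("Garage A", travel_min=5, price_per_kwh=0.35, expected_wait_min=20, overstay_fine=40),
        Station("Mall B", travel_min=12, price_per_kwh=0.30, expected_wait_min=0, overstay_fine=25),
    ]
    print(min(stations, key=user_cost).name)  # station with the lowest combined cost
    ```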
    “There’s no clear real-world benchmark that we can use to assess the extent to which our technique would improve user access to charging facilities,” Hajibabai says. “But in simulations, the technique did improve user access. The simulations also suggest that flexibility in when charging station slots are available was a key predictor of which stations users would visit.
    “A next step would be to work with existing charging station networks to pilot the technique and assess its performance in a real-world setting.”
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  • Some screen time better than none during children's concussion recovery

    Too much screen time can slow children’s recovery from concussions, but new research from UBC and the University of Calgary suggests that banning screen time is not the answer.
    The researchers looked for links between the self-reported screen time of more than 700 children aged 8-16 in the first 7-10 days following an injury, and symptoms reported by them and their caregivers over the following six months.
    The children whose concussion symptoms cleared up the fastest had engaged in a moderate amount of screen time. “We’ve been calling this the ‘Goldilocks’ group, because it appears that spending too little or too much time on screens isn’t ideal for concussion recovery,” said Dr. Molly Cairncross, an assistant professor at Simon Fraser University who conducted the research while a postdoctoral fellow working with associate professor Dr. Noah Silverberg in UBC’s psychology department. “Our findings show that the common recommendation to avoid smartphones, computers and televisions as much as possible may not be what’s best for kids.”
    The study was part of a larger concussion project called Advancing Concussion Assessment in Pediatrics (A-CAP) led by psychology professor Dr. Keith Yeates at the University of Calgary and funded by the Canadian Institutes of Health Research. The data came from participants aged 8-16 who had suffered either a concussion or an orthopaedic injury, such as a sprained ankle or broken arm, and sought care at one of five emergency departments in Canada.
    The purpose of including children who had orthopaedic injuries was to compare their recoveries with the group who had concussions.
    Patients in the concussion group generally had worse symptoms than their counterparts with orthopaedic injuries, but within the concussion group it was not simply a matter of symptoms worsening with more screen time. Children with minimal screen time recovered more slowly, too.