More stories

  •

    New shortcut enables faster creation of spin pattern in magnet

    Physicists have discovered a much faster approach to create a pattern of spins in a magnet. This ‘shortcut’ opens a new chapter in topology research. Interestingly, this discovery also offers an additional method to achieve more efficient magnetic data storage. The research will be published on 5 October in Nature Materials.
    Physicists previously demonstrated that laser light can create a pattern of magnetic spins. Now they have discovered a new route that enables this to be done much more quickly, in less than 300 picoseconds (a picosecond is one millionth of a millionth of a second). This is much faster than was previously thought possible.
    Useful for data storage: skyrmions
    Magnets consist of many small magnets, which are called spins. Normally, all the spins point in the same direction, which determines the north and south poles of the magnet. But the directions of the spins together sometimes form vortex-like configurations known as skyrmions.
    “These skyrmions in magnets could be used as a new type of data storage,” explains Johan Mentink, physicist at Radboud University. For a number of years, Radboud scientists have been looking for optimal ways to control magnetism with laser light and ultimately use it for more efficient data storage. In this technique, very short pulses of light are fired at a magnetic material. This reverses the magnetic spins in the material, which changes a bit from a 0 to a 1.
    “Once the magnetic spins take the vortex-like shape of a skyrmion, this configuration is hard to erase,” says Mentink. “Moreover, these skyrmions are only a few nanometers (a nanometer is one billionth of a meter) in size, so you can store a lot of data on a very small piece of material.”
    Shortcut
    The phase transition between these two states in a magnet — from all the spins pointing in one direction to a skyrmion — is comparable to a road over a high mountain. The researchers have discovered that you can take a ‘shortcut’ through the mountain by heating the material very quickly with a laser pulse, which lowers the threshold for the phase transition for a very short time.
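    The barrier picture can be made concrete with a toy Arrhenius-style rate estimate (all numbers are invented for illustration; this is not the physics model used in the study):

```python
import math

def transition_rate(barrier, temperature, attempt_rate=1.0):
    """Toy Arrhenius rate: attempt_rate * exp(-barrier / temperature).
    Arbitrary units; purely illustrative, not the study's model."""
    return attempt_rate * math.exp(-barrier / temperature)

# Normal conditions: the barrier is high relative to the temperature,
# so spontaneous transitions to the skyrmion state are rare.
slow = transition_rate(barrier=10.0, temperature=1.0)

# A short laser pulse transiently lowers the effective barrier --
# the 'shortcut through the mountain' described above.
fast = transition_rate(barrier=2.0, temperature=1.0)

print(f"rate increases by a factor of {fast / slow:.0f}")
```

    Even a modest lowering of the barrier changes the rate exponentially, which is why a brief pulse can be enough to drive the transition.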
    A remarkable aspect of this new approach is that the material is first brought into a very chaotic state, in which the topology — which can be seen as the number of skyrmions in the material — fluctuates strongly. The researchers discovered this approach by combining X-rays generated by the European free-electron laser in Hamburg with extremely advanced electron microscopy and spin dynamics simulations. “This research therefore involved an enormous team effort,” Mentink emphasises.
    New possibilities
    This fundamental discovery has opened a new chapter in topology research. Mentink expects that many more scientists will now start to look for similar ways to ‘take a shortcut through the mountain’ in other materials.
    This discovery also enables new approaches to create faster and more efficient data storage. There is an increasing need for this, for example due to the gigantic, energy-guzzling data centres that are required for massive data storage in the cloud. Magnetic skyrmions can provide a solution to this problem. Because they are very small and can be created very quickly with light, a lot of information can potentially be stored very quickly and efficiently on a small area.

    Story Source:
    Materials provided by Radboud University Nijmegen. Note: Content may be edited for style and length.

  •

    Deep learning gives drug design a boost

    When you take a medication, you want to know precisely what it does. Pharmaceutical companies go through extensive testing to ensure that you do.
    With a new deep learning-based technique created at Rice University’s Brown School of Engineering, they may soon get a better handle on how drugs in development will perform in the human body.
    The Rice lab of computer scientist Lydia Kavraki has introduced Metabolite Translator, a computational tool that predicts metabolites, the products of interactions between small molecules like drugs and enzymes.
    The Rice researchers take advantage of deep-learning methods and the availability of massive reaction datasets to give developers a broad picture of what a drug will do. The method is unconstrained by rules that companies use to determine metabolic reactions, opening a path to novel discoveries.
    “When you’re trying to determine if a compound is a potential drug, you have to check for toxicity,” Kavraki said. “You want to confirm that it does what it should, but you also want to know what else might happen.”
    The research by Kavraki, lead author and graduate student Eleni Litsa and Rice alumna Payel Das of IBM’s Thomas J. Watson Research Center, is detailed in the Royal Society of Chemistry journal Chemical Science.
    The researchers trained Metabolite Translator to predict metabolites through any enzyme, but measured its success against the existing rules-based methods that are focused on the enzymes in the liver. These enzymes are responsible for detoxifying and eliminating xenobiotics, like drugs, pesticides and pollutants. However, metabolites can be formed through other enzymes as well.

    “Our bodies are networks of chemical reactions,” Litsa said. “They have enzymes that act upon chemicals and may break or form bonds that change their structures into something that could be toxic, or cause other complications. Existing methodologies focus on the liver because most xenobiotic compounds are metabolized there. With our work, we’re trying to capture human metabolism in general.
    “The safety of a drug does not depend only on the drug itself but also on the metabolites that can be formed when the drug is processed in the body,” Litsa said.
    The rise of machine learning architectures that operate on structured data, such as chemical molecules, makes the work possible, she said. The Transformer, a sequence translation method introduced in 2017, has found wide use in language translation.
    Metabolite Translator is based on SMILES (for “simplified molecular-input line-entry system”), a notation method that uses plain text rather than diagrams to represent chemical molecules.
    “What we’re doing is exactly the same as translating a language, like English to German,” Litsa said.
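    Since SMILES is plain text, a molecule can be split into tokens just like words in a sentence before being fed to a sequence model such as the Transformer. A minimal sketch (the regular expression below covers only a simplified subset of SMILES and is not the tokenizer used in Metabolite Translator):

```python
import re

# Simplified SMILES tokenizer: bracketed atoms, two-letter halogens,
# common organic-subset atoms, bonds, branches and ring-closure digits.
# This is an illustrative subset of the SMILES grammar, not all of it.
SMILES_TOKEN = re.compile(r"\[[^\]]+\]|Br|Cl|[BCNOPSFI]|[bcnops]|[=#()\d+\-]")

def tokenize(smiles):
    return SMILES_TOKEN.findall(smiles)

# Aspirin written in SMILES notation:
print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))
```

    A sequence-to-sequence model then maps the token sequence of a drug to the token sequence of a predicted metabolite, just as a translation model maps an English sentence to a German one.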

    Due to the lack of experimental data, the lab used transfer learning to develop Metabolite Translator. They first pre-trained a Transformer model on 900,000 known chemical reactions and then fine-tuned it with data on human metabolic transformations.
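    The two-stage recipe (pre-train on plentiful general reactions, then fine-tune on scarce metabolic data) can be sketched with a deliberately crude stand-in for the Transformer: a weighted frequency model. All data and weights below are invented for illustration:

```python
from collections import Counter

# Stage 1: "pre-train" on a large, general corpus of reaction products.
general_products = ["CCO", "CCO", "CC(=O)O", "C1CCCCC1"] * 1000
model = Counter(general_products)

# Stage 2: "fine-tune" on a small human-metabolism corpus, weighted
# heavily so the scarce domain data can override the general prior.
metabolic_products = ["CC(=O)O"] * 10
fine_tune_weight = 500
for product in metabolic_products:
    model[product] += fine_tune_weight

print(model.most_common(1))
```

    The real system updates the weights of a neural network rather than counts, but the logic is the same: general chemistry provides the prior, and the small metabolic dataset specialises it.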
    The researchers compared Metabolite Translator results with those from several other predictive techniques by analyzing known SMILES sequences of 65 drugs and 179 metabolizing enzymes. Though Metabolite Translator was trained on a general dataset not specific to drugs, it performed as well as commonly used rule-based methods that have been specifically developed for drugs. But it also identified enzymes that are not commonly involved in drug metabolism and were not found by existing methods.
    “We have a system that can predict equally well with rule-based systems, and we didn’t put any rules in our system that require manual work and expert knowledge,” Kavraki said. “Using a machine learning-based method, we are training a system to understand human metabolism without the need for explicitly encoding this knowledge in the form of rules. This work would not have been possible two years ago.”
    Kavraki is the Noah Harding Professor of Computer Science, a professor of bioengineering, mechanical engineering and electrical and computer engineering, and director of Rice’s Ken Kennedy Institute. Rice University and the Cancer Prevention and Research Institute of Texas supported the research.

  •

    Efficient pollen identification

    From pollen forecasting to honey analysis and climate-related changes in plant-pollinator interactions, pollen analysis plays an important role in many areas of research. Microscopy is still the gold standard, but it is very time-consuming and requires considerable expertise. In cooperation with Technische Universität (TU) Ilmenau, scientists from the Helmholtz Centre for Environmental Research (UFZ) and the German Centre for Integrative Biodiversity Research (iDiv) have now developed a method that allows them to efficiently automate the process of pollen analysis. Their study has been published in the specialist journal New Phytologist.
    Pollen is produced in a flower’s stamens and consists of a multitude of minute pollen grains, which contain the plant’s male genetic material necessary for its reproduction. The pollen grains get caught in the tiny hairs of nectar-feeding insects as they brush past and are thus transported from flower to flower. Once there, in the ideal scenario, a pollen grain will cling to the sticky stigma of the same plant species, which may then result in fertilisation. “Although pollinating insects perform this pollen delivery service entirely incidentally, its value is immeasurably high, both ecologically and economically,” says Dr. Susanne Dunker, head of the working group on imaging flow cytometry at the Department for Physiological Diversity at UFZ and iDiv. “Against the background of climate change and the accelerating loss of species, it is particularly important for us to gain a better understanding of these interactions between plants and pollinators.” Pollen analysis is a critical tool in this regard.
    Each species of plant has pollen grains of a characteristic shape, surface structure and size. When it comes to identifying and counting pollen grains — measuring between 10 and 180 micrometres — in a sample, microscopy has long been considered the gold standard. However, working with a microscope requires a great deal of expertise and is very time-consuming. “Although various approaches have already been proposed for the automation of pollen analysis, these methods are either unable to differentiate between closely related species or do not deliver quantitative findings about the number of pollen grains contained in a sample,” continues UFZ biologist Dr. Dunker. Yet it is precisely this information that is critical to many research subjects, such as the interaction between plants and pollinators.
    In their latest study, Susanne Dunker and her team of researchers have developed a novel method for the automation of pollen analysis. To this end they combined the high throughput of imaging flow cytometry — a technique used for particle analysis — with a form of artificial intelligence (AI) known as deep learning to design a highly efficient analysis tool, which makes it possible to both accurately identify the species and quantify the pollen grains contained in a sample.
    Imaging flow cytometry is a process that is primarily used in the medical field to analyse blood cells but is now also being repurposed for pollen analysis. “A pollen sample for examination is first added to a carrier liquid, which then flows through a channel that becomes increasingly narrow,” says Susanne Dunker, explaining the procedure. “The narrowing of the channel causes the pollen grains to separate and line up as if they are on a string of pearls, so that each one passes through the built-in microscope element on its own and images of up to 2,000 individual pollen grains can be captured per second.” Two normal microscopic images are taken plus ten fluorescence microscopic images per grain of pollen. When excited with light radiated at certain wavelengths by a laser, the pollen grains themselves emit light. “The area of the colour spectrum in which the pollen fluoresces — and at which precise location — is sometimes very specific. This information provides us with additional traits that can help identify the individual plant species,” reports Susanne Dunker. In the deep learning process, an algorithm works in successive steps to abstract the original pixels of an image to a greater and greater degree in order to finally extract the species-specific characteristics.
    “Microscopic images, fluorescence characteristics and high throughput have never been used in combination for pollen analysis before — this really is an absolute first.” Where the analysis of a relatively straightforward sample takes, for example, four hours under the microscope, the new process takes just 20 minutes. UFZ has therefore applied for a patent for the novel high-throughput analysis method, with its inventor, Susanne Dunker, receiving the UFZ Technology Transfer Award in 2019.
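    The throughput figures quoted above can be sanity-checked with simple arithmetic (using only numbers from the text):

```python
grains_per_second = 2000      # pollen grains imaged per second
images_per_grain = 2 + 10     # two brightfield plus ten fluorescence images

images_per_second = grains_per_second * images_per_grain
print(images_per_second, "images per second")

# Speed-up for the example sample mentioned in the text:
manual_minutes = 4 * 60       # about four hours under the microscope
automated_minutes = 20        # about 20 minutes with the new pipeline
print(manual_minutes / automated_minutes, "times faster")
```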
    The pollen samples examined in the study came from 35 species of meadow plants, including yarrow, sage, thyme and various species of clover such as white, mountain and red clover. In total, the researchers prepared around 430,000 images, which formed the basis for a data set. In cooperation with TU Ilmenau, this data set was then transferred using deep learning into a highly efficient tool for pollen identification. In subsequent analyses, the researchers tested the accuracy of their new method, comparing unknown pollen samples from the 35 plant species against the data set. “The result was more than satisfactory — the level of accuracy was 96 per cent,” says Susanne Dunker. Even species that are difficult to distinguish from one another, and indeed present experts with a challenge under the microscope, could be reliably identified. The new method is therefore not only extremely fast but also highly precise.
    In the future, the new process for automated pollen analysis will play a key role in answering critical research questions about interactions between plants and pollinators. How important are certain pollinators like bees, flies and bumblebees for particular plant species? What would be the consequences of losing a species of pollinating insect or a plant? “We are now able to evaluate pollen samples on a large scale, both qualitatively and, at the same time, quantitatively. We are constantly expanding our pollen data set of insect-pollinated plants for that purpose,” comments Susanne Dunker. She aims to expand the data set to include at least the 500 plant species whose pollen is significant as a food source for honeybees.

  •

    Virtual follow-up care is more convenient and just as beneficial to surgical patients

    Surgical patients who participate in virtual follow-up visits after their operations spend a similar amount of time with surgical team members as those who meet face-to-face. Moreover, these patients benefit by spending less time waiting at and traveling to the clinic for in-person appointments, according to research findings presented at the virtual American College of Surgeons Clinical Congress 2020.
    “I think it’s really valuable for patients to understand that, in the virtual space scenario, they are still going to get quality time with their surgical team,” said lead study author Caroline Reinke, MD, FACS, associate professor of surgery at Atrium Health in Charlotte, N.C. “A virtual appointment does not shorten that time, and there is still an ability to answer questions, connect, and address ongoing medical care.”
    Due to the Coronavirus Disease 2019 (COVID-19) pandemic and the widespread adoption of technology, many surgical patients are being offered virtual appointments in place of traditional in-person visits. The researchers say this is one of the first studies to look at how patients spend their time in post-operative virtual visits compared with face-to-face consultations.
    The study design was a non-inferiority, randomized controlled trial that involved more than 400 patients who underwent laparoscopic appendectomy or cholecystectomy at two hospitals in Charlotte, N.C. and were randomized 2:1 to a post-discharge virtual visit or to an in-person visit. The study began in August 2017 but was put on hold in March 2020 due to COVID-19.
    “Other studies have looked at the total visit time, but they haven’t been able to break down the specific amount of time the patient spends with the provider. And we wanted to know if that was the same or different between a virtual visit and an in-person visit,” Dr. Reinke said. “We wanted to get down to the nitty gritty of how much face time was actually being spent between the surgical team member and the patient.”
    Researchers tracked total time the patients spent checking in, waiting in the waiting room and exam room, meeting with the surgical team member, and being discharged after the exam. For in-person visits, on-site waiting time and an estimated drive time was factored into the overall time commitment.

    Just 64 percent of patients completed the follow-up visit. “Sometimes, patients are doing so well after minimally invasive surgery that about 30 percent of these patients don’t show up for a post-operative visit,” Dr. Reinke said.
    Overall, results showed that the total clinic time was longer for in-person visits than virtual visits (58 minutes vs. 19 minutes). However, patients in both groups spent the same amount of face time with a member of their surgical team (8.3 minutes vs. 8.2 minutes) discussing their post-operative recovery.
    “I was pleasantly surprised that the amount of time patients spent with the surgical team member was the same, because one of the main concerns with virtual visits is that patients feel disconnected and that there isn’t as much value in it,” Dr. Reinke said.
    Importantly, patients placed a high value on convenience and flexibility. “We received overwhelmingly positive responses to this patient-centered care option,” Dr. Reinke said. “Patients were able to do the post-operative visit at work or at home while caring for children, without having to disrupt their day in such a significant way.”
    The researchers also found that patients embraced the virtual scenario. The satisfaction rate between both groups of patients was similar (94 percent vs. 98 percent).

    In addition, wait time was much less for patients who got virtual care. “Even for virtual visits, the amount of time the patients spent checking in and waiting was about 55 percent of total time. Because virtual visits have the same regulations as in-person visits, even if you take out the components of waiting room and patient flow within the clinic, patients are still spending about half of their time on the logistics of check-in,” Dr. Reinke said. “Yet, with virtual visits, there is still much less time spent waiting, about 80 percent less time.”
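    A rough decomposition of the reported figures, treating everything outside face time as overhead (an approximation; the study's own categories of check-in, waiting and discharge are more fine-grained), lands close to the roughly 55 percent and 80 percent figures quoted above:

```python
# Minutes, as reported in the study summary above.
in_person_total = 58
virtual_total = 19
in_person_face_time = 8.3
virtual_face_time = 8.2

# Everything that is not time with the surgical team member:
in_person_overhead = in_person_total - in_person_face_time
virtual_overhead = virtual_total - virtual_face_time

print(f"virtual overhead share: {virtual_overhead / virtual_total:.0%}")
print(f"overhead cut vs in-person: {1 - virtual_overhead / in_person_overhead:.0%}")
```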
    Still, some patients are not comfortable with the technology. The number of patients who couldn’t or didn’t want to do a virtual visit was higher than expected, according to the authors.
    “I think there are some patients that would really just rather come in and shake someone’s hand,” Dr. Reinke said. “I think for surgery it’s a little bit different, because with surgical care there are incisions to check on. However, we were able to check on incisions pretty easily, having patients show us their incisions virtually on the video screen.”
    This research was supported by the American College of Surgeons Franklin H. Martin Faculty Research Fellowship. “FACS” designates that a surgeon is a Fellow of the American College of Surgeons.
    Citation: The Value of Time: Analysis of Surgical Post-Discharge Virtual vs. In-Person Visits. Scientific Forum, American College of Surgeons Clinical Congress 2020, October 3-7, 2020.

  •

    New model examines how societal influences affect U.S. political opinions

    Northwestern University researchers have developed the first quantitative model that captures how politicized environments affect U.S. political opinion formation and evolution.
    Using the model, the researchers seek to understand how populations change their opinions when exposed to political content, such as news media, campaign ads and ordinary personal exchanges. The math-based framework is flexible, allowing future data to be incorporated as it becomes available.
    “It’s really powerful to understand how people are influenced by the content that they see,” said David Sabin-Miller, a Northwestern graduate student who led the study. “It could help us understand how populations become polarized, which would be hugely beneficial.”
    “Quantitative models like this allow us to run computational experiments,” added Northwestern’s Daniel Abrams, the study’s senior author. “We could simulate how various interventions might help fix extreme polarization to promote consensus.”
    The paper will be published on Thursday (Oct. 1) in the journal Physical Review Research.
    Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern’s McCormick School of Engineering. Sabin-Miller is a graduate student in Abrams’ laboratory.

    Researchers have been modeling social behavior for hundreds of years. But most modern quantitative models rely on network science, which simulates person-to-person human interactions.
    The Northwestern team takes a different, but complementary, approach. They break down all interactions into perceptions and reactions. A perception takes into account how people perceive a politicized experience based on their current ideology. A far-right Republican, for example, likely will perceive the same experience differently than a far-left Democrat.
    After perceiving new ideas or information, people might change their opinions based on three established psychological effects: attraction/repulsion, tribalism and perceptual filtering. Northwestern’s quantitative model incorporates all three of these and examines their impact.
    “Typically, ideas that are similar to your beliefs can be convincing or attractive,” Sabin-Miller said. “But once ideas go past a discomfort point, people start rejecting what they see or hear. We call this the ‘repulsion distance,’ and we are trying to define that limit through modeling.”
    People also react differently depending on whether or not the new idea or information comes from a trusted source. Known as tribalism, people tend to give the benefit of the doubt to a perceived ally. In perceptual filtering, people — either knowingly through direct decisions or unknowingly through algorithms that curate content — determine what content they see.
    “Perceptual filtering is the ‘media bubble’ that people talk about,” Abrams explained. “You’re more likely to see things that are consistent with your existing beliefs.”
    Abrams and Sabin-Miller liken their new model to thermodynamics in physics — treating individual people like gas molecules that distribute around a room.
    “Thermodynamics does not focus on individual particles but the average of a whole system, which includes many, many particles,” Abrams said. “We hope to do the same thing with political opinions. Even though we can’t say how or when one individual’s opinion might change, we can look at how the whole population changes, on average.”
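    The attraction/repulsion effect described above can be sketched as a toy update rule for a single agent (a minimal illustration with invented parameters, not the model published in Physical Review Research):

```python
def react(opinion, idea, repulsion_distance=1.0, step=0.1):
    """Toy attraction/repulsion rule: ideas within the repulsion distance
    pull the opinion toward them; ideas beyond it push the opinion away.
    Illustrative only; the form and parameters are invented."""
    gap = idea - opinion
    if abs(gap) <= repulsion_distance:
        return opinion + step * gap                   # attraction
    return opinion - step * (1 if gap > 0 else -1)    # repulsion

# A centrist (opinion 0.0) hears a nearby idea, then an extreme one:
print(react(0.0, 0.5))   # nudged toward the idea
print(react(0.0, 3.0))   # pushed away from it
```

    Averaged over many agents and many exposures, rules of this kind produce the population-level drift and polarization the model studies, much as thermodynamics averages over individual gas molecules.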

    Story Source:
    Materials provided by Northwestern University. Original written by Amanda Morris. Note: Content may be edited for style and length.

  •

    New tool shows main highways of disease development

    As people get older they often jump from disease to disease and carry the burden of more chronic diseases at once. But is there a system in the way diseases follow each other? Danish researchers have for the past six years developed a comprehensive tool, the Danish Disease Trajectory Browser, that utilizes 25 years of public health data from Danish patients to explore what they call the main highways of disease development.
    “A lot of research focus is on investigating one disease at a time. We try to add a time perspective and look at multiple diseases following each other to discover where are the most common trajectories — what are the disease highways that we as people encounter,” says professor Søren Brunak from the Novo Nordisk Foundation Center for Protein Research at University of Copenhagen.
    To illustrate the use of the tool, the research group looked at data for Down syndrome patients and showed, as expected, that these patients are in general diagnosed with Alzheimer’s disease at an earlier age than others. Other frequent diseases are displayed as well.
    The Danish Disease Trajectory Browser is published in Nature Communications.
    Making health data accessible for research
    In general, there is a barrier for working with health data in research. Both in terms of getting approval from authorities to handle patient data and the fact that researchers need specific technical skills to extract meaningful information from the data.

    “We wanted to make an easily accessible tool for researchers and health professionals where they don’t necessarily need to know all the details. The statistical summary data on disease-to-disease jumps in the tool are not person-sensitive. We compute statistics over many patients and have boiled it down to data points that visualize how often patients with one disease get a specific other disease at a later point. So we are focusing on the sequence of diseases,” says Søren Brunak.
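    The kind of summary the browser exposes, counts of how often one diagnosis follows another, can be pictured with a small sketch (the patient lists below are invented; the real tool is built from 25 years of Danish registry data):

```python
from collections import Counter
from itertools import combinations

# Hypothetical time-ordered diagnosis histories, one list per patient.
patients = [
    ["hypertension", "diabetes", "kidney disease"],
    ["hypertension", "diabetes"],
    ["diabetes", "kidney disease"],
]

# Count directed "A then B" pairs; combinations() preserves each
# patient's diagnosis order, so the pairs respect the time sequence.
pair_counts = Counter()
for diagnoses in patients:
    pair_counts.update(combinations(diagnoses, 2))

print(pair_counts.most_common(2))
```

    Aggregated over millions of patients, such pair counts trace the ‘disease highways’ without exposing any individual’s history.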
    The Danish Disease Trajectory Browser is freely available for the scientific community and uses WHO’s disease codes. Even though there are regional differences in disease patterns, the tool is highly relevant in an international context to compare, for example, how fast diseases progress in different countries.
    Disease trajectories can help in personalized medicine
    For Søren Brunak the tool has a great potential in personalized medicine.
    “In personalized medicine a part of the job is to divide patients into subgroups that will benefit most from a specific treatment. By knowing the disease trajectories you can create subgroups of patients not just by their current disease, but based on their previous conditions and expected future conditions as well. In that way you find different subgroups of patients that may need different treatment strategies,” Søren Brunak explains.
    Currently the Disease Trajectory Browser contains data from 1994 to 2018 and will be continuously updated with new data.
    The Danish Disease Trajectory Browser is freely accessible here: http://dtb.cpr.ku.dk

    Story Source:
    Materials provided by the University of Copenhagen, Faculty of Health and Medical Sciences. Note: Content may be edited for style and length.

  •

    Tool helps clear biases from computer vision

    Researchers at Princeton University have developed a tool that flags potential biases in sets of images used to train artificial intelligence (AI) systems. The work is part of a larger effort to remedy and prevent the biases that have crept into AI systems that influence everything from credit services to courtroom sentencing programs.
    Although the sources of bias in AI systems are varied, one major cause is stereotypical images contained in large sets of images collected from online sources that engineers use to develop computer vision, a branch of AI that allows computers to recognize people, objects and actions. Because the foundation of computer vision is built on these data sets, images that reflect societal stereotypes and biases can unintentionally influence computer vision models.
    To help stem this problem at its source, researchers in the Princeton Visual AI Lab have developed an open-source tool that automatically uncovers potential biases in visual data sets. The tool allows data set creators and users to correct issues of underrepresentation or stereotypical portrayals before image collections are used to train computer vision models. In related work, members of the Visual AI Lab published a comparison of existing methods for preventing biases in computer vision models themselves, and proposed a new, more effective approach to bias mitigation.
    The first tool, called REVISE (REvealing VIsual biaSEs), uses statistical methods to inspect a data set for potential biases or issues of underrepresentation along three dimensions: object-based, gender-based and geography-based. A fully automated tool, REVISE builds on earlier work that involved filtering and balancing a data set’s images in a way that required more direction from the user. The study was presented Aug. 24 at the virtual European Conference on Computer Vision.
    REVISE takes stock of a data set’s content using existing image annotations and measurements such as object counts, the co-occurrence of objects and people, and images’ countries of origin. Among these measurements, the tool exposes patterns that differ from median distributions.
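    A rough sketch of that kind of annotation-based check, flagging classes whose per-image counts deviate strongly from the median (the annotations and threshold below are invented; REVISE's actual statistics are more extensive):

```python
from statistics import median

# Hypothetical per-image object counts for two annotation classes.
object_counts = {
    "person": [3, 2, 4, 3, 2],
    "airplane": [1, 1, 1, 1, 40],  # one image dominated by airplanes
}

def flag_outlier_classes(counts, factor=5):
    """Flag a class if any image's count exceeds factor times the median."""
    flagged = []
    for name, per_image in counts.items():
        if max(per_image) > factor * median(per_image):
            flagged.append(name)
    return flagged

print(flag_outlier_classes(object_counts))
```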
    For example, in one of the tested data sets, REVISE showed that images including both people and flowers differed between males and females: Males more often appeared with flowers in ceremonies or meetings, while females tended to appear in staged settings or paintings. (The analysis was limited to annotations reflecting the perceived binary gender of people appearing in images.)
    Once the tool reveals these sorts of discrepancies, “then there’s the question of whether this is a totally innocuous fact, or if something deeper is happening, and that’s very hard to automate,” said Olga Russakovsky, an assistant professor of computer science and principal investigator of the Visual AI Lab. Russakovsky co-authored the paper with graduate student Angelina Wang and Arvind Narayanan, an associate professor of computer science.

    For example, REVISE revealed that objects including airplanes, beds and pizzas were more likely to be large in the images including them than a typical object in one of the data sets. Such an issue might not perpetuate societal stereotypes, but could be problematic for training computer vision models. As a remedy, the researchers suggest collecting images of airplanes that also include the labels mountain, desert or sky.
    The underrepresentation of regions of the globe in computer vision data sets, however, is likely to lead to biases in AI algorithms. Consistent with previous analyses, the researchers found that for images’ countries of origin (normalized by population), the United States and European countries were vastly overrepresented in data sets. Beyond this, REVISE showed that for images from other parts of the world, image captions were often not in the local language, suggesting that many of them were captured by tourists and potentially leading to a skewed view of a country.
    Researchers who focus on object detection may overlook issues of fairness in computer vision, said Russakovsky. “However, this geography analysis shows that object recognition can still be quite biased and exclusionary, and can affect different regions and people unequally,” she said.
    “Data set collection practices in computer science haven’t been scrutinized that thoroughly until recently,” said co-author Angelina Wang, a graduate student in computer science. She said images are mostly “scraped from the internet, and people don’t always realize that their images are being used [in data sets]. We should collect images from more diverse groups of people, but when we do, we should be careful that we’re getting the images in a way that is respectful.”
    “Tools and benchmarks are an important step … they allow us to capture these biases earlier in the pipeline and rethink our problem setup and assumptions as well as data collection practices,” said Vicente Ordonez-Roman, an assistant professor of computer science at the University of Virginia who was not involved in the studies. “In computer vision there are some specific challenges regarding representation and the propagation of stereotypes. Works such as those by the Princeton Visual AI Lab help elucidate and bring to the attention of the computer vision community some of these issues and offer strategies to mitigate them.”
    A related study from the Visual AI Lab examined approaches to prevent computer vision models from learning spurious correlations that may reflect biases, such as overpredicting activities like cooking in images of women, or computer programming in images of men. Visual cues such as the fact that zebras are black and white, or that basketball players often wear jerseys, contribute to the accuracy of the models, so developing effective models while avoiding problematic correlations is a significant challenge in the field.
    In research presented in June at the virtual Conference on Computer Vision and Pattern Recognition (CVPR), electrical engineering graduate student Zeyu Wang and colleagues compared four techniques for mitigating biases in computer vision models.
    They found that a popular technique known as adversarial training, or “fairness through blindness,” harmed the overall performance of image recognition models. In adversarial training, the model cannot consider information about the protected variable — in the study, the researchers used gender as a test case. A different approach, known as domain-independent training, or “fairness through awareness,” performed much better in the team’s analysis.
    “Essentially, this says we’re going to have different frequencies of activities for different genders, and yes, this prediction is going to be gender-dependent, so we’re just going to embrace that,” said Russakovsky.
    The technique outlined in the paper mitigates potential biases by considering the protected attribute separately from other visual cues.
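The contrast between the two approaches can be sketched at the inference step. In a domain-independent setup, the model keeps one output head per protected group and combines the heads' scores at test time, rather than forcing the network to be blind to the group. The head names, activity labels and logit values below are made-up numbers for illustration, not results from the paper.

```python
# Minimal sketch of domain-independent ("fairness through awareness")
# inference: one output head per protected group, scores summed at test time.
# All numbers and names here are illustrative assumptions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Per-group logits over activities ["cooking", "programming"],
# as produced by two group-specific classifier heads on one image.
logits = {
    "head_group_a": np.array([2.0, 0.5]),
    "head_group_b": np.array([0.4, 1.8]),
}

# Adversarial training ("blindness") would instead penalize the model for
# encoding the protected attribute at all; here we keep the heads separate
# and simply sum their class scores at inference.
combined = sum(logits.values())
probs = softmax(combined)
pred = ["cooking", "programming"][int(probs.argmax())]
print(pred, probs)
```

In a real model the two heads would share a backbone network and be trained on group-labeled data; the key design choice is that group information is modeled explicitly instead of suppressed.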
    “How we really address the bias issue is a deeper problem, because of course we can see it’s in the data itself,” said Zeyu Wang. “But in the real world, humans can still make good judgments while being aware of our biases,” he said, and computer vision models can be set up to work in a similar way.

    AI can detect COVID-19 in the lungs like a virtual physician, new study shows

    A University of Central Florida researcher is part of a new study showing that artificial intelligence can be nearly as accurate as a physician in diagnosing COVID-19 in the lungs.
    The study, recently published in Nature Communications, shows the new technique can also overcome some of the challenges of current testing.
    Researchers demonstrated that an AI algorithm could be trained to classify COVID-19 pneumonia in computed tomography (CT) scans with up to 90 percent accuracy, as well as correctly identify positive cases 84 percent of the time (sensitivity) and negative cases 93 percent of the time (specificity).
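Those figures correspond to the standard confusion-matrix metrics. A quick sketch of how they are computed, using illustrative counts chosen to match the reported percentages (not the study's actual patient numbers):

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts below are illustrative, not taken from the study.
def sensitivity(tp, fn):
    # fraction of true COVID-19 cases the model catches
    return tp / (tp + fn)

def specificity(tn, fp):
    # fraction of non-COVID cases the model correctly clears
    return tn / (tn + fp)

tp, fn, tn, fp = 84, 16, 93, 7
print(f"sensitivity={sensitivity(tp, fn):.2f}, "
      f"specificity={specificity(tn, fp):.2f}")
# → sensitivity=0.84, specificity=0.93
```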
    CT scans offer deeper insight into COVID-19 diagnosis and progression than the often-used reverse transcription-polymerase chain reaction, or RT-PCR, tests, which suffer from high false-negative rates, processing delays and other challenges.
    Another benefit to CT scans is that they can detect COVID-19 in people without symptoms, in those who have early symptoms, during the height of the disease and after symptoms resolve.
    However, CT is not always recommended as a diagnostic tool for COVID-19 because the disease often looks similar to influenza-associated pneumonias on the scans.
    The new UCF co-developed algorithm can overcome this problem by accurately identifying COVID-19 cases, as well as distinguishing them from influenza, thus serving as a great potential aid for physicians, says Ulas Bagci, an assistant professor in UCF’s Department of Computer Science.
    Bagci was a co-author of the study and helped lead the research.
    “We demonstrated that a deep learning-based AI approach can serve as a standardized and objective tool to assist healthcare systems as well as patients,” Bagci says. “It can be used as a complementary test tool in very specific limited populations, and it can be used rapidly and at large scale in the unfortunate event of a recurrent outbreak.”
    Bagci is an expert in developing AI to assist physicians, including using it to detect pancreatic and lung cancers in CT scans.
    He also has two large, National Institutes of Health grants exploring these topics, including $2.5 million for using deep learning to examine pancreatic cystic tumors and more than $2 million to study the use of artificial intelligence for lung cancer screening and diagnosis.
    To perform the study, the researchers trained a computer algorithm to recognize COVID-19 in lung CT scans of 1,280 multinational patients from China, Japan and Italy.
    Then they tested the algorithm on CT scans of 1,337 patients with lung diseases ranging from COVID-19 to cancer and non-COVID pneumonia.
    When they compared the computer’s diagnoses with ones confirmed by physicians, they found that the algorithm was extremely proficient in accurately diagnosing COVID-19 pneumonia in the lungs and distinguishing it from other diseases, especially when examining CT scans in the early stages of disease progression.
    “We showed that robust AI models can achieve up to 90 percent accuracy in independent test populations, maintain high specificity in non-COVID-19 related pneumonias, and demonstrate sufficient generalizability to unseen patient populations and centers,” Bagci says.
    The UCF researcher is a longtime collaborator with study co-authors Baris Turkbey and Bradford J. Wood. Turkbey is an associate research physician at the NIH’s National Cancer Institute Molecular Imaging Branch, and Wood is the director of NIH’s Center for Interventional Oncology and chief of interventional radiology with NIH’s Clinical Center.
    This research was supported with funds from the NIH Center for Interventional Oncology and the Intramural Research Program of the National Institutes of Health, intramural NIH grants, the NIH Intramural Targeted Anti-COVID-19 program, the National Cancer Institute and NIH.
    Bagci received his doctorate in computer science from the University of Nottingham in England and joined UCF’s Department of Computer Science, part of the College of Engineering and Computer Science, in 2015. He is the Science Applications International Corp (SAIC) chair in UCF’s Department of Computer Science and a faculty member of UCF’s Center for Research in Computer Vision. SAIC is a Virginia-based government support and services company.