More stories

  • Computer vision researcher develops privacy software for surveillance videos

    Computer vision can be a valuable tool for anyone tasked with analyzing hours of footage because it can speed up the process of identifying individuals. For example, law enforcement may use it to perform a search for individuals with a simple query, such as “Locate anyone wearing a red scarf over the past 48 hours.”
    With video surveillance becoming more and more ubiquitous, Assistant Professor Yogesh Rawat, a researcher at the UCF Center for Research in Computer Vision (CRCV), is working to address privacy issues with advanced software installed on video cameras. His work is supported by $200,000 in funding from the U.S. National Science Foundation’s Accelerating Research Translation (NSF ART) program.
    “Automation allows us to watch a lot of footage, which is not possible by humans,” Rawat says. “Surveillance is important for society, but there are always privacy concerns. This development will enable surveillance with privacy preservation.”
    His video monitoring software protects the privacy of those recorded by obscuring select elements, such as faces or clothing, both in recordings and in real time. Rawat explains that his software adds perturbations to the RGB pixels in the video feed — the red, green and blue colors of light — so that human eyes are unable to recognize them.
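    As a rough illustration of the idea (a minimal sketch, not Rawat’s actual software, which learns perturbations so that recognition fails while the footage stays usable), the snippet below obscures one detected region of a frame by adding random noise to its RGB pixels; the bounding box is assumed to come from some upstream face or clothing detector.

    ```python
    import numpy as np

    def perturb_region(frame: np.ndarray, box: tuple, strength: float = 60.0,
                       seed: int | None = None) -> np.ndarray:
        """Obscure one region of an RGB frame by adding random perturbations.

        frame : H x W x 3 uint8 array (one video frame)
        box   : (x0, y0, x1, y1) pixel coordinates of the region to hide,
                e.g. a face bounding box from an upstream detector (assumed given)
        """
        rng = np.random.default_rng(seed)
        out = frame.astype(np.int16)                 # widen dtype to avoid overflow
        x0, y0, x1, y1 = box
        region = out[y0:y1, x0:x1, :]
        noise = rng.normal(0.0, strength, size=region.shape)   # per-pixel RGB noise
        out[y0:y1, x0:x1, :] = np.clip(region + noise, 0, 255)
        return out.astype(np.uint8)

    # Example: hide a hypothetical 80 x 80 face region in a synthetic 480p frame.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    anonymized = perturb_region(frame, box=(300, 100, 380, 180), seed=0)
    ```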
    “Mainly we are interested in any identifiable information that we can visually interpret,” Rawat says. “For example, for a person’s face, I can say ‘This is that individual,’ just by identifying the face. It could be the height as well, maybe hair color, hair style, body shape — all those things that can be used to identify any person. All of this is private information.”
    Since Rawat aims to have the technology run on edge devices (devices such as drones and public surveillance cameras that do not depend on an outside server), he and his team are also working to make the software fast enough to analyze the feed as it is received. This poses the additional challenge of developing algorithms lean enough that the small graphics processing units (GPUs) and central processing units (CPUs) on those devices can handle the workload of analyzing footage as it is captured.
    To that end, his main considerations in implementing the software are speed and size.

    “We want to do this very efficiently and very quickly in real time,” Rawat says. “We don’t want to wait for a year, a month or days. We also don’t want to take a lot of computing power. We don’t have a lot of computing power in very small GPUs or very small CPUs. We are not working with large computers there, but very small devices.”
    The funding from the NSF ART program will allow Rawat to identify potential users of the technology, including nursing homes, childcare centers and authorities using surveillance cameras. Rawat is one of two UCF researchers to have projects initially funded through the $6 million grant awarded to the university earlier this year. Four more projects will be funded over the next four years.
    His work builds on several previous projects spearheaded by other CRCV members, including founder Mubarak Shah and researcher Chen Chen: extensive work enabling the analysis of untrimmed security videos, training of artificial intelligence models that operate at a smaller scale, and a patented software method for detecting multiple actions, persons and objects of interest. Funding sources for these works include $3.9 million from the Intelligence Advanced Research Projects Activity (IARPA) Biometric Recognition and Identification at Altitude and Range program, $2.8 million from the IARPA Deep Intermodal Video Analysis program, and $475,000 from the U.S. Combating Terrorism Technical Support Office.
    Rawat says his work in computer vision is motivated by a drive to improve our world.
    “I’m really interested in understanding how we can easily navigate in this world as humans,” he says. “Visual perception is something I’m very interested in studying, including how we can bring it to machines and make things easy for us as humans and as a society.”

  • IRIS beamline at BESSY II extended with nanomicroscopy

    The IRIS infrared beamline at the BESSY II storage ring now offers a fourth option for characterising materials, cells and even molecules on different length scales. The team has extended the IRIS beamline with an end station for nanospectroscopy and nanoimaging that enables spatial resolutions below 30 nanometres. The instrument is also available to external user groups.
    The infrared beamline IRIS at the BESSY II storage ring is the only infrared beamline in Germany that is also available to external user groups and is therefore in great demand. Dr Ulrich Schade, in charge of the beamline, and his team continue to develop the instruments to enable unique, state-of-the-art experimental techniques in IR spectroscopy.
    As part of a recent major upgrade to the beamline, the team, together with the Institute of Chemistry at Humboldt University Berlin, has built an additional infrared near-field microscope.
    “With the nanoscope, we can resolve structures smaller than a thousandth of the diameter of a human hair and thus reach the innermost structures of biological systems, catalysts, polymers and quantum materials,” says Dr Alexander Veber, who led this extension.
    The new nanospectroscopy end station is based on a scanning optical microscope and enables imaging and spectroscopy with infrared light at a spatial resolution better than 30 nm. To demonstrate the performance of the new end station, Veber analysed individual cellulose microfibrils and imaged cell structures. All end stations are available to national and international user groups.

  • AI in medicine: The causality frontier

    Machines can learn not only to make predictions, but also to handle causal relationships. An international research team shows how this could make therapies safer, more efficient, and more individualized.
    Artificial intelligence is making progress in the medical arena. When it comes to imaging techniques and the calculation of health risks, there is a plethora of AI methods in development and testing phases. Wherever it is a matter of recognizing patterns in large data volumes, it is expected that machines will bring great benefit to humanity. Following the classical model, the AI compares information against learned examples, draws conclusions, and makes extrapolations.
    Now an international team led by Professor Stefan Feuerriegel, Head of the Institute of Artificial Intelligence (AI) in Management at LMU, is exploring the potential of a comparatively new branch of AI for diagnostics and therapy. Can causal machine learning (ML) estimate treatment outcomes — and do so better than the ML methods generally used to date? Yes, says a landmark study by the group, which has been published in the journal Nature Medicine: causal ML can improve the effectiveness and safety of treatments.
    In particular, the new machine learning variant offers “an abundance of opportunities for personalizing treatment strategies and thus individually improving the health of patients,” write the researchers, who hail from Munich, Cambridge (United Kingdom), and Boston (United States) and include Stefan Bauer and Niki Kilbertus, professors of computer science at the Technical University of Munich (TUM) and group leaders at Helmholtz AI.
    As regards machine assistance in therapy decisions, the authors anticipate a decisive leap forward in quality. Classical ML recognizes patterns and discovers correlations, they argue. However, the causal principle of cause and effect remains closed to machines as a rule; they cannot address the question of why. And yet many questions that arise when making therapy decisions contain causal problems within them. The authors illustrate this with the example of diabetes: Classical ML would aim to predict how probable a disease is for a given patient with a range of risk factors. With causal ML, it would ideally be possible to answer how the risk changes if the patient gets an anti-diabetes drug; that is, gauge the effect of a cause (prescription of medication). It would also be possible to estimate whether another treatment plan would be better, for example, than the commonly prescribed medication, metformin.
    To be able to estimate the effect of a — hypothetical — treatment, however, “the AI models must learn to answer questions of a ‘What if?’ nature,” says Jonas Schweisthal, doctoral candidate in Feuerriegel’s team. “We give the machine rules for recognizing the causal structure and correctly formalizing the problem,” says Feuerriegel. Then the machine has to learn to recognize the effects of interventions and understand, so to speak, how real-life consequences are mirrored in the data that has been fed into the computers.
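    As a minimal sketch of that “what if?” logic (a generic two-model estimator on synthetic data, not the methods evaluated in the Nature Medicine study), one can fit separate outcome models for treated and untreated patients and contrast their predictions to estimate individual treatment effects:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Synthetic observational data: X = risk factors, t = drug given (0/1), y = outcome.
    n = 5000
    X = rng.normal(size=(n, 3))                          # e.g. age, BMI, baseline marker (made up)
    t = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))  # treatment assignment depends on covariates
    true_effect = -1.0 - 0.5 * X[:, 1]                   # drug lowers the outcome, more for some patients
    y = X @ np.array([0.3, 0.8, 0.2]) + t * true_effect + rng.normal(0.0, 0.2, size=n)

    # T-learner: one outcome model per treatment arm, then contrast their predictions.
    m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])

    cate = m1.predict(X) - m0.predict(X)                 # estimated individual ("what if") effects
    print("mean estimated effect:", round(float(cate.mean()), 2),
          "vs true mean:", round(float(true_effect.mean()), 2))
    ```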
    The researchers hope that even in situations where reliable treatment standards do not yet exist, or where randomized studies are not possible for ethical reasons because they would require a placebo group, machines could still gauge potential treatment outcomes from available patient data and thus generate hypotheses for possible treatment plans. With such real-world data, it should generally be possible to describe patient cohorts with ever greater precision in the estimates, thereby bringing individualized therapy decisions that much closer. Naturally, there would still be the challenge of ensuring the reliability and robustness of the methods.
    “The software we need for causal ML methods in medicine doesn’t exist out of the box,” says Feuerriegel. Rather, “complex modeling” of the respective problem is required, involving “close collaboration between AI experts and doctors.” Like his TUM colleagues Stefan Bauer and Niki Kilbertus, Feuerriegel also researches questions relating to AI in medicine, decision-making, and other topics at the Munich Center for Machine Learning (MCML) and the Konrad Zuse School of Excellence in Reliable AI. In other fields of application, such as marketing, explains Feuerriegel, the work with causal ML has already been in the testing phase for some years now. “Our goal is to bring the methods a step closer to practice. The paper describes the direction in which things could move over the coming years.”

  • Using AI to improve diagnosis of rare genetic disorders

    Diagnosing rare Mendelian disorders is a labor-intensive task, even for experienced geneticists. Investigators at Baylor College of Medicine are trying to make the process more efficient using artificial intelligence. The team developed a machine learning system called AI-MARRVEL (AIM) to help prioritize potentially causative variants for Mendelian disorders. The study is published today in NEJM AI.
    Researchers from the Baylor Genetics clinical diagnostic laboratory noted that AIM’s module can contribute to predictions independent of clinical knowledge of the gene of interest, helping to advance the discovery of novel disease mechanisms. “The diagnostic rate for rare genetic disorders is only about 30%, and on average, it is six years from the time of symptom onset to diagnosis. There is an urgent need for new approaches to enhance the speed and accuracy of diagnosis,” said co-corresponding author Dr. Pengfei Liu, associate professor of molecular and human genetics and associate clinical director at Baylor Genetics.
    AIM is trained using a public database of known variants and genetic analysis called Model organism Aggregated Resources for Rare Variant ExpLoration (MARRVEL) previously developed by the Baylor team. The MARRVEL database includes more than 3.5 million variants from thousands of diagnosed cases. Researchers provide AIM with patients’ exome sequence data and symptoms, and AIM provides a ranking of the most likely gene candidates causing the rare disease.
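    AIM’s actual feature set and model are far richer, but the basic input/output pattern it follows (per-variant evidence in, a ranked list of candidate genes out) can be sketched with made-up features and a toy scoring rule:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Variant:
        gene: str
        # Hypothetical evidence a prioritizer might weigh (not AIM's actual features):
        population_frequency: float   # rarer variants are more suspicious
        predicted_damage: float       # 0..1 in-silico deleteriousness score
        phenotype_match: float        # 0..1 overlap of patient symptoms with the gene's known phenotypes

    def score(v: Variant) -> float:
        """Toy scoring rule combining the evidence into a single rank score."""
        rarity = 1.0 - min(v.population_frequency * 1000, 1.0)
        return 0.4 * rarity + 0.3 * v.predicted_damage + 0.3 * v.phenotype_match

    variants = [
        Variant("GENE_A", population_frequency=1e-4, predicted_damage=0.9, phenotype_match=0.8),
        Variant("GENE_B", population_frequency=2e-2, predicted_damage=0.6, phenotype_match=0.4),
        Variant("GENE_C", population_frequency=5e-5, predicted_damage=0.7, phenotype_match=0.9),
    ]

    # Rank candidate genes, best first (AIM returns an analogous ranked list per patient).
    for v in sorted(variants, key=score, reverse=True):
        print(f"{v.gene}: {score(v):.2f}")
    ```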
    Researchers compared AIM’s results to those of other algorithms used in recent benchmark papers. They tested the models using three data cohorts with established diagnoses from Baylor Genetics, the National Institutes of Health-funded Undiagnosed Diseases Network (UDN) and the Deciphering Developmental Disorders (DDD) project. Across these real-world data sets, AIM consistently ranked the diagnosed gene as the No. 1 candidate in twice as many cases as all other benchmark methods.
    “We trained AIM to mimic the way humans make decisions, and the machine can do it much faster, more efficiently and at a lower cost. This method has effectively doubled the rate of accurate diagnosis,” said co-corresponding author Dr. Zhandong Liu, associate professor of pediatrics — neurology at Baylor and investigator at the Jan and Dan Duncan Neurological Research Institute (NRI) at Texas Children’s Hospital.
    AIM also offers new hope for rare disease cases that have remained unsolved for years. Hundreds of novel disease-causing variants that may be key to solving these cold cases are reported every year; however, determining which cases warrant reanalysis is challenging because of the high volume of cases. The researchers tested AIM’s clinical exome reanalysis on a dataset of UDN and DDD cases and found that it was able to correctly identify 57% of diagnosable cases.
    “We can make the reanalysis process much more efficient by using AIM to identify a high-confidence set of potentially solvable cases and pushing those cases for manual review,” Zhandong Liu said. “We anticipate that this tool can recover an unprecedented number of cases that were not previously thought to be diagnosable.”
    Researchers also tested AIM’s potential for discovery of novel gene candidates that have not been linked to a disease. AIM correctly predicted two newly reported disease genes as top candidates in two UDN cases.

    “AIM is a major step forward in using AI to diagnose rare diseases. It narrows the differential genetic diagnoses down to a few genes and has the potential to guide the discovery of previously unknown disorders,” said co-corresponding author Dr. Hugo Bellen, Distinguished Service Professor in molecular and human genetics at Baylor and chair in neurogenetics at the Duncan NRI.
    “When combined with the deep expertise of our certified clinical lab directors, highly curated datasets and scalable automated technology, we are seeing the impact of augmented intelligence to provide comprehensive genetic insights at scale, even for the most vulnerable patient populations and complex conditions,” said senior author Dr. Fan Xia, associate professor of molecular and human genetics at Baylor and vice president of clinical genomics at Baylor Genetics. “By applying real-world training data from a Baylor Genetics cohort without any inclusion criteria, AIM has shown superior accuracy. Baylor Genetics is aiming to develop the next generation of diagnostic intelligence and bring this to clinical practice.”
    Other authors of this work include Dongxue Mao, Chaozhong Liu, Linhua Wang, Rami Al-Ouran, Cole Deisseroth, Sasidhar Pasupuleti, Seon Young Kim, Lucian Li, Jill A. Rosenfeld, Linyan Meng, Lindsay C. Burrage, Michael Wangler, Shinya Yamamoto, Michael Santana, Victor Perez, Priyank Shukla, Christine Eng, Brendan Lee and Bo Yuan. They are affiliated with one or more of the following institutions: Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Al Hussein Technical University, Baylor Genetics and the Human Genome Sequencing Center at Baylor.
    This work was supported by the Chan Zuckerberg Initiative and the National Institute of Neurological Disorders and Stroke (3U2CNS132415).

  • Artificial intelligence helps scientists engineer plants to fight climate change

    The Intergovernmental Panel on Climate Change (IPCC) declared that removing carbon from the atmosphere is now essential to fighting climate change and limiting global temperature rise. To support these efforts, Salk scientists are harnessing plants’ natural ability to draw carbon dioxide out of the air by optimizing their root systems to store more carbon for a longer period of time.
    To design these climate-saving plants, scientists in Salk’s Harnessing Plants Initiative are using a sophisticated new research tool called SLEAP — an easy-to-use artificial intelligence (AI) software that tracks multiple features of root growth. Created by Salk Fellow Talmo Pereira, SLEAP was initially designed to track animal movement in the lab. Now, Pereira has teamed up with plant scientist and Salk colleague Professor Wolfgang Busch to apply SLEAP to plants.
    In a study published in Plant Phenomics on April 12, 2024, Busch and Pereira debut a new protocol for using SLEAP to analyze plant root phenotypes — how deep and wide they grow, how massive their root systems become, and other physical qualities that, prior to SLEAP, were tedious to measure. The application of SLEAP to plants has already enabled researchers to establish the most extensive catalog of plant root system phenotypes to date.
    What’s more, tracking these physical root system characteristics helps scientists find genes affiliated with those characteristics, as well as whether multiple root characteristics are determined by the same genes or independently. This allows the Salk team to determine what genes are most beneficial to their plant designs.
    “This collaboration is truly a testament to what makes Salk science so special and impactful,” says Pereira. “We’re not just ‘borrowing’ from different disciplines — we’re really putting them on equal footing in order to create something greater than the sum of its parts.”
    Prior to using SLEAP, tracking the physical characteristics of both plants and animals required a lot of labor that slowed the scientific process. If researchers wanted to analyze an image of a plant, they would need to manually flag the parts of the image that were and weren’t plant — frame-by-frame, part-by-part, pixel-by-pixel. Only then could older AI models be applied to process the image and gather data about the plant’s structure.
    What sets SLEAP apart is its unique use of both computer vision (the ability for computers to understand images) and deep learning (an AI approach for training a computer to learn and work like the human brain). This combination allows researchers to process images without moving pixel-by-pixel, instead skipping this intermediate labor-intensive step to jump straight from image input to defined plant features.

    “We created a robust protocol validated in multiple plant types that cuts down on analysis time and human error, while emphasizing accessibility and ease-of-use — and it required no changes to the actual SLEAP software,” says first author Elizabeth Berrigan, a bioinformatics analyst in Busch’s lab.
    Without modifying the baseline technology of SLEAP, the researchers developed a downloadable toolkit for SLEAP called sleap-roots (available as open-source software). With sleap-roots, SLEAP can process biological traits of root systems like depth, mass, and angle of growth. The Salk team tested the sleap-roots package in a variety of plants, including crop plants like soybeans, rice, and canola, as well as the model plant species Arabidopsis thaliana — a flowering weed in the mustard family. Across the variety of plants trialed, they found the novel SLEAP-based method outperformed existing practices by annotating 1.5 times faster, training the AI model 10 times faster, and predicting plant structure on new data 10 times faster, all with the same or better accuracy than before.
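    The sleap-roots interface itself is not reproduced here, but the kind of trait computation it automates can be sketched from first principles: given (x, y) landmark points predicted along a root by a pose model such as SLEAP, depth, length, and growth angle follow from simple geometry.

    ```python
    import numpy as np

    def root_traits(points: np.ndarray) -> dict:
        """Compute simple root traits from landmark points ordered base -> tip.

        points : (N, 2) array of (x, y) image coordinates, with y increasing downward.
        Returns vertical depth, arc length, and growth angle relative to vertical (degrees).
        """
        base, tip = points[0], points[-1]
        depth = tip[1] - base[1]                                        # vertical extent of the root
        length = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()  # arc length along landmarks
        dx, dy = tip - base
        angle = np.degrees(np.arctan2(abs(dx), dy))                     # 0 degrees = growing straight down
        return {"depth": float(depth), "length": float(length), "angle_deg": float(angle)}

    # Hypothetical landmarks for one root (e.g. predictions from a trained SLEAP model).
    pts = np.array([[100, 10], [104, 60], [112, 130], [118, 220]], dtype=float)
    print(root_traits(pts))
    ```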
    Combined with massive genome sequencing efforts that catalog the genotypes of large numbers of crop varieties, these phenotypic data, such as a root system that grows especially deep in soil, can be used to pinpoint the genes responsible for that especially deep root system.
    This step — connecting phenotype and genotype — is crucial in Salk’s mission to create plants that hold on to more carbon and for longer, as those plants will need root systems designed to be deeper and more robust. Implementing this accurate and efficient software will allow the Harnessing Plants Initiative to connect desirable phenotypes to targetable genes with groundbreaking ease and speed.
    “We have already been able to create the most extensive catalogue of plant root system phenotypes to date, which is really accelerating our research to create carbon-capturing plants that fight climate change,” says Busch, the Hess Chair in Plant Science at Salk. “SLEAP has been so easy to apply and use, thanks to Talmo’s professional software design, and it’s going to be an indispensable tool in my lab moving forward.”
    Accessibility and reproducibility were at the forefront of Pereira’s mind when creating both SLEAP and sleap-roots. Because the software and sleap-roots toolkit are free to use, the researchers are excited to see how sleap-roots will be used around the world. Already, they have begun discussions with NASA scientists hoping to utilize the tool not only to help guide carbon-sequestering plants on Earth, but also to study plants in space.
    At Salk, the collaborative team is not yet ready to disband — they are already embarking on a new challenge of analyzing 3D data with SLEAP. Efforts to refine, expand, and share SLEAP and sleap-roots will continue for years to come, but its use in Salk’s Harnessing Plants Initiative is already accelerating plant designs and helping the Institute make an impact on climate change.
    Other authors include Lin Wang, Hannah Carrillo, Kimberly Echegoyen, Mikayla Kappes, Jorge Torres, Angel Ai-Perreira, Erica McCoy, Emily Shane, Charles Copeland, Lauren Ragel, Charidimos Georgousakis, Sanghwa Lee, Dawn Reynolds, Avery Talgo, Juan Gonzalez, Ling Zhang, Ashish Rajurkar, Michel Ruiz, Erin Daniels, Liezl Maree, and Shree Pariyar of Salk.
    The work was supported by the Bezos Earth Fund, the Hess Corporation, the TED Audacious Project, and the National Institutes of Health (RF1MH132653).

  • Scientists tune the entanglement structure in an array of qubits

    Entanglement is a form of correlation between quantum objects, such as particles at the atomic scale. This uniquely quantum phenomenon cannot be explained by the laws of classical physics, yet it is one of the properties that explains the macroscopic behavior of quantum systems.
    Because entanglement is central to the way quantum systems work, understanding it better could give scientists a deeper sense of how information is stored and processed efficiently in such systems.
    Qubits, or quantum bits, are the building blocks of a quantum computer. However, it is extremely difficult to make specific entangled states in many-qubit systems, let alone investigate them. There are also a variety of entangled states, and telling them apart can be challenging.
    Now, MIT researchers have demonstrated a technique to efficiently generate entanglement among an array of superconducting qubits that exhibit a specific type of behavior.
    Over the past years, the researchers at the Engineering Quantum Systems (EQuS) group have developed techniques using microwave technology to precisely control a quantum processor composed of superconducting circuits. In addition to these control techniques, the methods introduced in this work enable the processor to efficiently generate highly entangled states and shift those states from one type of entanglement to another — including between types that are more likely to support quantum speed-up and those that are not.
    “Here, we are demonstrating that we can utilize the emerging quantum processors as a tool to further our understanding of physics. While everything we did in this experiment was on a scale which can still be simulated on a classical computer, we have a good roadmap for scaling this technology and methodology beyond the reach of classical computing,” says Amir H. Karamlou ’18, MEng ’18, PhD ’23, the lead author of the paper.
    The senior author is William D. Oliver, the Henry Ellis Warren Professor of Electrical Engineering and Computer Science and of Physics, director of the Center for Quantum Engineering, leader of the EQuS group, and associate director of the Research Laboratory of Electronics. Karamlou and Oliver are joined by Research Scientist Jeff Grover, postdoc Ilan Rosen, and others in the departments of Electrical Engineering and Computer Science and of Physics at MIT, at MIT Lincoln Laboratory, and at Wellesley College and the University of Maryland. The research appears in Nature.

    Assessing entanglement
    In a large quantum system comprising many interconnected qubits, one can think about entanglement as the amount of quantum information shared between a given subsystem of qubits and the rest of the larger system.
    The entanglement within a quantum system can be categorized as area-law or volume-law, based on how this shared information scales with the geometry of subsystems. In volume-law entanglement, the amount of entanglement between a subsystem of qubits and the rest of the system grows proportionally with the total size of the subsystem.
    On the other hand, area-law entanglement depends on how many shared connections exist between a subsystem of qubits and the larger system. As the subsystem expands, the amount of entanglement only grows along the boundary between the subsystem and the larger system.
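    In standard notation (not taken from the paper itself), the entanglement of a subsystem $A$ is quantified by its entanglement entropy, and the two regimes differ in how that entropy scales with the subsystem:

    $$ S_A = -\operatorname{Tr}\!\left(\rho_A \ln \rho_A\right), \qquad S_A \propto |\partial A| \ \text{(area law)}, \qquad S_A \propto |A| \ \text{(volume law)}, $$

    where $\rho_A$ is the reduced density matrix of the subsystem, $|A|$ its number of qubits, and $|\partial A|$ the size of its boundary with the rest of the system.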
    In theory, the formation of volume-law entanglement is related to what makes quantum computing so powerful.
    “While we have not yet fully abstracted the role that entanglement plays in quantum algorithms, we do know that generating volume-law entanglement is a key ingredient to realizing a quantum advantage,” says Oliver.

    However, volume-law entanglement is also more complex than area-law entanglement and practically prohibitive at scale to simulate using a classical computer.
    “As you increase the complexity of your quantum system, it becomes increasingly difficult to simulate it with conventional computers. If I am trying to fully keep track of a system with 80 qubits, for instance, then I would need to store more information than what we have stored throughout the history of humanity,” Karamlou says.
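    A back-of-the-envelope estimate shows why (assuming one double-precision complex amplitude, 16 bytes, per basis state of an 80-qubit register):

    $$ 2^{80} \approx 1.2 \times 10^{24} \ \text{amplitudes}, \qquad 2^{80} \times 16 \ \text{bytes} \approx 1.9 \times 10^{25} \ \text{bytes} \approx 19 \ \text{yottabytes}, $$

    far more than even generous estimates of all digitally stored information, which remain below $10^{24}$ bytes.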
    The researchers created a quantum processor and control protocol that enable them to efficiently generate and probe both types of entanglement.
    Their processor comprises superconducting circuits, which are used to engineer artificial atoms. The artificial atoms are utilized as qubits, which can be controlled and read out with high accuracy using microwave signals.
    The device used for this experiment contained 16 qubits, arranged in a two-dimensional grid. The researchers carefully tuned the processor so all 16 qubits have the same transition frequency. Then, they applied an additional microwave drive to all of the qubits simultaneously.
    If this microwave drive has the same frequency as the qubits, it generates quantum states that exhibit volume-law entanglement. However, as the microwave frequency increases or decreases, the qubits exhibit less volume-law entanglement, eventually crossing over to entangled states that increasingly follow an area-law scaling.
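    Schematically, and in a generic driven-array model rather than the exact Hamiltonian of the study, the setup can be written in the frame rotating with the drive as

    $$ H/\hbar = \Delta \sum_i \sigma_i^{+}\sigma_i^{-} + \frac{\Omega}{2} \sum_i \left(\sigma_i^{+} + \sigma_i^{-}\right) + J \sum_{\langle i,j \rangle} \left(\sigma_i^{+}\sigma_j^{-} + \sigma_j^{+}\sigma_i^{-}\right), $$

    where $\Delta$ is the detuning between the microwave drive and the common qubit frequency, $\Omega$ the drive amplitude, and $J$ the nearest-neighbor coupling. Driving on resonance ($\Delta = 0$) favors highly entangled, volume-law states, while increasing $|\Delta|$ pushes the prepared states toward area-law scaling.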
    Careful control
    “Our experiment is a tour de force of the capabilities of superconducting quantum processors. In one experiment, we operated the processor both as an analog simulation device, enabling us to efficiently prepare states with different entanglement structures, and as a digital computing device, needed to measure the ensuing entanglement scaling,” says Rosen.
    To enable that control, the team put years of work into carefully building up the infrastructure around the quantum processor.
    By demonstrating the crossover from volume-law to area-law entanglement, the researchers experimentally confirmed what theoretical studies had predicted. More importantly, this method can be used to determine whether the entanglement in a generic quantum processor is area-law or volume-law.
    In the future, scientists could utilize this technique to study the thermodynamic behavior of complex quantum systems, which is too complex to be studied using current analytical methods and practically prohibitive to simulate on even the world’s most powerful supercomputers.
    “The experiments we did in this work can be used to characterize or benchmark larger-scale quantum systems, and we may also learn something more about the nature of entanglement in these many-body systems,” says Karamlou.
    Additional co-authors of the study are Sarah E. Muschinske, Cora N. Barrett, Agustin Di Paolo, Leon Ding, Patrick M. Harrington, Max Hays, Rabindra Das, David K. Kim, Bethany M. Niedzielski, Meghan Schuldt, Kyle Serniak, Mollie E. Schwartz, Jonilyn L. Yoder, Simon Gustavsson, and Yariv Yanay.
    This research is funded, in part, by the U.S. Department of Energy, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the National Science Foundation, the STC Center for Integrated Quantum Materials, the Wellesley College Samuel and Hilda Levitt Fellowship, NASA, and the Oak Ridge Institute for Science and Education.

  • Artificial intelligence can develop treatments to prevent ‘superbugs’

    Cleveland Clinic researchers developed an artificial intelligence (AI) model that can determine the best combination and timeline to use when prescribing drugs to treat a bacterial infection, based solely on how quickly the bacteria grow given certain perturbations. A team led by Jacob Scott, MD, PhD, and his lab in the Theory Division of Translational Hematology and Oncology, recently published their findings in PNAS.
    Antibiotics are credited with increasing the average US lifespan by almost ten years. Treatment lowered fatality rates for health issues we now consider minor — like some cuts and injuries. But antibiotics aren’t working as well as they used to, in part because of widespread use.
    “Health agencies worldwide agree that we’re entering a post-antibiotic era,” explains Dr. Scott. “If we don’t change how we go after bacteria, more people will die from antibiotic-resistant infections than from cancer by 2050.”
    Bacteria replicate quickly, producing mutant offspring. Overusing antibiotics gives bacteria repeated opportunities to evolve mutations that resist treatment. Over time, the antibiotics kill all the susceptible bacteria, leaving behind only the stronger mutants that the antibiotics can’t kill.
    One strategy physicians are using to modernize the way we treat bacterial infections is antibiotic cycling. Healthcare providers rotate between different antibiotics over specific time periods. Changing between different drugs gives bacteria less time to evolve resistance to any one class of antibiotic. Cycling can even make bacteria more susceptible to other antibiotics.
    “Drug cycling shows a lot of promise in effectively treating diseases,” says study first author and medical student Davis Weaver, PhD. “The problem is that we don’t know the best way to do it. Nothing’s standardized between hospitals for which antibiotic to give, for how long and in what order.”
    Study co-author Jeff Maltas, PhD, a postdoctoral fellow at Cleveland Clinic, uses computer models to predict how a bacterium’s resistance to one antibiotic will make it weaker to another. He teamed up with Dr. Weaver to see if data-driven models could predict drug cycling regimens that minimize antibiotic resistance and maximize antibiotic susceptibility, despite the random nature of how bacteria evolve.

    Dr. Weaver led the charge to apply reinforcement learning, which teaches a computer to learn from its mistakes and successes to determine the best strategy to complete a task, to the drug cycling model. This study is among the first to apply reinforcement learning to antibiotic cycling regimens, Drs. Weaver and Maltas say.
    “Reinforcement learning is an ideal approach because you just need to know how quickly the bacteria are growing, which is relatively easy to determine,” explains Dr. Weaver. “There’s also room for human variations and errors. You don’t need to measure the growth rates perfectly down to the exact millisecond every time.”
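    A minimal sketch of that reinforcement-learning loop (a toy Q-learning agent on a made-up two-drug environment, not the team’s published model): the only feedback the agent needs is how fast the bacteria grow after each dose.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy environment: 3 resistance states, 2 drugs. growth[state, drug] is the (made-up)
    # bacterial growth rate observed after giving that drug in that state.
    growth = np.array([[0.2, 0.9],
                       [0.8, 0.3],
                       [0.5, 0.5]])
    # Which resistance state the population evolves to after each (state, drug) pair.
    transition = {0: {0: 1, 1: 0}, 1: {0: 1, 1: 2}, 2: {0: 0, 1: 1}}

    n_states, n_drugs = growth.shape
    Q = np.zeros((n_states, n_drugs))
    alpha, gamma, eps = 0.1, 0.9, 0.2            # learning rate, discount, exploration rate

    for episode in range(2000):
        state = int(rng.integers(n_states))
        for step in range(10):
            # Epsilon-greedy choice of which antibiotic to give next.
            drug = int(rng.integers(n_drugs)) if rng.random() < eps else int(np.argmax(Q[state]))
            reward = -growth[state, drug]        # slower bacterial growth = better outcome
            nxt = transition[state][drug]
            Q[state, drug] += alpha * (reward + gamma * Q[nxt].max() - Q[state, drug])
            state = nxt

    print("Learned drug choice per resistance state:", np.argmax(Q, axis=1))
    ```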
    The research team’s AI was able to figure out the most efficient antibiotic cycling plans to treat multiple strains of E. coli and prevent drug resistance. The study shows that AI can support complex decision-making like calculating antibiotic treatment schedules, Dr. Maltas says.
    Dr. Weaver explains that in addition to managing an individual patient’s infection, the team’s AI model can inform how hospitals treat infections across the board. He and his research team are also working to expand their work beyond bacterial infections into other deadly diseases.
    “This idea isn’t limited to bacteria; it can be applied to anything that can evolve treatment resistance,” he says. “In the future we believe these types of AI can be used to manage drug-resistant cancers, too.”

  • New study reveals how AI can enhance flexibility, efficiency for customer service centers

    Whenever you call a customer service contact center, the team on the other end of the line typically has three goals: to reduce their response time, solve your problem and do it within the shortest service time possible.
    However, resolving your problem might entail a significant time investment, potentially clashing with an overarching business objective to keep service duration to a minimum. These conflicting priorities can be commonplace for customer service contact centers, which often rely on the latest technology to meet customers’ needs.
    To pursue those conflicting demands, these organizations practice what’s referred to as ambidexterity, and there are three different modes to achieve it: structural separation, behavioral integration and sequential alternation. So, what role might artificial intelligence (AI) systems play in improving how these organizations move from one ambidexterity mode to another to accomplish their tasks?
    New research involving the School of Management at Binghamton University, State University of New York explored that question. Using data from different contact center sites, researchers examined the impact of AI systems on a customer service organization’s ability to shift across ambidexterity modes.
    The key takeaway: it’s a delicate balancing act; AI is a valuable asset, so long as it’s used properly, though these organizations shouldn’t rely on it exclusively to guide their strategies.
    Associate Professor Sumantra Sarkar, who helped conduct the research, said the study’s goal was to understand better how organizations today might use AI to guide their transition from one ambidexterity mode to another because certain structures or approaches might be more beneficial from one month to the next.
    “Customer service organizations often balance exploring the latest technology to gain new insights with exploiting it to boost efficiency and, therefore, save money,” Sarkar said. “This dichotomy is what ambidexterity is all about, exploring new technology to gain new insights and exploiting it to gain efficiency.”
    As part of the three-year study, researchers examined the practices of five contact center sites: two global banks, one national bank in a developing country, a telecommunication Fortune 500 company in South Asia and a global infrastructure vendor in telecommunications hardware.

    While many customer service organizations have spent recent years investing in AI, assuming that not doing so could lead to customer dissatisfaction, the researchers found these organizations haven’t used AI to its full potential. They have primarily used it for self-service applications.
    Some of the AI-assisted tasks researchers tracked at those sites included:
      • using AI systems to automatically open applications, send emails and transfer information from one system to another
      • approving or disapproving loan applications
      • providing personalized service based on the customer’s data and contact history
    Researchers determined that while it’s beneficial for customer service companies to be open to harnessing the benefits and navigating any challenges of AI systems as a guide to their business strategies, they should not do so at the expense of supporting quality professional development and ongoing learning opportunities for their staff.
    Sarkar said that to fully utilize AI’s benefits, those leading customer service organizations need to examine every customer touchpoint and identify opportunities to enhance the customer experience while boosting the operation’s efficiency.
    As a result, Sarkar said newcomers in this technology-savvy industry should study how companies with 20 or 30 years of experience have adapted to changes in technology, especially AI, over that time before forming their own business strategies.
    “Any business is a balancing game because what you decide to do at the start of the year based on a forecast has to be revised over and over again,” Sarkar said. “Since there’s that added tension within customer service organizations of whether they want to be more efficient or explore new areas, they have to work even harder at striking that balance. Using AI in the right way effectively helps them accomplish that.”