More stories

  • Using Artificial Intelligence to Predict Life-Threatening Bacterial Disease in Dogs

    Leptospirosis, a disease that dogs can get from drinking water contaminated with Leptospira bacteria, can cause kidney failure, liver disease and severe bleeding into the lungs. Early detection of the disease is crucial and may mean the difference between life and death.
    Veterinarians and researchers at the University of California, Davis, School of Veterinary Medicine have discovered a technique to predict leptospirosis in dogs through the use of artificial intelligence. After many months of testing various models, the team has developed one that outperformed traditional testing methods and provided accurate early detection of the disease. The groundbreaking discovery was published in the Journal of Veterinary Diagnostic Investigation.
    “Traditional testing for Leptospira lacks sensitivity early in the disease process,” said lead author Krystle Reagan, a board-certified internal medicine specialist and assistant professor focusing on infectious diseases. “Detection also can take more than two weeks because of the need to demonstrate a rise in the level of antibodies in a blood sample. Our AI model eliminates those two roadblocks to a swift and accurate diagnosis.”
    The research involved historical data of patients at the UC Davis Veterinary Medical Teaching Hospital that had been tested for leptospirosis. Routinely collected blood work from these 413 dogs was used to train an AI prediction model. Over the next year, the hospital treated an additional 53 dogs with suspected leptospirosis. The model correctly identified all nine dogs that were positive for leptospirosis (100% sensitivity). The model also correctly identified approximately 90% of the 44 dogs that were ultimately leptospirosis negative.
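    The study itself does not publish code, but the workflow it describes (training a classifier on routinely collected blood-work values, then checking sensitivity and specificity on later, prospectively collected cases) can be sketched in a few lines. The file names, feature columns and choice of a gradient-boosted classifier below are illustrative assumptions, not the published UC Davis pipeline.

    ```python
    # Minimal sketch of the general approach: fit a classifier on historical,
    # labeled blood-work records, then evaluate it on cases seen the following
    # year. File names and the "lepto_positive" label column are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import confusion_matrix

    train = pd.read_csv("historical_cases.csv")      # e.g., the 413 training dogs
    X_train = train.drop(columns=["lepto_positive"])
    y_train = train["lepto_positive"]

    test = pd.read_csv("prospective_cases.csv")      # e.g., the 53 suspected cases
    X_test = test.drop(columns=["lepto_positive"])
    y_test = test["lepto_positive"]

    model = GradientBoostingClassifier().fit(X_train, y_train)
    pred = model.predict(X_test)

    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print("sensitivity:", tp / (tp + fn))   # fraction of true positives detected
    print("specificity:", tn / (tn + fp))   # fraction of negatives correctly ruled out
    ```

    In the prospective set reported above, 100% sensitivity means all nine truly positive dogs were flagged, and roughly 90% specificity means about 40 of the 44 negative dogs were correctly ruled out.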
    The goal is for the model to become an online resource where veterinarians can enter patient data and receive a timely prediction.
    “AI-based clinical decision making is going to be the future for many aspects of veterinary medicine,” said School of Veterinary Medicine Dean Mark Stetter. “I am thrilled to see UC Davis veterinarians and scientists leading that charge. We are committed to putting resources behind AI ventures and look forward to partnering with researchers, philanthropists, and industry to advance this science.”
    Detection model may help people
    Leptospirosis is a life-threatening zoonotic disease, meaning it can transfer from animals to humans. As the disease is also difficult to diagnose in people, Reagan hopes the technology behind this groundbreaking detection model has translational ability into human medicine.
    “My hope is this technology will be able to recognize cases of leptospirosis in near real time, giving clinicians and owners important information about the disease process and prognosis,” said Reagan. “As we move forward, we hope to apply AI methods to improve our ability to quickly diagnose other types of infections.”
    Reagan is a founding member of the school’s Artificial Intelligence in Veterinary Medicine Interest Group, which comprises veterinarians promoting the use of AI in the profession. This research was done in collaboration with members of UC Davis’ Center for Data Science and Artificial Intelligence Research, led by professor of mathematics Thomas Strohmer. He and his students helped build the algorithm.
    Reagan’s group is actively pursuing AI-based outcome prediction for other types of infections, including a prediction model for antimicrobial-resistant infections, a growing problem in both veterinary and human medicine. Previously, the group developed an AI algorithm to predict Addison’s disease with an accuracy rate greater than 99%.
    Funding support comes from the National Science Foundation.
    Story Source:
    Materials provided by University of California, Davis. Original written by Rob Warren. Note: Content may be edited for style and length.

  • 'I don't even remember what I read': People enter a 'dissociative state' when using social media

    Sometimes when we are reading a good book, it’s like we are transported into another world and we stop paying attention to what’s around us.
    Researchers at the University of Washington wondered if people enter a similar state of dissociation when surfing social media, and if that explains why users might feel out of control after spending so much time on their favorite app.
    The team watched how participants interacted with a Twitter-like platform to show that some people are spacing out while they’re scrolling. Researchers also designed intervention strategies that social media platforms could use to help people retain more control over their online experiences.
    The group presented the project May 3 at the CHI 2022 conference in New Orleans.
    “I think people experience a lot of shame around social media use,” said lead author Amanda Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “One of the things I like about this framing of ‘dissociation’ rather than ‘addiction’ is that it changes the narrative. Instead of: ‘I should be able to have more self-control,’ it’s more like: ‘We all naturally dissociate in many ways throughout our day — whether it’s daydreaming or scrolling through Instagram, we stop paying attention to what’s happening around us.'”
    There are multiple types of dissociation, including trauma-based dissociation and the everyday dissociation associated with spacing out or focusing intently on a task.

  • Novel AI algorithm for digital pathology analysis

    Digital pathology is an emerging field that deals mainly with microscopy images derived from patient biopsies. Because of their high resolution, most of these whole slide images (WSIs) are very large, typically exceeding a gigabyte (GB), so standard image analysis methods cannot handle them efficiently.
    Seeing a need, researchers from Boston University School of Medicine (BUSM) have developed a novel artificial intelligence (AI) algorithm based on a framework called representation learning to classify lung cancer subtype based on lung tissue images from resected tumors.
    “We are developing novel AI-based methods that can bring efficiency to assessing digital pathology data. Pathology practice is in the midst of a digital revolution. Computer-based methods are being developed to assist the expert pathologist. Also, in places where there is no expert, such methods and technologies can directly assist diagnosis,” explains corresponding author Vijaya B. Kolachalama, PhD, FAHA, assistant professor of medicine and computer science at BUSM.
    The researchers developed a graph-based vision transformer for digital pathology called Graph Transformer (GTP) that leverages a graph representation of pathology images and the computational efficiency of transformer architectures to perform analysis on the whole slide image.
    “Translating the latest advances in computer science to digital pathology is not straightforward and there is a need to build AI methods that can exclusively tackle the problems in digital pathology,” explains co-corresponding author Jennifer Beane, PhD, associate professor of medicine at BUSM.
    Using whole slide images and clinical data from three publicly available national cohorts, they then developed a model that could distinguish between lung adenocarcinoma, lung squamous cell carcinoma, and adjacent non-cancerous tissue. Over a series of studies and sensitivity analyses, they showed that their GTP framework outperforms current state-of-the-art methods used for whole slide image classification.
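    As a rough illustration of the idea, rather than the published GTP architecture, the sketch below treats each tissue patch as a graph node carrying a feature vector, mixes features between spatially neighboring patches, and applies a standard transformer encoder plus pooling to produce a slide-level label (adenocarcinoma, squamous cell carcinoma or non-cancerous tissue). The dimensions, the single propagation step and the toy inputs are assumptions made for brevity.

    ```python
    # Minimal sketch of a graph-plus-transformer classifier for whole slide images.
    import torch
    import torch.nn as nn

    class SlideGraphTransformer(nn.Module):
        def __init__(self, feat_dim=512, n_classes=3):
            super().__init__()
            self.proj = nn.Linear(feat_dim, 256)
            layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(256, n_classes)    # LUAD vs. LUSC vs. normal

        def forward(self, patch_feats, adj):
            # patch_feats: (n_patches, feat_dim) embeddings of tissue patches
            # adj: (n_patches, n_patches) adjacency of spatially neighboring patches
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            x = self.proj(adj @ patch_feats / deg)   # one graph-propagation step
            x = self.encoder(x.unsqueeze(0))         # transformer over all patch nodes
            return self.head(x.mean(dim=1))          # mean-pool nodes -> slide logits

    # Toy usage: 1,000 patches with 512-dim features and a sparse random adjacency.
    feats = torch.randn(1000, 512)
    adj = (torch.rand(1000, 1000) > 0.99).float()
    print(SlideGraphTransformer()(feats, adj).shape)  # torch.Size([1, 3])
    ```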
    They believe their machine learning framework has implications beyond digital pathology. “Researchers who are interested in the development of computer vision approaches for other real-world applications can also find our approach to be useful,” they added.
    These findings appear online in the journal IEEE Transactions on Medical Imaging.
    Funding for this study was provided by grants from the National Institutes of Health (R21-CA253498, R01-HL159620), Johnson & Johnson Enterprise Innovation, Inc., the American Heart Association (20SFRN35460031), the Karen Toffler Charitable Trust, and the National Science Foundation (1551572, 1838193).
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.

  • New recipe for restaurant, app contracts

    A novel contract proposed by a University of Texas at Dallas researcher and his colleagues could help alleviate key sources of conflict between restaurants and food-delivery platforms.
    In a study published online March 28 in the INFORMS journal Management Science, Dr. Andrew Frazelle, assistant professor of operations management in the Naveen Jindal School of Management, and co-authors Dr. Pnina Feldman of Boston University and Dr. Robert Swinney of Duke University examined how to best structure relationships between food-delivery platforms and the restaurants with which they partner.
    Other platforms in the sharing economy, such as ride-hailing and vacation-rental services, allow people to sell access to resources that would otherwise be generating no revenue for them, Frazelle said. The interests of the resource owner and the platform are reasonably well aligned in that more transactions are good for both.
    “However, restaurant delivery is different,” Frazelle said. “Delivery orders represent incremental business on top of the restaurant’s existing dine-in operation. More business sounds good, but it comes at the cost of a commission charged by the delivery platform.”
    Platforms such as Grubhub, DoorDash and Uber Eats collect customer orders online, transmit them to restaurants and deliver the orders to customers. While this service helps restaurants expand their markets, the study found the relationship has inherent flaws.
    The most common contractual relationship between platforms and restaurants, in which the platform takes a commission, or a percentage cut, of each delivery order, has two key issues, according to the study.

  • High school students measure Earth's magnetic field from ISS

    A group of high school students used a tiny, inexpensive computer to try to measure Earth’s magnetic field from the International Space Station, showing a way to affordably explore and understand our planet.
    In the American Journal of Physics, published on behalf of the American Association of Physics Teachers by AIP Publishing, three high school students from Portugal, along with their faculty mentor, report the results of their project. The students programmed an add-on board for the Raspberry Pi computer to take measurements of Earth’s magnetic field in orbit. This add-on component, known as the Sense HAT, contained a magnetometer, gyroscope, accelerometer, and sensors for temperature, pressure, and humidity.
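    The students’ flight code is not reproduced in the article, but reading the Sense HAT magnetometer from Python is a one-line call, so data collection of the kind described here can be sketched briefly. The one-minute sampling interval, run length and file name below are illustrative assumptions.

    ```python
    # Hypothetical logging sketch: sample the Sense HAT magnetometer at regular
    # intervals and write the readings to a CSV file for later comparison with a
    # reference field model such as the IGRF.
    import csv
    import time
    from datetime import datetime, timezone

    from sense_hat import SenseHat   # Raspberry Pi Sense HAT library

    sense = SenseHat()

    with open("magnetometer_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["utc_time", "mag_x_uT", "mag_y_uT", "mag_z_uT"])
        for _ in range(180):                  # roughly three hours at one sample per minute
            mag = sense.get_compass_raw()     # raw field components in microtesla
            writer.writerow([datetime.now(timezone.utc).isoformat(),
                             mag["x"], mag["y"], mag["z"]])
            time.sleep(60)
    ```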
    The European Space Agency teamed up with the U.K.’s Raspberry Pi Foundation to hold a contest for high school students. The contest, known as the Astro Pi Challenge, required the students to program a Raspberry Pi computer with code to be run aboard the space station.
    The students used the data acquired from the space station to map out Earth’s magnetic field and compared their results to data provided by the International Geomagnetic Reference Field (IGRF), which uses measurements from observatories and satellites to compute Earth’s magnetic field.
    “I saw the Astro Pi challenge as an opportunity to broaden my knowledge and skill set, and it ended up introducing me to the complex but exciting reality of the practical world,” Lourenço Faria, co-author and one of the students involved in the project, said.
    The IGRF data is updated every five years, so the students compared their measurements, taken in April 2021, with the latest IGRF data from 2020. They found their data differed from the IGRF results by a significant, but fixed, amount. This difference could be due to a static magnetic field inside the space station.
    The students repeated their analysis using another 15 orbits’ worth of ISS data and found a slight improvement in the results. The students found it surprising that the main features of Earth’s magnetic field could be reconstructed with only three hours’ worth of measurements from their low-cost magnetometer aboard the space station.
    Although this project was carried out aboard the space station, it could easily be adapted to ground-based measurements using laboratory equipment or magnetometer apps for smartphones.
    “Taking measurements around the globe and sharing data via the internet or social media would make for an interesting science project that could connect students in different countries,” said Nuno Barros e Sá, co-author and faculty mentor for the students.
    The article “Modeling the Earth’s magnetic field” is authored by Nuno Barros e Sá, Lourenço Faria, Bernardo Alves, and Miguel Cymbron. The article will appear in American Journal of Physics on May 23, 2022.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Spinning is key for line-dancing electrons in iron selenide

    Rice University quantum physicists are part of an international team that has answered a puzzling question at the forefront of research into iron-based superconductors: Why do electrons in iron selenide dance to a different tune when they move right and left rather than forward and back?
    A research team led by Xingye Lu at Beijing Normal University, Pengcheng Dai at Rice and Thorsten Schmitt at the Paul Scherrer Institute (PSI) in Switzerland used resonant inelastic X-ray scattering (RIXS) to measure the behavior of electron spins in iron selenide at high energy levels.
    Spin is the property of electrons related to magnetism, and the researchers discovered spins in iron selenide begin behaving in a directionally dependent way at the same time the material begins exhibiting directionally dependent electronic behavior, or nematicity. The team’s results were published online in Nature Physics.
    Electronic nematicity is believed to be an important ingredient for bringing about superconductivity in iron selenide and similar iron-based materials. First discovered in 2008, these iron-based superconductors number in the dozens. All become superconductors at very cold temperatures, and most exhibit nematicity before they reach the critical temperature where superconductivity begins.
    Whether nematicity helps or hinders the onset of superconductivity is unclear. But the results of the high-energy spin experiments at PSI’s Swiss Light Source are a surprise because iron selenide is the only iron-based superconductor in which nematicity occurs in the absence of a long-range magnetic ordering of electron spins.
    “Iron selenide has something special going for it,” said Rice study co-author Qimiao Si, who, like Dai, is a member of the Rice Quantum Initiative. “Being nematic without long-range magnetic order provides an extra knob to access the physics of the iron-based superconductors. In this work, the experiment uncovered something truly striking, namely that high-energy spin excitations are dispersive and undamped, meaning they have a well-defined energy-versus-momentum relationship.”
    In all iron-based superconductors, iron atoms are arranged in 2D sheets that are sandwiched between top and bottom sheets of other elements, selenium in the case of iron selenide. The atoms in the 2D iron sheets are spaced in checkerboard fashion, exactly the same distance from one another in both the left-right and forward-back directions. But as the materials are cooled near the point of superconductivity, the iron sheets undergo a slight structural shift. Instead of exact squares, the atoms form oblong rhombuses like baseball diamonds, where the distance from home plate to second base is shorter than the distance from first to third base. Electronic nematicity occurs along with this shift, taking the form of increased or decreased electrical resistance or conductivity only in the direction of home-to-second or first-to-third.

  • Haptics device creates realistic virtual textures

    Technology has allowed us to immerse ourselves in a world of sights and sounds from the comfort of our home, but there’s something missing: touch.
    Tactile sensation is an incredibly important part of how humans perceive their reality. Haptic devices, which produce extremely specific vibrations that mimic the sensation of touch, are a way to bring that third sense to life. However, as far as haptics have come, humans are incredibly particular about whether or not something feels “right,” and virtual textures don’t always hit the mark.
    Now, researchers at the USC Viterbi School of Engineering have developed a new method for computers to achieve that true texture — with the help of human beings.
    Called a preference-driven model, the framework uses our ability to distinguish between the details of certain textures as a tool in order to give these virtual counterparts a tune-up.
    The research was published in IEEE Transactions on Haptics by three USC Viterbi Ph.D. students in computer science, Shihan Lu, Mianlun Zheng and Matthew Fontaine, as well as Stefanos Nikolaidis, USC Viterbi assistant professor in computer science, and Heather Culbertson, USC Viterbi WiSE Gabilan Assistant Professor in Computer Science.
    “We ask users to compare their feeling between the real texture and the virtual texture,” Lu, the first author, explained. “The model then iteratively updates a virtual texture so that the virtual texture can match the real one in the end.”
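    The published model is not reproduced here, but the general shape of such a preference-driven loop can be sketched: render two candidate parameter sets, ask the user which feels closer to the real surface, and move the search toward the preferred one. The texture parameters, the shrinking perturbation schedule and the stand-in ask_user prompt below are illustrative assumptions rather than the USC implementation.

    ```python
    # Hypothetical preference-driven tuning loop for virtual texture parameters.
    import random

    def ask_user(current, candidate):
        """Stand-in for rendering both textures and asking which feels more real."""
        print("current:", current, " candidate:", candidate)
        answer = input("Does the current texture feel closer to the real one? [y/n] ")
        return answer.strip().lower() == "y"

    def refine_texture(initial, iterations=20, step=0.2):
        best = dict(initial)
        for _ in range(iterations):
            # Propose a perturbed candidate around the current best guess.
            candidate = {k: v + random.uniform(-step, step) for k, v in best.items()}
            if not ask_user(best, candidate):
                best = candidate          # the user preferred the new candidate
            step *= 0.9                   # narrow the search as preferences accumulate
        return best

    params = refine_texture({"amplitude": 0.5, "roughness": 1.0})
    print("tuned virtual texture parameters:", params)
    ```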
    According to Fontaine, the idea first emerged when the three took a Haptic Interfaces and Virtual Environments class taught by Culbertson in the fall of 2019. They drew inspiration from the art application Picbreeder, which generates images based on a user’s preferences over and over until it reaches the desired result.

  • Long-hypothesized 'next generation wonder material' created

    For over a decade, scientists have attempted to synthesize a new form of carbon called graphyne, with limited success. That effort has now succeeded, thanks to new research from the University of Colorado Boulder.
    Graphyne has long been of interest to scientists because of its similarities to the “wonder material” graphene, another form of carbon that is highly valued by industry and whose research was awarded the Nobel Prize in Physics in 2010. However, despite decades of work and theorizing, only a few fragments had ever been created before now.
    This research, announced last week in Nature Synthesis, fills a longstanding gap in carbon material science, potentially opening brand-new possibilities for electronics, optics and semiconducting material research.
    “The whole audience, the whole field, is really excited that this long-standing problem, or this imaginary material, is finally getting realized,” said Yiming Hu, lead author on the paper and 2022 doctoral graduate in chemistry.
    Scientists have long been interested in the construction of new or novel carbon allotropes, or forms of carbon, because of carbon’s usefulness to industry, as well as its versatility.
    Carbon allotropes can be constructed in different ways, depending on how sp2-, sp3- and sp-hybridized carbon (the different ways carbon atoms can bond to other atoms), and their corresponding bonds, are utilized. The most well-known carbon allotropes are graphite (used in tools like pencils and in batteries) and diamond, which are made of sp2 carbon and sp3 carbon, respectively.