More stories

  • Saving endangered species: New AI method counts manatee clusters in real time

    Manatees are an endangered species and are highly sensitive to their environment. Because of their voracious appetites, they often spend up to eight hours a day grazing for food in shallow waters, making them vulnerable to environmental changes and other risks.
    Accurately counting manatee aggregations within a region is not only biologically meaningful for observing their habits, but also crucial for designing safety rules for boaters and divers and for scheduling nursing, intervention, and other conservation plans. Nevertheless, counting manatees is challenging.
    Because manatees tend to live in herds, they often block one another when viewed from the surface, so smaller manatees are likely to be partially or completely hidden. In addition, water reflections can make manatees hard to see, and they can also be mistaken for other objects such as rocks and branches.
    While aerial survey data are used in some regions to count manatees, this method is time-consuming and costly, and its accuracy depends on factors such as observer bias, weather conditions and time of day. A low-cost method that provides a real-time count is therefore needed, so that ecologists can be alerted to threats early and act proactively to protect manatees.
    Artificial intelligence is used in a wide spectrum of fields, and now, researchers from Florida Atlantic University’s College of Engineering and Computer Science have harnessed its powers to help save the beloved manatee. They are among the first to use a deep learning-based crowd counting approach to automatically count the number of manatees in a designated region, using images captured from CCTV cameras, which are readily available, as input.
    This pioneering study, published in Scientific Reports, not only addresses the technical challenges of counting in complex outdoor environments but also offers potential ways to aid endangered species.
    To determine manatee densities and calculate their numbers, the researchers used generic images captured from surveillance videos of the water surface. They then applied a kernel tailored to manatees’ shape, the Anisotropic Gaussian Kernel (AGK), to transform the images into manatee-customized density maps that represent the animals’ elongated body shapes.

    Although many counting methods exist, most are applied to human crowds, owing to the relevance of crowd counting to applications such as urban planning and public safety.
    To save labeling costs, the researchers used line-based annotation, marking each manatee with a single straight line. The goal of the study was to learn to count the objects within a scene while keeping the labels needed to supervise counting as cheap as possible, as illustrated in the sketch below.
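    The sketch below is a minimal, hypothetical illustration of how such a line label could be expanded into an anisotropic Gaussian kernel and accumulated into a density map whose sum recovers the count; the annotation format (x, y, line length, angle), the function names, and the parameter choices are assumptions made for illustration, not the authors’ released implementation (see the GitHub link below for that).

```python
import numpy as np

def anisotropic_gaussian(shape, center, sigma_long, sigma_short, angle):
    """One elliptical (anisotropic) Gaussian placed on an image grid.

    The kernel is elongated along `angle` (radians), roughly matching a
    manatee's body axis, and normalized to integrate to 1, so summing the
    final density map recovers the number of annotated animals.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - center[0], ys - center[1]
    # Rotate pixel offsets into the kernel's own long/short axes.
    u = dx * np.cos(angle) + dy * np.sin(angle)
    v = -dx * np.sin(angle) + dy * np.cos(angle)
    g = np.exp(-0.5 * ((u / sigma_long) ** 2 + (v / sigma_short) ** 2))
    return g / g.sum()

def density_map(shape, annotations):
    """Build a density map from per-manatee line annotations.

    Each annotation is (x, y, length, angle): the line's midpoint, length,
    and orientation set the kernel's position, spread, and rotation.
    """
    dmap = np.zeros(shape, dtype=np.float64)
    for x, y, length, angle in annotations:
        dmap += anisotropic_gaussian(shape, (x, y),
                                     sigma_long=length / 2,
                                     sigma_short=length / 6,
                                     angle=angle)
    return dmap

# Toy example: two annotated manatees in a 240 x 320 frame.
dmap = density_map((240, 320), [(100, 80, 60, 0.3), (220, 150, 50, -0.8)])
print(round(dmap.sum()))  # -> 2, the number of annotated animals
```

    A counting network trained on such maps then predicts a density map directly from a new image, and the estimated count is simply the sum over the predicted map.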
    Results of the study reveal that the FAU-developed method outperformed other baselines, including the traditional Gaussian kernel-based approach. Transitioning from dot to line labeling also improved wheat head counting accuracy, which plays an important role in crop yield estimation, suggesting broader applications for convex-shaped objects in diverse contexts. The approach worked particularly well when an image contained a high density of manatees against a complicated background.
    By formulating manatee counting as a deep neural network density estimation task, the approach balances labeling cost against counting accuracy. As a result, the method delivers a simple, high-throughput solution for manatee counting that requires very little labeling effort. A direct impact is that state parks can use their existing CCTV cameras with this method to track, in real time, how many manatees are in different regions.
    “There are many ways to use computational methods to help save endangered species, such as detecting the presence of the species and counting them to collect information about numbers and density,” said Xingquan (Hill) Zhu, Ph.D., senior author, an IEEE Fellow and a professor in FAU’s Department of Electrical Engineering and Computer Science. “Our method considered distortions caused by the perspective between the water space and the image plane. Since the shape of the manatee is closer to an ellipse than a circle, we used AGK to best represent the manatee contour and estimate manatee density in the scene. This allows the density map to be more accurate, in terms of mean absolute error and root mean square error, than other alternatives in estimating manatees’ numbers.”
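    For reference, the two error measures named in the quote are the standard ones; a minimal example with made-up counts (not figures from the study) is shown below.

```python
import numpy as np

# Hypothetical ground-truth and predicted manatee counts for five frames.
true_counts = np.array([12, 7, 25, 3, 18])
pred_counts = np.array([11, 9, 22, 3, 20])

mae = np.mean(np.abs(pred_counts - true_counts))           # mean absolute error
rmse = np.sqrt(np.mean((pred_counts - true_counts) ** 2))  # root mean square error
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```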
    To validate their method and facilitate further research in this domain, the researchers developed a comprehensive manatee counting dataset, along with their source code, published through GitHub for public access at github.com/yeyimilk/deep-learning-for-manatee-counting.

    “Manatees are one of the wildlife species being affected by human-related threats. Therefore, calculating their numbers and gathering patterns in real time is vital for understanding their population dynamics,” said Stella Batalama, Ph.D., dean, FAU College of Engineering and Computer Science. “The methodology developed by professor Zhu and our graduate students provides a promising trajectory for broader applications, especially for convex-shaped objects, to improve counting techniques that may foretell better ecological results from management decisions.”
    Manatees can be found from Brazil to Florida and throughout the Caribbean islands. Some species, including the Florida manatee, are considered endangered by the International Union for Conservation of Nature.
    Study co-authors are FAU graduate students Zhiqiang Wang; Yiran Pang; and Cihan Ulus, also a teaching assistant, all within the Department of Electrical Engineering and Computer Science.
    The research was sponsored by the United States National Science Foundation.

  • Can AI be too good to use?

    Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is “trustworthy”: Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new paper published Dec. 7 in Frontiers in Artificial Intelligence poses a different question: What if an AI is just too good?
    Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, on the attitudes of the food industry toward adopting AI. A notable issue was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.
    For example, an AI system in a food business might reveal potential contamination with pathogens. Having that information could be a public benefit but also open the firm to future legal liability, even if the risk is very small.
    “The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted,” Alexander said.
    An on-ramp for AI
    Alexander and her co-authors, Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University, argue for a temporary “on-ramp” that would allow companies to begin using AI while exploring the benefits, risks and ways to mitigate them. This would also give the courts, legislators and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political and regulatory decisions.
    “We need ways for businesses to opt in and try out AI technology,” Alexander said. Subsidies, for example for digitizing existing records, might be especially helpful for small companies.
    “We’re really hoping to generate more research and discussion on what could be a significant issue,” Alexander said. “It’s going to take all of us to figure it out.”
    The work was supported in part by a grant from the USDA National Institute of Food and Agriculture. The AI Institute for Next Generation Food Systems is funded by a grant from USDA-NIFA and is one of 25 AI institutes established by the National Science Foundation in partnership with other agencies.

  • Artificial intelligence systems excel at imitation, but not innovation

    Artificial intelligence (AI) systems are often depicted as sentient agents poised to overshadow the human mind. But AI lacks the crucial human ability of innovation, researchers at the University of California, Berkeley have found.
    While children and adults alike can solve problems by finding novel uses for everyday objects, AI systems often lack the ability to view tools in a new way, according to findings published in Perspectives on Psychological Science, a journal of the Association for Psychological Science.
    AI language models like ChatGPT are passively trained on data sets containing billions of words and images produced by humans. This allows AI systems to function as a “cultural technology” similar to writing that can summarize existing knowledge, Eunice Yiu, a co-author of the article, explained in an interview. But unlike humans, they struggle when it comes to innovating on these ideas, she said.
    “Even young human children can produce intelligent responses to certain questions that [language learning models] cannot,” Yiu said. “Instead of viewing these AI systems as intelligent agents like ourselves, we can think of them as a new form of library or search engine. They effectively summarize and communicate the existing culture and knowledge base to us.”
    Yiu and Eliza Kosoy, along with their doctoral advisor and senior author on the paper, developmental psychologist Alison Gopnik, tested how the AI systems’ ability to imitate and innovate differs from that of children and adults. They presented 42 children ages 3 to 7 and 30 adults with text descriptions of everyday objects. In the first part of the experiment, 88% of children and 84% of adults were able to correctly identify which objects would “go best” with one another. For example, they paired a compass with a ruler instead of a teapot.
    In the next stage of the experiment, 85% of children and 95% of adults were also able to innovate on the expected use of everyday objects to solve problems. In one task, for example, participants were asked how they could draw a circle without using a typical tool such as a compass. Given the choice between a similar tool like a ruler, a dissimilar tool such as a teapot with a round bottom, and an irrelevant tool such as a stove, the majority of participants chose the teapot, a conceptually dissimilar tool that could nonetheless fulfill the same function as the compass by allowing them to trace the shape of a circle.
    When Yiu and colleagues provided the same text descriptions to five large language models, the models performed similarly to humans on the imitation task, with scores ranging from 59% for the worst-performing model to 83% for the best-performing model. The AIs’ answers to the innovation task were far less accurate, however. Effective tools were selected anywhere from 8% of the time by the worst-performing model to 75% by the best-performing model.

    “Children can imagine completely novel uses for objects that they have not witnessed or heard of before, such as using the bottom of a teapot to draw a circle,” Yiu said. “Large models have a much harder time generating such responses.”
    In a related experiment, the researchers noted, children were able to discover how a new machine worked just by experimenting and exploring. But when the researchers gave several large language models text descriptions of the evidence that the children produced, they struggled to make the same inferences, likely because the answers were not explicitly included in their training data, Yiu and colleagues wrote.
    These experiments demonstrate that AI’s reliance on statistically predicting linguistic patterns is not enough to discover new information about the world, Yiu and colleagues wrote.
    “AI can help transmit information that is already known, but it is not an innovator,” Yiu said. “These models can summarize conventional wisdom but they cannot expand, create, change, abandon, evaluate, and improve on conventional wisdom in the way a young human can.” The development of AI is still in its early days, though, and much remains to be learned about how to expand the learning capacity of AI, Yiu said. Taking inspiration from children’s curious, active, and intrinsically motivated approach to learning could help researchers design new AI systems that are better prepared to explore the real world, she said.

  • Made-to-order diagnostic tests may be on the horizon

    McGill University researchers have made a breakthrough in diagnostic technology, inventing a ‘lab on a chip’ that can be 3D-printed in just 30 minutes. The chip has the potential to make on-the-spot testing widely accessible.
    As part of a recent study, the results of which were published in the journal Advanced Materials, the McGill team developed capillaric chips that act as miniature laboratories. Unlike computer microprocessors, these chips are single-use and require no external power source — a simple paper strip suffices. They function through capillary action — the very phenomenon by which a spilled liquid on the kitchen table spontaneously wicks into the paper towel used to wipe it up.
    “Traditional diagnostics require peripherals, while ours can circumvent them. Our diagnostics are a bit like what the cell phone was to traditional desktop computers that required a separate monitor, keyboard and power supply to operate,” explains Prof. David Juncker, Chair of the Department of Biomedical Engineering at McGill and senior author on the study.
    At-home testing became crucial during the COVID-19 pandemic. But rapid tests have limited availability and can only drive one liquid across the strip, meaning most diagnostics are still done in central labs. Notably, the capillaric chips can be 3D-printed for various tests, including COVID-19 antibody quantification.
    The study brings 3D-printed home diagnostics one step closer to reality, though some challenges remain, such as regulatory approvals and securing necessary test materials. The team is actively working to make their technology more accessible, adapting it for use with affordable 3D printers. The innovation aims to speed up diagnoses, enhance patient care, and usher in a new era of accessible testing.
    “This advancement has the capacity to empower individuals, researchers, and industries to explore new possibilities and applications in a more cost-effective and user-friendly manner,” says Prof. Juncker. “This innovation also holds the potential to eventually empower health professionals with the ability to rapidly create tailored solutions for specific needs right at the point-of-care.”

  • New conductive, cotton-based fiber developed for smart textiles

    A single strand of fiber developed at Washington State University has the flexibility of cotton and the electrical conductivity of a polymer called polyaniline.
    The newly developed material showed good potential for wearable e-textiles. The WSU researchers tested the fibers with a system that powered an LED light and another that sensed ammonia gas, detailing their findings in the journal Carbohydrate Polymers.
    “We have one fiber in two sections: one section is the conventional cotton: flexible and strong enough for everyday use, and the other side is the conductive material,” said Hang Liu, WSU textile researcher and the study’s corresponding author. “The cotton can support the conductive material which can provide the needed function.”
    While more development is needed, the idea is to integrate fibers like these into apparel as sensor patches with flexible circuits. These patches could be part of uniforms for firefighters, soldiers or workers who handle chemicals, detecting hazardous exposures. Other applications include health monitoring or exercise shirts that can do more than current fitness monitors.
    “We have some smart wearables, like smart watches, that can track your movement and human vital signs, but we hope that in the future your everyday clothing can do these functions as well,” said Liu. “Fashion is not just color and style, as a lot of people think about it: fashion is science.”
    In this study, the WSU team worked to overcome the challenges of mixing the conductive polymer with cotton cellulose. Polymers are substances with very large molecules that have repeating patterns. In this case, the researchers used polyaniline, also known as PANI, a synthetic polymer with conductive properties already used in applications such as printed circuit board manufacturing.
    While intrinsically conductive, polyaniline is brittle and, by itself, cannot be made into a fiber for textiles. To solve this, the WSU researchers dissolved cotton cellulose from recycled t-shirts into one solution and the conductive polymer into a separate solution. These two solutions were then merged side by side, and the material was extruded to make a single fiber.

    The result showed good interfacial bonding, meaning the molecules from the different materials would stay together through stretching and bending.
    Achieving the right mixture at the interface of cotton cellulose and polyaniline was a delicate balance, Liu said.
    “We wanted these two solutions to work so that when the cotton and the conductive polymer contact each other they mix to a certain degree to kind of glue together, but we didn’t want them to mix too much, otherwise the conductivity would be reduced,” she said.
    Additional WSU authors on this study included first author Wangcheng Liu as well as Zihui Zhao, Dan Liang, Wei-Hong Zhong and Jinwen Zhang. This research received support from the National Science Foundation and the Walmart Foundation Project.

  • AI chatbot shows potential as diagnostic partner

    Physician-investigators at Beth Israel Deaconess Medical Center (BIDMC) compared a chatbot’s probabilistic reasoning to that of human clinicians. The findings, published in JAMA Network Open, suggest that artificial intelligence could serve as a useful clinical decision support tool for physicians.
    “Humans struggle with probabilistic reasoning, the practice of making decisions based on calculating odds,” said the study’s corresponding author Adam Rodman, MD, an internal medicine physician and investigator in the department of Medicine at BIDMC. “Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies. We chose to evaluate probabilistic reasoning in isolation because it is a well-known area where humans could use support.”
    Basing their study on a previously published national survey of more than 550 practitioners performing probabilistic reasoning on five medical cases, Rodman and colleagues fed the same series of cases to the publicly available large language model (LLM) ChatGPT-4 and ran an identical prompt 100 times to generate a range of responses.
    The chatbot — just like the practitioners before it — was tasked with estimating the likelihood of a given diagnosis based on the patient’s presentation. Then, given test results such as chest radiography for pneumonia, mammography for breast cancer, a stress test for coronary artery disease and a urine culture for urinary tract infection, the chatbot updated its estimates.
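    This pretest-to-posttest update is conventionally formalized with Bayes’ theorem using a test’s sensitivity and specificity; the sketch below shows that standard calculation with made-up numbers, as a point of reference rather than the study’s actual prompt or data.

```python
def posttest_probability(pretest_prob, sensitivity, specificity, positive_result):
    """Update a disease probability after a test result using Bayes' theorem."""
    if positive_result:
        # P(disease | positive) = sens * p / (sens * p + (1 - spec) * (1 - p))
        num = sensitivity * pretest_prob
        den = num + (1 - specificity) * (1 - pretest_prob)
    else:
        # P(disease | negative) = (1 - sens) * p / ((1 - sens) * p + spec * (1 - p))
        num = (1 - sensitivity) * pretest_prob
        den = num + specificity * (1 - pretest_prob)
    return num / den

# Illustrative numbers only: suspected pneumonia with a 40% pretest probability,
# and a chest radiograph assumed to be ~85% sensitive and ~90% specific.
print(posttest_probability(0.40, 0.85, 0.90, positive_result=True))   # ~0.85
print(posttest_probability(0.40, 0.85, 0.90, positive_result=False))  # ~0.10
```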
    When test results were positive, it was something of a draw; the chatbot was more accurate in making diagnoses than the humans in two cases, similarly accurate in two cases and less accurate in one case. But when tests came back negative, the chatbot shone, demonstrating more accuracy in making diagnoses than humans in all five cases.
    “Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to overtreatment, more tests and too many medications,” said Rodman.
    But Rodman is less interested in how chatbots and humans perform toe-to-toe than in how the performance of highly skilled physicians might change in response to having these new supportive technologies available to them in the clinic. He and his colleagues are looking into that question.
    “LLMs can’t access the outside world — they aren’t calculating probabilities the way that epidemiologists, or even poker players, do. What they’re doing has a lot more in common with how humans make spot probabilistic decisions,” he said. “But that’s what is exciting. Even if imperfect, their ease of use and ability to be integrated into clinical workflows could theoretically make humans make better decisions,” he said. “Future research into collective human and artificial intelligence is sorely needed.”
    Co-authors included Thomas A. Buckley, University of Massachusetts Amherst; Arun K. Manrai, PhD, Harvard Medical School; Daniel J. Morgan, MD, MS, University of Maryland School of Medicine.
    Rodman reported receiving grants from the Gordon and Betty Moore Foundation. Morgan reported receiving grants from the Department of Veterans Affairs, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and the National Institutes of Health, and receiving travel reimbursement from the Infectious Diseases Society of America, the Society for Healthcare Epidemiology of America, the American College of Physicians, and the World Heart Health Organization, outside the submitted work.

  • Battle of the AIs in medical research: ChatGPT vs Elicit

    The use of generative AI in literature search suggests the possibility of efficiently collecting a vast amount of medical information, provided that users are well aware that the performance of generative AI is still in its infancy and that not all information presented is necessarily reliable. It is advised to use different generative AIs depending on the type of information needed.
    Can AI save us from the arduous and time-consuming task of academic research collection? An international team of researchers investigated the credibility and efficiency of generative AI as an information-gathering tool in the medical field.
    The research team, led by Professor Masaru Enomoto of the Graduate School of Medicine at Osaka Metropolitan University, fed identical clinical questions and literature selection criteria to two generative AIs: ChatGPT and Elicit. The results showed that while ChatGPT suggested fictitious articles, Elicit was efficient, suggesting multiple references within a few minutes with the same level of accuracy as the researchers.
    “This research was conceived out of our experience with managing vast amounts of medical literature over long periods of time. Access to information using generative AI is still in its infancy, so we need to exercise caution as the current information is not accurate or up-to-date,” said Dr. Enomoto. “However, ChatGPT and other generative AIs are constantly evolving and are expected to revolutionize the field of medical research in the future.”
    Their findings were published in Hepatology Communications.

  • Researchers safely integrate fragile 2D materials into devices

    Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-generation electronic devices.
    But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching.
    To overcome this challenge, researchers from MIT and elsewhere have developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials and the resulting interfaces pristine and free from defects.
    Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties.
    They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.
    Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces.
    However, such van der Waals integration of materials into fully functional devices is not always easy, says Farnaz Niroui, assistant professor of electrical engineering and computer science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work.

    “Van der Waals integration has a fundamental limit,” she explains. “Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities.”
    Niroui wrote the paper with lead author Peter Satterthwaite, an electrical engineering and computer science graduate student; Jing Kong, professor of EECS and a member of RLE; and others at MIT, Boston University, National Tsing Hua University in Taiwan, the National Science and Technology Council of Taiwan, and National Cheng Kung University in Taiwan. The research will be published in Nature Electronics.
    Advantageous attraction
    Making complex systems such as a computer chip with conventional fabrication techniques can get messy. Typically, a rigid material like silicon is chiseled down to the nanoscale, then interfaced with other components like metal electrodes and insulating layers to form an active device. Such processing can cause damage to the materials.
    Recently, researchers have focused on building devices and systems from the bottom up, using 2D materials and a process that requires sequential physical stacking. In this approach, rather than using chemical glues or high temperatures to bond a fragile 2D material to a conventional surface like silicon, researchers leverage van der Waals forces to physically integrate a layer of 2D material onto a device.
    Van der Waals forces are natural forces of attraction that exist between all matter. For example, a gecko’s feet can stick to the wall temporarily due to van der Waals forces. Though all materials exhibit a van der Waals interaction, depending on the material, the forces are not always strong enough to hold them together. For instance, a popular semiconducting 2D material known as molybdenum disulfide will stick to gold, a metal, but won’t directly transfer to insulators like silicon dioxide by just coming into physical contact with that surface.

    However, heterostructures made by integrating semiconductor and insulating layers are key building blocks of an electronic device. Previously, this integration has been enabled by bonding the 2D material to an intermediate layer like gold, then using this intermediate layer to transfer the 2D material onto the insulator, before removing the intermediate layer using chemicals or high temperatures.
    Instead of using this sacrificial layer, the MIT researchers embed the low-adhesion insulator in a high-adhesion matrix. This adhesive matrix is what makes the 2D material stick to the embedded low-adhesion surface, providing the forces needed to create a van der Waals interface between the 2D material and the insulator.
    Making the matrix
    To make electronic devices, they form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device.
    This smoothness is important, since gaps between the surface and 2D material can hamper van der Waals interactions. Then, the researchers prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack.
    “Once the hybrid surface is brought into contact with the 2D layer, without needing any high-temperatures, solvents, or sacrificial layers, it can pick up the 2D layer and integrate it with the surface. This way, we are allowing a van der Waals integration that would be traditionally forbidden, but now is possible and allows formation of fully functioning devices in a single step,” Satterthwaite explains.
    This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.
    And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors improve on those reported in previous studies, and can provide a platform toward studying and achieving the performance needed for practical electronics.
    Their approach can be done at scale to make larger arrays of devices. The adhesive matrix technique can also be used with a range of materials, and even with other forces to enhance the versatility of this platform. For instance, the researchers integrated graphene onto a device, forming the desired van der Waals interfaces using a matrix made with a polymer. In this case, adhesion relies on chemical interactions rather than van der Waals forces alone.
    In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials to study their intrinsic properties without the influence of processing damage, and develop new device platforms that leverage these superior functionalities.
    This research is funded, in part, by the U.S. National Science Foundation, the U.S. Department of Energy, the BUnano Cross-Disciplinary Fellowship at Boston University, and the U.S. Army Research Office. The fabrication and characterization procedures were carried out, largely, in the MIT.nano shared facilities.