More stories

  • Precision technology, machine learning lead to early diagnosis of calf pneumonia

    Monitoring dairy calves with precision technologies based on the “internet of things,” or IoT, leads to earlier diagnosis of calf-killing bovine respiratory disease, according to a new study. The novel approach — a result of crosscutting collaboration by a team of researchers from Penn State, the University of Kentucky and the University of Vermont — will offer dairy producers an opportunity to improve the economics of their farms.
    This is not your grandfather’s dairy farming strategy, notes lead researcher Melissa Cantor, assistant professor of precision dairy science in Penn State’s College of Agricultural Sciences. Cantor noted that new technology is becoming increasingly affordable, offering farmers opportunities to detect animal health problems soon enough to intervene, saving the calves and the investment they represent.
    IoT refers to embedded devices equipped with sensors, processing and communication abilities, software, and other technologies to connect and exchange data with other devices over the Internet. In this study, Cantor explained, IoT technologies such as wearable sensors and automatic feeders were used to closely watch and analyze the condition of calves.
    Such IoT devices generate a huge amount of data by closely monitoring the calves’ behavior. To make such data easier to interpret, and to provide clues to calf health problems, the researchers adopted machine learning — a branch of artificial intelligence that learns the hidden patterns in the data to discriminate between sick and healthy calves, given the input from the IoT devices.
    “We put leg bands on the calves, which record activity behavior data in dairy cattle, such as the number of steps and lying time,” Cantor said. “And we used automatic feeders, which dispense milk and grain and record feeding behaviors, such as the number of visits and liters of consumed milk. Information from those sources signaled when a calf’s condition was on the verge of deteriorating.”
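    The paper’s modelling details are not spelled out in this summary, but the general recipe is straightforward: daily behaviour features from the leg bands and feeders become inputs to a classifier that labels each calf as healthy or at risk. The Python sketch below is illustrative only; the file name, column names and the generic random-forest model are assumptions, not the authors’ actual pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical per-calf, per-day behaviour records; file and column names are illustrative.
df = pd.read_csv("calf_daily_behaviour.csv")
features = ["steps_per_day", "lying_minutes", "feeder_visits", "milk_litres"]

# Hold out part of the data, keeping the sick/healthy balance similar in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["brd_label"], test_size=0.3, stratify=df["brd_label"], random_state=0
)

# A generic classifier standing in for the study's model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```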
    Bovine respiratory disease is an infection of the respiratory tract that is the leading reason for antimicrobial use in dairy calves and accounts for 22% of calf mortalities. The costs and effects of the ailment can severely damage a farm’s economy, since raising dairy calves is one of a dairy farm’s largest economic investments.
    “Diagnosing bovine respiratory disease requires intensive and specialized labor that is hard to find,” Cantor said. “So, precision technologies based on IoT devices such as automatic feeders, scales and accelerometers can help detect behavioral changes before outward clinical signs of the disease are manifested.”
    In the study, data was collected from 159 dairy calves using precision livestock technologies and by researchers who performed daily physical health exams on the calves at the University of Kentucky. Researchers recorded both automatic data-collection results and manual data-collection results and compared the two.
    In findings recently published in IEEE Access, a peer-reviewed open-access scientific journal published by the Institute of Electrical and Electronics Engineers, the researchers reported that the proposed approach is able to identify calves that developed bovine respiratory disease sooner. Numerically, the system achieved an accuracy of 88% for labeling sick and healthy calves. Seventy percent of sick calves were predicted four days prior to diagnosis, and 80% of calves that developed a chronic case of the disease were detected within the first five days of sickness.
    “We were really surprised to find out that the relationship with the behavioral changes in those animals was very different than animals that got better with one treatment,” she said. “And nobody had ever looked at that before. We came up with the concept that if these animals actually behave differently, then there’s probably a chance that IoT technologies empowered with machine learning inference techniques could actually identify them sooner, before anybody can with the naked eye. That offers producers options.”
    Contributing to the research were: Enrico Casella, Department of Animal and Dairy Science, University of Wisconsin-Madison; Melissa Cantor, Department of Animal Science, Penn State University; Megan Woodrum Setser, Department of Animal and Food Sciences, University of Kentucky; Simone Silvestri, Department of Computer Science, University of Kentucky; and Joao Costa, Department of Animal and Veterinary Sciences, University of Vermont.
    This work was supported by the U.S. Department of Agriculture and the National Science Foundation.

  • ROSE: Revolutionary, nature-inspired soft embracing robotic gripper

    Although grasping objects is a relatively straightforward task for us humans, a lot of mechanics is involved in this seemingly simple action. Picking up an object requires fine control of the fingers, of their positioning, and of the pressure each finger applies, which in turn necessitates intricate sensing capabilities. It’s no wonder that robotic grasping and manipulation is a very active research area within the field of robotics.
    Today, industrial robotic hands have replaced humans in various complex and hazardous activities, including in restaurants, farms, factories, and manufacturing plants. In general, soft robotic grippers are better suited for tasks in which the objects to be picked up are fragile, such as fruits and vegetables. However, while soft robots are promising as harvesting tools, they usually share a common disadvantage: their price tag. Most soft robotic gripper designs require the intricate assembly of multiple pieces. This drives up development and maintenance costs.
    Fortunately, a research team from the Japan Advanced Institute of Science and Technology (JAIST), led by Associate Professor Van Anh Ho, has come up with a groundbreaking solution to these problems. Taking a leaf from nature, the researchers developed an innovative soft robotic gripper called ‘ROSE,’ which stands for ‘Rotation-based Squeezing Gripper.’ Details of ROSE’s design, as well as the results of their latest study, were presented at the Robotics: Science and Systems 2023 (RSS2023) conference.
    What makes ROSE so impressive is its design. The soft gripping part has the shape of a cylindrical funnel or sleeve and is connected to a hard circular base, which in turn is attached to the shaft of an actuator. The funnel must be placed over the object meant to be picked up, covering a decent portion of its surface area. Then, the actuator makes the base turn, which causes the flexible funnel’s skin to wrap tightly around the object. This mechanism was loosely inspired by the changing shapes of roses, which bloom during the day and close up during the night.
    ROSE offers substantial advantages compared to more conventional grippers. First, it is much less expensive to manufacture. The hard parts can all be 3D-printed, whereas the funnel itself can be easily produced using a mold and liquid silicone rubber. This ensures that the design is easily scalable and is suitable for mass production.
    Second, ROSE can easily pick up a wide variety of objects without complex control and sensing mechanisms. Unlike grippers that rely on finger-like structures, ROSE’s sleeve applies a gentler, more uniform pressure. This makes ROSE better suited for handling fragile produce, such as strawberries and pears, as well as slippery objects. Weighing less than 200 grams, the gripper can achieve an impressive payload-to-weight ratio of 6812%.
    Third, ROSE is extremely durable and sturdy. The team showed that it could successfully continue to pick up objects even after 400,000 trials. Moreover, the funnel still works properly in the presence of significant cracks or cuts. “The proposed gripper excels in demanding scenarios, as evidenced by its ability to withstand a severe test in which we cut the funnel into four separate sections at full height,” remarks Assoc. Prof. Ho. “This test underscores the gripper’s exceptional resilience and optimal performance in challenging conditions.”
    Finally, ROSE can be endowed with sensing capabilities. The researchers achieved this by placing multiple cameras on top of the circular base, pointing at the inside of the funnel. The funnel’s inner surface was covered with markers whose positions could be picked up by the cameras and analyzed through image-processing algorithms. This promising approach allows the size and shape of the grasped object to be estimated.
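    The paper’s vision pipeline is not detailed here; the sketch below only illustrates the general recipe of tracking printed markers on the funnel’s inner skin with an internal camera and using their displacement as a rough measure of deformation around a grasped object. The function names, threshold values and displacement metric are assumptions for illustration, not details from the ROSE study.

```python
import cv2
import numpy as np

def detect_markers(gray_frame):
    """Return (x, y) centroids of dark dot markers in a grayscale camera frame."""
    # Dark dots on a lighter silicone skin: inverse threshold, then find contours (OpenCV 4.x API).
    _, mask = cv2.threshold(gray_frame, 80, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] > 5:  # ignore tiny noise blobs
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(centroids)

def mean_marker_shift(rest_points, grasp_points):
    """Crude deformation measure: mean nearest-neighbour shift of markers between two frames."""
    if len(rest_points) == 0 or len(grasp_points) == 0:
        return 0.0
    shifts = [np.linalg.norm(rest_points - p, axis=1).min() for p in grasp_points]
    return float(np.mean(shifts))
```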
    The research team notes that ROSE could be an enticing option for various applications, including harvesting operations and sorting items in factories. It could also find a home in cluttered environments such as farms, professional kitchens, and warehouses. “The ROSE gripper holds significant potential to revolutionize gripping applications and gain widespread acceptance across various fields,” concludes Assoc. Prof. Ho. “Its straightforward yet robust and dependable design is set to inspire researchers and manufacturers to embrace it for a broad variety of gripping tasks in the near future.”

  • Improving urban planning with virtual reality

    What should the city we live in look like? How do structural changes affect the people who move around it? Cartographers at Ruhr University Bochum use virtual reality tools to explore these questions before a great deal of money is spent on building measures. Using the Unity3D game engine, they recreate scenarios in 3D that people can experience through immersion. They were able to show that the physical reaction to this experience is measurable. Julian Keil and Marco Weißmann from Professor Frank Dickmann’s team published their findings in KN — Journal of Cartography and Geographic Information on 1 May 2023 and in Applied Sciences on 13 May 2023.
    Lab kit for urban scenarios
    Construction measures that transform urban settings change the environment both for the people who live there permanently and for those who visit temporarily. The effects cannot always be foreseen. In such cases, it helps to recreate the setting in a 3D model that people can experience through immersion. To this end, the cartographers working with Marco Weißmann use software that was originally designed to program computer game environments. “We’ve developed a lab kit of sorts in which you can simulate an environment virtually, complete with traffic,” explains Weißmann. The researchers can use it to directly visualise the effects of planned structural changes: How does the traffic flow? Do cars and pedestrians get in each other’s way or not?
    Measuring the implicit effects of spaces
    Moreover, the space that surrounds us affects our well-being, even if we don’t always notice it. “People who’ve lived on a noisy street for a long time, for example, might think they don’t even hear the noise anymore,” says Julian Keil. “But we know that, objectively speaking, residents of such streets experience significantly higher stress levels than others.” To determine such implicit effects of urban planning measures before a lot of money has been poured into them, the cartography team developed a method to measure those effects in advance. For this purpose, they programmed an urban environment in virtual reality and had test participants experience the scenarios. At the same time, they measured the participants’ skin conductivity, which provides information about their stress level.
    They showed that higher traffic volume in a street clearly upset the participants, as measured by their skin conductivity. To corroborate the findings, a follow-up study is planned that will incorporate additional physiological measurements, including heart rate, blood pressure and pupil size, to capture the participants’ stress levels and emotions. “Until now, residents and other stakeholders have been involved in the planning stage of construction measures, but only in the form of surveys, i.e. explicit statements,” says Keil, whose background is in psychology. “Our method enables spatial planners to assess implicit effects of possible measures and to include them in the planning, too.”
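    As a rough illustration of how such skin-conductance readings can be compared across conditions (this is not the published analysis, and the numbers are invented), one could run a paired test on per-participant means recorded in a low-traffic and a high-traffic version of the same virtual street:

```python
import numpy as np
from scipy import stats

# Hypothetical mean skin conductance (microsiemens) per participant, one value per condition.
low_traffic = np.array([4.1, 3.8, 5.0, 4.4, 3.9, 4.6])
high_traffic = np.array([5.2, 4.9, 6.1, 5.5, 4.8, 5.7])

# Paired test, since the same participants experienced both virtual scenarios.
t_stat, p_value = stats.ttest_rel(high_traffic, low_traffic)
print(f"mean increase: {np.mean(high_traffic - low_traffic):.2f} microsiemens, p = {p_value:.3f}")
```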
    Climate-friendly experiments
    The experiments for both studies were conducted in a climate-friendly way using electricity from a mobile solar system on the roof of the institute building.

  • Researchers establish criterion for nonlocal quantum behavior in networks

    A new theoretical study provides a framework for understanding nonlocality, a feature that quantum networks must possess to perform operations inaccessible to standard communications technology. By clarifying the concept, researchers determined the conditions necessary to create systems with strong, quantum correlations.
    The study, published in Physical Review Letters, adapts techniques from quantum computing theory to create a new classification scheme for quantum nonlocality. This not only allowed the researchers to unify prior studies of the concept into a common framework, but also facilitated a proof that networked quantum systems can only display nonlocality if they possess a particular set of quantum features.
    “On the surface, quantum computing and nonlocality in quantum networks are different things, but our study shows that, in certain ways, they are two sides of the same coin,” said Eric Chitambar, a professor of electrical and computer engineering at the University of Illinois Urbana-Champaign and the project lead. “In particular, they require the same fundamental set of quantum operations to deliver effects that cannot be replicated with classical technology.”
    Nonlocality is a consequence of entanglement, in which quantum objects experience strong connections even when separated over vast physical distances. When entangled objects are used to perform quantum operations, the results display statistical correlations that cannot be explained by non-quantum means. Such correlations are said to be nonlocal. A quantum network must possess a degree of nonlocality to ensure that it can perform truly quantum functions, but the phenomenon is still poorly understood.
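    A standard textbook illustration of such nonlocal correlations, independent of this paper, is the two-party CHSH test: suitable measurements on a maximally entangled pair reach a CHSH value of 2√2, above the bound of 2 that any local, classical model can achieve. The short sketch below computes that value directly.

```python
import numpy as np

def observable(theta):
    """Spin measurement along angle theta in the x-z plane (eigenvalues +1 and -1)."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Maximally entangled state |phi+> = (|00> + |11>) / sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def correlation(angle_a, angle_b):
    """Expectation value of the joint measurement A (x) B in the entangled state."""
    joint = np.kron(observable(angle_a), observable(angle_b))
    return float(phi_plus @ joint @ phi_plus)

# Standard CHSH measurement settings for the two parties.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

chsh = correlation(a0, b0) + correlation(a0, b1) + correlation(a1, b0) - correlation(a1, b1)
print(f"CHSH value: {chsh:.3f} (local bound 2, quantum maximum {2 * np.sqrt(2):.3f})")
```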
    To facilitate study of nonlocality, Chitambar and physics graduate student Amanda Gatto Lamas applied the formalism of quantum resource theory. By treating nonlocality as a “resource” to manage, the researchers’ framework allowed them to view past studies of nonlocality as separate instances of the same concept, just with different restrictions on the resource’s availability. This facilitated the proof of their main result, that nonlocality can only be achieved with a limited set of quantum operations.
    “Our result is the quantum network analogue to an important quantum computing result called the Gottesman-Knill theorem,” Gatto Lamas explained. “While Gottesman-Knill clearly defines what a quantum computer must do to surpass a classical one, we show that a quantum network must be constructed with a particular set of operations to do things that a standard communications network cannot.”
    Chitambar believes that the framework will not only be useful for developing criteria to assess a quantum network’s quality based on the degree of nonlocality it possesses, but that it can be used to expand the concept of nonlocality.
    “Right now, there is a relatively good understanding of the type of nonlocality that can emerge between two parties,” he said. “However, one can imagine for a quantum network consisting of many connected parties that there might be some kind of global property that you can’t reduce to individual pairs on the network. Such a property might depend intimately on the network’s overall structure.”

  • Detecting spoiled food with LEDs

    A team of researchers has developed new LEDs which emit light simultaneously in two different wavelength ranges, for a simpler and more comprehensive way to monitor the freshness of fruit and vegetables. As the team write in the journal Angewandte Chemie, modifying the LEDs with perovskite materials causes them to emit in both the near-infrared range and the visible range, a significant development in the contact-free monitoring of food.
    Perovskite crystals are able to capture and convert light. Being simple to produce and highly efficient, perovskites are already used in solar cells but are also being intensively researched for suitability in other technologies. Angshuman Nag and his team at the Indian Institute of Science Education and Research (IISER) in Pune, India, are now proposing a perovskite application in LED technology that could simplify the quality control of fresh fruit and vegetables.
    Without light converters, LEDs would emit light in rather narrow bands. To cover the whole range of white light produced by the sun, the diodes in “phosphor-converted” (pc) LEDs are coated with luminescent substances. Nag and his team used a dual-emission coating to produce pc-LEDs that emit both white (“normal”) light and a strong band in the near-infrared (NIR) range.
    To make the dual-emission pc-LED, they applied a double perovskite doped with bismuth and chromium. The researchers found that part of the bismuth component emits warm white light, while another part transfers its energy to the chromium component, which produces the additional emission in the NIR range.
    NIR is already used in the food industry to examine freshness in fruit and vegetables. Nag and PhD student Sajid Saikia, first author of the paper, explain their idea: “Food contains water, which absorbs the broad near-infrared emission at around 1000 nm. The more water that is present [due to rotting], the greater the absorption of near-infrared radiation, yielding darker contrast in an image taken under near-infrared radiation. This easy, non-invasive imaging process can estimate the water content in different parts of food, assessing its freshness.”
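    As a back-of-the-envelope sketch of that imaging idea (not the authors’ processing chain), one could normalise a NIR image against a white reference and treat darker pixels as regions of higher water content:

```python
import numpy as np

def nir_absorption_map(nir_image, white_reference):
    """Return a 0-1 map in which higher values mean stronger NIR absorption (more water)."""
    nir = nir_image.astype(float)
    reference = np.clip(white_reference.astype(float), 1e-6, None)
    reflectance = np.clip(nir / reference, 0.0, 1.0)  # normalise against a white target
    return 1.0 - reflectance                          # darker pixels absorb more NIR

def flag_suspect_regions(absorption_map, threshold=0.6):
    """Boolean mask of pixels whose NIR absorption exceeds an (arbitrary) threshold."""
    return absorption_map > threshold
```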
    Using these modified pc-LEDs to examine apples and strawberries, the team observed dark spots that were not visible in standard camera images. Illuminating the food with both white and NIR light revealed the normal coloring visible to the naked eye as well as the parts that were starting to rot but were not yet visibly doing so.
    Saikia and Nag envision a compact device for simultaneous visual and NIR food inspection, although the need for two detectors, one for visible light and one for NIR light, could make such an instrument costly for common applications. On the other hand, the researchers emphasize that the pc-LEDs are easy to produce without any chemical waste or solvents, and that short-term costs could be more than recovered by the long service life and scalability of this novel dual-emitting pc-LED device.

  • Analogous to algae: Scientists move toward engineering living matter by manipulating movement of microparticles

    A team of scientists has devised a system that replicates the movement of naturally occurring phenomena, such as hurricanes and algae, using laser beams and the spinning of microscopic rotors.
    The breakthrough, reported in the journal Nature Communications, reveals new ways that living matter can be reproduced on a cellular scale.
    “Living organisms are made of materials that actively pump energy through their molecules, which produce a range of movements on a larger cellular scale,” explains Matan Yah Ben Zion, a doctoral student in New York University’s Department of Physics at the time of the work and one of the paper’s authors. “By engineering cellular-scale machines from the ground up, our work can offer new insights into the complexity of the natural world.”
    The research centers on vortical flows, which appear in both biological and meteorological systems, such as algae or hurricanes. Specifically, particles move into orbital motion in the flow generated by their own rotation, resulting in a range of complex interactions.
    To better understand these dynamics, the paper’s authors, who also included Alvin Modin, an NYU undergraduate at the time of the study and now a doctoral student at Johns Hopkins University, and Paul Chaikin, an NYU physics professor, sought to replicate them at their most basic level. To do so, they created tiny micro-rotors — about 1/10th the width of a strand of human hair — to move micro-particles using a laser beam (Chaikin and his colleagues devised this process in a previous work).
    The researchers found that the rotating particles drove each other into orbital motion, with striking similarities to dynamics observed by other scientists in “dancing” algae — algae groupings that move in concert with each other.
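    This mutual orbiting can be reproduced qualitatively with a toy model (a deliberate simplification for illustration, not the authors’ hydrodynamic treatment): each rotor is treated as a point source of swirling flow, and each is advected by the flow generated by the other, so the pair circles its common centre.

```python
import numpy as np

def swirl_flow(at, source, strength=1.0):
    """Tangential, vortex-like flow at position `at` generated by a rotor at `source`."""
    r = at - source
    r2 = np.dot(r, r) + 1e-9                        # avoid division by zero
    return strength * np.array([-r[1], r[0]]) / r2  # perpendicular to the separation vector

def simulate(steps=2000, dt=1e-3):
    p1, p2 = np.array([-0.5, 0.0]), np.array([0.5, 0.0])
    positions = []
    for _ in range(steps):
        v1 = swirl_flow(p1, p2)                     # rotor 1 carried by rotor 2's flow
        v2 = swirl_flow(p2, p1)                     # and vice versa
        p1, p2 = p1 + dt * v1, p2 + dt * v2
        positions.append((p1.copy(), p2.copy()))
    return positions

trajectory = simulate()
print("final separation:", np.linalg.norm(trajectory[-1][0] - trajectory[-1][1]))
```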
    In addition, the NYU team found that the spins of the particles reciprocate as the particles orbit.
    “The spins of the synthetic particles reciprocate in the same fashion as that observed in algae — in contrast to previous work with artificial micro-rotors,” explains Ben Zion, now a researcher at Tel Aviv University. “So we were able to reproduce synthetically — and on the micron scale — an effect that is seen in living systems.”
    “Collectively, these findings suggest that the dance of algae can be reproduced in a synthetic system, better establishing our understanding of living matter,” he adds.
    The research was supported by grants from the Department of Energy (DE-SC0007991, SC0020976).

  • Displacement or complement? Mixed-bag responses in human interaction study with AI

    Artificial intelligence (AI) is all the rage lately in the public eye. Yet how AI can be incorporated to the benefit of our everyday lives, despite its rapid development, remains an elusive question that deserves the attention of many scientists. While in theory AI can replace, or even displace, human beings from their positions, the challenge remains how different industries and institutions can take advantage of this technological advancement rather than drown in it.
    Recently, a team of researchers at the Hong Kong University of Science and Technology (HKUST) conducted an ambitious study of AI applications on the education front, examining how AI could enhance grading while observing how human participants behaved in the presence of a computerized companion. They found that teachers were generally receptive to the AI’s input — until the two sides came to an argument over who should have the final say. This very much resembles how human beings interact with one another when a newcomer ventures into existing territory.
    The research was conducted by HKUST Department of Computer Science and Engineering Ph.D. candidate Chengbo Zheng and four of his teammates under the supervision of Associate Professor Xiaojuan Ma. They developed an AI group member named AESER (Automated Essay ScorER) and split twenty English teachers into ten groups to investigate AESER’s impact in a group discussion setting, where the AI would contribute to opinion deliberation, ask and answer questions, and even vote on the final decision. In this study, designed along the lines of the controlled “Wizard of Oz” research method, a deep learning model and a human researcher together formed the input to AESER, which would then exchange views and conduct discussions with the other participants in an online meeting room.
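    AESER itself combined a deep learning model with a human operator behind the scenes; the minimal sketch below is only meant to show what automated essay scoring looks like in its simplest form, regressing human-assigned scores on text features of already-graded essays. The corpus and scores are placeholders, not data from the HKUST study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Placeholder graded corpus; in practice this would be many essays with teacher-assigned scores.
training_essays = [
    "The author argues convincingly that public transport reduces emissions...",
    "Dogs is good because they is friendly and people like them a lot...",
]
training_scores = [4.5, 2.0]

# Simple text features plus linear regression stand in for a real scoring model.
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
scorer.fit(training_essays, training_scores)

new_essay = "Renewable energy adoption depends on both policy and price..."
print(f"suggested score: {scorer.predict([new_essay])[0]:.1f}")
```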
    While the team expected AESER to promote objectivity and provide novel perspectives that would otherwise be overlooked, potential challenges were soon revealed. First, there was the risk of conformity, where the AI’s engagement could quickly create a majority view that stifled discussion. Second, the views provided by AESER were found to be rigid and even stubborn, which frustrated participants when they realized that an argument could never be “won.” Many also did not think the AI’s input should be given equal weight, seeing it as better suited to the role of an assistant to actual human work.
    “At this stage, AI is deemed somewhat ‘stubborn’ by human collaborators, for good and bad,” noted Prof. Ma. “On the one hand, AI is stubborn, so it does not fear to express its opinions frankly and openly. However, human collaborators feel disengaged when they cannot meaningfully persuade the AI to change its view. Humans have varying attitudes towards AI: some consider it to be a single intelligent entity, while others regard AI as the voice of collective intelligence that emerges from big data. Discussions about issues such as authority and bias thus arise.”
    The immediate next step for the team involves expanding the study’s scope to gather more quantitative data, which will provide more measurable and precise insights into how AI impacts group decision-making. They are also looking to incorporate large language models (LLMs) such as ChatGPT into the study, which could potentially bring new insights and perspectives to group discussions.