More stories

  • Robotics: New skin-like sensors fit almost everywhere

    Researchers from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM) have developed an automatic process for making soft sensors. These universal measurement cells can be attached to almost any kind of object. Applications are envisioned especially in robotics and prosthetics.
    “Detecting and sensing our environment is essential for understanding how to interact with it effectively,” says Sonja Groß. An important factor for interactions with objects is their shape. “This determines how we can perform certain tasks,” says the researcher from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM. In addition, physical properties of objects, such as their hardness and flexibility, influence how we can grasp and manipulate them, for example.
    Artificial hand: interaction with the robotic system
    The holy grail in robotics and prosthetics is a realistic emulation of the sensorimotor skills of a person, such as those of the human hand. In robotics, force and torque sensors are fully integrated into most devices. These sensors provide valuable feedback on the interactions of the robotic system, such as an artificial hand, with its surroundings. However, traditional sensors offer limited customization and cannot be attached to arbitrary objects. In short: until now, no process existed for producing sensors for rigid objects of arbitrary shapes and sizes.
    New framework for soft sensors presented for the first time
    This was the starting point for the research of Sonja Groß and Diego Hidalgo, which they have now presented at the ICRA robotics conference in London. The difference: a soft, skin-like material that wraps around objects. The research group has also developed a framework that largely automates the production process for this skin. It works as follows: “We use software to build the structure for the sensory systems,” says Hidalgo. “We then send this information to a 3D printer where our soft sensors are made.” The printer injects a conductive black paste into liquid silicone. The silicone hardens, but the paste is enclosed by it and remains liquid. When the sensors are squeezed or stretched, their electrical resistance changes. “That tells us how much compression or stretching force is applied to a surface. We use this principle to gain a general understanding of interactions with objects and, specifically, to learn how to control an artificial hand interacting with these objects,” explains Hidalgo. What sets their work apart: the sensors embedded in silicone adjust to the surface in question (such as fingers or hands) but still provide precise data that can be used for the interaction with the environment.
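    The readout principle described above — resistance rising with compression or stretch — can be sketched with a simple linear calibration. The gauge factor and resistance values below are hypothetical illustrations, not calibration data from the MIRMI sensors.

    ```python
    # Minimal sketch of piezoresistive strain readout, assuming a linear
    # gauge-factor model: dR/R0 = GF * strain. All numbers are hypothetical
    # illustrations, not values measured from the sensors in the article.

    def strain_from_resistance(r_measured, r_baseline, gauge_factor=2.0):
        """Estimate strain from a relative resistance change."""
        delta_ratio = (r_measured - r_baseline) / r_baseline
        return delta_ratio / gauge_factor

    # A 10% resistance increase with GF = 2.0 implies ~5% strain.
    strain = strain_from_resistance(r_measured=1100.0, r_baseline=1000.0)
    print(round(strain, 3))  # 0.05
    ```

    A real sensor would be calibrated against known loads rather than assuming a fixed gauge factor.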
    New perspectives for robotics and especially prosthetics
    “The integration of these soft, skin-like sensors in 3D objects opens up new paths for advanced haptic sensing in artificial intelligence,” says MIRMI Executive Director Prof. Sami Haddadin. The sensors provide valuable data on compressive forces and deformations in real time — thus providing immediate feedback. This expands the range of perception of an object or a robotic hand — facilitating a more sophisticated and sensitive interaction. Haddadin: “This work has the potential to bring about a general revolution in industries such as robotics, prosthetics and the human/machine interaction by making it possible to create wireless and customizable sensor technology for arbitrary objects and machines.”
    Video showing the entire process: https://youtu.be/i43wgx9bT-E

  • AI to predict your health later in life — all at the press of a button

    Thanks to artificial intelligence, we will soon be able to predict our risk of developing serious health conditions later in life, at the press of a button.
    Abdominal aortic calcification, or AAC, is a build-up of calcium within the walls of the abdominal aorta that predicts the risk of cardiovascular disease events such as heart attacks and strokes.
    It also predicts your risk of falls, fractures and late-life dementia.
    Conveniently, the common bone density scans used to detect osteoporosis can also detect AAC.
    However, highly trained expert readers are needed to analyse the images, a process which can take 5-15 minutes per image.
    But researchers from Edith Cowan University’s (ECU) School of Science and School of Medical and Health Sciences have collaborated to develop software which can analyse scans much, much faster: roughly 60,000 images in a single day.
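    The scale of that speedup can be put in rough numbers. Taking the midpoint of the 5-15 minute manual reading time quoted above, matching one day of the software's throughput by hand would take on the order of 400 expert-days:

    ```python
    # Back-of-envelope comparison of manual vs. automated AAC scoring,
    # using the figures quoted above: 5-15 min/image for an expert reader,
    # ~60,000 images/day for the software. The 10-minute figure is simply
    # the midpoint of the stated range, not a number from the study.

    MANUAL_MINUTES_PER_IMAGE = 10          # midpoint of the 5-15 minute range
    IMAGES_PER_DAY_AUTOMATED = 60_000

    manual_days = IMAGES_PER_DAY_AUTOMATED * MANUAL_MINUTES_PER_IMAGE / 60 / 24
    print(round(manual_days, 1))  # ~416.7 round-the-clock expert-days
    ```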
    Researcher and Heart Foundation Future Leader Fellow Associate Professor Joshua Lewis said this significant boost in efficiency will be crucial for the widespread use of AAC in research and helping people avoid developing health problems later in life.

  • ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, new study reveals

    ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting the potential for chatbots to be effective allies to healthcare providers’ communications with patients.
    An NYU research team presented 392 people aged 18 and above with ten patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by ChatGPT.
    Participants were asked to identify the source of each response and rate their trust in the ChatGPT responses using a 5-point scale from completely untrustworthy to completely trustworthy.
    The study found people have limited ability to distinguish between chatbot and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% for different questions. Results remained consistent no matter the demographic categories of the respondents.
    The study found participants mildly trust chatbots’ responses overall (3.4 average score), with lower trust when the health-related complexity of the task in question was higher. Logistical questions (e.g. scheduling appointments, insurance questions) had the highest trust rating (3.94 average score), followed by preventative care (e.g. vaccines, cancer screenings, 3.52 average score). Diagnostic and treatment advice had the lowest trust ratings (scores 2.90 and 2.89, respectively).
    According to the researchers, the study highlights the possibility that chatbots can assist in patient-provider communication particularly related to administrative tasks and common chronic disease management. Further research is needed, however, around chatbots’ taking on more clinical roles. Providers should remain cautious and exercise critical judgment when curating chatbot-generated advice due to the limitations and potential biases of AI models.

  • Precision technology, machine learning lead to early diagnosis of calf pneumonia

    Monitoring dairy calves with precision technologies based on the “internet of things,” or IoT, leads to the earlier diagnosis of calf-killing bovine respiratory disease, according to a new study. The novel approach — a result of crosscutting collaboration by a team of researchers from Penn State, University of Kentucky and University of Vermont — will offer dairy producers an opportunity to improve the economics of their farms, according to researchers.
    This is not your grandfather’s dairy farming strategy, notes lead researcher Melissa Cantor, assistant professor of precision dairy science in Penn State’s College of Agricultural Sciences. Cantor noted that new technology is becoming increasingly affordable, offering farmers opportunities to detect animal health problems soon enough to intervene, saving the calves and the investment they represent.
    IoT refers to embedded devices equipped with sensors, processing and communication abilities, software, and other technologies to connect and exchange data with other devices over the Internet. In this study, Cantor explained, IoT technologies such as wearable sensors and automatic feeders were used to closely watch and analyze the condition of calves.
    Such IoT devices generate a huge amount of data by closely monitoring the calves’ behavior. To make such data easier to interpret, and to provide clues to calf health problems, the researchers adopted machine learning — a branch of artificial intelligence that learns the hidden patterns in the data to discriminate between sick and healthy calves, given the input from the IoT devices.
    “We put leg bands on the calves, which record activity behavior data in dairy cattle, such as the number of steps and lying time,” Cantor said. “And we used automatic feeders, which dispense milk and grain and record feeding behaviors, such as the number of visits and liters of consumed milk. Information from those sources signaled when a calf’s condition was on the verge of deteriorating.”
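    The intuition behind the approach — a calf's condition deteriorating shows up as a drop in its own behavioral baseline — can be sketched as follows. The features (steps, milk intake) come from the article; the rolling-baseline rule and the thresholds are hypothetical stand-ins, not the machine-learning model the study actually used.

    ```python
    # Illustrative sketch of behavioral-change detection: flag a calf when
    # today's value for a feature drops well below its own recent baseline.
    # The drop fraction and baseline window are hypothetical parameters.

    from statistics import mean

    def flag_calf(daily_records, feature, drop_fraction=0.3, baseline_days=5):
        """Return True if today's value is >30% below the recent baseline."""
        history = [day[feature] for day in daily_records[:-1]][-baseline_days:]
        today = daily_records[-1][feature]
        baseline = mean(history)
        return today < (1.0 - drop_fraction) * baseline

    records = [
        {"steps": 2100, "milk_l": 8.2},
        {"steps": 2050, "milk_l": 8.0},
        {"steps": 1980, "milk_l": 8.1},
        {"steps": 2020, "milk_l": 7.9},
        {"steps": 1300, "milk_l": 5.1},  # sharp drop on the last day
    ]
    print(flag_calf(records, "steps"))   # True
    print(flag_calf(records, "milk_l"))  # True
    ```

    The study's actual model learns such patterns from data rather than relying on fixed thresholds, which is what lets it flag calves days before clinical signs appear.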
    Bovine respiratory disease is an infection of the respiratory tract that is the leading reason for antimicrobial use in dairy calves and accounts for 22% of calf mortalities. The costs and effects of the ailment can severely damage a farm’s economy, since raising dairy calves is one of a farm’s largest economic investments.
    “Diagnosing bovine respiratory disease requires intensive and specialized labor that is hard to find,” Cantor said. “So, precision technologies based on IoT devices such as automatic feeders, scales and accelerometers can help detect behavioral changes before outward clinical signs of the disease are manifested.”
    In the study, data was collected from 159 dairy calves using precision livestock technologies and by researchers who performed daily physical health exams on the calves at the University of Kentucky. Researchers recorded both automatic data-collection results and manual data-collection results and compared the two.
    In findings recently published in IEEE Access, a peer-reviewed open-access scientific journal published by the Institute of Electrical and Electronics Engineers, the researchers reported that the proposed approach is able to identify calves that developed bovine respiratory disease sooner. Numerically, the system achieved an accuracy of 88% for labeling sick and healthy calves. Seventy percent of sick calves were predicted four days prior to diagnosis, and 80% of calves that developed a chronic case of the disease were detected within the first five days of sickness.
    “We were really surprised to find out that the relationship with the behavioral changes in those animals was very different than animals that got better with one treatment,” she said. “And nobody had ever looked at that before. We came up with the concept that if these animals actually behave differently, then there’s probably a chance that IoT technologies empowered with machine learning inference techniques could actually identify them sooner, before anybody can with the naked eye. That offers producers options.”
    Contributing to the research were: Enrico Casella, Department of Animal and Dairy Science, University of Wisconsin-Madison; Melissa Cantor, Department of Animal Science, Penn State University; Megan Woodrum Setser, Department of Animal and Food Sciences, University of Kentucky; Simone Silvestri, Department of Computer Science, University of Kentucky; and Joao Costa, Department of Animal and Veterinary Sciences, University of Vermont.
    This work was supported by the U.S. Department of Agriculture and the National Science Foundation.

  • ROSE: Revolutionary, nature-inspired soft embracing robotic gripper

    Although grasping objects is a relatively straightforward task for us humans, there is a lot of mechanics involved in this simple task. Picking up an object requires fine control of the fingers, of their positioning, and of the pressure each finger applies, which in turn necessitates intricate sensing capabilities. It’s no wonder that robotic grasping and manipulation is a very active research area within the field of robotics.
    Today, industrial robotic hands have replaced humans in various complex and hazardous activities, including in restaurants, farms, factories, and manufacturing plants. In general, soft robotic grippers are better suited for tasks in which the objects to be picked up are fragile, such as fruits and vegetables. However, while soft robots are promising as harvesting tools, they usually share a common disadvantage: their price tag. Most soft robotic gripper designs require the intricate assembly of multiple pieces. This drives up development and maintenance costs.
    Fortunately, a research team from the Japan Advanced Institute of Science and Technology (JAIST), led by Associate Professor Van Anh Ho, has come up with a groundbreaking solution to these problems. Taking a leaf from nature, they have developed an innovative soft robotic gripper called ‘ROSE,’ which stands for ‘Rotation-based Squeezing Gripper.’ Details about ROSE’s design, as well as the results of their latest study, have been presented at the Robotics: Science and Systems 2023 (RSS2023) conference.
    What makes ROSE so impressive is its design. The soft gripping part has the shape of a cylindrical funnel or sleeve and is connected to a hard circular base, which in turn is attached to the shaft of an actuator. The funnel must be placed over the object meant to be picked up, covering a decent portion of its surface area. Then, the actuator makes the base turn, which causes the flexible funnel’s skin to wrap tightly around the object. This mechanism was loosely inspired by the changing shapes of roses, which bloom during the day and close up during the night.
    ROSE offers substantial advantages compared to more conventional grippers. First, it is much less expensive to manufacture. The hard parts can all be 3D-printed, whereas the funnel itself can be easily produced using a mold and liquid silicone rubber. This ensures that the design is easily scalable and is suitable for mass production.
    Second, ROSE can easily pick up a wide variety of objects without complex control and sensing mechanisms. Unlike grippers that rely on finger-like structures, ROSE’s sleeve applies a gentler, more uniform pressure. This makes ROSE better suited for handling fragile produce, such as strawberries and pears, as well as slippery objects. Weighing less than 200 grams, the gripper can achieve an impressive payload-to-weight ratio of 6812%.
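    The quoted payload-to-weight ratio implies a concrete payload figure. A quick sanity check, taking the sub-200-gram mass at its upper bound:

    ```python
    # Quick check of the payload implied by the figures quoted above: a
    # 6812% payload-to-weight ratio at a gripper mass just under 200 g.

    gripper_mass_kg = 0.2    # upper bound quoted in the article
    ratio = 68.12            # 6812% expressed as a multiplier

    payload_kg = gripper_mass_kg * ratio
    print(round(payload_kg, 2))  # ~13.62 kg at the 200 g upper bound
    ```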
    Third, ROSE is extremely durable and sturdy. The team showed that it could successfully continue to pick up objects even after 400,000 trials. Moreover, the funnel still works properly in the presence of significant cracks or cuts. “The proposed gripper excels in demanding scenarios, as evidenced by its ability to withstand a severe test in which we cut the funnel into four separate sections at full height,” remarks Assoc. Prof. Ho. “This test underscores the gripper’s exceptional resilience and optimal performance in challenging conditions.”
    Finally, ROSE can be endowed with sensing capabilities. The researchers achieved this by placing multiple cameras on top of the circular base, pointing at the inside of the funnel, which was covered in markers, whose position could be picked up by the cameras and analyzed through image processing algorithms. This promising approach allows for size and shape estimation of the grasped object.
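    The camera-and-marker scheme described above reduces, at its core, to tracking how far each marker moves between a rest frame and a grasp frame. The sketch below shows that core step with hypothetical pixel coordinates; a real pipeline would first detect the markers with image processing and then infer object size and shape from the displacement field.

    ```python
    # Minimal sketch of marker-based deformation sensing: compare marker
    # positions in the camera image before and after a grasp and report
    # per-marker displacement. Coordinates here are hypothetical.

    from math import hypot

    def marker_displacements(rest_positions, grasp_positions):
        """Per-marker Euclidean displacement between two frames (pixels)."""
        return [hypot(x1 - x0, y1 - y0)
                for (x0, y0), (x1, y1) in zip(rest_positions, grasp_positions)]

    rest  = [(100, 100), (150, 100), (200, 100)]
    grasp = [(103, 104), (150, 100), (196, 97)]   # middle marker untouched

    disp = marker_displacements(rest, grasp)
    print([round(d, 1) for d in disp])  # [5.0, 0.0, 5.0]
    ```

    Markers that barely move indicate regions of the funnel not in contact with the object, which is one way such a displacement field can hint at the grasped object's shape.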
    The research team notes that ROSE could be an enticing option for various applications, including harvesting operations and sorting items in factories. It could also find a home in cluttered environments such as farms, professional kitchens, and warehouses. “The ROSE gripper holds significant potential to revolutionize gripping applications and gain widespread acceptance across various fields,” concludes Assoc. Prof. Ho. “Its straightforward yet robust and dependable design is set to inspire researchers and manufacturers to embrace it for a broad variety of gripping tasks in the near future.”

  • Improving urban planning with virtual reality

    What should the city we live in look like? How do structural changes affect the people who move around it? Cartographers at Ruhr University Bochum use virtual reality tools to explore these questions before a great deal of money is spent on construction projects. Using the Unity 3D game engine, they recreate scenarios in 3D where people can experience potential changes through immersion. They were able to prove that the physical reaction to this experience is measurable. Julian Keil and Marco Weißmann from Professor Frank Dickmann’s team published their findings in KN — Journal of Cartography and Geographic Information on 1 May 2023 and in Applied Sciences on 13 May 2023.
    Lab kit for urban scenarios
    Construction measures that transform urban settings change the environment of both the people who live there permanently and those who visit temporarily. It’s not always possible to foresee the effects in advance. In such cases, it helps to recreate the setting in a 3D model which people can experience through immersion. To this end, the cartographers working with Marco Weißmann use software that was originally designed to programme computer game environments. “We’ve developed a lab kit of sorts in which you can simulate an environment virtually, complete with traffic,” explains Weißmann. The researchers can use it to directly visualise the effects of planned structural changes: how does the traffic flow? Do cars and pedestrians get in each other’s way or not?
    Measuring the implicit effects of spaces
    Moreover, the space that surrounds us affects our well-being. We do notice it sometimes, but not always. “People who’ve lived on a noisy street for a long time, for example, might think they don’t even hear the noise anymore,” says Julian Keil. “But we know that, objectively speaking, residents in such streets experience significantly higher stress levels than others.” In order to determine such implicit effects of urban planning measures before a lot of money has been poured into them, the cartography team developed a method to measure them in advance. For this purpose, they programmed an urban environment in virtual reality and had test participants experience the scenarios. At the same time, they measured the skin conductivity of the test persons, which provides information about their stress level.
    They showed that a higher traffic volume in a street clearly upset the test persons, as measured by their skin conductivity. To corroborate their findings, a follow-up study is planned that will incorporate further physiological measurements, including heart rate, blood pressure and pupil size, to provide information about the participants’ stress levels and various emotions. “Until now, residents and other stakeholders have been involved in the planning stage of construction measures, but only in the form of surveys, i.e. explicit statements,” says Keil, whose background is in psychology. “Our method enables spatial planners to assess implicit effects of possible measures and to include them in the planning, too.”
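    The kind of analysis described above — reading stress off a skin-conductance trace — typically separates a slow tonic baseline from short phasic responses that rise above it. The sketch below counts such phasic peaks; the sampling, threshold and signal values are hypothetical, not the Bochum team's actual electrodermal-activity pipeline.

    ```python
    # Hedged sketch of skin-conductance analysis: estimate a tonic baseline
    # and count phasic responses that rise clearly above it. The threshold
    # and the example trace (in microsiemens) are hypothetical.

    from statistics import median

    def count_scr_peaks(signal, threshold_uS=0.05):
        """Count upward crossings of baseline + threshold (microsiemens)."""
        baseline = median(signal)
        limit = baseline + threshold_uS
        peaks, above = 0, False
        for value in signal:
            if value > limit and not above:
                peaks += 1
                above = True
            elif value <= limit:
                above = False
        return peaks

    eda = [2.00, 2.01, 2.00, 2.10, 2.02, 2.00, 2.12, 2.01, 2.00]
    print(count_scr_peaks(eda))  # 2 phasic responses above baseline
    ```

    More phasic responses per minute under a given scenario would, roughly speaking, indicate a more stressful environment.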
    Climate-friendly experiments
    The experiments for both studies were conducted in a climate-friendly way using electricity from a mobile solar system on the roof of the institute building.

  • Researchers establish criterion for nonlocal quantum behavior in networks

    A new theoretical study provides a framework for understanding nonlocality, a feature that quantum networks must possess to perform operations inaccessible to standard communications technology. By clarifying the concept, researchers determined the conditions necessary to create systems with strong, quantum correlations.
    The study, published in Physical Review Letters, adapts techniques from quantum computing theory to create a new classification scheme for quantum nonlocality. This not only allowed the researchers to unify prior studies of the concept into a common framework, but it facilitated a proof that networked quantum systems can only display nonlocality if they possess a particular set of quantum features.
    “On the surface, quantum computing and nonlocality in quantum networks are different things, but our study shows that, in certain ways, they are two sides of the same coin,” said Eric Chitambar, a professor of electrical and computer engineering at the University of Illinois Urbana-Champaign and the project lead. “In particular, they require the same fundamental set of quantum operations to deliver effects that cannot be replicated with classical technology.”
    Nonlocality is a consequence of entanglement, in which quantum objects experience strong connections even when separated over vast physical distances. When entangled objects are used to perform quantum operations, the results display statistical correlations that cannot be explained by non-quantum means. Such correlations are said to be nonlocal. A quantum network must possess a degree of nonlocality to ensure that it can perform truly quantum functions, but the phenomenon is still poorly understood.
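    The statistical correlations mentioned above are classically quantified by the well-known CHSH test, which is an illustration of nonlocality in its simplest two-party form, not the network formalism of this paper. For a maximally entangled pair measured at angles a and b, the correlation is E = -cos(a - b), and quantum mechanics reaches |S| = 2√2, above the classical bound of 2:

    ```python
    # CHSH illustration of nonlocal correlations (standard textbook setup,
    # not the quantum-resource-theory framework of the study). For the
    # singlet state, E(a, b) = -cos(a - b); the classical bound is |S| <= 2,
    # while quantum mechanics attains Tsirelson's bound 2*sqrt(2).

    from math import cos, pi, sqrt

    def E(a, b):
        """Singlet-state correlation for measurement angles a, b."""
        return -cos(a - b)

    a, a2 = 0.0, pi / 2
    b, b2 = pi / 4, 3 * pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S) > 2)                            # True: beats classical bound
    print(abs(abs(S) - 2 * sqrt(2)) < 1e-9)      # True: Tsirelson's bound
    ```

    No assignment of pre-existing local values can push |S| past 2, which is precisely why such correlations are called nonlocal.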
    To facilitate study of nonlocality, Chitambar and physics graduate student Amanda Gatto Lamas applied the formalism of quantum resource theory. By treating nonlocality as a “resource” to manage, the researchers’ framework allowed them to view past studies of nonlocality as separate instances of the same concept, just with different restrictions on the resource’s availability. This facilitated the proof of their main result, that nonlocality can only be achieved with a limited set of quantum operations.
    “Our result is the quantum network analogue to an important quantum computing result called the Gottesman-Knill theorem,” Gatto Lamas explained. “While Gottesman-Knill clearly defines what a quantum computer must do to surpass a classical one, we show that a quantum network must be constructed with a particular set of operations to do things that a standard communications network cannot.”
    Chitambar believes that the framework will not only be useful for developing criteria to assess a quantum network’s quality based on the degree of nonlocality it possesses, but that it can be used to expand the concept of nonlocality.
    “Right now, there is a relatively good understanding of the type of nonlocality that can emerge between two parties,” he said. “However, one can imagine for a quantum network consisting of many connected parties that there might be some kind of global property that you can’t reduce to individual pairs on the network. Such a property might depend intimately on the network’s overall structure.”