More stories

  •

    Bot inspired by baby turtles can swim under the sand

    This robot can swim under the sand and dig itself out too, thanks to two front limbs that mimic the oversized flippers of turtle hatchlings.
    It’s the only robot able to travel through sand at a depth of 5 inches. It travels at a speed of 1.2 millimeters per second, roughly 4 meters (13 feet) per hour. This may seem slow, but it is comparable to the pace of other subterranean animals such as worms and clams. The robot is equipped with force sensors at the ends of its limbs that allow it to detect obstacles while in motion. It can operate untethered and be controlled via Wi-Fi.
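    A quick arithmetic check of the reported pace (the 1.2 mm/s figure is from the article; the rest is generic unit conversion):

    ```python
    # Convert the robot's reported burrowing speed from mm/s to m/h and ft/h.
    SPEED_MM_PER_S = 1.2      # figure reported in the article
    MM_PER_M = 1000
    FT_PER_M = 3.28084
    SECONDS_PER_HOUR = 3600

    meters_per_hour = SPEED_MM_PER_S * SECONDS_PER_HOUR / MM_PER_M
    feet_per_hour = meters_per_hour * FT_PER_M
    print(f"{meters_per_hour:.1f} m/h, {feet_per_hour:.1f} ft/h")
    # -> 4.3 m/h, 14.2 ft/h; the article rounds to 4 meters, which is about 13 feet
    ```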
    Robots that move through sand face significant challenges, such as dealing with much higher forces than robots that move through air or water, and they are damaged more easily. However, the potential benefits of solving locomotion in sand include inspection of grain silos, measurement of soil contaminants, seafloor digging, extraterrestrial exploration, and search and rescue.
    The robot is the result of several experiments conducted by a team of roboticists at the University of California San Diego to better understand sand and how robots could travel through it. Sand is particularly challenging because of the friction between sand grains that leads to large forces; difficulty sensing obstacles; and the fact that it switches between behaving like a liquid and a solid depending on the context.
    The team believed that observing animals would be key to developing a bot that can swim in sand and dig itself out of sand as well. After considering worms, they landed on sea turtle hatchlings, which have enlarged front fins that allow them to surface after hatching. Turtle-like flippers can generate large propulsive forces; allow the robot to steer; and have the potential to detect obstacles.
    Scientists still do not fully understand how robots with flipper-like appendages move within sand. The research team at UC San Diego conducted extensive simulations and testing, and finally landed on a tapered body design and a shovel-shaped nose.
    “We needed to build a robot that is both strong and streamlined,” said Shivam Chopra, lead author of the paper describing the robot in the journal Advanced Intelligent Systems and a Ph.D. student in the research group of professor Nick Gravish at the Jacobs School of Engineering at UC San Diego.

  •

    Researchers develop AI model to better predict which drugs may cause birth defects

    Data scientists at the Icahn School of Medicine at Mount Sinai in New York and colleagues have created an artificial intelligence model that may more accurately predict which existing medicines, not currently classified as harmful, may in fact lead to congenital disabilities.
    The model, or “knowledge graph,” described in the July 17 issue of the Nature journal Communications Medicine, also has the potential to predict the involvement of pre-clinical compounds that may harm the developing fetus. The study is the first known of its kind to use knowledge graphs to integrate various data types to investigate the causes of congenital disabilities.
    Birth defects are abnormalities that affect about 1 in 33 births in the United States. They can be functional or structural and are believed to result from various factors, including genetics. However, the causes of most of these disabilities remain unknown. Certain substances found in medicines, cosmetics, food, and environmental pollutants can potentially lead to birth defects if a fetus is exposed to them during pregnancy.
    “We wanted to improve our understanding of reproductive health and fetal development, and importantly, warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed,” says Avi Ma’ayan, PhD, Professor, Pharmacological Sciences, and Director of the Mount Sinai Center for Bioinformatics at Icahn Mount Sinai, and senior author of the paper. “Although identifying the underlying causes is a complicated task, we offer hope that through complex data analysis like this that integrates evidence from multiple sources, we will be able, in some cases, to better predict, regulate, and protect against the significant harm that congenital disabilities could cause.”
    The researchers gathered knowledge across several datasets on birth-defect associations noted in published work, including those produced by NIH Common Fund programs, to demonstrate how integrating data from these resources can lead to synergistic discoveries. Particularly, the combined data is from the known genetics of reproductive health, classification of medicines based on their risk during pregnancy, and how drugs and pre-clinical compounds affect the biological mechanisms inside human cells.
    Specifically, the data included studies on genetic associations, drug- and preclinical-compound-induced gene expression changes in cell lines, known drug targets, genetic burden scores for human genes, and placental crossing scores for small molecule drugs.
    Importantly, using the knowledge graph, dubbed ReproTox-KG, with semi-supervised learning (SSL), the research team prioritized 30,000 preclinical small-molecule drugs for their potential to cross the placenta and induce birth defects. SSL is a branch of machine learning that uses a small amount of labeled data to guide predictions for a much larger body of unlabeled data. In addition, by analyzing the topology of ReproTox-KG, the team identified more than 500 birth-defect/gene/drug cliques that could explain the molecular mechanisms underlying drug-induced birth defects. In graph-theory terms, a clique is a subset of a graph in which every node is directly connected to every other node in the subset.
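    To make the clique idea concrete, the sketch below enumerates cliques in a tiny toy graph with the same three node types, using the networkx library. The node names are invented; this is not the ReproTox-KG data or pipeline, only an illustration of the graph-theory concept:

    ```python
    # Illustrative only: enumerate cliques in a toy graph with birth-defect,
    # gene, and drug nodes. All node names are hypothetical.
    import networkx as nx

    G = nx.Graph()
    # Edges encode "associated with" relations between node types.
    G.add_edges_from([
        ("defect:cleft_palate", "gene:GENE_A"),
        ("gene:GENE_A", "drug:drug_X"),
        ("defect:cleft_palate", "drug:drug_X"),  # closes a 3-node clique
        ("gene:GENE_B", "drug:drug_Y"),          # no defect link -> no clique
    ])

    # Keep only cliques containing all three node types, mirroring the
    # birth-defect/gene/drug cliques described in the study.
    for clique in nx.find_cliques(G):
        node_types = {node.split(":")[0] for node in clique}
        if {"defect", "gene", "drug"} <= node_types:
            print(sorted(clique))
    # -> ['defect:cleft_palate', 'drug:drug_X', 'gene:GENE_A']
    ```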
    The investigators caution that the study’s findings are preliminary and that further experiments are needed for validation.
    Next, the investigators plan to use a similar graph-based approach for other projects focusing on the relationship between genes, drugs, and diseases. They also aim to use the processed dataset as training materials for courses and workshops on bioinformatics analysis. In addition, they plan to extend the study to consider more complex data, such as gene expression from specific tissues and cell types collected at multiple stages of development.
    “We hope that our collaborative work will lead to a new global framework to assess potential toxicity for new drugs and explain the biological mechanisms by which some drugs, known to cause birth defects, may operate. It’s possible that at some point in the future, regulatory agencies such as the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency may use this approach to evaluate the risk of new drugs or other chemical applications,” says Dr. Ma’ayan.

  •

    Robotics: New skin-like sensors fit almost everywhere

    Researchers from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM) have developed an automatic process for making soft sensors. These universal measurement cells can be attached to almost any kind of object. Applications are envisioned especially in robotics and prosthetics.
    “Detecting and sensing our environment is essential for understanding how to interact with it effectively,” says Sonja Groß. An important factor for interactions with objects is their shape. “This determines how we can perform certain tasks,” says the MIRMI researcher. In addition, physical properties of objects, such as their hardness and flexibility, influence how we can grasp and manipulate them.
    Artificial hand: interaction with the robotic system
    The holy grail in robotics and prosthetics is the realistic emulation of human sensorimotor skills, such as those of the human hand. In robotics, force and torque sensors are fully integrated into most devices. These sensors provide valuable feedback on the interactions of a robotic system, such as an artificial hand, with its surroundings. However, traditional sensors offer limited customization and cannot be attached to arbitrary objects. In short: until now, no process existed for producing sensors for rigid objects of arbitrary shapes and sizes.
    New framework for soft sensors presented for the first time
    This was the starting point for the research of Sonja Groß and Diego Hidalgo, which they have now presented at the ICRA robotics conference in London. The difference: a soft, skin-like material that wraps around objects. The research group has also developed a framework that largely automates the production process for this skin. It works as follows: “We use software to build the structure for the sensory systems,” says Hidalgo. “We then send this information to a 3D printer where our soft sensors are made.” The printer injects a conductive black paste into liquid silicone. The silicone hardens, but the paste is enclosed by it and remains liquid. When the sensors are squeezed or stretched, their electrical resistance changes. “That tells us how much compression or stretching force is applied to a surface. We use this principle to gain a general understanding of interactions with objects and, specifically, to learn how to control an artificial hand interacting with these objects,” explains Hidalgo. What sets their work apart: the sensors embedded in silicone adjust to the surface in question (such as fingers or hands) but still provide precise data that can be used for interaction with the environment.
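    The squeeze-and-stretch readout described here is the classic piezoresistive principle: resistance change maps to strain through a gauge factor. A minimal sketch follows; the gauge factor and baseline resistance are assumed example values, not parameters of the TUM sensors:

    ```python
    # Hedged piezoresistive sketch: map a measured resistance to strain via
    # dR/R0 = GF * strain. GAUGE_FACTOR and R0_OHMS are assumed example
    # values, not parameters of the TUM sensors.
    GAUGE_FACTOR = 2.0   # assumed sensitivity of the conductive-paste trace
    R0_OHMS = 1000.0     # assumed resistance of an unstrained sensor trace

    def strain_from_resistance(r_measured_ohms: float) -> float:
        """Return the mechanical strain implied by a measured resistance."""
        return (r_measured_ohms - R0_OHMS) / R0_OHMS / GAUGE_FACTOR

    # Under these assumptions, a reading of 1040 ohms implies 2% stretch.
    print(f"strain = {strain_from_resistance(1040.0):.3%}")
    ```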
    New perspectives for robotics and especially prosthetics
    “The integration of these soft, skin-like sensors in 3D objects opens up new paths for advanced haptic sensing in artificial intelligence,” says MIRMI Executive Director Prof. Sami Haddadin. The sensors provide valuable data on compressive forces and deformations in real time, thus providing immediate feedback. This expands the range of perception of an object or a robotic hand, facilitating more sophisticated and sensitive interaction. Haddadin: “This work has the potential to bring about a general revolution in industries such as robotics, prosthetics and human-machine interaction by making it possible to create wireless and customizable sensor technology for arbitrary objects and machines.”
    Video showing the entire process: https://youtu.be/i43wgx9bT-E

  •

    AI to predict your health later in life — all at the press of a button

    Thanks to artificial intelligence, we will soon be able to predict our risk of developing serious health conditions later in life, at the press of a button.
    Abdominal aortic calcification, or AAC, is a buildup of calcium within the walls of the abdominal aorta, and it predicts your risk of cardiovascular disease events such as heart attacks and strokes.
    It also predicts your risk of falls, fractures and late-life dementia.
    Conveniently, the scans from common bone density machines used to detect osteoporosis can also detect AAC.
    However, highly trained expert readers are needed to analyse the images, a process which can take 5-15 minutes per image.
    But researchers from Edith Cowan University’s (ECU) School of Science and School of Medical and Health Sciences have collaborated to develop software which can analyse scans much, much faster: roughly 60,000 images in a single day.
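    For a sense of scale, here is a back-of-the-envelope comparison of the two throughputs using the article’s figures; the eight-hour reading day and the 10-minute midpoint are our assumptions:

    ```python
    # Rough throughput comparison using the article's figures. The 8-hour
    # working day and 10-minute midpoint are assumptions for illustration.
    MINUTES_PER_IMAGE_HUMAN = 10      # midpoint of the reported 5-15 minutes
    HUMAN_HOURS_PER_DAY = 8
    SOFTWARE_IMAGES_PER_DAY = 60_000

    human_images_per_day = HUMAN_HOURS_PER_DAY * 60 / MINUTES_PER_IMAGE_HUMAN
    print(f"human reader: ~{human_images_per_day:.0f} images/day")   # ~48
    print(f"speedup: ~{SOFTWARE_IMAGES_PER_DAY / human_images_per_day:.0f}x")
    # -> roughly a 1250x throughput gain under these assumptions
    ```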
    Researcher and Heart Foundation Future Leader Fellow Associate Professor Joshua Lewis said this significant boost in efficiency will be crucial for the widespread use of AAC in research and helping people avoid developing health problems later in life.

  •

    ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, new study reveals

    ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting the potential for chatbots to be effective allies to healthcare providers’ communications with patients.
    An NYU research team presented 392 people aged 18 and above with ten patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by ChatGPT.
    Participants were asked to identify the source of each response and rate their trust in the ChatGPT responses using a 5-point scale from completely untrustworthy to completely trustworthy.
    The study found that people have limited ability to distinguish between chatbot- and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% across questions. Results remained consistent regardless of the respondents’ demographic categories.
    The study found participants mildly trust chatbots’ responses overall (3.4 average score), with lower trust when the health-related complexity of the task in question was higher. Logistical questions (e.g. scheduling appointments, insurance questions) had the highest trust rating (3.94 average score), followed by preventative care (e.g. vaccines, cancer screenings, 3.52 average score). Diagnostic and treatment advice had the lowest trust ratings (scores 2.90 and 2.89, respectively).
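    As an illustration of how such figures are derived, the sketch below computes identification accuracy and mean trust from raw survey rows; the data and column layout are invented, and only the general style of calculation is implied by the article:

    ```python
    # Illustrative computation of identification accuracy and mean trust
    # from raw survey rows. Data and column layout are invented.
    from statistics import mean

    responses = [
        # (question_id, true_source, guessed_source, trust_1_to_5)
        (1, "chatbot", "chatbot", 4),
        (1, "human",   "chatbot", 3),
        (2, "chatbot", "human",   2),
        (2, "human",   "human",   5),
    ]

    correct = [guess == truth for _, truth, guess, _ in responses]
    print(f"overall identification accuracy: {mean(correct):.1%}")

    chatbot_trust = [t for _, truth, _, t in responses if truth == "chatbot"]
    print(f"mean trust in chatbot responses: {mean(chatbot_trust):.2f} / 5")
    ```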
    According to the researchers, the study highlights the possibility that chatbots can assist in patient-provider communication particularly related to administrative tasks and common chronic disease management. Further research is needed, however, around chatbots’ taking on more clinical roles. Providers should remain cautious and exercise critical judgment when curating chatbot-generated advice due to the limitations and potential biases of AI models.

  •

    Precision technology, machine learning lead to early diagnosis of calf pneumonia

    Monitoring dairy calves with precision technologies based on the “internet of things,” or IoT, leads to the earlier diagnosis of calf-killing bovine respiratory disease, according to a new study. The novel approach — a result of crosscutting collaboration by a team of researchers from Penn State, University of Kentucky and University of Vermont — will offer dairy producers an opportunity to improve the economies of their farms, according to researchers.
    This is not your grandfather’s dairy farming strategy, notes lead researcher Melissa Cantor, assistant professor of precision dairy science in Penn State’s College of Agricultural Sciences. Cantor noted that new technology is becoming increasingly affordable, offering farmers opportunities to detect animal health problems soon enough to intervene, saving the calves and the investment they represent.
    IoT refers to embedded devices equipped with sensors, processing and communication abilities, software, and other technologies to connect and exchange data with other devices over the Internet. In this study, Cantor explained, IoT technologies such as wearable sensors and automatic feeders were used to closely watch and analyze the condition of calves.
    Such IoT devices generate a huge amount of data by closely monitoring the calves’ behavior. To make this data easier to interpret, and to extract clues about calf health problems, the researchers adopted machine learning, a branch of artificial intelligence that learns hidden patterns in the data to discriminate between sick and healthy calves, given the input from the IoT devices.
    “We put leg bands on the calves, which record activity behavior data in dairy cattle, such as the number of steps and lying time,” Cantor said. “And we used automatic feeders, which dispense milk and grain and record feeding behaviors, such as the number of visits and liters of consumed milk. Information from those sources signaled when a calf’s condition was on the verge of deteriorating.”
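    As a hedged illustration of the machine-learning step, the sketch below trains a generic classifier on the kinds of features named above (steps, lying time, feeder visits, milk intake). The synthetic data and the random-forest choice are our assumptions, not the study’s published pipeline:

    ```python
    # Illustrative sick/healthy calf classifier on the feature types named
    # in the article. Synthetic data and the random-forest model are
    # assumptions, not the study's approach.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 200
    # Columns: daily steps, lying hours, feeder visits, milk liters.
    healthy = rng.normal([1500, 17, 6, 8], [200, 1.5, 1.5, 1.0], size=(n, 4))
    sick = rng.normal([1100, 19, 4, 6], [200, 1.5, 1.5, 1.0], size=(n, 4))

    X = np.vstack([healthy, sick])
    y = np.array([0] * n + [1] * n)  # 0 = healthy, 1 = sick

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
    ```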
    Bovine respiratory disease is an infection of the respiratory tract that is the leading reason for antimicrobial use in dairy calves and accounts for 22% of calf mortalities. The costs and effects of the ailment can severely damage a farm’s economy, since raising dairy calves is one of a dairy operation’s largest economic investments.
    “Diagnosing bovine respiratory disease requires intensive and specialized labor that is hard to find,” Cantor said. “So, precision technologies based on IoT devices such as automatic feeders, scales and accelerometers can help detect behavioral changes before outward clinical signs of the disease are manifested.”
    In the study, data was collected from 159 dairy calves using precision livestock technologies and by researchers who performed daily physical health exams on the calves at the University of Kentucky. Researchers recorded both automatic data-collection results and manual data-collection results and compared the two.
    In findings recently published in IEEE Access, a peer-reviewed open-access scientific journal published by the Institute of Electrical and Electronics Engineers, the researchers reported that the proposed approach is able to identify calves that developed bovine respiratory disease sooner. Numerically, the system achieved an accuracy of 88% for labeling sick and healthy calves. Seventy percent of sick calves were predicted four days prior to diagnosis, and 80% of calves that developed a chronic case of the disease were detected within the first five days of sickness.
    “We were really surprised to find out that the relationship with the behavioral changes in those animals was very different than animals that got better with one treatment,” she said. “And nobody had ever looked at that before. We came up with the concept that if these animals actually behave differently, then there’s probably a chance that IoT technologies empowered with machine learning inference techniques could actually identify them sooner, before anybody can with the naked eye. That offers producers options.”
    Contributing to the research were: Enrico Casella, Department of Animal and Dairy Science, University of Wisconsin-Madison; Melissa Cantor, Department of Animal Science, Penn State University; Megan Woodrum Setser, Department of Animal and Food Sciences, University of Kentucky; Simone Silvestri, Department of Computer Science, University of Kentucky; and Joao Costa, Department of Animal and Veterinary Sciences, University of Vermont.
    This work was supported by the U.S. Department of Agriculture and the National Science Foundation.

  •

    ROSE: Revolutionary, nature-inspired soft embracing robotic gripper

    Although grasping objects is a relatively straightforward task for us humans, there is a lot of mechanics involved in this simple task. Picking up an object requires fine control of the fingers, of their positioning, and of the pressure each finger applies, which in turn necessitates intricate sensing capabilities. It’s no wonder that robotic grasping and manipulation is a very active research area within the field of robotics.
    Today, industrial robotic hands have replaced humans in various complex and hazardous activities, including in restaurants, farms, factories, and manufacturing plants. In general, soft robotic grippers are better suited for tasks in which the objects to be picked up are fragile, such as fruits and vegetables. However, while soft robots are promising as harvesting tools, they usually share a common disadvantage: their price tag. Most soft robotic gripper designs require the intricate assembly of multiple pieces. This drives up development and maintenance costs.
    Fortunately, a research team from the Japan Advanced Institute of Science and Technology (JAIST), led by Associate Professor Van Anh Ho, has come up with a groundbreaking solution to these problems. Taking a leaf out of nature’s book, they have developed an innovative soft robotic gripper called ‘ROSE,’ which stands for ‘Rotation-based Squeezing Gripper.’ Details about ROSE’s design, as well as the results of their latest study, have been presented at the Robotics: Science and Systems 2023 (RSS2023) conference.
    What makes ROSE so impressive is its design. The soft gripping part has the shape of a cylindrical funnel or sleeve and is connected to a hard circular base, which in turn is attached to the shaft of an actuator. The funnel must be placed over the object meant to be picked up, covering a decent portion of its surface area. Then, the actuator makes the base turn, which causes the flexible funnel’s skin to wrap tightly around the object. This mechanism was loosely inspired by the changing shapes of roses, which bloom during the day and close up during the night.
    ROSE offers substantial advantages compared to more conventional grippers. First, it is much less expensive to manufacture. The hard parts can all be 3D-printed, whereas the funnel itself can be easily produced using a mold and liquid silicone rubber. This ensures that the design is easily scalable and is suitable for mass production.
    Second, ROSE can easily pick up a wide variety of objects without complex control and sensing mechanisms. Unlike grippers that rely on finger-like structures, ROSE’s sleeve applies a gentler, more uniform pressure. This makes ROSE better suited for handling fragile produce, such as strawberries and pears, as well as slippery objects. Weighing less than 200 grams, the gripper can achieve an impressive payload-to-weight ratio of 6812%.
    Third, ROSE is extremely durable and sturdy. The team showed that it could successfully continue to pick up objects even after 400,000 trials. Moreover, the funnel still works properly in the presence of significant cracks or cuts. “The proposed gripper excels in demanding scenarios, as evidenced by its ability to withstand a severe test in which we cut the funnel into four separate sections at full height,” remarks Assoc. Prof. Ho. “This test underscores the gripper’s exceptional resilience and optimal performance in challenging conditions.”
    Finally, ROSE can be endowed with sensing capabilities. The researchers achieved this by placing multiple cameras on top of the circular base, pointing at the inside of the funnel, which was covered in markers whose positions could be tracked by the cameras and analyzed with image-processing algorithms. This promising approach allows for size and shape estimation of the grasped object.
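    A minimal sketch of that marker-tracking idea: detect marker positions in a camera frame and measure their displacement relative to a rest frame. OpenCV blob detection and the file names below are stand-ins; the team’s actual image-processing pipeline is not specified in this summary:

    ```python
    # Illustrative marker tracking: find dark circular markers in two frames
    # and report their displacement. Blob detection and file names are
    # stand-ins for the team's (unspecified) pipeline.
    import cv2
    import numpy as np

    def marker_centers(gray_frame: np.ndarray) -> np.ndarray:
        """Return (x, y) centers of dark blob markers in a grayscale frame."""
        detector = cv2.SimpleBlobDetector_create()
        return np.array([kp.pt for kp in detector.detect(gray_frame)])

    # rest.png / grasp.png are hypothetical file names.
    rest = cv2.imread("rest.png", cv2.IMREAD_GRAYSCALE)
    grasp = cv2.imread("grasp.png", cv2.IMREAD_GRAYSCALE)

    rest_pts, grasp_pts = marker_centers(rest), marker_centers(grasp)
    # Naive nearest-neighbor pairing; adequate for small displacements.
    for p in rest_pts:
        q = grasp_pts[np.argmin(np.linalg.norm(grasp_pts - p, axis=1))]
        print(f"marker at {p} moved {np.linalg.norm(q - p):.1f} px")
    ```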
    The research team notes that ROSE could be an enticing option for various applications, including harvesting operations and sorting items in factories. It could also find a home in cluttered environments such as farms, professional kitchens, and warehouses. “The ROSE gripper holds significant potential to revolutionize gripping applications and gain widespread acceptance across various fields,” concludes Assoc. Prof. Ho. “Its straightforward yet robust and dependable design is set to inspire researchers and manufacturers to embrace it for a broad variety of gripping tasks in the near future.”

  •

    Improving urban planning with virtual reality

    What should the city we live in look like? How do structural changes affect the people who move around in it? Cartographers at Ruhr University Bochum use virtual reality tools to explore these questions before a great deal of money is spent on construction. Using the Unity3D game engine, they recreate scenarios in 3D in which people can experience potential changes through immersion. They were also able to show that the physiological reaction to this experience is measurable. Julian Keil and Marco Weißmann from Professor Frank Dickmann’s team published their findings in KN — Journal of Cartography and Geographic Information on 1 May 2023 and in Applied Sciences on 13 May 2023.
    Lab kit for urban scenarios
    Construction measures that transform urban settings change the environment both for the people who live there permanently and for those who visit temporarily. The effects cannot always be foreseen. In such cases, it helps to recreate the setting in a 3D model that people can experience through immersion. To this end, the cartographers working with Marco Weißmann use software that was originally designed to programme computer game environments. “We’ve developed a lab kit of sorts in which you can simulate an environment virtually, complete with traffic,” explains Weißmann. The researchers can use it to directly visualise the effects of planned structural changes: how does the traffic flow? Do cars and pedestrians get in each other’s way?
    Measuring the implicit effects of spaces
    Moreover, the space that surrounds us affects our well-being. We notice this sometimes, but not always. “People who’ve lived on a noisy street for a long time, for example, might think they don’t even hear the noise anymore,” says Julian Keil. “But we know that, objectively speaking, residents of such streets experience significantly higher stress levels than others.” To capture such implicit effects before a lot of money has been poured into an urban planning measure, the cartography team developed a method to gauge them in advance. For this purpose, they programmed an urban environment in virtual reality and had test participants experience the scenarios. At the same time, they measured the participants’ skin conductivity, which provides information about their stress level.
    They showed that higher traffic volume in a street clearly upset the participants, as measured by their skin conductivity. To corroborate their findings, a follow-up study is planned that will incorporate additional physiological measures, including heart rate, blood pressure and pupil size, to provide information about the participants’ stress levels and emotions. “Until now, residents and other stakeholders have been involved in the planning stage of construction measures, but only in the form of surveys, i.e. explicit statements,” says Keil, whose background is in psychology. “Our method enables spatial planners to assess the implicit effects of possible measures and to include them in the planning, too.”
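    As a hedged sketch of how a skin-conductivity trace can be turned into a stress indicator, the snippet below counts phasic peaks in a recording; the sampling rate, threshold, and file name are assumptions, not parameters from the Bochum studies:

    ```python
    # Illustrative skin-conductance stress indicator: count phasic peaks in
    # a conductivity trace. Sampling rate, threshold, and file name are
    # assumed values, not parameters from the Bochum studies.
    import numpy as np
    from scipy.signal import find_peaks

    FS_HZ = 4                           # assumed sensor sampling rate
    signal = np.load("eda_trace.npy")   # hypothetical recording, microsiemens

    # Peaks at least 0.05 uS above their surroundings, >= 1 s apart.
    peaks, _ = find_peaks(signal, prominence=0.05, distance=FS_HZ)
    minutes = len(signal) / FS_HZ / 60
    print(f"{len(peaks) / minutes:.1f} skin-conductance responses per minute")
    ```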
    Climate-friendly experiments
    The experiments for both studies were conducted in a climate-friendly way, using electricity from a mobile solar system on the roof of the institute building.