More stories

  •

    Researchers develop algorithm to divvy up tasks for human-robot teams

    As robots increasingly join people on the factory floor, in warehouses and elsewhere on the job, dividing up who will do which tasks grows in complexity and importance. People are better suited for some tasks, robots for others. And in some cases, it is advantageous to spend time teaching a robot to do a task now and reap the benefits later.
    Researchers at Carnegie Mellon University’s Robotics Institute (RI) have developed an algorithmic planner that helps delegate tasks to humans and robots. The planner, “Act, Delegate or Learn” (ADL), considers a list of tasks and decides how best to assign them. The researchers asked three questions: When should a robot act to complete a task? When should a task be delegated to a human? And when should a robot learn a new task?
    “There are costs associated with the decisions made, such as the time it takes a human to complete a task or teach a robot to complete a task and the cost of a robot failing at a task,” said Shivam Vats, the lead researcher and a Ph.D. student in the RI. “Given all those costs, our system will give you the optimal division of labor.”
    The team’s work could be useful in manufacturing and assembly plants, for sorting packages, or in any environment where humans and robots collaborate to complete several tasks. The researchers tested the planner in scenarios where humans and robots had to insert blocks into a peg board and stack parts of different shapes and sizes made of Lego bricks.
    Using algorithms and software to decide how to delegate and divide labor is not new, even when robots are part of the team. However, this work is among the first to include robot learning in its reasoning.
    “Robots aren’t static anymore,” Vats said. “They can be improved and they can be taught.”
    Often in manufacturing, a person will manually manipulate a robotic arm to teach the robot how to complete a task. Teaching a robot takes time and, therefore, has a high upfront cost. But it can be beneficial in the long run if the robot can learn a new skill. Part of the complexity is deciding when it is best to teach a robot versus delegating the task to a human. This requires the robot to predict what other tasks it can complete after learning a new task.
    Given this information, the planner converts the problem into a mixed integer program — a type of optimization model commonly used in scheduling, production planning or communication network design — that can be solved efficiently by off-the-shelf software. The planner performed better than traditional models in all instances and decreased the cost of completing the tasks by 10% to 15%.
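    To make the act/delegate/learn trade-off concrete, here is a toy, brute-force version of the decision (illustrative only: the paper formulates it as a mixed integer program solved by off-the-shelf software, and all costs below are invented):

```python
from itertools import product

# Toy act/delegate/learn trade-off. Each task: (type, human_cost,
# robot_cost, teach_cost). All costs are invented for illustration.
tasks = [
    ("peg", 4.0, 9.0, 6.0),    # robot is poor at peg insertion until taught
    ("peg", 4.0, 9.0, 6.0),
    ("peg", 4.0, 9.0, 6.0),
    ("stack", 3.0, 2.0, 5.0),  # robot is already good at stacking
]
LEARNED_COST = 1.0  # robot's cost per task of a type it has been taught

def plan_cost(choices):
    """Total cost of one assignment; 'learn' pays the teach cost once per type."""
    total, learned = 0.0, set()
    for (ttype, human, robot, teach), choice in zip(tasks, choices):
        if choice == "human":              # delegate to the human
            total += human
        elif choice == "act":              # robot acts with its current skill
            total += LEARNED_COST if ttype in learned else robot
        else:                              # "learn": teach now, act cheaply
            if ttype not in learned:
                total += teach
                learned.add(ttype)
            total += LEARNED_COST
    return total

# Brute-force search over all assignments (a MIP solver scales far better).
best = min(product(["human", "act", "learn"], repeat=len(tasks)), key=plan_cost)
print(best, plan_cost(best))
```

    Even in this tiny instance, the cheapest plan pays the upfront teaching cost for the repeated peg task and reaps the benefit on the later repetitions, which is exactly the amortization the planner reasons about.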
    Vats presented the work, “Synergistic Scheduling of Learning and Allocation of Tasks in Human-Robot Teams,” at the International Conference on Robotics and Automation in Philadelphia, where it was nominated for the outstanding interaction paper award. The research team included Oliver Kroemer, an assistant professor in the RI, and Maxim Likhachev, an associate professor in the RI.
    The research was funded by the Office of Naval Research and the Army Research Laboratory.
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  •

    Biocrusts reduce global dust emissions by 60 percent

    In the unceasing battle against dust, humans possess a deep arsenal of weaponry, from microfiber cloths to feather dusters to vacuum cleaners. But new research suggests that none of that technology can compare to nature’s secret weapon — biological soil crusts.

    These biocrusts are thin, cohesive layers of soil, glued together by dirt-dwelling organisms, that often carpet arid landscapes. Though unassuming, researchers now estimate that these rough soil skins prevent around 700 teragrams (30,000 times the mass of the Statue of Liberty) of dust from wafting into the air each year, reducing global dust emissions by a staggering 60 percent. Unless steps are taken to preserve and restore biocrusts, which are threatened by climate change and shifts in land use, the future will be much dustier, ecologist Bettina Weber and colleagues report online May 16 in Nature Geoscience.


    Dry-land ecosystems, such as savannas, shrublands and deserts, may appear barren, but they’re providing this important natural service that is often overlooked, says Weber, of the Max Planck Institute for Chemistry in Mainz, Germany. These findings “really call for biocrust conservation.”

    Biocrusts cover around 12 percent of the planet’s land surface and are most often found in arid regions. They are constructed by communities of fungi, lichens, cyanobacteria and other microorganisms that live in the topmost millimeters of soil and produce adhesive substances that clump soil particles together. In dry-land ecosystems, biocrusts play an important role in concentrating nutrients such as carbon and nitrogen and also help prevent soil erosion (SN: 4/12/22).

    And since most of the world’s dust comes from dry lands, biocrusts are important for keeping dust bound to the ground. Fallen dust can carry nutrients that benefit plants, but it can also reduce water and air quality, hasten glacier melting and reduce river flows. For instance, in the Upper Colorado River Basin, researchers found that dust not only decreased snow’s ability to reflect sunlight, but it also shortened the duration of snow cover by weeks, reducing flows of meltwater into the Colorado River by 5 percent. That’s more water than the city of Las Vegas draws in a year, says Matthew Bowker, an ecologist from Northern Arizona University in Flagstaff who wasn’t involved in the new study.

    Experiments had already demonstrated that biocrusts strengthened soils against erosion, but Weber and her colleagues were curious how that effect played out on a global scale. So they pulled data from experimental studies that measured the wind velocities needed to erode dust from various soil types and calculated how differences in biocrust coverage affected dust generation. They found that the wind velocities needed to erode dust from soils completely shielded by biocrusts were on average 4.8 times greater than those needed to erode bare soils.

    The researchers then incorporated their results, along with data on global biocrust coverage, into a global climate simulation, which allowed them to estimate how much dust the world’s biocrusts trap each year.
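
    The size of that 4.8-fold threshold shift is easy to appreciate with a back-of-the-envelope simulation (ours, not the study's calculation: we assume Weibull-distributed wind speeds, a common model for surface winds, and an invented bare-soil threshold):

```python
import random

# Rough illustration (not the study's climate-model calculation): how a
# 4.8x higher erosion threshold changes the share of wind events strong
# enough to lift dust, assuming Weibull-distributed wind speeds.
random.seed(0)
winds = [random.weibullvariate(7.0, 2.0) for _ in range(100_000)]  # m/s

u_bare = 6.0              # invented erosion threshold for bare soil, m/s
u_crusted = 4.8 * u_bare  # threshold for fully biocrust-covered soil

frac_bare = sum(w > u_bare for w in winds) / len(winds)
frac_crusted = sum(w > u_crusted for w in winds) / len(winds)
print(f"erosive wind events: bare {frac_bare:.1%}, crusted {frac_crusted:.4%}")
```

    Under these assumptions, nearly half of wind events can erode bare soil, while essentially none can erode a fully crusted one.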

    “Nobody has really tried to make that calculation globally before,” says Bowker. “Even if their number is off, it shows us that the real number is probably significant.”

    Using projections of future climate conditions and data on the conditions biocrusts can tolerate, Weber and her colleagues estimated that by 2070, climate change and land-use shifts may result in biocrust losses of 25 to 40 percent, which would increase global dust emissions by 5 to 15 percent.

    Preserving and restoring biocrusts will be key to mitigating soil erosion and dust production in the future, Bowker says. Hopefully, these results will help to whip up more discussions on the impacts of land-use changes on biocrust health, he says. “We need to have those conversations.”

  •

    Designers find better solutions with computer assistance, but sacrifice creative touch

    From building software to designing cars, engineers grapple with complex design situations every day. ‘Optimizing a technical system, whether it’s making it more usable or energy-efficient, is a very hard problem!’ says Antti Oulasvirta, professor of electrical engineering at Aalto University and the Finnish Center for Artificial Intelligence. Designers often rely on a mix of intuition, experience and trial and error to guide them. Besides being inefficient, this process can lead to ‘design fixation’, homing in on familiar solutions while new avenues go unexplored. A ‘manual’ approach also won’t scale to larger design problems and relies a lot on individual skill.
    Oulasvirta and colleagues tested an alternative, computer-assisted method that uses an algorithm to search through a design space, the set of possible solutions given multi-dimensional inputs and constraints for a particular design issue. They hypothesized that a guided approach could yield better designs by scanning a broader swath of solutions and balancing out human inexperience and design fixation.
    Along with collaborators from the University of Cambridge, the researchers set out to compare the traditional and assisted approaches to design, using virtual reality as their laboratory. They employed Bayesian optimization, a machine learning technique that both explores the design space and steers towards promising solutions. ‘We put a Bayesian optimizer in the loop with a human, who would try a combination of parameters. The optimizer then suggests some other values, and they proceed in a feedback loop. This is great for designing virtual reality interaction techniques,’ explains Oulasvirta. ‘What we didn’t know until now is how the user experiences this kind of optimization-driven design approach.’
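    A heavily simplified sketch of that feedback loop (our stand-in, not the authors' system: a real Bayesian optimizer fits a Gaussian-process surrogate and optimizes an acquisition function, and the "human" here is a simulated rating):

```python
import random

# Optimizer-in-the-loop, drastically simplified: the "optimizer" proposes
# parameter settings by alternating exploitation near the best-rated
# setting with exploration of untried ones; a "human" rates each proposal.
random.seed(1)

def human_rating(gain):
    """Simulated designer feedback: prefers a hand-mapping gain near 1.3."""
    return -(gain - 1.3) ** 2 + random.gauss(0, 0.01)

candidates = [0.5 + 0.05 * i for i in range(31)]   # gains 0.5 .. 2.0
tried = {}
for step in range(15):
    untried = [c for c in candidates if c not in tried]
    if not tried or step % 3 == 2:                 # explore a fresh setting
        gain = random.choice(untried)
    else:                                          # exploit near the best
        best = max(tried, key=tried.get)
        gain = min(untried, key=lambda c: abs(c - best))
    tried[gain] = human_rating(gain)               # the human evaluates it

best_gain = max(tried, key=tried.get)
print(f"suggested mapping gain: {best_gain:.2f}")
```

    The design choice the study probes is visible even here: the loop converges with little effort from the "designer," but every candidate the designer sees was chosen by the machine.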
    To find out, Oulasvirta’s team asked 40 novice designers to take part in their virtual reality experiment. The subjects had to find the best settings for mapping the location of their real hand holding a vibrating controller to the virtual hand seen in the headset. Half of these designers were free to follow their own instincts in the process, and the other half were given optimizer-selected designs to evaluate. Both groups had to choose three final designs that would best capture accuracy and speed in the 3D virtual reality interaction task. Finally, subjects reported how confident and satisfied they were with the experience and how in control they felt over the process and the final designs.
    The results were clear-cut: ‘Objectively, the optimizer helped designers find better solutions, but designers did not like being hand-held and commanded. It destroyed their creativity and sense of agency,’ reports Oulasvirta. The optimizer-led process allowed designers to explore more of the design space compared with the manual approach, leading to more diverse design solutions. The designers who worked with the optimizer also reported less mental demand and effort in the experiment. By contrast, this group also scored lower on expressiveness, agency and ownership, compared with the designers who did the experiment without a computer assistant.
    ‘There is definitely a trade-off,’ says Oulasvirta. ‘With the optimizer, designers came up with better designs and covered a more extensive set of solutions with less effort. On the other hand, their creativity and sense of ownership of the outcomes was reduced.’ These results are instructive for the development of AI that assists humans in decision-making. Oulasvirta suggests that people need to be engaged in tasks such as assisted design so they retain a sense of control, don’t get bored, and receive more insight into how a Bayesian optimizer or other AI is actually working. ‘We’ve seen that inexperienced designers especially can benefit from an AI boost when engaging in our design experiment,’ says Oulasvirta. ‘Our goal is that optimization becomes truly interactive without compromising human agency.’
    This paper was selected for an honourable mention at the ACM CHI Conference on Human Factors in Computing Systems in May 2022.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  •

    Charting a safe course through a highly uncertain environment

    An autonomous spacecraft exploring the far-flung regions of the universe descends through the atmosphere of a remote exoplanet. The vehicle, and the researchers who programmed it, don’t know much about this environment.
    With so much uncertainty, how can the spacecraft plot a trajectory that will keep it from being squashed by some randomly moving obstacle or blown off course by sudden, gale-force winds?
    MIT researchers have developed a technique that could help this spacecraft land safely. Their approach can enable an autonomous vehicle to plot a provably safe trajectory in highly uncertain situations where there are multiple uncertainties regarding environmental conditions and objects the vehicle could collide with.
    The technique could help a vehicle find a safe course around obstacles that move in random ways and change their shape over time. It plots a safe trajectory to a targeted region even when the vehicle’s starting point is not precisely known and when it is unclear exactly how the vehicle will move due to environmental disturbances like wind, ocean currents, or rough terrain.
    This is the first technique to address the problem of trajectory planning with many simultaneous uncertainties and complex safety constraints, says co-lead author Weiqiao Han, a graduate student in the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “Future robotic space missions need risk-aware autonomy to explore remote and extreme worlds for which only highly uncertain prior knowledge exists. In order to achieve this, trajectory-planning algorithms need to reason about uncertainties and deal with complex uncertain models and safety constraints,” adds co-lead author Ashkan Jasour, a former CSAIL research scientist who now works on robotics systems at the NASA Jet Propulsion Laboratory.
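    The quantity such a planner must certify can be illustrated with a Monte Carlo check (ours, for intuition only; the MIT approach proves bounds on this probability rather than sampling it, and every number below is invented):

```python
import random

# Estimate the collision probability of a nominal trajectory under
# uncertainty in the start state and in disturbances such as wind.
# Illustrative only: a risk-aware planner must provably *bound* this.
random.seed(42)

WAYPOINTS = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5), (3.0, 3.0)]  # nominal path
OBSTACLE, RADIUS = (2.0, 1.0), 0.4                            # disc obstacle

def rollout():
    """One sampled execution: noisy start, wind perturbs each segment."""
    dx, dy = random.gauss(0, 0.1), random.gauss(0, 0.1)  # start uncertainty
    for x, y in WAYPOINTS:
        dx += random.gauss(0, 0.05)   # accumulated wind disturbance
        dy += random.gauss(0, 0.05)
        if (x + dx - OBSTACLE[0]) ** 2 + (y + dy - OBSTACLE[1]) ** 2 < RADIUS ** 2:
            return True               # collision
    return False

n = 20_000
p_collision = sum(rollout() for _ in range(n)) / n
print(f"estimated collision probability: {p_collision:.3f}")
```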

  •

    Twisted soft robots navigate mazes without human or computer guidance

    Researchers from North Carolina State University and the University of Pennsylvania have developed soft robots that are capable of navigating complex environments, such as mazes, without input from humans or computer software.
    “These soft robots demonstrate a concept called ‘physical intelligence,’ meaning that structural design and smart materials are what allow the soft robot to navigate various situations, as opposed to computational intelligence,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State.
    The soft robots are made of liquid crystal elastomers in the shape of a twisted ribbon, resembling translucent rotini. When you place the ribbon on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion in the ribbon. And the warmer the surface, the faster it rolls. Video of the ribbon-like soft robots can be found at https://youtu.be/7q1f_JO5i60.
    “This has been done before with smooth-sided rods, but that shape has a drawback — when it encounters an object, it simply spins in place,” says Yin. “The soft robot we’ve made in a twisted ribbon shape is capable of negotiating these obstacles with no human or computer intervention whatsoever.”
    The ribbon robot does this in two ways. First, if one end of the ribbon encounters an object, the ribbon rotates slightly to get around the obstacle. Second, if the central part of the robot encounters an object, it “snaps.” The snap is a rapid release of stored deformation energy that causes the ribbon to jump slightly and reorient itself before landing. The ribbon may need to snap more than once before finding an orientation that allows it to negotiate the obstacle, but ultimately it always finds a clear path forward.
    “In this sense, it’s much like the robotic vacuums that many people use in their homes,” Yin says. “Except the soft robot we’ve created draws energy from its environment and operates without any computer programming.”
    “The two actions, rotating and snapping, that allow the robot to negotiate obstacles operate on a gradient,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “The most powerful snap occurs if an object touches the center of the ribbon. But the ribbon will still snap if an object touches the ribbon away from the center, it’s just less powerful. And the further you are from the center, the less pronounced the snap, until you reach the last fifth of the ribbon’s length, which does not produce a snap at all.”
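    The described gradient is simple to encode as a toy model (ours; the linear falloff is an assumption for illustration, not a measured curve):

```python
def snap_strength(s):
    """Toy snap-strength profile for a contact at position s (0.0 .. 1.0
    along the ribbon): strongest at the center (s = 0.5), weaker toward
    the ends, and no snap at all in the last fifth at either end.
    The linear falloff is our assumption, not a measured curve."""
    if s < 0.2 or s > 0.8:            # outermost fifth: no snap at all
        return 0.0
    return 1.0 - abs(s - 0.5) / 0.5   # 1.0 at the center, decaying outward

print(snap_strength(0.5), snap_strength(0.75), snap_strength(0.9))
```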
    The researchers conducted multiple experiments demonstrating that the ribbon-like soft robot is capable of navigating a variety of maze-like environments. The researchers also demonstrated that the soft robots would work well in desert environments, showing they were capable of climbing and descending slopes of loose sand.
    “This is interesting, and fun to look at, but more importantly it provides new insights into how we can design soft robots that are capable of harvesting heat energy from natural environments and autonomously negotiating complex, unstructured settings such as roads and harsh deserts,” Yin says.
    The paper, “Twisting for Soft Intelligent Autonomous Robot in Unstructured Environments,” will be published the week of May 23 in the Proceedings of the National Academy of Sciences. The paper was co-authored by NC State Ph.D. students Yinding Chi, Yaoye Hong and Yanbin Li; as well as Shu Yang, the Joseph Bordogna Professor of Materials Science and Engineering at the University of Pennsylvania.
    The work was done with support from the National Science Foundation, under grants CMMI-2010717, CMMI-2005374 and DMR-1410253.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  •

    Using Artificial Intelligence to Predict Life-Threatening Bacterial Disease in Dogs

    Leptospirosis, a disease that dogs can get from drinking water contaminated with Leptospira bacteria, can cause kidney failure, liver disease and severe bleeding into the lungs. Early detection of the disease is crucial and may mean the difference between life and death.
    Veterinarians and researchers at the University of California, Davis, School of Veterinary Medicine have discovered a technique to predict leptospirosis in dogs through the use of artificial intelligence. After many months of testing various models, the team has developed one that outperformed traditional testing methods and provided accurate early detection of the disease. The groundbreaking discovery was published in the Journal of Veterinary Diagnostic Investigation.
    “Traditional testing for Leptospira lacks sensitivity early in the disease process,” said lead author Krystle Reagan, a board-certified internal medicine specialist and assistant professor focusing on infectious diseases. “Detection also can take more than two weeks because of the need to demonstrate a rise in the level of antibodies in a blood sample. Our AI model eliminates those two roadblocks to a swift and accurate diagnosis.”
    The research involved historical data of patients at the UC Davis Veterinary Medical Teaching Hospital that had been tested for leptospirosis. Routinely collected blood work from these 413 dogs was used to train an AI prediction model. Over the next year, the hospital treated an additional 53 dogs with suspected leptospirosis. The model correctly identified all nine dogs that were positive for leptospirosis (100% sensitivity). The model also correctly identified approximately 90% of the 44 dogs that were ultimately leptospirosis negative.
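    Restated as the standard screening metrics (the 40-of-44 split below is inferred from the article's "approximately 90%"):

```python
# Confusion counts from the 53-dog validation year described above.
true_pos, false_neg = 9, 0    # all 9 leptospirosis-positive dogs caught
true_neg, false_pos = 40, 4   # ~90% of the 44 negatives correctly ruled out

sensitivity = true_pos / (true_pos + false_neg)   # catch rate among the sick
specificity = true_neg / (true_neg + false_pos)   # clear rate among the healthy
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```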
    The goal is for the model to become an online resource where veterinarians can enter patient data and receive a timely prediction.
    “AI-based, clinical decision making is going to be the future for many aspects of veterinary medicine,” said School of Veterinary Medicine Dean Mark Stetter. “I am thrilled to see UC Davis veterinarians and scientists leading that charge. We are committed to putting resources behind AI ventures and look forward to partnering with researchers, philanthropists, and industry to advance this science.”
    Detection model may help people
    Leptospirosis is a life-threatening zoonotic disease, meaning it can transfer from animals to humans. As the disease is also difficult to diagnose in people, Reagan hopes the technology behind this groundbreaking detection model has translational ability into human medicine.
    “My hope is this technology will be able to recognize cases of leptospirosis in near real time, giving clinicians and owners important information about the disease process and prognosis,” said Reagan. “As we move forward, we hope to apply AI methods to improve our ability to quickly diagnose other types of infections.”
    Reagan is a founding member of the school’s Artificial Intelligence in Veterinary Medicine Interest Group comprising veterinarians promoting the use of AI in the profession. This research was done in collaboration with members of UC Davis’ Center for Data Science and Artificial Intelligence Research, led by professor of mathematics Thomas Strohmer. He and his students were involved in the algorithm building.
    Reagan’s group is actively pursuing AI for prediction of outcome for other types of infections, including a prediction model for antimicrobial resistant infections, which is a growing problem in veterinary and human medicine. Previously, the group developed an AI algorithm to predict Addison’s disease with an accuracy rate greater than 99%.
    Funding support comes from the National Science Foundation.
    Story Source:
    Materials provided by University of California, Davis. Original written by Rob Warren. Note: Content may be edited for style and length.

  •

    'I don't even remember what I read': People enter a 'dissociative state' when using social media

    Sometimes when we are reading a good book, it’s like we are transported into another world and we stop paying attention to what’s around us.
    Researchers at the University of Washington wondered if people enter a similar state of dissociation when surfing social media, and if that explains why users might feel out of control after spending so much time on their favorite app.
    The team watched how participants interacted with a Twitter-like platform to show that some people are spacing out while they’re scrolling. Researchers also designed intervention strategies that social media platforms could use to help people retain more control over their online experiences.
    The group presented the project May 3 at the CHI 2022 conference in New Orleans.
    “I think people experience a lot of shame around social media use,” said lead author Amanda Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “One of the things I like about this framing of ‘dissociation’ rather than ‘addiction’ is that it changes the narrative. Instead of: ‘I should be able to have more self-control,’ it’s more like: ‘We all naturally dissociate in many ways throughout our day — whether it’s daydreaming or scrolling through Instagram, we stop paying attention to what’s happening around us.'”
    There are multiple types of dissociation, including trauma-based dissociation and the everyday dissociation associated with spacing out or focusing intently on a task.

  •

    Novel AI algorithm for digital pathology analysis

    Digital pathology is an emerging field that deals mainly with microscopy images derived from patient biopsies. Because of their high resolution, most of these whole slide images (WSIs) are very large, typically exceeding a gigabyte (GB), so typical image analysis methods cannot handle them efficiently.
    Seeing a need, researchers from Boston University School of Medicine (BUSM) have developed a novel artificial intelligence (AI) algorithm based on a framework called representation learning to classify lung cancer subtype based on lung tissue images from resected tumors.
    “We are developing novel AI-based methods that can bring efficiency to assessing digital pathology data. Pathology practice is in the midst of a digital revolution. Computer-based methods are being developed to assist the expert pathologist. Also, in places where there is no expert, such methods and technologies can directly assist diagnosis,” explains corresponding author Vijaya B. Kolachalama, PhD, FAHA, assistant professor of medicine and computer science at BUSM.
    The researchers developed a graph-based vision transformer for digital pathology called Graph Transformer (GTP) that leverages a graph representation of pathology images and the computational efficiency of transformer architectures to perform analysis on the whole slide image.
    “Translating the latest advances in computer science to digital pathology is not straightforward and there is a need to build AI methods that can exclusively tackle the problems in digital pathology,” explains co-corresponding author Jennifer Beane, PhD, associate professor of medicine at BUSM.
    Using whole slide images and clinical data from three publicly available national cohorts, they then developed a model that could distinguish between lung adenocarcinoma, lung squamous cell carcinoma, and adjacent non-cancerous tissue. Over a series of studies and sensitivity analyses, they showed that their GTP framework outperforms current state-of-the-art methods used for whole slide image classification.
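    The graph representation at the heart of GTP can be sketched in a few lines (our construction for illustration, not the authors' code): tile the slide into patches, keep the ones containing tissue as nodes, and connect spatial neighbors; a graph transformer then attends over these nodes instead of over gigapixels of raw image.

```python
def patch_graph(tissue_mask):
    """Build a patch graph from a whole slide image, sketched here as a
    boolean grid: tissue_mask[r][c] is True where a patch holds tissue.
    Returns node ids and 8-neighborhood adjacency between tissue patches.
    Our construction for illustration, not the authors' GTP code."""
    coords = [(r, c)
              for r, row in enumerate(tissue_mask)
              for c, has_tissue in enumerate(row) if has_tissue]
    nodes = {rc: i for i, rc in enumerate(coords)}   # patch -> node id
    edges = set()
    for (r, c), i in nodes.items():
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                j = nodes.get((r + dr, c + dc))
                if j is not None and i < j:          # skip self and duplicates
                    edges.add((i, j))
    return nodes, edges

mask = [[True, True, False],
        [False, True, True]]
nodes, edges = patch_graph(mask)
print(len(nodes), sorted(edges))
```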
    They believe their machine learning framework has implications beyond digital pathology. “Researchers who are interested in the development of computer vision approaches for other real-world applications can also find our approach to be useful,” they added.
    These findings appear online in the journal IEEE Transactions on Medical Imaging.
    Funding for this study was provided by grants from the National Institutes of Health (R21-CA253498, R01-HL159620), Johnson & Johnson Enterprise Innovation, Inc., the American Heart Association (20SFRN35460031), the Karen Toffler Charitable Trust, and the National Science Foundation (1551572, 1838193).
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.