More stories

  •

    Designers find better solutions with computer assistance, but sacrifice creative touch

    From building software to designing cars, engineers grapple with complex design situations every day. ‘Optimizing a technical system, whether it’s making it more usable or energy-efficient, is a very hard problem!’ says Antti Oulasvirta, professor of electrical engineering at Aalto University and the Finnish Center for Artificial Intelligence. Designers often rely on a mix of intuition, experience and trial and error to guide them. Besides being inefficient, this process can lead to ‘design fixation’, homing in on familiar solutions while new avenues go unexplored. A ‘manual’ approach also won’t scale to larger design problems and relies a lot on individual skill.
    Oulasvirta and colleagues tested an alternative, computer-assisted method that uses an algorithm to search through a design space, the set of possible solutions given multi-dimensional inputs and constraints for a particular design issue. They hypothesized that a guided approach could yield better designs by scanning a broader swath of solutions and balancing out human inexperience and design fixation.
    Along with collaborators from the University of Cambridge, the researchers set out to compare the traditional and assisted approaches to design, using virtual reality as their laboratory. They employed Bayesian optimization, a machine learning technique that both explores the design space and steers towards promising solutions. ‘We put a Bayesian optimizer in the loop with a human, who would try a combination of parameters. The optimizer then suggests some other values, and they proceed in a feedback loop. This is great for designing virtual reality interaction techniques,’ explains Oulasvirta. ‘What we didn’t know until now is how the user experiences this kind of optimization-driven design approach.’
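The optimizer-in-the-loop procedure Oulasvirta describes can be sketched generically. Below is a minimal Gaussian-process Bayesian optimizer over a single design parameter, with a simulated "user rating" standing in for the human tester; the quadratic score function, kernel length-scale and upper-confidence-bound acquisition rule are illustrative assumptions, not details from the study.

```python
import numpy as np

def rbf(a, b, length_scale=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / length_scale**2)

def gp_posterior(X, y, X_query, noise=1e-4):
    """Gaussian-process posterior mean and std at the query points."""
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    K_s = rbf(X, X_query)
    mu = K_s.T @ K_inv @ y
    var = np.diag(rbf(X_query, X_query) - K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def user_score(x):
    """Stand-in for the human rating a design; best design at x = 0.6."""
    return -(x - 0.6) ** 2

grid = np.linspace(0.0, 1.0, 201)  # the 1-D design space
X, y = [0.1, 0.9], [user_score(0.1), user_score(0.9)]  # two initial trials

for _ in range(8):  # optimizer <-> "human" feedback loop
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    x_next = grid[int(np.argmax(mu + sd))]  # upper-confidence-bound pick
    X.append(x_next)
    y.append(user_score(x_next))  # the "human" tries the suggested design

best = X[int(np.argmax(y))]  # best design tried so far
```

Replacing `user_score` with a rating collected from a real participant turns this loop into the kind of optimizer-driven design session the study examined.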
    To find out, Oulasvirta’s team asked 40 novice designers to take part in their virtual reality experiment. The subjects had to find the best settings for mapping the location of their real hand holding a vibrating controller to the virtual hand seen in the headset. Half of these designers were free to follow their own instincts in the process, and the other half were given optimizer-selected designs to evaluate. Both groups had to choose three final designs that would best capture accuracy and speed in the 3D virtual reality interaction task. Finally, subjects reported how confident and satisfied they were with the experience and how in control they felt over the process and the final designs.
    The results were clear-cut: ‘Objectively, the optimizer helped designers find better solutions, but designers did not like being hand-held and commanded. It destroyed their creativity and sense of agency,’ reports Oulasvirta. The optimizer-led process allowed designers to explore more of the design space compared with the manual approach, leading to more diverse design solutions. The designers who worked with the optimizer also reported less mental demand and effort in the experiment. By contrast, this group also scored lower on expressiveness, agency and ownership, compared with the designers who did the experiment without a computer assistant.
    ‘There is definitely a trade-off,’ says Oulasvirta. ‘With the optimizer, designers came up with better designs and covered a more extensive set of solutions with less effort. On the other hand, their creativity and sense of ownership of the outcomes was reduced.’ These results are instructive for the development of AI that assists humans in decision-making. Oulasvirta suggests that people need to be engaged in tasks such as assisted design so they retain a sense of control, don’t get bored, and receive more insight into how a Bayesian optimizer or other AI is actually working. ‘We’ve seen that inexperienced designers especially can benefit from an AI boost when engaging in our design experiment,’ says Oulasvirta. ‘Our goal is that optimization becomes truly interactive without compromising human agency.’
    This paper was selected for an honourable mention at the ACM CHI Conference on Human Factors in Computing Systems in May 2022.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  •

    Charting a safe course through a highly uncertain environment

    An autonomous spacecraft exploring the far-flung regions of the universe descends through the atmosphere of a remote exoplanet. The vehicle, and the researchers who programmed it, don’t know much about this environment.
    With so much uncertainty, how can the spacecraft plot a trajectory that will keep it from being squashed by some randomly moving obstacle or blown off course by sudden, gale-force winds?
    MIT researchers have developed a technique that could help this spacecraft land safely. Their approach can enable an autonomous vehicle to plot a provably safe trajectory in highly uncertain situations where there are multiple uncertainties regarding environmental conditions and objects the vehicle could collide with.
    The technique could help a vehicle find a safe course around obstacles that move in random ways and change their shape over time. It plots a safe trajectory to a targeted region even when the vehicle’s starting point is not precisely known and when it is unclear exactly how the vehicle will move due to environmental disturbances like wind, ocean currents, or rough terrain.
    This is the first technique to address the problem of trajectory planning with many simultaneous uncertainties and complex safety constraints, says co-lead author Weiqiao Han, a graduate student in the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “Future robotic space missions need risk-aware autonomy to explore remote and extreme worlds for which only highly uncertain prior knowledge exists. In order to achieve this, trajectory-planning algorithms need to reason about uncertainties and deal with complex uncertain models and safety constraints,” adds co-lead author Ashkan Jasour, a former CSAIL research scientist who now works on robotics systems at the NASA Jet Propulsion Laboratory.
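The MIT technique yields provable guarantees, but the underlying idea of risk-aware planning can be illustrated with a much cruder tool: a Monte Carlo estimate of a candidate trajectory's collision probability under obstacle uncertainty. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Straight-line candidate trajectory in 2-D: 20 waypoints from start to goal
waypoints = np.linspace([0.0, 0.0], [10.0, 0.0], 20)

# A circular obstacle whose centre position is itself uncertain
obstacle_mean = np.array([5.0, 1.5])
obstacle_std = 0.5
obstacle_radius = 1.0

def collision_probability(traj, n_samples=10_000):
    """Estimate the chance that any waypoint falls inside the obstacle."""
    centres = obstacle_mean + obstacle_std * rng.standard_normal((n_samples, 2))
    dists = np.linalg.norm(traj[None, :, :] - centres[:, None, :], axis=2)
    return float((dists.min(axis=1) < obstacle_radius).mean())

p = collision_probability(waypoints)
is_safe = p < 0.05  # e.g. demand under 5 % collision risk
```

With these made-up numbers the estimated risk comes out well above the 5 % threshold, so a planner would reject this path and search for another; the MIT approach replaces the sampling with constraints that bound such probabilities provably.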

  •

    Twisted soft robots navigate mazes without human or computer guidance

    Researchers from North Carolina State University and the University of Pennsylvania have developed soft robots that are capable of navigating complex environments, such as mazes, without input from humans or computer software.
    “These soft robots demonstrate a concept called ‘physical intelligence,’ meaning that structural design and smart materials are what allow the soft robot to navigate various situations, as opposed to computational intelligence,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State.
    The soft robots are made of liquid crystal elastomers in the shape of a twisted ribbon, resembling translucent rotini. When you place the ribbon on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion in the ribbon. And the warmer the surface, the faster it rolls. Video of the ribbon-like soft robots can be found at https://youtu.be/7q1f_JO5i60.
    “This has been done before with smooth-sided rods, but that shape has a drawback — when it encounters an object, it simply spins in place,” says Yin. “The soft robot we’ve made in a twisted ribbon shape is capable of negotiating these obstacles with no human or computer intervention whatsoever.”
    The ribbon robot does this in two ways. First, if one end of the ribbon encounters an object, the ribbon rotates slightly to get around the obstacle. Second, if the central part of the robot encounters an object, it “snaps.” The snap is a rapid release of stored deformation energy that causes the ribbon to jump slightly and reorient itself before landing. The ribbon may need to snap more than once before finding an orientation that allows it to negotiate the obstacle, but ultimately it always finds a clear path forward.
    “In this sense, it’s much like the robotic vacuums that many people use in their homes,” Yin says. “Except the soft robot we’ve created draws energy from its environment and operates without any computer programming.”
    “The two actions, rotating and snapping, that allow the robot to negotiate obstacles operate on a gradient,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “The most powerful snap occurs if an object touches the center of the ribbon. But the ribbon will still snap if an object touches the ribbon away from the center, it’s just less powerful. And the further you are from the center, the less pronounced the snap, until you reach the last fifth of the ribbon’s length, which does not produce a snap at all.”
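Zhao's description of the snap gradient can be written down as a toy model. The linear falloff below is an assumption for illustration; what it preserves from the description is the maximum at the centre and the dead zone over the outer fifth of the ribbon.

```python
def snap_strength(s):
    """Illustrative snap strength vs. contact position along the ribbon.

    s: distance from the ribbon's centre as a fraction of half-length
       (0.0 = centre, 1.0 = tip).
    Strength is maximal at the centre, falls off toward the tip, and
    vanishes over the outer fifth (s >= 0.8), per the description above.
    """
    if not 0.0 <= s <= 1.0:
        raise ValueError("contact position must be in [0, 1]")
    if s >= 0.8:
        return 0.0          # outer fifth: no snap at all
    return 1.0 - s / 0.8    # linear falloff (assumed shape)
```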
    The researchers conducted multiple experiments demonstrating that the ribbon-like soft robot is capable of navigating a variety of maze-like environments. The researchers also demonstrated that the soft robots would work well in desert environments, showing they were capable of climbing and descending slopes of loose sand.
    “This is interesting, and fun to look at, but more importantly it provides new insights into how we can design soft robots that are capable of harvesting heat energy from natural environments and autonomously negotiating complex, unstructured settings such as roads and harsh deserts,” Yin says.
    The paper, “Twisting for Soft Intelligent Autonomous Robot in Unstructured Environments,” will be published the week of May 23 in the Proceedings of the National Academy of Sciences. The paper was co-authored by NC State Ph.D. students Yinding Chi, Yaoye Hong and Yanbin Li; as well as Shu Yang, the Joseph Bordogna Professor of Materials Science and Engineering at the University of Pennsylvania.
    The work was done with support from the National Science Foundation, under grants CMMI-2010717, CMMI-2005374 and DMR-1410253.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  •

    Using Artificial Intelligence to Predict Life-Threatening Bacterial Disease in Dogs

    Leptospirosis, a disease that dogs can get from drinking water contaminated with Leptospira bacteria, can cause kidney failure, liver disease and severe bleeding into the lungs. Early detection of the disease is crucial and may mean the difference between life and death.
    Veterinarians and researchers at the University of California, Davis, School of Veterinary Medicine have devised a technique to predict leptospirosis in dogs through the use of artificial intelligence. After many months of testing various models, the team developed one that outperformed traditional testing methods and provided accurate early detection of the disease. The groundbreaking discovery was published in the Journal of Veterinary Diagnostic Investigation.
    “Traditional testing for Leptospira lacks sensitivity early in the disease process,” said lead author Krystle Reagan, a board-certified internal medicine specialist and assistant professor focusing on infectious diseases. “Detection also can take more than two weeks because of the need to demonstrate a rise in the level of antibodies in a blood sample. Our AI model eliminates those two roadblocks to a swift and accurate diagnosis.”
    The research involved historical data of patients at the UC Davis Veterinary Medical Teaching Hospital that had been tested for leptospirosis. Routinely collected blood work from these 413 dogs was used to train an AI prediction model. Over the next year, the hospital treated an additional 53 dogs with suspected leptospirosis. The model correctly identified all nine dogs that were positive for leptospirosis (100% sensitivity). The model also correctly identified approximately 90% of the 44 dogs that were ultimately leptospirosis negative.
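Those validation figures can be recomputed from the confusion matrix. The split of the 44 negative dogs is an assumption here ("approximately 90%" is read as 40 of 44); the nine true positives are as reported.

```python
# Reported validation results for the 53 suspected cases
tp, fn = 9, 0        # all nine leptospirosis-positive dogs flagged
tn = 40              # assumed: ~90 % of the 44 negative dogs
fp = 44 - tn

sensitivity = tp / (tp + fn)   # fraction of true cases caught
specificity = tn / (tn + fp)   # fraction of negatives correctly cleared
```

That reproduces the reported 100% sensitivity and a specificity of about 91%; for a disease where delayed detection can be fatal, the perfect sensitivity is the headline figure.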
    The goal for the model is for it to become an online resource for veterinarians to enter patient data and receive a timely prediction.
    “AI-based, clinical decision making is going to be the future for many aspects of veterinary medicine,” said School of Veterinary Medicine Dean Mark Stetter. “I am thrilled to see UC Davis veterinarians and scientists leading that charge. We are committed to putting resources behind AI ventures and look forward to partnering with researchers, philanthropists, and industry to advance this science.”
    Detection model may help people
    Leptospirosis is a life-threatening zoonotic disease, meaning it can transfer from animals to humans. As the disease is also difficult to diagnose in people, Reagan hopes the technology behind this groundbreaking detection model has translational ability into human medicine.
    “My hope is this technology will be able to recognize cases of leptospirosis in near real time, giving clinicians and owners important information about the disease process and prognosis,” said Reagan. “As we move forward, we hope to apply AI methods to improve our ability to quickly diagnose other types of infections.”
    Reagan is a founding member of the school’s Artificial Intelligence in Veterinary Medicine Interest Group comprising veterinarians promoting the use of AI in the profession. This research was done in collaboration with members of UC Davis’ Center for Data Science and Artificial Intelligence Research, led by professor of mathematics Thomas Strohmer. He and his students were involved in the algorithm building.
    Reagan’s group is actively pursuing AI for prediction of outcome for other types of infections, including a prediction model for antimicrobial resistant infections, which is a growing problem in veterinary and human medicine. Previously, the group developed an AI algorithm to predict Addison’s disease with an accuracy rate greater than 99%.
    Funding support comes from the National Science Foundation.
    Story Source:
    Materials provided by University of California – Davis. Original written by Rob Warren. Note: Content may be edited for style and length.

  •

    'I don't even remember what I read': People enter a 'dissociative state' when using social media

    Sometimes when we are reading a good book, it’s like we are transported into another world and we stop paying attention to what’s around us.
    Researchers at the University of Washington wondered if people enter a similar state of dissociation when surfing social media, and if that explains why users might feel out of control after spending so much time on their favorite app.
    The team watched how participants interacted with a Twitter-like platform to show that some people are spacing out while they’re scrolling. Researchers also designed intervention strategies that social media platforms could use to help people retain more control over their online experiences.
    The group presented the project May 3 at the CHI 2022 conference in New Orleans.
    “I think people experience a lot of shame around social media use,” said lead author Amanda Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “One of the things I like about this framing of ‘dissociation’ rather than ‘addiction’ is that it changes the narrative. Instead of: ‘I should be able to have more self-control,’ it’s more like: ‘We all naturally dissociate in many ways throughout our day — whether it’s daydreaming or scrolling through Instagram, we stop paying attention to what’s happening around us.'”
    There are multiple types of dissociation, including trauma-based dissociation and the everyday dissociation associated with spacing out or focusing intently on a task.

  •

    Novel AI algorithm for digital pathology analysis

    Digital pathology is an emerging field that deals mainly with microscopy images derived from patient biopsies. Because of their high resolution, most of these whole slide images (WSIs) are very large, typically exceeding a gigabyte (GB), so conventional image analysis methods cannot handle them efficiently.
    Seeing a need, researchers from Boston University School of Medicine (BUSM) have developed a novel artificial intelligence (AI) algorithm based on a framework called representation learning to classify lung cancer subtype based on lung tissue images from resected tumors.
    “We are developing novel AI-based methods that can bring efficiency to assessing digital pathology data. Pathology practice is in the midst of a digital revolution. Computer-based methods are being developed to assist the expert pathologist. Also, in places where there is no expert, such methods and technologies can directly assist diagnosis,” explains corresponding author Vijaya B. Kolachalama, PhD, FAHA, assistant professor of medicine and computer science at BUSM.
    The researchers developed a graph-based vision transformer for digital pathology called Graph Transformer (GTP) that leverages a graph representation of pathology images and the computational efficiency of transformer architectures to perform analysis on the whole slide image.
    “Translating the latest advances in computer science to digital pathology is not straightforward and there is a need to build AI methods that can exclusively tackle the problems in digital pathology,” explains co-corresponding author Jennifer Beane, PhD, associate professor of medicine at BUSM.
    Using whole slide images and clinical data from three publicly available national cohorts, they then developed a model that could distinguish between lung adenocarcinoma, lung squamous cell carcinoma, and adjacent non-cancerous tissue. Over a series of studies and sensitivity analyses, they showed that their GTP framework outperforms current state-of-the-art methods used for whole slide image classification.
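The GTP code itself is not reproduced here, but the first step of any graph-based WSI model, connecting neighbouring tissue patches into a graph that a transformer can attend over, can be sketched as follows. The grid size and 4-neighbour connectivity are illustrative choices, not the authors' exact construction.

```python
import numpy as np

def patch_graph(n_rows, n_cols):
    """Adjacency matrix for a grid of image patches, connecting each
    patch to its horizontal and vertical neighbours."""
    n = n_rows * n_cols
    adj = np.zeros((n, n), dtype=int)
    for r in range(n_rows):
        for c in range(n_cols):
            i = r * n_cols + c
            for dr, dc in ((1, 0), (0, 1)):  # down and right neighbours
                rr, cc = r + dr, c + dc
                if rr < n_rows and cc < n_cols:
                    j = rr * n_cols + cc
                    adj[i, j] = adj[j, i] = 1
    return adj

adj = patch_graph(3, 4)  # a tiny 3x4 grid of patches
```

In a real pipeline each node would also carry a feature vector extracted from its patch, and the graph would feed a transformer that classifies the whole slide.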
    They believe their machine learning framework has implications beyond digital pathology. “Researchers who are interested in the development of computer vision approaches for other real-world applications can also find our approach to be useful,” they added.
    These findings appear online in the journal IEEE Transactions on Medical Imaging.
    Funding for this study was provided by grants from the National Institutes of Health (R21-CA253498, R01-HL159620), Johnson & Johnson Enterprise Innovation, Inc., the American Heart Association (20SFRN35460031), the Karen Toffler Charitable Trust, and the National Science Foundation (1551572, 1838193).
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.

  •

    New recipe for restaurant, app contracts

    A novel contract proposed by a University of Texas at Dallas researcher and his colleagues could help alleviate key sources of conflict between restaurants and food-delivery platforms.
    In a study published online March 28 in the INFORMS journal Management Science, Dr. Andrew Frazelle, assistant professor of operations management in the Naveen Jindal School of Management, and co-authors Dr. Pnina Feldman of Boston University and Dr. Robert Swinney of Duke University examined how to best structure relationships between food-delivery platforms and the restaurants with which they partner.
    Other platforms in the sharing economy, such as ride-hailing and vacation-rental services, allow people to sell access to resources that would otherwise be generating no revenue for them, Frazelle said. The interests of the resource owner and the platform are reasonably well aligned in that more transactions are good for both.
    “However, restaurant delivery is different,” Frazelle said. “Delivery orders represent incremental business on top of the restaurant’s existing dine-in operation. More business sounds good, but it comes at the cost of a commission charged by the delivery platform.”
    Platforms such as Grubhub, DoorDash and Uber Eats collect customer orders online, transmit them to restaurants and deliver the orders to customers. While this service helps restaurants expand their markets, the study found the relationship has inherent flaws.
    The most common contractual relationship between platforms and restaurants, in which the platform takes a commission, or a percentage cut, of each delivery order, has two key issues, according to the study.
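The tension Frazelle describes shows up in a toy per-order calculation. All prices and rates below are hypothetical; the study's contract analysis is far more detailed.

```python
def per_order_profit(price, food_cost, commission_rate):
    """Restaurant's profit on one order under a percentage commission."""
    return price - food_cost - commission_rate * price

dine_in = per_order_profit(20.0, 8.0, 0.00)    # no platform involved
delivery = per_order_profit(20.0, 8.0, 0.30)   # 30 % commission to platform
```

With these made-up numbers, a 30% commission cuts the restaurant's margin in half: delivery brings incremental volume, but at a steep per-order cost, which is the misalignment a better-structured contract would address.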

  •

    High school students measure Earth's magnetic field from ISS

    A group of high school students used a tiny, inexpensive computer to try to measure Earth’s magnetic field from the International Space Station, showing a way to affordably explore and understand our planet.
    In the American Journal of Physics, published on behalf of the American Association of Physics Teachers by AIP Publishing, three high school students from Portugal, along with their faculty mentor, report the results of their project. The students programmed an add-on board for the Raspberry Pi computer to take measurements of Earth’s magnetic field in orbit. This add-on component, known as the Sense Hat, contained a magnetometer, gyroscope, accelerometer, and sensors for temperature, pressure, and humidity.
    The European Space Agency teamed up with the U.K.’s Raspberry Pi Foundation to hold a contest for high school students. The contest, known as the Astro Pi Challenge, required the students to program a Raspberry Pi computer with code to be run aboard the space station.
    The students used the data acquired from the space station to map out Earth’s magnetic field and compared their results to data provided by the International Geomagnetic Reference Field (IGRF), which uses measurements from observatories and satellites to compute Earth’s magnetic field.
    “I saw the Astro Pi challenge as an opportunity to broaden my knowledge and skill set, and it ended up introducing me to the complex but exciting reality of the practical world,” Lourenço Faria, co-author and one of the students involved in the project, said.
    The IGRF data is updated every five years, so the students compared their measurements, taken in April 2021, with the latest IGRF data from 2020. They found their data differed from the IGRF results by a significant, but fixed, amount. This difference could be due to a static magnetic field inside the space station.
    The students repeated their analysis using another 15 orbits’ worth of ISS data and found a slight improvement in the results. They thought it surprising that the main features of Earth’s magnetic field could be reconstructed with only three hours’ worth of measurements from their low-cost magnetometer aboard the space station.
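Correcting for a fixed offset like the one the students observed is a small least-squares exercise: when the difference between measurement and reference is constant, its mean is the best-fit offset. The readings below are invented for illustration.

```python
import numpy as np

# Hypothetical magnetometer readings (uT) and matching IGRF reference values
measured = np.array([32.1, 28.4, 41.7, 36.9, 25.3])
igrf = np.array([25.0, 21.5, 34.8, 30.1, 18.2])

offset = (measured - igrf).mean()    # best-fit constant offset
corrected = measured - offset        # e.g. the station's static field removed
max_residual = float(np.abs(corrected - igrf).max())
```

After subtracting the fitted offset, the corrected values track the reference closely, which is exactly the behaviour the students reported once the station's static field was accounted for.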
    Although this project was carried out aboard the space station, it could easily be adapted to ground-based measurements using laboratory equipment or magnetometer apps for smartphones.
    “Taking measurements around the globe and sharing data via the internet or social media would make for an interesting science project that could connect students in different countries,” said Nuno Barros e Sá, co-author and faculty mentor for the students.
    The article “Modeling the Earth’s magnetic field” is authored by Nuno Barros e Sá, Lourenço Faria, Bernardo Alves, and Miguel Cymbron. The article will appear in American Journal of Physics on May 23, 2022.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.