More stories

  • Scientists propose a model to predict personal learning performance for virtual reality-based safety training

    In Korea, occupational hazards are on the rise, particularly in the construction sector. According to a report on the ‘Occupational Safety Accident Status’ by Korea’s Ministry of Employment and Labor, the industry accounted for the highest number of accidents and fatalities among all sectors in 2021. To address this rise, the Korea Occupational Safety and Health Agency has been providing virtual reality (VR)-based construction safety content to daily workers as part of their educational training initiatives.
    Nevertheless, current VR-based training methods grapple with two limitations. Firstly, VR-based construction safety training is essentially a passive exercise, with learners following one-way instructions that fail to adapt to their judgments and decisions. Secondly, there is an absence of an objective evaluation process during VR-based safety training. To address these challenges, researchers have introduced immersive VR-based construction safety content to promote active worker engagement and have conducted post-written tests. However, these post-written tests have limitations in terms of immediacy and objectivity. Furthermore, among the individual characteristics that can affect learning performance, including personal, academic, social, and cognitive aspects, cognitive characteristics may undergo changes during VR-based safety training.
    To address this, a team of researchers led by Associate Professor Choongwan Koo from the Division of Architecture & Urban Design at Incheon National University, Korea, has now proposed a groundbreaking machine learning approach for forecasting personal learning performance in VR-based construction safety training that uses real-time biometric responses. Their paper was made available online on October 7, 2023, and will be published in Volume 156 of the journal Automation in Construction in December 2023.
    “While traditional methods of evaluating learning outcomes that use post-written tests may lack objectivity, real-time biometric responses, collected from eye-tracking and electroencephalogram (EEG) sensors, can be used to promptly and objectively evaluate personal learning performances during VR-based safety training,” explains Dr. Koo.
    The study involved 30 construction workers undergoing VR-based construction safety training. Real-time biometric responses from eye-tracking and EEG sensors, which monitor gaze and brain activity, were gathered during the training to assess the participants’ psychological responses. Combining this data with pre-training surveys and post-training written tests, the researchers developed machine-learning-based forecasting models to evaluate the overall personal learning performance of the participants during VR-based safety training.
    The team developed two models — a full forecast model (FM) that uses both demographic factors and biometric responses as independent variables and a simplified forecast model (SM) which solely relies on the identified principal features as independent variables to reduce complexity. While the FM exhibited higher accuracy in predicting personal learning performance than traditional models, it also displayed a high level of overfitting. In contrast, the SM demonstrated higher prediction accuracy than the FM due to a smaller number of variables, significantly reducing overfitting. The team thus concluded that the SM was best suited for practical use.
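    For a concrete picture of the FM-versus-SM comparison, here is a minimal sketch using synthetic placeholder data and an off-the-shelf classifier; the feature set, model choice, and numbers are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch only: compare a "full" model (all features) with a
# "simplified" model (a few principal features) on synthetic data, and look at
# the train-test gap as a rough proxy for overfitting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 30                                              # 30 trainees, as in the study
X_full = rng.normal(size=(n, 20))                   # stand-in demographic + eye-tracking + EEG features
principal = [0, 3, 7]                               # hypothetical principal features
y = (X_full[:, principal].sum(axis=1) > 0).astype(int)   # stand-in pass/fail learning outcome

X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, test_size=0.3, random_state=0)

fm = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)                 # full forecast model
sm = RandomForestClassifier(random_state=0).fit(X_tr[:, principal], y_tr)   # simplified forecast model

for name, model, Xtr, Xte in [("FM", fm, X_tr, X_te),
                              ("SM", sm, X_tr[:, principal], X_te[:, principal])]:
    gap = model.score(Xtr, y_tr) - model.score(Xte, y_te)
    print(f"{name}: test accuracy {model.score(Xte, y_te):.2f}, train-test gap {gap:.2f}")
```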
    Explaining these results, Dr. Koo emphasizes, “This approach can have a significant impact on improving personal learning performance during VR-based construction safety training, preventing safety incidents, and fostering a safe working environment.” Further, the team also emphasizes the need for future research to consider various accident types and hazard factors in VR-based safety training.
    In conclusion, this study marks a significant stride in enhancing personalized safety in construction environments and improving the evaluation of learning performance.

  • AI networks are more vulnerable to malicious attacks than previously thought

    Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.
    At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.
    “For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it’s a stop sign,” says Tianfu Wu, co-author of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.”
    The new study from Wu and his collaborators focused on determining how common these sorts of adversarial vulnerabilities are in AI deep neural networks. They found that the vulnerabilities are much more common than previously thought.
    “What’s more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want,” Wu says. “Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers — or whatever the vulnerability is.
    “This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don’t want to put the system into practical use — particularly for applications that can affect human lives.”
    To test the vulnerability of deep neural networks to these adversarial attacks, the researchers developed a piece of software called QuadAttacK. The software can be used to test any deep neural network for adversarial vulnerabilities.

    “Basically, if you have a trained AI system, and you test it with clean data, the AI system will behave as predicted,” Wu says. “QuadAttacK watches these operations and learns how the AI is making decisions related to the data. This allows QuadAttacK to determine how the data could be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see.”
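    As a rough illustration of how a small, targeted change to the input can flip a trained network's decision, here is a minimal sketch of a standard fast-gradient-sign (FGSM) perturbation in PyTorch. This is not QuadAttacK's quadratic-programming method, and the model, image, and label below are stand-ins.

```python
# Illustrative sketch only: a generic FGSM adversarial perturbation,
# not the ordered top-K quadratic-programming attack used by QuadAttacK.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge `image` within an epsilon budget so the model's loss on `label` increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # step in the loss-increasing direction
    return adversarial.clamp(0, 1).detach()             # keep pixel values valid

model = resnet50(weights=None).eval()   # stand-in classifier; use pretrained weights in practice
x = torch.rand(1, 3, 224, 224)          # stand-in for a real input image
y = torch.tensor([0])                   # stand-in ground-truth class index
x_adv = fgsm_attack(model, x, y)
print("max per-pixel change:", (x_adv - x).abs().max().item())
```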
    In proof-of-concept testing, the researchers used QuadAttacK to test four deep neural networks: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DEiT-S). These four networks were chosen because they are in widespread use in AI systems around the world.
    “We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”
    The research team has made QuadAttacK publicly available, so that the research community can use it themselves to test neural networks for vulnerabilities. The program can be found here: https://thomaspaniagua.github.io/quadattack_web/.
    “Now that we can better identify these vulnerabilities, the next step is to find ways to minimize those vulnerabilities,” Wu says. “We already have some potential solutions — but the results of that work are still forthcoming.”
    The paper, “QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks,” will be presented Dec. 16 at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), which is being held in New Orleans, La. First author of the paper is Thomas Paniagua, a Ph.D. student at NC State. The paper was co-authored by Ryan Grainger, a Ph.D. student at NC State.
    The work was done with support from the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and from the National Science Foundation, under grants 1909644, 2024688 and 2013451.

  • ‘Doughnut’ beams help physicists see incredibly small objects

    In a new study, researchers at the University of Colorado Boulder have used doughnut-shaped beams of light to take detailed images of objects too tiny to view with traditional microscopes.
    The new technique could help scientists improve the inner workings of a range of “nanoelectronics,” including the miniature semiconductors in computer chips. The discovery was highlighted Dec. 1 in a special issue of “Optics & Photonics News” called “Optics in 2023.”
    The research is the latest advance in the field of ptychography, a difficult-to-pronounce (the “p” is silent) but powerful technique for viewing very small things. Unlike traditional microscopes, ptychography tools don’t directly view small objects. Instead, they shine lasers at a target, then measure how the light scatters away — a bit like the microscopic equivalent of making shadow puppets on a wall.
    So far, the approach has worked remarkably well, with one major exception, said study senior author and Distinguished Professor of physics Margaret Murnane.
    “Until recently, it has completely failed for highly periodic samples, or objects with a regularly repeating pattern,” said Murnane, fellow at JILA, a joint research institute of CU Boulder and the National Institute of Standards and Technology (NIST). “It’s a problem because that includes a lot of nanoelectronics.”
    She noted that many important technologies like some semiconductors are made up of atoms like silicon or carbon joined together in regular patterns like a grid or mesh. To date, those structures have proved tricky for scientists to view up close using ptychography.
    In the new study, however, Murnane and her colleagues came up with a solution. Instead of using traditional lasers in their microscopes, they produced beams of extreme ultraviolet light in the shape of doughnuts.

    The team’s novel approach can collect accurate images of tiny and delicate structures that are roughly 10 to 100 nanometers across, just a few millionths of an inch at most. In the future, the researchers expect to zoom in to view even smaller structures. The doughnut, or orbital angular momentum, beams also won’t harm tiny electronics in the process — as some existing imaging tools, like electron microscopes, sometimes can.
    “In the future, this method could be used to inspect the polymers used to make and print semiconductors for defects, without damaging those structures in the process,” Murnane said.
    Bin Wang and Nathan Brooks, who earned their doctoral degrees from JILA in 2023, were first authors of the new study.
    Pushing the limits of microscopes
    The research, Murnane said, pushes the fundamental limits of microscopes: Because of the physics of light, imaging tools using lenses can only see the world down to a resolution of about 200 nanometers — which isn’t accurate enough to capture many of the viruses, for example, that infect humans. Scientists can freeze and kill viruses to view them with powerful cryo-electron microscopes but can’t yet capture these pathogens in action and in real time.
    Ptychography, which was pioneered in the mid-2000s, could help researchers push past that limit.

    To understand how, go back to those shadow puppets. Imagine that scientists want to collect a ptychographic image of a very small structure, perhaps letters spelling out “CU.” To do that, they first zap a laser beam at the letters, scanning them multiple times. When the light hits the “C” and the “U” (in this case, the puppets), the beam will break apart and scatter, producing a complex pattern (the shadows). Employing sensitive detectors, scientists record those patterns, then analyze them with a series of mathematical equations. With enough time, Murnane explained, they recreate the shape of their puppets entirely from the shadows they cast.
    “Instead of using a lens to retrieve the image, we use algorithms,” Murnane said.
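    A toy version of that idea, assuming a simple forward model (overlapping scan positions, diffraction intensities as the “shadows”) and a basic PIE-style iterative update, might look like the following. It is a conceptual sketch, not the team's reconstruction code.

```python
# Conceptual ptychography sketch: simulate diffraction "shadows" from overlapping
# scan positions, then recover the object with a basic PIE-style iterative update.
import numpy as np

rng = np.random.default_rng(0)
N, probe_size, step = 64, 16, 4

obj = np.exp(1j * rng.uniform(0, 0.5, (N, N)))        # unknown sample ("the puppets")
probe = np.ones((probe_size, probe_size), complex)     # illuminating beam
positions = [(r, c) for r in range(0, N - probe_size, step)
                    for c in range(0, N - probe_size, step)]

# Detectors record only diffraction intensities; the phase of the light is lost.
measured = [np.abs(np.fft.fft2(obj[r:r+probe_size, c:c+probe_size] * probe))
            for r, c in positions]

# Reconstruction: repeatedly enforce the measured amplitudes and update the guess.
est = np.ones_like(obj)
for _ in range(30):
    for (r, c), amp in zip(positions, measured):
        exit_wave = est[r:r+probe_size, c:c+probe_size] * probe
        far_field = np.fft.fft2(exit_wave)
        corrected = np.fft.ifft2(amp * np.exp(1j * np.angle(far_field)))
        est[r:r+probe_size, c:c+probe_size] += np.conj(probe) * (corrected - exit_wave)

err = np.mean([np.abs(np.abs(np.fft.fft2(est[r:r+probe_size, c:c+probe_size] * probe)) - amp).mean()
               for (r, c), amp in zip(positions, measured)])
print(f"mean diffraction-amplitude mismatch after reconstruction: {err:.4f}")
```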
    She and her colleagues have previously used such an approach to view submicroscopic shapes like letters or stars.
    But the approach won’t work with repeating structures like those silicon or carbon grids. If you shine a regular laser beam on a semiconductor with such regularity, for example, it will often produce a scatter pattern that is incredibly uniform — ptychographic algorithms struggle to make sense of patterns that don’t have much variation in them.
    The problem has left physicists scratching their heads for close to a decade.
    Doughnut microscopy
    In the new study, however, Murnane and her colleagues decided to try something different. They didn’t make their shadow puppets using regular lasers. Instead, they generated beams of extreme ultraviolet light, then employed a device called a spiral phase plate to twist those beams into the shape of a corkscrew, or vortex. (When such a vortex of light shines on a flat surface, it makes a shape like a doughnut).
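    A small numerical illustration (not from the study) shows why a beam carrying such a spiral phase looks like a doughnut: the phase winds around the beam axis, which forces the intensity at the very center to zero.

```python
# Why a vortex beam has a dark "doughnut hole": the spiral phase exp(i*l*theta)
# cannot be defined at the axis, so the field amplitude must vanish there.
import numpy as np

l = 1                                       # topological charge of the vortex
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)

field = r * np.exp(-r**2) * np.exp(1j * l * theta)    # Laguerre-Gauss-like mode
intensity = np.abs(field) ** 2
print("on-axis intensity:", intensity[128, 128])       # ~0: the doughnut hole
print("peak intensity off axis:", intensity.max())
```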
    The doughnut beams didn’t have pink glaze or sprinkles, but they did the trick. The team discovered that when these types of beams bounced off repeating structures, they created much more complex shadow puppets than regular lasers.
    To test out the new approach, the researchers created a mesh of carbon atoms with a tiny snap in one of the links. The group was able to spot that defect with precision not seen in other ptychographic tools.
    “If you tried to image the same thing in a scanning electron microscope, you would damage it even further,” Murnane said.
    Moving forward, her team wants to make their doughnut strategy even more accurate, allowing them to view smaller and even more fragile objects — including, one day, the workings of living, biological cells.
    Other co-authors of the new study include Henry Kapteyn, professor of physics and fellow of JILA, and current and former JILA graduate students Peter Johnsen, Nicholas Jenkins, Yuka Esashi, Iona Binnie and Michael Tanksalvala.

  • Mathematics supporting fresh theoretical approach in oncology

    Mathematics, histopathology and genomics converge to confirm that the most aggressive clear cell renal cell carcinomas display low levels of intratumour heterogeneity, i.e. they contain fewer distinct cell types. The study, conducted by the UPV/EHU Ikerbasque Research Professor Annick Laruelle, supports the hypothesis that it would be advisable to apply therapeutic strategies to maintain high levels of cellular heterogeneity within the tumour in order to slow down the evolution of the cancer and improve survival.
    Mathematical approaches are gaining traction in modern oncology because they provide fresh knowledge about the evolution of cancer and new opportunities for therapeutic improvement. Data obtained from mathematical analyses thus endorse many histological findings and genomic results. Game theory, for example, helps to explain the “social” interactions that occur between cancer cells. This novel perspective allows the scientific and clinical community to understand the hidden events driving the disease. Indeed, treating a tumour as a collective of individuals governed by rules previously defined in ecology opens up new therapeutic possibilities for patients.
    Within the framework of game theory, the hawk-dove game is a mathematical tool developed to analyse cooperation and competition in biology. When applied to cancer cell collectivities, it explains the possible behaviours of tumour cells when competing for an external resource. “It is a decision theory in which the outcome does not depend on one’s own decision alone, but also on the decision of the other actors,” explained Ikerbasque Research Professor Annick Laruelle, an expert in game theory in the UPV/EHU’s Department of Economic Analysis. “In the game, cells may act aggressively, like a hawk, or passively, like a dove, to acquire a resource.”
    Professor Laruelle has used this game to analyse bilateral cell interactions in highly aggressive clear cell renal cell carcinoma in two different scenarios: one involving low tumour heterogeneity, when only two tumour cell types compete for a resource; and the other, high tumour heterogeneity, when such competition occurs between three tumour cell types. Clear cell renal cell carcinoma is so named because the tumour cells appear clear, like bubbles, under the microscope. This type of carcinoma has been taken as a representative case for the study, as it is a widely studied paradigm of intratumour heterogeneity (which refers to the coexistence of different subpopulations of cells within the same tumour).
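    As a rough sketch of the underlying mathematics, the classic hawk-dove payoffs with a resource value V and fight cost C can be iterated with replicator dynamics; the parameter values below are generic textbook choices, not the paper's calibration.

```python
# Illustrative hawk-dove sketch: replicator dynamics for the share of "hawk"
# (aggressive) cells competing for a shared resource. Values are assumed.
V, C = 2.0, 5.0          # value of the resource, cost of an escalated fight

def payoffs(p_hawk):
    """Expected payoff of each strategy when a fraction p_hawk plays hawk."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    dove = (1 - p_hawk) * V / 2
    return hawk, dove

p = 0.1                   # initial share of aggressive cells
for _ in range(2000):     # discrete replicator update
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += 0.01 * p * (hawk - mean)

print(f"equilibrium share of aggressive cells: {p:.2f} (theory: V/C = {V/C:.2f})")
```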
    Fresh theoretical approach for new therapeutic strategies
    Laruelle has thus shown how some of the fundamentals of intratumour heterogeneity, corroborated from the standpoint of histopathology and genomics, are supported by mathematics using the hawk-dove game. The work, carried out in collaboration with researchers from Biocruces, the San Giovanni Bosco Hospital in Turin (Italy) and the Pontificia Universidade Catolica do Rio de Janeiro, has been published in the journal Trends in Cancer by the Ikerbasque Research Professor.
    The group of researchers believe that “this convergence of findings obtained from very different disciplines reinforces the key role of translational research in modern medicine and gives intratumour heterogeneity a key position in the approach to new therapeutic strategies” and they conjecture that “intratumour heterogeneity behaves by following similar pathways in many other tumours.”
    This may have important practical implications for the clinical management of malignant tumours. The constant arrival of new molecules enriches cancer treatment opportunities in the era of precision oncology. However, the researchers say that “it is one thing to discover a new molecule and quite another to find the best strategy for using it. So far, the proposed approach is based on administering the maximum tolerable dose to the patient. However, this strategy forces the tumour cells to develop resistance as early as possible, thus transforming the original tumour into a neoplasm of low intratumour heterogeneity comprising only resistant cells.” So, a therapy specifically aimed at preserving high intratumour heterogeneity may make sense according to this theoretical approach, as it may slow cancer growth and thus lead to longer survival. This perspective is currently gaining interest in oncology.

  • A color-based sensor to emulate skin’s sensitivity

    Robotics researchers have already made great strides in developing sensors that can perceive changes in position, pressure, and temperature — all of which are important for technologies like wearable devices and human-robot interfaces. But a hallmark of human perception is the ability to sense multiple stimuli at once, and this is something that robotics has struggled to achieve.
    Now, Jamie Paik and colleagues in the Reconfigurable Robotics Lab (RRL) in EPFL’s School of Engineering have developed a sensor that can perceive combinations of bending, stretching, compression, and temperature changes, all using a robust system that boils down to a simple concept: color.
    Dubbed ChromoSense, the RRL’s technology relies on a translucent rubber cylinder containing three sections dyed red, green, and blue. An LED at the top of the device sends light through its core, and changes in the light’s path through the colors as the device is bent or stretched are picked up by a miniaturized spectral meter at the bottom.
    “Imagine you are drinking three different flavors of slushie through three different straws at once: the proportion of each flavor you get changes if you bend or twist the straws. This is the same principle that ChromoSense uses: it perceives changes in light traveling through the colored sections as the geometry of those sections deforms,” says Paik.
    A thermosensitive section of the device also allows it to detect temperature changes, using a special dye — similar to that in color-changing t-shirts or mood rings — that desaturates in color when it is heated. The research has been published in Nature Communications and selected for the Editor’s Highlights page.
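    The decoding principle can be sketched with a purely hypothetical calibration step: fit a linear map from the four spectral readings (red, green, blue, thermochromic) to bend, stretch, and temperature, then apply it to new readings. Everything below, including the data, is invented for illustration and is not the authors' published calibration.

```python
# Hypothetical sketch: recover deformation and temperature from the relative
# attenuation of the colored sections via a simple fitted linear map.
import numpy as np

rng = np.random.default_rng(1)

# Pretend calibration data: each row of `readings` is (R, G, B, thermo) spectral
# values; each row of `targets` is (bend angle deg, stretch %, temperature C).
readings = rng.uniform(0.2, 1.0, size=(200, 4))
true_map = rng.normal(size=(4, 3))
targets = readings @ true_map + rng.normal(scale=0.01, size=(200, 3))

# Fit a least-squares calibration, then decode a new reading.
calibration, *_ = np.linalg.lstsq(readings, targets, rcond=None)
new_reading = rng.uniform(0.2, 1.0, size=4)
bend, stretch, temp = new_reading @ calibration
print(f"decoded: bend={bend:.2f}, stretch={stretch:.2f}, temperature={temp:.2f}")
```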
    A more streamlined approach to wearables
    Paik explains that while robotic technologies that rely on cameras or multiple sensing elements are effective, they can make wearable devices heavier and more cumbersome, in addition to requiring more data processing.

    “For soft robots to serve us better in our daily lives, they need to be able to sense what we are doing,” she says. “Traditionally, the fastest and most inexpensive way to do this has been through vision-based systems, which capture all of our activities and then extract the necessary data. ChromoSense allows for more targeted, information-dense readings, and the sensor can be easily embedded into different materials for different tasks.”
    Thanks to its simple mechanical structure and use of color over cameras, ChromoSense could potentially lend itself to inexpensive mass production. In addition to assistive technologies, such as mobility-aiding exosuits, Paik sees everyday applications for ChromoSense in athletic gear or clothing, which could be used to give users feedback about their form and movements.
    A strength of ChromoSense — its ability to sense multiple stimuli at once — can also be a weakness, as decoupling simultaneously applied stimuli is still a challenge the researchers are working on. At the moment, Paik says they are focusing on improving the technology to sense locally applied forces, or the exact boundaries of a material when it changes shape.
    “If ChromoSense gains popularity and many people want to use it as a general-purpose robotic sensing solution, then I think further increasing the information density of the sensor could become a really interesting challenge,” she says.
    Looking ahead, Paik also plans to experiment with different formats for ChromoSense, which has been prototyped as a cylindrical shape and as part of a wearable soft exosuit, but could also be imagined in a flat form more suitable for the RRL’s signature origami robots.
    “With our technology, anything can become a sensor as long as light can pass through it,” she summarizes.

  • Researchers have taught an algorithm to ‘taste’

    For non-connoisseurs, picking out a bottle of wine can be challenging when scanning an array of unfamiliar labels on the shop shelf. What does it taste like? What was the last one I bought that tasted so good?
    Here, wine apps like Vivino, Hello Vino, Wine Searcher and a host of others can help. Apps like these let wine buyers scan bottle labels to get information about a particular wine and read other users’ reviews. These apps are built on artificial intelligence algorithms.
    Now, scientists from the Technical University of Denmark (DTU), the University of Copenhagen and Caltech have shown that you can add a new parameter to the algorithms that makes it easier to find a precise match for your own taste buds: Namely, people’s impressions of flavour.
    “We have demonstrated that, by feeding an algorithm with data consisting of people’s flavour impressions, the algorithm can make more accurate predictions of what kind of wine we individually prefer,” says Thoranna Bender, a graduate student at DTU who conducted the study under the auspices of the Pioneer Centre for AI at the University of Copenhagen.
    More accurate predictions of people’s favourite wines
    The researchers held wine tastings during which 256 participants were asked to arrange shot-sized cups of different wines on a piece of A3 paper based upon which wines they thought tasted most similarly. The greater the distance between the cups, the greater the difference in their flavour. The method is widely used in consumer tests. The researchers then digitized the points on the sheets of paper by photographing them.
    The data collected from the wine tastings was then combined with hundreds of thousands of wine labels and user reviews provided to the researchers by Vivino, a global wine app and marketplace. Next, the researchers developed an algorithm based on the enormous data set.
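    One simple way to see how pairwise distance judgments become a “flavour space” is classical multidimensional scaling; the sketch below uses a made-up distance matrix for five wines and is not the authors' model, which additionally combines these data with label images and review text.

```python
# Illustrative sketch: turn cup-to-cup distances into a 2-D flavour map with
# classical MDS, then find the nearest neighbour of a favourite wine.
import numpy as np

# Hypothetical pairwise distances between 5 wines, as measured between cups
# on the sheet of paper (symmetric, zero diagonal).
D = np.array([
    [0.0, 1.0, 4.0, 4.2, 3.8],
    [1.0, 0.0, 3.9, 4.1, 3.7],
    [4.0, 3.9, 0.0, 0.8, 1.1],
    [4.2, 4.1, 0.8, 0.0, 0.9],
    [3.8, 3.7, 1.1, 0.9, 0.0],
])

# Classical MDS: double-centre the squared distances and take the top eigenvectors.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
coords = eigvecs[:, -2:] * np.sqrt(np.maximum(eigvals[-2:], 0))   # 2-D flavour map

favourite = 0
dists = np.linalg.norm(coords - coords[favourite], axis=1)
dists[favourite] = np.inf
print("closest match to wine 0 is wine", int(dists.argmin()))
```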

    “The dimension of flavour that we created in the model provides us with information about which wines are similar in taste and which are not. So, for example, I can stand with my favourite bottle of wine and say: I would like to know which wine is most similar to it in taste — or both in taste and price,” says Thoranna Bender.
    Professor and co-author Serge Belongie from the Department of Computer Science, who heads the Pioneer Centre for AI at the University of Copenhagen, adds:
    “We can see that when the algorithm combines the data from wine labels and reviews with the data from the wine tastings, it makes more accurate predictions of people’s wine preferences than when it only uses the traditional types of data in the form of images and text. So, teaching machines to use human sensory experiences results in better algorithms that benefit the user.”
    Can also be used for beer and coffee
    According to Serge Belongie, there is a growing trend in machine learning of using so-called multimodal data, which usually consists of a combination of images, text and sound. Using taste or other sensory inputs as data sources is entirely new. And it has great potential — e.g., in the food sector. Belongie states:
    “Understanding taste is a key aspect of food science and essential for achieving healthy, sustainable food production. But the use of AI in this context remains very much in its infancy. This project shows the power of using human-based inputs in artificial intelligence, and I predict that the results will spur more research at the intersection of food science and AI.”
    Thoranna Bender points out that the researchers’ method can easily be transferred to other types of food and drink as well:

    “We’ve chosen wine as a case, but the same method can just as well be applied to beer and coffee. For example, the approach can be used to recommend products and perhaps even food recipes to people. And if we can better understand the taste similarities in food, we can also use it in the healthcare sector to put together meals that meet with the tastes and nutritional needs of patients. It might even be used to develop foods tailored to different taste profiles.”
    The researchers have published their data on an open server, where it can be used free of charge.
    “We hope that someone out there will want to build upon our data. I’ve already fielded requests from people who have additional data that they would like to include in our dataset. I think that’s really cool,” concludes Thoranna Bender.

  • Photonic chip that ‘fits together like Lego’ opens door to semiconductor industry

    Researchers at the University of Sydney Nano Institute have invented a compact silicon semiconductor chip that integrates electronics with photonic, or light, components. The new technology significantly expands radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit.
    Expanded bandwidth means more information can flow through the chip and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device.
    Researchers expect the chip to find applications in advanced radar, satellite systems, wireless networks and the roll-out of 6G and 7G telecommunications, as well as opening the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech value-add factories at places like Western Sydney’s Aerotropolis precinct.
    The chip is built using an emerging technology in silicon photonics that allows integration of diverse systems on semiconductors less than 5 millimetres wide. Pro-Vice-Chancellor (Research) Professor Ben Eggleton, who guides the research team, likened it to fitting together Lego building blocks, where new materials are integrated through advanced packaging of components, using electronic ‘chiplets’.
    The research for this invention has been published in Nature Communications.
    Dr Alvaro Casas Bedoya, Associate Director for Photonic Integration in the School of Physics, who led the chip design, said the unique method of heterogeneous materials integration has been 10 years in the making.
    “The combined use of overseas semiconductor foundries to make the basic chip wafer with local research infrastructure and manufacturing has been vital in developing this photonic integrated circuit,” he said.

    “This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process.”
    Professor Eggleton highlighted the fact that most of the items on the Federal Government’s List of Critical Technologies in the National Interest depend upon semiconductors.
    He said the invention means the work at Sydney Nano fits well with initiatives like the Semiconductor Sector Service Bureau (S3B), sponsored by the NSW Government, which aims to develop the local semiconductor ecosystem.
    Dr Nadia Court, Director of S3B, said, “This work aligns with our mission to drive advancements in semiconductor technology, holding great promise for the future of semiconductor innovation in Australia. The result reinforces local strength in research and design at a pivotal time of increased global focus and investment in the sector.”
    Designed in collaboration with scientists at the Australian National University, the integrated circuit was built at the Core Research Facility cleanroom at the University of Sydney Nanoscience Hub, a purpose-built $150 million building with advanced lithography and deposition facilities.
    The photonic circuit gives the chip an impressive 15 gigahertz bandwidth of tunable frequencies, with spectral resolution down to just 37 megahertz, less than a quarter of one percent of the total bandwidth.

    Professor Eggleton said: “Led by our impressive PhD student Matthew Garrett, this invention is a significant advance for microwave photonics and integrated photonics research.
    “Microwave photonic filters play a crucial role in modern communication and radar applications, offering the flexibility to precisely filter different frequencies, reducing electromagnetic interference and enhancing signal quality.
    “Our innovative approach of integrating advanced functionalities into semiconductor chips, particularly the heterogeneous integration of chalcogenide glass with silicon, holds the potential to reshape the local semiconductor landscape.”
    Co-author and Senior Research Fellow Dr Moritz Merklein said: “This work paves the way for a new generation of compact, high-resolution RF photonic filters with wideband frequency tunability, particularly beneficial in air and spaceborne RF communication payloads, opening possibilities for enhanced communications and sensing capabilities.”

  • To help autonomous vehicles make moral decisions, researchers ditch the ‘trolley problem’

    Researchers have developed a new experiment to better understand what people view as moral and immoral decisions related to driving vehicles, with the goal of collecting data to train autonomous vehicles how to make “good” decisions. The work is designed to capture a more realistic array of moral challenges in traffic than the widely discussed life-and-death scenario inspired by the so-called “trolley problem.”
    “The trolley problem presents a situation in which someone has to decide whether to intentionally kill one person (which violates a moral norm) in order to avoid the death of multiple people,” says Dario Cecchini, first author of a paper on the work and a postdoctoral researcher at North Carolina State University.
    “In recent years, the trolley problem has been utilized as a paradigm for studying moral judgment in traffic,” Cecchini says. “The typical situation comprises a binary choice for a self-driving car between swerving left, hitting a lethal obstacle, or proceeding forward, hitting a pedestrian crossing the street. However, these trolley-like cases are unrealistic. Drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?”
    “Those mundane decisions are important because they can ultimately lead to life-or-death situations,” says Veljko Dubljevic, corresponding author of the paper and an associate professor in the Science, Technology & Society program at NC State.
    “For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.”
    To address that lack of data, the researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations. The researchers created seven different driving scenarios, such as a parent who has to decide whether to violate a traffic signal while trying to get their child to school on time. Each scenario is programmed into a virtual reality environment, so that study participants engaged in the experiment have audiovisual information about what drivers are doing when they make decisions, rather than simply reading about the scenario.
    For this work, the researchers built on something called the Agent Deed Consequence (ADC) model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.

    Researchers created eight different versions of each traffic scenario, varying the combinations of agent, deed and consequence. For example, in one version of the scenario where a parent is trying to get the child to school, the parent is caring, brakes at a yellow light, and gets the child to school on time. In a second version, the parent is abusive, runs a red light, and causes an accident. The other six versions alter the nature of the parent (the agent), their decision at the traffic signal (the deed), and/or the outcome of their decision (the consequence).
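    The eight versions per scenario simply cross the three ADC factors at two levels each, which a few lines of code make explicit (the wording below is paraphrased, not the study's exact stimuli).

```python
# The 2 x 2 x 2 crossing of Agent, Deed and Consequence yields eight versions.
from itertools import product

agents = ["caring parent", "abusive parent"]                  # Agent
deeds = ["brakes at the yellow light", "runs the red light"]  # Deed
consequences = ["arrives on time", "causes an accident"]      # Consequence

for i, (a, d, c) in enumerate(product(agents, deeds, consequences), start=1):
    print(f"version {i}: {a}, {d}, {c}")
# Participants rate each version's driver on a 1-10 morality scale; those
# ratings become training data for moral-judgment models.
```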
    “The goal here is to have study participants view one version of each scenario and determine how moral the behavior of the driver was in each scenario, on a scale from one to 10,” Cecchini says. “This will give us robust data on what we consider moral behavior in the context of driving a vehicle, which can then be used to develop AI algorithms for moral decision making in autonomous vehicles.”
    The researchers have done pilot testing to fine-tune the scenarios and ensure that they reflect believable and easily understood situations.
    “The next step is to engage in large-scale data collection, getting thousands of people to participate in the experiments,” says Dubljevic. “We can then use that data to develop more interactive experiments with the goal of further fine-tuning our understanding of moral decision making. All of this can then be used to create algorithms for use in autonomous vehicles. We’ll then need to engage in additional testing to see how those algorithms perform.”