More stories

  • AI detects prostate cancer on MRI better than radiologists

    AI detects prostate cancer more often than radiologists and triggers false alarms only half as often, according to an international study coordinated by Radboud university medical center and published in The Lancet Oncology. It is the first large-scale study in which an international team transparently evaluates and compares AI with radiologist assessments and clinical outcomes.
    Radiologists face an increasing workload as men with a higher risk of prostate cancer now routinely receive a prostate MRI. Diagnosing prostate cancer with MRI requires significant expertise, and there is a shortage of experienced radiologists. AI can assist with these challenges.
    Together with an international team, AI expert Henkjan Huisman and radiologist Maarten de Rooij, project leaders of the PI-CAI study, organized a major competition between AI teams and radiologists. Along with other centers in the Netherlands and Norway, they provided over 10,000 MRI scans and transparently determined for each patient whether prostate cancer was present. Groups worldwide were then invited to develop AI for analyzing these images. The top five submissions were combined into a super-algorithm for analyzing MRI scans for prostate cancer. Finally, the AI assessments were compared with those of a group of radiologists on four hundred prostate MRI scans.
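    The "super-algorithm" described above is an ensemble of the top submissions. As a hedged illustration of what such ensembling can look like, the sketch below averages per-voxel cancer-probability maps from five models before applying a working threshold; the maps, threshold, and equal weighting are placeholders, not the PI-CAI team's actual method.

    ```python
    # Illustrative ensembling sketch: average the lesion-probability maps produced
    # by several models, then flag voxels above a working threshold.
    # The "model outputs" are random placeholders standing in for real AI
    # predictions on a prostate MRI scan.
    import numpy as np

    rng = np.random.default_rng(0)

    # Five models each output a per-voxel cancer-probability map (tiny 4 x 4 slice here).
    model_outputs = [rng.uniform(0, 1, size=(4, 4)) for _ in range(5)]

    ensemble_map = np.mean(model_outputs, axis=0)   # simple unweighted average
    suspicious = ensemble_map > 0.5                 # hypothetical decision threshold

    print(ensemble_map.round(2))
    print("suspicious voxels:", int(suspicious.sum()))
    ```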
    Accurate Diagnosis
    The PI-CAI community brought together over two hundred AI teams and 62 radiologists from twenty countries. They compared the findings of AI and radiologists not only with each other but also with a gold standard, as they monitored the outcomes of the men from whom the scans originated. On average, the men were followed for five years.
    This first international study on AI in prostate diagnostics shows that AI detects nearly seven percent more significant prostate cancers than the group of radiologists. Additionally, AI identifies suspicious areas, later found not to be cancer, fifty percent less often. This means the number of biopsies could be halved with the use of AI. If these results are replicated in follow-up studies, it could greatly assist radiologists and patients in the future. It could reduce radiologists’ workload, provide more accurate diagnoses, and minimize unnecessary prostate biopsies. The developed AI still needs to be validated and is currently not yet available for patients in clinical settings.
    Quality System
    Huisman observes that society has little trust in AI. ‘This is because manufacturers sometimes build AI that isn’t good enough’, he explains. He is working on two things. The first is a public and transparent test to fairly evaluate AI. The second is a quality management system, similar to what exists in the aviation industry. ‘If planes almost collide, a safety committee will look at how to improve the system so that it doesn’t happen in the future. I want the same for AI. I want to research and develop a system that learns from every mistake so that AI is monitored and can continue to improve. That way, we can build trust in AI for healthcare. Optimal, governed AI can help make healthcare better and more efficient.’

  • Breakthrough in next-generation memory technology!

    A research team led by Professor Jang-Sik Lee from the Department of Materials Science and Engineering and the Department of Semiconductor Engineering at Pohang University of Science and Technology (POSTECH) has significantly enhanced the data storage capacity of ferroelectric memory devices by utilizing hafnia-based ferroelectric materials and an innovative device structure. The findings, published on June 7 in the international journal Science Advances, mark a substantial advance in memory technology.
    With the exponential growth in data production and processing due to advancements in electronics and artificial intelligence (AI), the importance of data storage technologies has surged. NAND flash memory, one of the most prevalent technologies for mass data storage, can store more data in the same area by stacking cells in a three-dimensional structure rather than a planar one. However, this approach relies on charge traps to store data, which results in higher operating voltages and slower speeds.
    Recently, hafnia-based ferroelectric memory has emerged as a promising next-generation memory technology. Hafnia (hafnium oxide) enables ferroelectric memories to operate at low voltages and high speeds. However, a significant challenge has been the limited memory window for multilevel data storage.
    Professor Jang-Sik Lee’s team at POSTECH has addressed this issue by introducing new materials and a novel device structure. They enhanced the performance of hafnia-based memory devices by doping the ferroelectric materials with aluminum, creating high-performance ferroelectric thin films. Additionally, they replaced the conventional metal-ferroelectric-semiconductor (MFS) structure, in which the metal and ferroelectric materials that make up the device are simply stacked, with an innovative metal-ferroelectric-metal-ferroelectric-semiconductor (MFMFS) structure.
    The team controlled the voltage across each layer by adjusting the capacitance of the ferroelectric layers, which involved fine-tuning factors such as the thickness and area ratio of the metal-to-metal and metal-to-channel ferroelectric layers. Using the applied voltage more efficiently to switch the ferroelectric material improved the device’s performance and reduced its energy consumption.
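    Because the stacked layers behave approximately like capacitors in series, the applied voltage divides in inverse proportion to each layer's capacitance. The sketch below illustrates that relationship with hypothetical layer dimensions and permittivity; it is a back-of-the-envelope model, not the device parameters from the paper.

    ```python
    # Series-capacitor voltage division: V_i is proportional to 1/C_i, with each
    # layer modeled as a parallel-plate capacitor, C = eps0 * eps_r * A / t.
    # All dimensions and the permittivity below are hypothetical.
    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def capacitance(eps_r, area_m2, thickness_m):
        """Parallel-plate capacitance of one ferroelectric layer."""
        return EPS0 * eps_r * area_m2 / thickness_m

    def series_voltage_split(v_total, capacitances):
        """Voltage across each capacitor in series."""
        inverse = [1.0 / c for c in capacitances]
        total_inverse = sum(inverse)
        return [v_total * inv / total_inverse for inv in inverse]

    # Hypothetical metal-to-metal and metal-to-channel ferroelectric layers.
    c_layers = [
        capacitance(eps_r=30, area_m2=1e-12, thickness_m=10e-9),  # smaller-area layer
        capacitance(eps_r=30, area_m2=4e-12, thickness_m=10e-9),  # larger-area layer
    ]
    print(series_voltage_split(10.0, c_layers))  # the larger-area layer takes less voltage
    ```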
    Conventional hafnia-based ferroelectric devices typically have a memory window of around 2 volts (V). In contrast, the research team’s device achieved a memory window exceeding 10 V, enabling Quad-Level Cell (QLC) operation, which stores 16 levels of data (4 bits) per transistor. It also demonstrated high stability over more than one million cycles and operated at voltages of 10 V or less, significantly lower than the 18 V required for NAND flash memory. Furthermore, the device showed stable data retention.
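    The memory-window figures quoted above translate directly into read margin per state. The short sketch below divides a 2 V and a 10 V window into the 16 levels needed for QLC storage; the even-spacing assumption is a simplification for illustration only.

    ```python
    # Why a wider memory window helps multilevel storage: QLC stores 4 bits
    # (16 distinguishable states) per cell, so the window must be split into
    # 16 levels with enough margin between neighbors.
    def level_spacing(window_volts, bits_per_cell):
        levels = 2 ** bits_per_cell
        # Spacing between adjacent states, assuming even spacing across the window.
        return window_volts / (levels - 1)

    print(f"2 V window,  QLC: {level_spacing(2.0, 4) * 1000:.0f} mV between states")
    print(f"10 V window, QLC: {level_spacing(10.0, 4) * 1000:.0f} mV between states")
    # The roughly 5x wider window gives roughly 5x more read margin per state,
    # which is what makes 16-level (4-bit) storage practical.
    ```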
    NAND flash memory programs its memory states using Incremental Step Pulse Programming (ISPP), which leads to long programming times and complex circuitry. In contrast, the team’s device achieves rapid programming through one-shot programming by controlling ferroelectric polarization switching.
    Professor Jang-Sik Lee of POSTECH commented, “We have laid the technological foundation for overcoming the limitations of existing memory devices and provided a new research direction for hafnia-based ferroelectric memory.” He added, “Through follow-up research, we aim to develop low-power, high-speed, and high-density memory devices, contributing to solving power issues in data centers and artificial intelligence applications.”
    The research was conducted with support from the Project for Next-generation Intelligent Semiconductor Technology Development of the Ministry of Science and ICT (National Research Foundation of Korea) and Samsung Electronics.

  • An AI-powered wearable system tracks the 3D movement of smart pills in the gut

    Scientists at the University of Southern California have developed an artificial intelligence (AI)-powered system to track tiny devices that monitor markers of disease in the gut. Devices using the novel system may help at-risk individuals monitor their gastrointestinal (GI) tract health at home, without the need for invasive tests in hospital settings. This work appears June 12 in the journal Cell Reports Physical Science.
    “Ingestibles are like Fitbits for the gut,” says author Yasser Khan, assistant professor of electrical and computer engineering at the University of Southern California. “But tracking them once swallowed has been a significant challenge.”
    Gas that is formed in the intestines when bacteria break down food can offer insights into a person’s health. Currently, to measure GI tract gases, physicians either use direct methods such as flatus collection and intestinal tube collection, or indirect methods such as breath testing and stool analysis. Ingestible capsules — devices that a user swallows — offer a promising alternative, but no such technologies have been developed for precise gas sensing.
    To tackle this problem, Khan and colleagues developed a system that includes a wearable coil, which the user can conceal under a t-shirt or other clothing. The coil creates a magnetic field that interacts with sensors embedded in an ingestible pill after it has been swallowed. AI analyzes the signals the pill receives, pinpointing the device’s location in the gut to within a few millimeters. In addition, the system uses the device’s optical gas-sensing membranes to monitor, in 3D and in real time, concentrations of ammonia, a proxy for a bacterium linked with ulcers and gastric cancer.
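    The localization model itself is not detailed here, but the general idea of learning a mapping from coil-coupling signals to a 3D position can be sketched as below. The field model, sensor layout, and regressor are synthetic placeholders, not the authors' implementation.

    ```python
    # Sketch: learn a regression from magnetic-coupling signal strengths to the
    # capsule's 3D position. The "coil_signals" function is a toy stand-in for
    # the physical coupling between a wearable coil and the capsule's sensors.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def coil_signals(pos):
        """Three synthetic 'sensor' readings that fall off with distance (dipole-like 1/r^3)."""
        coils = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
        d = np.linalg.norm(pos - coils, axis=1)
        return 1.0 / (d ** 3 + 1e-6)

    positions = rng.uniform(0.02, 0.25, size=(2000, 3))       # meters
    signals = np.array([coil_signals(p) for p in positions])

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(signals, positions)

    test = np.array([0.10, 0.12, 0.08])
    print("true:", test, "predicted:", model.predict(coil_signals(test).reshape(1, -1))[0])
    ```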
    While previous attempts to track ingestibles as they journey through the gut have relied on bulky desktop coils, the wearable coil can be used anywhere, says Khan. The technology may also have applications beyond measuring GI tract gases, such as identifying inflammation in the gut caused by Crohn’s disease and delivering drugs precisely to those regions.
    The researchers tested the system’s performance in a variety of mediums that mimic the GI tract, including a simulated cow intestine and liquids designed to replicate stomach and intestinal fluids.
    “During these tests, the device demonstrated its ability to pinpoint its location and measure levels of oxygen and ammonia gases,” says Khan. “Any ingestible device can utilize the technology we’ve developed.”
    However, there are still improvements to be made to the device, says Khan, such as designing it to be smaller and to use less power. Next, as they continue to hone the device, Khan and colleagues plan to test it in pigs in order to study its safety and effectiveness in an organism with human-like biology.
    “Successful outcomes from these trials will bring the device nearer to readiness for human clinical trials,” says Khan. “We are optimistic about the practicality of the system and believe it will soon be applicable for use in humans.”

  • AI-powered simulation training improves human performance in robotic exoskeletons

    Researchers at North Carolina State University have demonstrated a new method that leverages artificial intelligence (AI) and computer simulations to train robotic exoskeletons to autonomously help users save energy while walking, running and climbing stairs.
    “This work proposes and demonstrates a new machine-learning framework that bridges the gap between simulation and reality to autonomously control wearable robots to improve mobility and health of humans,” says Hao Su, corresponding author of a paper on the work, which will be published June 12 in the journal Nature.
    “Exoskeletons have enormous potential to improve human locomotive performance,” says Su, who is an associate professor of mechanical and aerospace engineering at North Carolina State University. “However, their development and broad dissemination are limited by the requirement for lengthy human tests and handcrafted control laws.
    “The key idea here is that the embodied AI in a portable exoskeleton is learning how to help people walk, run or climb in a computer simulation, without requiring any experiments,” says Su.
    Specifically, the researchers focused on improving autonomous control of embodied AI systems, in which an AI program is integrated into physical robot hardware. This work focused on teaching robotic exoskeletons how to assist able-bodied people with various movements. Normally, users have to spend hours “training” an exoskeleton so that the technology knows how much force is needed, and when to apply it, to help them walk, run or climb stairs. The new method lets users benefit from the exoskeletons immediately.
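    A heavily simplified sketch of the simulation-first idea follows: search for assistance-profile parameters that minimize a simulated energy cost, so that no human experiments are needed during training. The cost model, parameters, and random-search optimizer are toy placeholders, not the authors' learning framework.

    ```python
    # Toy simulation-first training loop: optimize exoskeleton assistance
    # parameters against a simulated metabolic-cost surrogate.
    import numpy as np

    rng = np.random.default_rng(42)

    def simulated_metabolic_cost(params):
        """Toy surrogate: cost is lowest near a hypothetical 'ideal' assistance
        timing and magnitude, with noise standing in for gait variability."""
        ideal = np.array([0.55, 0.30])  # peak-torque timing (fraction of gait cycle), magnitude
        return float(np.sum((params - ideal) ** 2) + 0.01 * rng.standard_normal())

    def train_in_simulation(iterations=200, step=0.05):
        """Simple random-search optimizer over the assistance parameters."""
        best = np.array([0.5, 0.1])
        best_cost = simulated_metabolic_cost(best)
        for _ in range(iterations):
            candidate = best + step * rng.standard_normal(2)
            cost = simulated_metabolic_cost(candidate)
            if cost < best_cost:
                best, best_cost = candidate, cost
        return best, best_cost

    params, cost = train_in_simulation()
    print("learned assistance parameters:", params, "simulated cost:", cost)
    ```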
    “This work is essentially making science fiction reality — allowing people to burn less energy while conducting a variety of tasks,” says Su.
    “We have developed a way to train and control wearable robots to directly benefit humans,” says Shuzhen Luo, first author of the paper and a former postdoctoral researcher at NC State. Luo is now an assistant professor at Embry-Riddle Aeronautical University.

    For example, in testing with human subjects, the researchers found that study participants used 24.3% less metabolic energy when walking in the robotic exoskeleton than without the exoskeleton. Participants used 13.1% less energy when running in the exoskeleton, and 15.4% less energy when climbing stairs.
    “It’s important to note that these energy reductions are comparing the performance of the robotic exoskeleton to that of a user who is not wearing an exoskeleton,” Su says. “That means it’s a true measure of how much energy the exoskeleton saves.”
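    For reference, a figure like “24.3% less metabolic energy” is computed by comparing the metabolic rate measured with the exoskeleton to the no-exoskeleton baseline; the wattage values in the snippet below are hypothetical, chosen only to reproduce the arithmetic.

    ```python
    # Percent reduction in metabolic energy relative to the no-exoskeleton baseline.
    def percent_reduction(baseline_watts, assisted_watts):
        return 100.0 * (baseline_watts - assisted_watts) / baseline_watts

    print(percent_reduction(300.0, 227.1))  # about 24.3% reduction (hypothetical wattages)
    ```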
    While this study focused on the researchers’ work with able-bodied people, the new method also applies to robotic exoskeleton applications aimed at helping people with mobility impairments.
    “Our framework may offer a generalizable and scalable strategy for the rapid development and widespread adoption of a variety of assistive robots for both able-bodied and mobility-impaired individuals,” Su says.
    “We are in the early stages of testing the new method’s performance in robotic exoskeletons being used by older adults and people with neurological conditions, such as cerebral palsy. And we are also interested in exploring how the method could improve the performance of robotic prosthetic devices for amputee populations.”
    This research was done with support from the National Science Foundation under awards 1944655 and 2026622; the National Institute on Disability, Independent Living, and Rehabilitation Research, under award 90DPGE0019 and Switzer Research Fellowship SFGE22000372; and the National Institutes of Health, under award 1R01EB035404.
    Shuzhen Luo and Hao Su are co-inventors on intellectual property related to the controller discussed in this work. Su is also a co-founder of, and has a financial interest in, Picasso Intelligence, LLC, which develops exoskeletons.

  • Hybrid work is a ‘win-win-win’ for companies, workers

    It is one of the most hotly debated topics in today’s workplace: Is allowing employees to log in from home a few days a week good for their productivity, careers, and job satisfaction?
    Nicholas Bloom, a Stanford economist and one of the foremost researchers on work-from-home policies, has uncovered compelling evidence that hybrid schedules are a boon to both employees and their bosses.
    In a study newly published in the journal Nature, based on an experiment with more than 1,600 workers at Trip.com — a Chinese company that is one of the world’s largest online travel agencies — Bloom finds that employees who work from home two days a week are just as productive and as likely to be promoted as their fully office-based peers.
    On a third key measure, employee turnover, the results were also encouraging. Resignations fell by 33 percent among workers who shifted from working full-time in the office to a hybrid schedule. Women, non-managers, and employees with long commutes were the least likely to quit their jobs when their treks to the office were cut to three days a week. Trip.com estimates that reduced attrition saved the company millions of dollars.
    “The results are clear: Hybrid work is a win-win-win for employee productivity, performance, and retention,” says Bloom, who is the William D. Eberle Professor of Economics at the Stanford School of Humanities and Sciences and also a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
    The findings are especially significant given that, by Bloom’s count, about 100 million workers worldwide now spend a mix of days at home and in the office each week, more than four years after COVID-19 pandemic lockdowns upended how and where people do their jobs. Many of these hybrid workers are lawyers, accountants, marketers, software engineers and other professionals with a college degree or higher.
    Over time, though, working outside the office has come under attack from high-profile business leaders like Elon Musk, the head of Tesla, SpaceX, and X (formerly Twitter), and Jamie Dimon, CEO of JPMorgan Chase, who argue that the costs of remote work outweigh any benefits. Opponents say that employee training and mentoring, innovation, and company culture suffer when workers are not on site five days a week.

    Bloom says that critics often confuse hybrid with fully remote work, in part because most of the research into working from home has focused on workers who aren’t required to come into an office and on a specific type of job, like customer support or data entry. The results of these studies have been mixed, though they tend to skew negative. This suggests to Bloom that problems with fully remote work arise when it’s not managed well.
    As one of the few randomized controlled trials to analyze hybrid arrangements — where workers are offsite two or three days a week and are in the office the rest of the time — Bloom says his findings offer important lessons for other multinationals, many of which share similarities with Trip.com.
    “This study offers powerful evidence for why 80 percent of U.S. companies now offer some form of remote work,” Bloom says, “and for why the remaining 20 percent of firms that don’t are likely paying a price.”
    The research is also the largest study to date of hybrid work among university-trained professionals to rely on the gold standard in research, the randomized controlled trial. This allowed Bloom and his co-authors to show that the benefits they identify resulted from Trip.com’s hybrid experiment and not something else.
    In addition to Bloom, the study’s authors are Ruobing Han, an assistant professor at The Chinese University of Hong Kong, and James Liang, an economics professor at Peking University and co-founder of Trip.com. Han and Liang both earned their PhDs in economics from Stanford.
    The hybrid approach: Only winners
    Trip.com didn’t have a hybrid work policy when it undertook the 6-month experiment starting in 2021 that is at the heart of the study. In all, 395 managers and 1,217 non-managers with undergraduate degrees — all of whom worked in engineering, marketing, accounting, and finance in the company’s Shanghai office — participated. Employees whose birthdays fell on an even-numbered day of the month were told to come to the office five days a week. Workers with odd-numbered birthdays were allowed to work from home two days a week.
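    The assignment rule is simple enough to state in code. The sketch below applies the birthday-parity rule described above; the employee names and birthdays are made up for illustration.

    ```python
    # Randomization by birthday parity, as used in the Trip.com experiment:
    # even-numbered birth day -> office five days a week;
    # odd-numbered birth day -> hybrid (two days at home).
    from datetime import date

    def assign_condition(birthday):
        return "office" if birthday.day % 2 == 0 else "hybrid"

    employees = {
        "employee_a": date(1990, 3, 14),  # 14 -> even -> office
        "employee_b": date(1988, 7, 9),   # 9  -> odd  -> hybrid
    }
    for name, bday in employees.items():
        print(name, assign_condition(bday))
    ```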

    Of the study participants, 32 percent also had postgraduate degrees, mostly in computer science, accounting or finance. Most were in their mid-30s, half had children, and 65 percent were male.
    In finding that hybrid work not only helps employees, but also companies, the researchers relied on various company data and worker surveys, including performance reviews and promotions for up to two years after the experiment. Trip.com’s thorough performance review process includes evaluations of an employee’s contributions to innovation, leadership, and mentoring.
    The study authors also compared the quality and amount of computer code written by Trip.com software engineers who were hybrid against code produced by peers who were in the office full-time.
    In finding that hybrid work had zero effect on workers’ productivity or career advancement and dramatically boosted retention rates, the study authors highlight some important nuances. Resignations, for example, fell only among non-managers; managers were just as likely to quit whether they were hybrid or not.
    Bloom and his coauthors identify misconceptions held by workers and their bosses. Workers, especially women, were reluctant to sign up as volunteers for Trip.com’s hybrid trial — likely for fear that they would be judged negatively for not coming into the office five days a week, Bloom says. In addition, managers predicted on average that remote working would hurt productivity, only to change their minds by the time the experiment ended.
    For business leaders, Bloom says the study confirms that concerns that hybrid work does more harm than good are overblown.
    “If managed right, letting employees work from home two or three days a week still gets you the level of mentoring, culture-building, and innovation that you want,” Bloom says. “From an economic policymaking standpoint, hybrid work is one of the few instances where there aren’t major trade-offs with clear winners and clear losers. There are almost only winners.”
    Trip.com itself was sold on the results: it now allows hybrid work company-wide.

  • Female AI ‘teammate’ generates more participation from women

    An artificial intelligence-powered virtual teammate with a female voice boosts participation and productivity among women on teams dominated by men, according to new Cornell University research.
    The findings suggest that the gender of an AI’s voice can positively tweak the dynamics of gender-imbalanced teams and could help inform the design of bots used for human-AI teamwork, researchers said.
    The findings mirror previous research that shows minority teammates are more likely to participate if the team adds members similar to them, said Angel Hsing-Chi Hwang, postdoctoral associate in information science and lead author of the paper.
    To better understand how AI can help gender-imbalanced teams, Hwang and Andrea Stevenson Won, associate professor of communication and the paper’s co-author, carried out an experiment with around 180 men and women who were assigned to groups of three and asked to collaborate virtually on a set of tasks (the study only included participants who identified as either male or female).
    Each three-person group included either a single woman or a single man, plus a fourth teammate: an AI agent in the form of an abstract shape with either a male or female voice, which would appear on screen to read instructions, contribute an idea and handle timekeeping. There was a catch — the bot wasn’t completely automated. In what’s referred to in human-computer interaction as a “Wizard of Oz” experiment, Hwang was behind the scenes, feeding lines generated by ChatGPT into the bot.
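    A hedged sketch of that “Wizard of Oz” relay is shown below: candidate lines come from a text generator, but a hidden operator approves, edits, or discards each one before the agent speaks. The function names and prompts are hypothetical placeholders, not the study's actual tooling.

    ```python
    # Wizard-of-Oz relay sketch: a hidden human operator vets every line before
    # the AI "teammate" delivers it. generate_candidate_reply is a hypothetical
    # stand-in for whatever language-model backend drafts the lines (e.g., ChatGPT).

    def generate_candidate_reply(conversation):
        """Hypothetical stand-in for a language-model call."""
        return "Quick timekeeping note: five minutes left on this task."

    def wizard_of_oz_turn(conversation):
        """Operator approves (y), edits (e), or discards (n) the generated line."""
        candidate = generate_candidate_reply(conversation)
        decision = input(f"Send this line? (y/e/n)\n> {candidate}\n").strip().lower()
        if decision == "y":
            return candidate
        if decision == "e":
            return input("Edited line: ")
        return None

    if __name__ == "__main__":
        chat = ["participant_1: Let's start brainstorming."]
        line = wizard_of_oz_turn(chat)
        if line is not None:
            chat.append(f"agent: {line}")
        print("\n".join(chat))
    ```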
    After the experiment, Hwang and Won analyzed the chat logs of team conversations to determine how often participants offered ideas or arguments. They also asked participants to reflect on the experience.
    “When we looked at participants’ actual behaviors, that’s where we started to see differences between men and women and how they were reacting when there was either a female agent or a male agent on the team,” she said.
    “One interesting thing about this study is that most participants didn’t express a preference for a male- or female-sounding voice,” Won said. “This implies that people’s social inferences about AI can be influential even when people don’t believe they are important.”
    When women were in the minority, they participated more when the AI’s voice was female, while men in the minority were more talkative but were less focused on tasks when working with a male-sounding bot, researchers found. Unlike the men, women reported significantly more positive perceptions of the AI teammate when women were the minority members, according to researchers.
    “With only a gendered voice, the AI agent can provide a small degree of support to women minority members in a group,” said Hwang.

  • 3D-printed mini-actuators can move small soft robots, lock them into new shapes

    Researchers from North Carolina State University have demonstrated miniature soft hydraulic actuators that can be used to control the deformation and motion of soft robots that are less than a millimeter thick. The researchers have also demonstrated that this technique works with shape memory materials, allowing users to repeatedly lock the soft robots into a desired shape and return to the original shape as needed.
    “Soft robotics holds promise for many applications, but it is challenging to design the actuators that drive the motion of soft robots on a small scale,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. “Our approach makes use of commercially available multi-material 3D printing technologies and shape memory polymers to create soft actuators on a microscale that allow us to control very small soft robots, which allows for exceptional control and delicacy.”
    The new technique relies on creating soft robots that consist of two layers. The first layer is a flexible polymer that is created using 3D printing technologies and incorporates a pattern of microfluidic channels — essentially very small tubes running through the material. The second layer is a flexible shape memory polymer. Altogether, the soft robot is only 0.8 millimeters thick.
    By pumping fluid into the microfluidic channels, users create hydraulic pressure that forces the soft robot to move and change shape. The pattern of microfluidic channels controls the motion and shape change of the soft robot — whether it bends, twists, or so on. In addition, the amount of fluid being introduced, and how quickly it is introduced, controls how quickly the soft robot moves and the amount of force the soft robot exerts.
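    As a rough illustration of why the amount and rate of injected fluid govern force and speed, the sketch below uses the Hagen-Poiseuille relation to estimate the pressure needed to drive a given flow through a narrow channel. The channel dimensions and fluid properties are hypothetical and are not taken from the paper.

    ```python
    # Hagen-Poiseuille estimate for laminar flow in a circular microchannel:
    # dP = 8 * mu * L * Q / (pi * r^4). Faster injection (larger Q) requires
    # proportionally higher driving pressure.
    import math

    def pressure_drop_pa(flow_rate_m3s, viscosity_pa_s, length_m, radius_m):
        """Pressure drop needed to push a flow rate through a circular channel."""
        return 8 * viscosity_pa_s * length_m * flow_rate_m3s / (math.pi * radius_m ** 4)

    water_viscosity = 1e-3   # Pa*s
    channel_length = 0.02    # 2 cm of channel (hypothetical)
    channel_radius = 100e-6  # 100 micrometer radius (hypothetical)

    for q_ul_per_s in (0.5, 2.0, 8.0):   # injection rates in microliters per second
        q = q_ul_per_s * 1e-9            # convert to cubic meters per second
        dp = pressure_drop_pa(q, water_viscosity, channel_length, channel_radius)
        print(f"{q_ul_per_s} uL/s -> {dp:.0f} Pa")
    ```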
    If users wish to ‘freeze’ the soft robot’s shape, they can apply moderate heat (64 °C, or 147 °F), and then let the robot cool briefly. This prevents the soft robot from reverting to its original shape, even after the liquid in the microfluidic channels is pumped out. If users want to return the soft robot to its original shape, they simply apply the heat again after pumping out the liquid, and the robot relaxes to its original configuration.
    “A key factor here is fine-tuning the thickness of the shape memory layer relative to the layer that contains the microfluidic channels,” says Yinding Chi, co-lead author of the paper and a former Ph.D. student at NC State. “You need the shape memory layer to be thin enough to bend when the actuator’s pressure is applied, but thick enough to get the soft robot to retain its shape even after the pressure is removed.”
    To demonstrate the technique, the researchers created a soft robot “gripper,” capable of picking up small objects. The researchers applied hydraulic pressure, causing the gripper to pinch closed on an object. By applying heat, the researchers were able to fix the gripper in its “closed” position, even after releasing pressure from the hydraulic actuator. The gripper could then be moved — transporting the object it held — into a new position. Researchers then applied heat again, causing the gripper to release the object it had picked up. Video of these soft robots in action can be found at https://youtu.be/5SIwsw9IyIc.

    “Because these soft robots are so thin, we can heat them up to 64 °C quickly and easily using a small infrared light source — and they also cool very quickly,” says Haitao Qing, co-lead author of the paper and a Ph.D. student at NC State. “So this entire series of operations only takes about two minutes.
    “And the movement does not have to be a gripper that pinches,” says Qing. “We’ve also demonstrated a gripper that was inspired by vines in nature. These grippers quickly wrap around an object and clasp it tightly, allowing for a secure grip.
    “This paper serves as a proof-of-concept for this new technique, and we’re excited about potential applications for this class of miniature soft actuators in small-scale soft robots, shape-shifting machines, and biomedical engineering.”
    This work was done with support from the National Science Foundation under grants 2126072 and 2329674.

  • Virtual reality as a reliable shooting performance-tracking tool

    Virtual reality technology can do more than teach weaponry skills to law enforcement and military personnel, a new study suggests: It can accurately record shooting performance and reliably track individuals’ progress over time.
    In the study of 30 people with a range of experience levels in handling a rifle, researchers at The Ohio State University found that a ballistic simulator captured data on the shooters’ accuracy, decision-making and reaction time — down to the millimeter in distance and millisecond in time — on a consistent basis.
    In addition to confirming that the simulator — called the VirTra V-100 — is a dependable research tool, the findings could lead to establishing the first-ever standardized performance scores for virtual reality ballistics training.
    “To our knowledge, we’re the first team to answer the question of whether the simulator could be converted to an assessment tool and if it’s credible to use it day-to-day,” said Alex Buga, first author of the study and a PhD student in kinesiology at Ohio State.
    “We’ve figured out how to export the data and interpret it. We’ve focused on the three big challenges of marksmanship, decision-making and reaction time to measure 21 relevant variables — allowing us to put a report in a user’s hand and say, ‘This is how accurate, precise, focused and fast you are.'”
    The study was published online June 6 in The Journal of Strength and Conditioning Research.
    U.S. military leaders and law enforcement agencies have shown an interest in increasing the use of virtual reality for performance assessment, said Buga and senior study author Jeff Volek, professor of human sciences at Ohio State. Earlier this year, an Ohio Attorney General Task Force on the Future of Police Training in Ohio recommended incorporating virtual reality technology into training protocols.

    Volek is the principal investigator on a $10 million U.S. Department of Defense grant focused on improving the health of military service members, veterans and the American public. As part of that initiative, the research team is investigating the extent to which nutritional ketosis reduces detrimental effects of sleep loss on cognitive and physical performance in ROTC cadets — including their shooting ability as measured by the VirTra simulator. Verifying the simulator’s results for research purposes triggered the attempt to extract and analyze its data.
    “We were using it as an outcome variable for research, and we found that it has very good day-to-day reproducibility of performance, which is crucial for research,” Volek said. “You want a sensitive and reproducible outcome in your test where there’s not a lot of device or equipment variation.”
    Because the lab also focuses on human performance in first responders, researchers’ conversations with military and law enforcement communities convinced Buga that data collected by the simulator could be more broadly useful.
    “I created a few programs that enabled us to calculate the shooting data and produce objective training measures,” he said. “This equipment is close to what the military and police use every day, so this has potential to be used as a screening tool across the country.”
    Users of the simulator operate the infrared-guided M4 rifle by shooting at a large screen onto which different digitally generated visuals are projected — no headset required. The rifle at Ohio State has been retrofitted to produce the same recoil as a police or military weapon.
    The study participants included civilians, police and SWAT officers, and ROTC cadets. Each was first familiarized in a single learning session with the simulator and then completed multiple rounds of three different tasks in each of three study performance sessions.

    In the first task, participants fired at the same target a total of 50 times to produce measures of shooting precision. The decision-making assessment involved shooting twice within two seconds at designated shapes and colors on a screen displaying multiple shape and color choices. In the reaction-time scenario, participants shot at a series of plates from left to right as rapidly as possible.
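    From logged shot coordinates like those the simulator exports, precision and accuracy measures can be computed directly. The sketch below computes a mean point of impact, mean radius, and extreme spread for a synthetic 50-shot group; these are generic marksmanship metrics, not necessarily the study's 21 variables.

    ```python
    # Generic precision metrics from a 50-shot group (synthetic data, in mm
    # offsets from the aim point).
    import numpy as np

    rng = np.random.default_rng(1)
    shots = rng.normal(loc=[0.0, 0.0], scale=15.0, size=(50, 2))

    mean_point_of_impact = shots.mean(axis=0)                  # bias relative to aim point
    radii = np.linalg.norm(shots - mean_point_of_impact, axis=1)
    mean_radius = radii.mean()                                 # group tightness (precision)
    extreme_spread = max(
        np.linalg.norm(a - b) for a in shots for b in shots    # widest pair of shots
    )

    print("mean point of impact (mm):", mean_point_of_impact.round(1))
    print("mean radius (mm):", round(mean_radius, 1))
    print("extreme spread (mm):", round(extreme_spread, 1))
    ```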
    Internal consistency ratings showed the simulator generated good to excellent test-retest agreement on the 21 variables measured.
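    One common way to quantify test-retest agreement is an intraclass correlation coefficient. The sketch below computes ICC(3,1) for synthetic scores from 30 shooters across three sessions; the data are placeholders, and the study's exact reliability statistics may differ.

    ```python
    # ICC(3,1), consistency form (Shrout & Fleiss), for an (n subjects x k sessions)
    # score matrix: values near 1 indicate highly reproducible day-to-day performance.
    import numpy as np

    def icc_3_1(scores):
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand = scores.mean()
        ss_total = ((scores - grand) ** 2).sum()
        ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between subjects
        ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between sessions
        ms_rows = ss_rows / (n - 1)
        ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

    rng = np.random.default_rng(2)
    true_ability = rng.normal(70, 10, size=(30, 1))            # stable individual skill
    sessions = true_ability + rng.normal(0, 3, size=(30, 3))   # three sessions with noise
    print("ICC(3,1):", round(icc_3_1(sessions), 2))            # close to 1 = reliable
    ```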
    All participants were well-rested and completed the study sessions at about the same time of day. Self-evaluations showed that participants’ overall confidence in their shooting performance increased from their first to final sessions. They also rated the simulator as a realistic and low-stress shooting assessment tool.
    The low stress and well-rested conditions were important to establishing baseline performance measures, the researchers noted, which then would enable evaluating how injuries and other physical demands of first-responder professions affect shooting performance.
    “This simulator could be used to assess the effectiveness of specific training programs designed to improve shooting performance, or to evaluate marksmanship in response to various stressors encountered by the same law enforcement and military personnel,” Buga said. “These novel lines of evidence have enabled us to push the boundaries of tactical research and set the groundwork for using virtual reality in sophisticated training scenarios that support national defense goals.”
    Additional co-authors, all from Ohio State, included Drew Decker, Bradley Robinson, Christopher Crabtree, Justen Stoner, Lucas Arce, Xavier El-Shazly, Madison Kackley, Teryn Sapper, John Paul Anders and William Kraemer.