More stories

  • Age, race impact AI performance on digital mammograms

    In a study of nearly 5,000 screening mammograms interpreted by an FDA-approved AI algorithm, patient characteristics such as race and age influenced false positive results. The study’s results were published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    “AI has become a resource for radiologists to improve their efficiency and accuracy in reading screening mammograms while mitigating reader burnout,” said Derek L. Nguyen, M.D., assistant professor at Duke University in Durham, North Carolina. “However, the impact of patient characteristics on AI performance has not been well studied.”
    Dr. Nguyen said while preliminary data suggests that AI algorithms applied to screening mammography exams may improve radiologists’ diagnostic performance for breast cancer detection and reduce interpretation time, there are some aspects of AI to be aware of.
    “There are few demographically diverse databases for AI algorithm training, and the FDA does not require diverse datasets for validation,” he said. “Because of the differences among patient populations, it’s important to investigate whether AI software can accommodate and perform at the same level for different patient ages, races and ethnicities.”
    In the retrospective study, researchers identified patients with negative (no evidence of cancer) digital breast tomosynthesis screening examinations performed at Duke University Medical Center between 2016 and 2019. All patients were followed for a two-year period after the screening mammograms, and no patients were diagnosed with a breast malignancy.
    The researchers randomly selected a subset of this group consisting of 4,855 patients (median age 54 years) broadly distributed across four ethnic/racial groups. The subset included 1,316 (27%) white, 1,261 (26%) Black, 1,351 (28%) Asian, and 927 (19%) Hispanic patients.
    A commercially available AI algorithm interpreted each exam in the subset of mammograms, generating both a case score (or certainty of malignancy) and a risk score (or one-year subsequent malignancy risk).

    “Our goal was to evaluate whether an AI algorithm’s performance was uniform across age, breast density types and different patient race/ethnicities,” Dr. Nguyen said.
    Given all mammograms in the study were negative for the presence of cancer, anything flagged as suspicious by the algorithm was considered a false positive result. False positive case scores were significantly more likely in Black and older patients (71-80 years) and less likely in Asian patients and younger patients (41-50 years) compared to white patients and women between the ages of 51 and 60.
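    The comparison above comes down to simple bookkeeping: in a cohort known to be cancer-free, a subgroup's false-positive rate is just the share of its exams that the algorithm flagged. The sketch below is a minimal illustration of that tabulation, not the study's code; the dataframe and column names are hypothetical.

    ```python
    # Minimal sketch, not the study's code: with a cohort known to be cancer-free,
    # the false-positive rate for a subgroup is simply the share of its exams that
    # the AI flagged. The dataframe and column names below are hypothetical.
    import pandas as pd

    exams = pd.DataFrame({
        "race_ethnicity": ["White", "Black", "Asian", "Hispanic", "Black", "White"],
        "age_group":      ["51-60", "71-80", "41-50", "51-60", "61-70", "41-50"],
        "ai_flagged":     [False, True, False, False, True, False],
    })

    # Share of cancer-free exams flagged as suspicious, by subgroup.
    fp_by_race = exams.groupby("race_ethnicity")["ai_flagged"].mean()
    fp_by_age = exams.groupby("age_group")["ai_flagged"].mean()
    print(fp_by_race, fp_by_age, sep="\n\n")
    ```

    The study then tested whether subgroup rates differed significantly from the reference groups (white patients and women aged 51-60); the sketch only shows the underlying counting.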
    “This study is important because it highlights that any AI software purchased by a healthcare institution may not perform equally across all patient ages, races/ethnicities and breast densities,” Dr. Nguyen said. “Moving forward, I think AI software upgrades should focus on ensuring demographic diversity.”
    Dr. Nguyen said healthcare institutions should understand the patient population they serve before purchasing an AI algorithm for screening mammogram interpretation and ask vendors about their algorithm training.
    “Having a baseline knowledge of your institution’s demographics and asking the vendor about the ethnic and age diversity of their training data will help you understand the limitations you’ll face in clinical practice,” he said.

  • Math discovery provides new method to study cell activity, aging

    New mathematical tools revealing how quickly cell proteins break down are poised to uncover deeper insights into how we age, according to a recently published paper co-authored by a Mississippi State researcher and his colleagues from Harvard Medical School and the University of Cambridge.
    Galen Collins, assistant professor in MSU’s Department of Biochemistry, Molecular Biology, Entomology and Plant Pathology, co-authored the groundbreaking paper published in the Proceedings of the National Academy of Sciences, or PNAS, in April.
    “We already understand how quickly proteins are made, which can happen in a matter of minutes,” said Collins, who is also a scientist in the Mississippi Agricultural and Forestry Experiment Station. “Until now, we’ve had a very poor understanding of how much time it takes them to break down.”
    The paper in applied mathematics, “Maximum entropy determination of mammalian proteome dynamics,” presents the new tools that quantify the degradation rates of cell proteins — how quickly they break down — helping us understand how cells grow and die and how we age. Proteins — complex molecules made from various combinations of amino acids — carry the bulk of the workload within a cell, providing its structure, responding to messages from outside the cell and removing waste.
    The results proved that not all proteins degrade at the same pace but instead fall into one of three categories, breaking down over the course of minutes, hours or days. While previous research has examined cell protein breakdown, this study was the first to quantify mathematically the degradation rates of all cell protein molecules, using a technique called maximum entropy.
    “For certain kinds of scientific questions, experiments can often reveal infinitely many possible answers; however, they are not all equally plausible,” said lead author Alexander Dear, research fellow in applied mathematics at Harvard University. “The principle of maximum entropy is a mathematical law that shows us how to precisely calculate the plausibility of each answer — its ‘entropy’ — so that we can choose the one that is the most likely.”
    “This kind of math is sort of like a camera that zooms in on your license plate from far away and figures out what the numbers should be,” Collins said. “Maximum entropy gives us a clear and precise picture of how protein degradation occurs in cells.”
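    As a rough illustration of the principle (not the authors' implementation), maximum entropy can be framed as an optimization: among all distributions of degradation rates consistent with the measurements, choose the one with the highest entropy. The toy sketch below fits a distribution of half-lives to synthetic pulse-chase-style data; every number in it is invented for the example.

    ```python
    # Toy illustration of a maximum-entropy fit, not the authors' implementation.
    # Idea: among all distributions of degradation rates that are consistent with
    # the measurements, prefer the one with the highest entropy. All numbers below
    # are synthetic, pulse-chase-style data invented for the example.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Candidate half-lives spanning minutes to days, and their degradation rates.
    half_lives_h = np.logspace(-1, 2, 40)          # 0.1 h to 100 h
    rates = np.log(2) / half_lives_h               # per hour

    # Hidden "truth": fast, intermediate and slow protein pools.
    true_p = np.zeros_like(half_lives_h)
    true_p[[5, 20, 35]] = [0.3, 0.5, 0.2]

    # Simulated measurements: labelled protein remaining at several chase times.
    times_h = np.array([0.5, 1, 2, 4, 8, 24, 48, 96])
    decay = np.exp(-np.outer(times_h, rates))      # shape (times, rates)
    observed = decay @ true_p + rng.normal(0, 0.01, times_h.size)

    def objective(p, alpha=1e-3):
        """Data misfit minus alpha * entropy (entropy-regularized fit)."""
        p = np.clip(p, 1e-12, None)
        misfit = np.sum((decay @ p - observed) ** 2)
        entropy = -np.sum(p * np.log(p))
        return misfit - alpha * entropy

    k = rates.size
    fit = minimize(
        objective,
        x0=np.full(k, 1.0 / k),                    # start from the uniform (max-entropy) guess
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
        method="SLSQP",
    )
    p_hat = fit.x
    print("estimated mean half-life (hours):", float((p_hat * half_lives_h).sum()))
    ```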
    In addition, the team used these tools to study some specific implications of protein degradation for humans and animals. For one, they examined how those rates change as muscles develop and adapt to starvation.

    “We found that starvation had the greatest impact on the intermediate group of proteins in muscular cells, which have a half-life of a few hours, causing the breakdown to shift and accelerate,” Collins said. “This discovery could have implications for cancer patients who experience cachexia, or muscle wasting due to the disease and its treatments.”
    They also explored how a shift in the breakdown of certain cell proteins contributes to neurodegenerative disease.
    “These diseases occur when waste proteins, which usually break down quickly, live longer than they should,” Collins said. “The brain becomes like a teenager’s bedroom, accumulating trash, and when you don’t clean it up, it becomes uninhabitable.”
    Dear affirmed the study’s value lies not only in what it revealed about cell protein degeneration, but also in giving scientists a new method to investigate cell activity with precision.
    “Our work provides a powerful new experimental method for quantifying protein metabolism in cells,” he said. “Its simplicity and rapidity make it particularly well-suited for studying metabolic changes.”
    Collins’s post-doctoral advisor at Harvard and a co-author of the article, the late Alfred Goldberg, was a pioneer in studying the life and death of proteins. Collins noted this study was built on nearly five decades of Goldberg’s research and his late-career collaboration with mathematicians from the University of Cambridge. After coming to MSU a year ago, Collins continued collaborating with his colleagues to complete the paper.
    “It’s an incredible honor to be published in PNAS, but it was also a lot of fun being part of this team,” Collins said. “And it’s very meaningful to see my former mentor’s body of work wrapped up and published.”

  • Evolving market dynamics foster consumer inattention that can lead to risky purchases

    Researchers have developed a new theory of how changing market conditions can lead large numbers of otherwise cautious consumers to buy risky products such as subprime mortgages, cryptocurrency or even cosmetic surgery procedures.
    These changes can occur in categories of products that are generally low risk when they enter the market. As demand increases, more companies may enter the market and try to attract consumers with lower priced versions of the product that carry more risk. If the negative effects of that risk are not immediately noticeable, the market can evolve to keep consumers ignorant of the risks, said Michelle Barnhart, an associate professor in Oregon State University’s College of Business and a co-author of a new paper.
    “It’s not just the consumer’s fault. It’s not just the producer’s fault. It’s not just the regulator’s fault. All these things together create this dilemma,” Barnhart said. “Understanding how such a situation develops could help consumers, regulators and even producers make better decisions when they are faced with similar circumstances in the future.”
    The researchers’ findings were recently published in the Journal of Consumer Research. The paper’s lead author is Lena Pellandini-Simanyi of the University of Lugano in Switzerland.
    Barnhart, who studies consumer culture and market systems, has researched credit and debt in the U.S. Pellandini-Simanyi, a sociologist with expertise in consumer markets, has studied personal finance in European contexts. Together they analyzed the case of the Hungarian mortgage crisis to understand how people who generally view themselves as risk averse end up pursuing a high-risk product or service.
    To better understand the consumer mindset, the researchers conducted 47 interviews with Hungarian borrowers who took out low-risk mortgages in the local forint currency or in higher risk foreign currency as the Hungarian mortgage industry evolved between 2001 and 2010. They also conducted a larger survey of mortgage borrowers, interviewed 37 finance and mortgage industry experts and financial regulators and analyzed regulatory documents and parliamentary proceedings.
    They found patterns that led to mortgages becoming riskier over time, and social and marketplace changes that led consumers to enter a state of collective ignorance of increasing risks. In addition, they identified characteristics that encouraged these patterns. Other markets with these characteristics are likely to develop in a similar way.

    “Typically, when there is a new product on the market, people are quite skeptical. The early adopters carefully examine this product, they become highly educated about it and do a lot of work to determine if the risk is too high,” Pellandini-Simanyi said. “If they deem the risk too high, they don’t buy it.”
    But if those early adopters use the new product or service successfully, the next round of consumers is likely to assume the product will work for them in a similar fashion without examining it in as much detail, even if the quality of the product has been reduced, the researchers noted.
    “Then everything starts to spiral, with quality dropping in the rush to meet consumer demand and maintain profits, and consumers relying more and more on social information that suggests this is a safe purchase without investigating how the risks might have changed,” Barnhart said.
    “It also can lead to a ‘prudence paradox,’ where the most risk averse people wait to enter the market until the end stages and end up buying super risky products. They exercise caution by waiting but they wait so long, they end up with the worst products.”
    The spiral is typically only broken through intervention, either through market collapse or regulation. For example, while cosmetic surgery is relatively safe, an increase in availability of inexpensive procedures at facilities that lacked sufficient equipment and expertise led to a rise in botched procedures until regulation caught up.
    “These findings demonstrate the power of social information,” Barnhart said. “In this environment, it’s very difficult for any individual consumer to pay attention to and assess risk because doing so is so far outside of the norm.”
    To protect themselves against collective ignorance, consumers should ensure that they are weighing their personal risk against others whose experiences are actually similar, Pellandini-Simanyi said.
    “Make sure this is an apples-to-apples comparison of products and the consumers’ circumstances,” she said.

  • AI chips could get a sense of time

    Artificial neural networks may soon be able to process time-dependent information, such as audio and video data, more efficiently. The first memristor with a ‘relaxation time’ that can be tuned is reported today in Nature Electronics, in a study led by the University of Michigan.
    Memristors, electrical components that store information in their electrical resistance, could reduce AI’s energy needs by about a factor of 90 compared to today’s graphics processing units (GPUs). Already, AI is projected to account for about half a percent of the world’s total electricity consumption in 2027, and that has the potential to balloon as more companies sell and use AI tools.
    “Right now, there’s a lot of interest in AI, but to process bigger and more interesting data, the approach is to increase the network size. That’s not very efficient,” said Wei Lu, the James R. Mellor Professor of Engineering at U-M and co-corresponding author of the study with John Heron, U-M associate professor of materials science and engineering.
    The problem is that GPUs operate very differently from the artificial neural networks that run the AI algorithms — the whole network and all its interactions must be sequentially loaded from the external memory, which consumes both time and energy. In contrast, memristors offer energy savings because they mimic key aspects of the way that both artificial and biological neural networks function without external memory. To an extent, the memristor network can embody the artificial neural network.
    “We anticipate that our brand-new material system could improve the energy efficiency of AI chips six times over the state-of-the-art material without varying time constants,” said Sieun Chae, a recent U-M Ph.D. graduate in materials science and engineering and co-first author of the study with Sangmin Yoo, a recent U-M Ph.D. graduate in electrical and computer engineering.
    In a biological neural network, timekeeping is achieved through relaxation. Each neuron receives electrical signals and sends them on, but a signal is not guaranteed to move forward. Some threshold of incoming signals must be reached before the neuron will send its own, and it has to be met in a certain amount of time. If too much time passes, the neuron is said to relax as the electrical energy seeps out of it. Having neurons with different relaxation times in our neural networks helps us understand sequences of events.
    Memristors operate a little differently. Rather than the total presence or absence of a signal, what changes is how much of the electrical signal gets through. Exposure to a signal reduces the resistance of the memristor, allowing more of the next signal to pass. In memristors, relaxation means that the resistance rises again over time.
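    The timekeeping idea can be captured in a loose toy model (a sketch, not the device physics reported in the paper): treat a memristor-like unit as a conductance that jumps with each input pulse and relaxes back toward baseline with its own time constant, so that units with different relaxation times respond differently to the same pulse train.

    ```python
    # Loose toy model, not the device physics from the paper: a memristor-like unit
    # whose conductance jumps with each input pulse and relaxes back toward zero
    # with a tunable time constant tau. Units with different tau respond
    # differently to the same pulse train, which is the timekeeping idea above.
    import numpy as np

    def conductance_trace(pulse_times_ns, tau_ns, t_end_ns=2000, dt_ns=1.0, jump=0.2):
        """Simulate conductance over time for a given pulse train and relaxation time."""
        t = np.arange(0.0, t_end_ns, dt_ns)
        g = np.zeros_like(t)
        for i in range(1, t.size):
            g[i] = g[i - 1] * np.exp(-dt_ns / tau_ns)        # relaxation between pulses
            if np.any(np.isclose(t[i], pulse_times_ns)):     # an incoming pulse arrives
                g[i] += jump * (1.0 - g[i])                  # bounded conductance increase
        return t, g

    pulses_ns = np.array([100, 200, 300, 900, 1000])
    for tau_ns in (159, 278):                                # the range reported in the study
        _, g = conductance_trace(pulses_ns, tau_ns)
        print(f"tau = {tau_ns} ns -> peak conductance {g.max():.2f}")
    ```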

    While Lu’s group had explored building relaxation time into memristors in the past, it was not something that could be systematically controlled. But now, Lu and Heron’s team have shown that variations on a base material can provide different relaxation times, enabling memristor networks to mimic this timekeeping mechanism.
    The team built the materials on the superconductor YBCO, made of yttrium, barium, copper and oxygen. It has no electrical resistance at temperatures below -292 degrees Fahrenheit, but they wanted it for its crystal structure. It guided the organization of the magnesium, cobalt, nickel, copper and zinc oxides in the memristor material.
    Heron calls this type of oxide, an entropy-stabilized oxide, the “kitchen sink of the atomic world” — the more elements they add, the more stable it becomes. By changing the ratios of these oxides, the team achieved time constants ranging from 159 to 278 nanoseconds, or billionths of a second. The simple memristor network they built learned to recognize the sounds of the numbers zero to nine. Once trained, it could identify each number before the audio input was complete.
    These memristors were made through an energy-intensive process because the team needed perfect crystals to precisely measure their properties, but they anticipate that a simpler process would work for mass manufacturing.
    “So far, it’s a vision, but I think there are pathways to making these materials scalable and affordable,” Heron said. “These materials are earth-abundant, nontoxic, cheap and you can almost spray them on.”
    The research was funded by the National Science Foundation. It was done in partnership with researchers at the University of Oklahoma, Cornell University and Pennsylvania State University.
    The device was built in the Lurie Nanofabrication Facility and studied at the Michigan Center for Materials Characterization.
    Lu is also a professor of electrical and computer engineering and materials science and engineering. Chae is now an assistant professor of electrical engineering and computer science at Oregon State University.

  • World leaders still need to wake up to AI risks

    Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago.
    Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies.
    Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”
    World’s response not on track in face of potentially rapid AI progress
    According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems — outperforming human abilities across many critical domains — will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts.
    Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.
    World-leading AI experts issue call to action
    In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

    The paper marks the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks from advanced AI systems.
    Urgent priorities for AI governance
    The authors recommend that governments:
    • Establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
    • Mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
    • Require AI companies to prioritise safety and to demonstrate that their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation), which shift the burden of demonstrating safety to AI developers.
    • Implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.
    According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.
    AI impacts could be catastrophic
    AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.
    Stuart Russell OBE, Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations — that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

  • Blueprints of self-assembly

    Many biological structures of impressive beauty and sophistication arise through processes of self-assembly. Indeed, the natural world is teeming with intricate and useful forms that come together from many constituent parts, taking advantage of the built-in features of molecules.
    Scientists hope to gain a better understanding of how this process unfolds and how such bottom-up construction can be used to advance technologies in computer science, materials science, medical diagnostics and other areas.
    In new research, Arizona State University Assistant Professor Petr Sulc and his colleagues have taken a step closer to replicating nature’s processes of self-assembly. Their study describes the synthetic construction of a tiny, self-assembled crystal known as a “pyrochlore,” which bears unique optical properties.
    The key to creating the crystal is the development of a new simulation method that can predict and guide the self-assembly process, avoiding unwanted structures and ensuring the molecules come together in just the right arrangement.
    The advance provides a steppingstone to the eventual construction of sophisticated, self-assembling devices at the nanoscale — roughly the size of a single virus.
    The new methods were used to engineer the pyrochlore nanocrystal, a special type of lattice that could eventually function as an optical metamaterial, “a special type of material that only transmits certain wavelengths of light,” Sulc says. “Such materials can then be used to produce so-called optical computers and more sensitive detectors, for a range of applications.”
    Sulc is a researcher in the Biodesign Center for Molecular Design and Biomimetics, the School of Molecular Sciences and the Center for Biological Physics at Arizona State University.

    The research appears in the current issue of the journal Science.
    From chaos to complexity
    Imagine placing a disassembled watch into a box, which you then shake vigorously for several minutes. When you open the box, you find an assembled, fully functional watch inside. Intuitively, we know that such an event is nearly impossible, as watches, like all other devices we manufacture, must be assembled progressively, with each component placed in its specific location by a person or a robotic assembly line.
    Biological systems, such as bacteria, living cells or viruses, can construct highly ingenious nanostructures and nanomachines — complexes of biomolecules, like the protective shell of a virus or bacterial flagella that function similarly to a ship’s propeller, helping bacteria move forward.
    These and countless other natural forms, comparable in size to a few dozen nanometers — one nanometer is equal to one-billionth of a meter, or roughly the length your fingernail grows in one second — arise through self-assembly. Such structures are formed from individual building blocks (biomolecules, such as proteins) that move chaotically and randomly within the cell, constantly colliding with water and other molecules, like the watch components in the box you vigorously shake.
    Despite the apparent chaos, evolution has found a way to bring order to the unruly process.

    Molecules interact in specific ways that lead them to fit together in just the right manner, creating functional nanostructures inside or on the cell’s surface. These include various intricate complexes inside cells, such as the machinery that can replicate the cell’s entire genetic material. Less intricate examples, but quite complex nevertheless, include the self-assembly of the tough outer shells of viruses, whose assembly process Sulc also previously studied with his colleague, Banu Ozkan from ASU’s Department of Physics.
    Crafting with DNA
    For several decades, the field of bionanotechnology has worked to craft tiny structures in the lab, replicating the natural assembly process seen in living organisms. The technique generally involves mixing molecular components in water, gradually cooling them and hoping that when the solution reaches room temperature, all the pieces will fit together correctly.
    One of the most successful strategies, known as DNA bionanotechnology, uses artificially synthesized DNA as the basic building block. This molecule of life is not only capable of storing vast troves of genetic information — strands of DNA can also be designed in the lab to connect with each other in such a way that a clever 3D structure is formed.
    The resulting nanostructures, known as DNA origami, have a range of promising applications, from diagnostics to therapy, where, for example, they are being tested as a new method of vaccine delivery.
    A significant challenge lies in engineering molecule interactions to form only the specific, pre-designed nanostructures. In practice, unexpected structures often result due to the unpredictable nature of particle collisions and interactions. This phenomenon, known as a kinetic trap, is akin to hoping for an assembled watch after shaking a box of its parts, only to find a jumbled heap instead.
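    A toy annealing simulation makes the kinetic-trap picture concrete (this is an illustration, not the authors' coarse-grained DNA model): a particle on a double-well energy landscape is cooled either quickly or slowly, and rapid cooling tends to freeze it in the shallow, wrong minimum, much as components can lock into an unintended structure.

    ```python
    # Toy Metropolis Monte Carlo picture of a kinetic trap, not the authors'
    # coarse-grained DNA model: a particle on a double-well energy landscape is
    # cooled either quickly or slowly. Rapid cooling tends to freeze it in the
    # shallow (wrong) minimum, the analogue of parts locking into the wrong shape.
    import numpy as np

    rng = np.random.default_rng(1)

    def energy(x):
        # Shallow trap near x = -1, true (deeper) minimum near x = +1, barrier near x = 0.
        return 3.0 * (x**2 - 1.0) ** 2 - 0.5 * x

    def anneal(n_steps, t_start=1.5, t_end=0.05):
        x = -1.0                                     # start inside the trap
        for temp in np.linspace(t_start, t_end, n_steps):
            x_new = x + rng.normal(0.0, 0.3)
            d_e = energy(x_new) - energy(x)
            if d_e < 0 or rng.random() < np.exp(-d_e / temp):   # Metropolis acceptance
                x = x_new
        return x

    fast = np.array([anneal(200) for _ in range(50)])      # rapid cooling
    slow = np.array([anneal(20000) for _ in range(50)])    # gradual cooling
    print("fraction reaching the true minimum, fast cooling:", (fast > 0).mean())
    print("fraction reaching the true minimum, slow cooling:", (slow > 0).mean())
    ```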
    Maintaining order
    To attempt to overcome kinetic traps and ensure the proper structure self-assembles from the DNA fragments, the researchers developed new statistical methods that can simulate the self-assembly process of nanostructures.
    The challenges for achieving useful simulations of such enormously complex processes are formidable. During the assembly phase, the chaotic dance of molecules can last several minutes to hours before the target nanostructure is formed, but the most powerful simulations in the world can only simulate a few milliseconds at most.
    “Therefore, we developed a whole new range of models that can simulate DNA nanostructures with different levels of precision,” Sulc says. “Instead of simulating individual atoms, as is common in protein simulations, for example, we represent 12,000 DNA bases as one complex particle.”
    This approach allows researchers to pinpoint problematic kinetic traps by combining computer simulations with different degrees of accuracy. Using their optimization method, researchers can fine-tune the blizzard of molecular interactions, compelling the components to assemble correctly into the intended structure.
    The computational framework established in this research will guide the creation of more complex materials and the development of nanodevices with intricate functions, with potential uses in both diagnostics and treatment.
    The research work was carried out in collaboration with researchers from Sapienza University of Rome, Ca’ Foscari University of Venice and Columbia University in New York.

  • 2D materials: A catalyst for future quantum technologies

    For the first time, scientists at the Cavendish Laboratory have found that a single ‘atomic defect’ in a thin material, hexagonal boron nitride (hBN), exhibits spin coherence under ambient conditions, and that these spins can be controlled with light. Spin coherence refers to an electronic spin being capable of retaining quantum information over time. The discovery is significant because materials that can host quantum properties under ambient conditions are quite rare.
    The findings, published in Nature Materials, further confirm that the accessible spin coherence at room temperature is longer than the researchers initially imagined it could be. “The results show that once we write a certain quantum state onto the spin of these electrons, this information is stored for ~1 millionth of a second, making this system a very promising platform for quantum applications,” said Carmem M. Gilardoni, co-author of the paper and Rubicon postdoctoral fellow at the Cavendish Laboratory.
    “This may seem short, but the interesting thing is that this system does not require special conditions — it can store the spin quantum state even at room temperature and with no requirement for large magnets.”
    Hexagonal boron nitride (hBN) is an ultra-thin material made up of stacked one-atom-thick layers, kind of like sheets of paper. These layers are held together by forces between molecules. But sometimes, there are ‘atomic defects’ within these layers, similar to a crystal with molecules trapped inside it. These defects can absorb and emit light in the visible range with well-defined optical transitions, and they can act as local traps for electrons. Because of these ‘atomic defects’ within hBN, scientists can now study how these trapped electrons behave. They can study the spin property, which allows electrons to interact with magnetic fields. What’s truly exciting is that researchers can control and manipulate the electron spins using light within these defects at room temperature.
    This finding paves the way for future technological applications, particularly in sensing technology.
    However, since this is the first time anyone has reported the spin coherence of the system, there is a lot to investigate before it is mature enough for technological applications. The scientists are still figuring out how to make these defects even better and more reliable. They are currently probing how far the spin storage time can be extended, and whether they can optimise the system and material parameters that are important for quantum-technological applications, such as the defect’s stability over time and the quality of the light it emits.
    “Working with this system has highlighted to us the power of the fundamental investigation of materials. As for the hBN system, as a field we can harness excited state dynamics in other new material platforms for use in future quantum technologies,” said Dr. Hannah Stern, first author of the paper, who conducted this research at the Cavendish Laboratory and is now a Royal Society University Research Fellow and Lecturer at University of Manchester.
    In future the researchers are looking at developing the system further, exploring many different directions from quantum sensors to secure communications.
    “Each new promising system will broaden the toolkit of available materials, and every new step in this direction will advance the scalable implementation of quantum technologies. These results substantiate the promise of layered materials towards these goals,” concluded Professor Mete Atatüre, Head of the Cavendish Laboratory, who led the project.

  • Robot-phobia could exacerbate hotel, restaurant labor shortage

    Using more robots to close labor gaps in the hospitality industry may backfire and cause more human workers to quit, according to a Washington State University study.
    The study, involving more than 620 lodging and food service employees, found that “robot-phobia” — specifically the fear that robots and technology will take human jobs — increased workers’ job insecurity and stress, leading to greater intentions to leave their jobs. The impact was more pronounced with employees who had real experience working with robotic technology. It also affected managers in addition to frontline workers. The findings were published in the International Journal of Contemporary Hospitality Management.
    “The turnover rate in the hospitality industry ranks among the highest across all non-farm sectors, so this is an issue that companies need to take seriously,” said lead author Bamboo Chen, a hospitality researcher in WSU’s Carson College of Business. “The findings seem to be consistent across sectors and across both frontline employees and managers. For everyone, regardless of their position or sector, robot-phobia has a real impact.”
    Food service and lodging industries were hit particularly hard by the pandemic lockdowns, and many businesses are still struggling to find enough workers. For example, the accommodation workforce in April 2024 was still 9.2% below what it was in February 2020, according to the U.S. Bureau of Labor Statistics. The ongoing labor shortage has inspired some employers to turn to robotic technology to fill the gap.
    While other studies have focused on customers’ comfort with robots, this study focuses on how the technology impacted hospitality workers. Chen and WSU colleague Ruying Cai surveyed 321 lodging and 308 food service employees from across the U.S., asking a range of questions about their jobs and attitudes toward robots. The survey defined “robots” broadly to include a range of robotic and automation technologies, such as human-like robot servers and automated robotic arms as well as self-service kiosks and tabletop devices.
    Analyzing the survey data, the researchers found that having a higher degree of robot-phobia was connected to greater feelings of job insecurity and stress — which were then correlated with “turnover intention” or workers’ plans to leave their jobs. Those fears did not decrease with familiarity: employees who had more actual engagement with robotic technology in their daily jobs had higher fears that it would make human workers obsolete.
    Perception also played a role. The employees who viewed robots as being more capable and efficient also ranked higher in turnover intention.
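    For readers curious how such survey relationships are typically examined, the sketch below builds a synthetic dataset with the mediation-style structure described above and prints the pairwise correlations. The column names and effect sizes are invented, and the published study's analysis is more sophisticated than this.

    ```python
    # Synthetic illustration only: the mediation-style pattern described above,
    # with robot-phobia related to job insecurity, which in turn relates to
    # intention to quit. Column names and effect sizes are invented; the study's
    # actual modeling is more sophisticated than a correlation table.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 629                                           # roughly the study's sample size

    robot_phobia = rng.normal(3.0, 1.0, n)            # 1-5 style survey scale
    job_insecurity = 0.5 * robot_phobia + rng.normal(0.0, 0.8, n)
    turnover_intention = 0.6 * job_insecurity + rng.normal(0.0, 0.8, n)

    survey = pd.DataFrame({
        "robot_phobia": robot_phobia,
        "job_insecurity": job_insecurity,
        "turnover_intention": turnover_intention,
    })

    # Pairwise correlations: phobia tracks insecurity, insecurity tracks turnover intention.
    print(survey.corr().round(2))
    ```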
    Robots and automation can be good ways to help augment service, Chen said, as they can handle tedious tasks humans typically do not like doing, such as washing dishes or handling loads of hotel laundry. But the danger comes if the robotic additions cause more human workers to quit. The authors point out this can create a “negative feedback loop” that can make the hospitality labor shortage worse.
    Chen recommended that employers communicate not only the benefits but the limitations of the technology — and place a particular emphasis on the role human workers play.
    “When you’re introducing a new technology, make sure not to focus just on how good or efficient it will be. Instead, focus on how people and the technology can work together,” he said.