More stories

  • Robot radiotherapy could improve treatments for eye disease

    Researchers from King’s, with doctors at King’s College Hospital NHS Foundation Trust, have successfully used a new robot system to improve treatment for debilitating eye disease.
    The custom-built robot was used to treat wet neovascular age-related macular degeneration (AMD), administering a one-off, minimally invasive dose of radiation, followed by patients’ routine treatment with injections into their eye.
    The landmark trial, published today in The Lancet, found that patients who received the radiation then needed fewer injections to control the disease effectively, potentially saving around 1.8 million injections per year worldwide.
    Wet AMD is a debilitating eye disease in which abnormal new blood vessels grow into the macula, the light-sensing layer of cells at the back of the eyeball. The vessels then start to leak blood and fluid, typically causing a rapid, permanent and severe loss of sight.
    Globally, around 196 million people have AMD, and the Royal College of Ophthalmologists estimates that the disease affects more than 700,000 people in the UK. That number is expected to increase by 60% by 2035, due to the country’s ageing population.
    Wet AMD is currently treated with regular injections into the eye. Initially, treatment substantially improves a patient’s vision. But because the injections don’t cure the disease, fluid eventually starts to build up again in the macula, and patients require long-term, repeated injections. Most people need an injection around every one to three months, and at £500 to £800 each, eye injections have become one of the most common NHS procedures.
    The new treatment can be targeted far better than existing methods, aiming three beams of highly focused radiation into the diseased eye. Scientists found that patients having robotic radiotherapy required fewer injections to control their disease compared to standard treatment.

    The study found that the robotically controlled device saves the NHS £565 for each patient treated over the first two years, as it results in fewer injections.
    Study lead and first author Professor Timothy Jackson, of King’s College London and Consultant Ophthalmic Surgeon at King’s College Hospital, said: “Research has previously tried to find a better way to target radiotherapy to the macula, such as by repurposing devices used to treat brain tumours. But so far nothing has been sufficiently precise to target macular disease that may be less than 1 mm across.
    “With this purpose-built robotic system, we can be incredibly precise, using overlapping beams of radiation to treat a very small lesion in the back of the eye.
    “Patients generally accept that they need to have eye injections to help preserve their vision, but frequent hospital attendance and repeated eye injections isn’t something they enjoy. By better stabilising the disease and reducing its activity, the new treatment could reduce the number of injections people need by about a quarter. Hopefully, this discovery will reduce the burden of treatment that patients have to endure.”
    Dr Helen Dakin, University Research Lecturer at the University of Oxford said: “We found that the savings from giving fewer injections are larger than the cost of robot-controlled radiotherapy. This new treatment can therefore save the NHS money that can be used to treat other patients, while controlling patients’ AMD just as well as standard care.”
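    The quoted cost figures can be loosely sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only: it assumes the quoted injection frequency, cost range and roughly one-quarter reduction apply uniformly over two years, and it is not the study’s health-economic model, which also nets off the cost of the radiotherapy itself.

      # Illustrative arithmetic using figures quoted in the article.
      COST_PER_INJECTION = (500, 800)   # GBP, quoted range
      INJECTIONS_PER_YEAR = (4, 12)     # one injection every 1-3 months
      REDUCTION = 0.25                  # "about a quarter" fewer injections

      for cost, per_year in zip(COST_PER_INJECTION, INJECTIONS_PER_YEAR):
          avoided = REDUCTION * per_year * 2   # injections avoided over two years
          print(f"~{avoided:.0f} injections avoided -> "
                f"gross saving ~GBP {avoided * cost:,.0f}")
      # The reported net saving of GBP 565 per patient is lower because it
      # subtracts the cost of the robot-controlled radiotherapy itself.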
    The research was jointly funded by the National Institute for Health and Care Research (NIHR) and the Medical Research Council (MRC) and recruited 411 participants across 30 NHS hospitals. A Lancet-commissioned commentary that accompanied the article described it as a “landmark trial.”
    This study was led by researchers from King’s College London and doctors at King’s College Hospital NHS Foundation Trust, in collaboration with the University of Oxford, the University of Bristol and Queen’s University Belfast.

  • Quantum dots and metasurfaces: Deep connections in the nano world

    In relationships, sharing closer spaces naturally deepens the connection, as bonds form and strengthen through accumulating shared memories. This principle applies not only to human interactions but also to engineering. Recently, an intriguing study demonstrated the use of quantum dots to create metasurfaces, in effect allowing two optical components to occupy the same space.
    Professor Junsuk Rho of the Department of Mechanical Engineering, the Department of Chemical Engineering, and the Department of Electrical Engineering, PhD candidates Minsu Jeong, Byoungsu Ko, and Jaekyung Kim of the Department of Mechanical Engineering, and PhD candidate Chunghwan Jung of the Department of Chemical Engineering at Pohang University of Science and Technology (POSTECH) employed nanoimprint lithography (NIL) to fabricate metasurfaces embedded with quantum dots, enhancing their luminescence efficiency. Their research was recently published in the online edition of Nano Letters.
    NIL, a process for creating optical metasurfaces, utilizes patterned stamps to quickly transfer intricate patterns at the nanometer (nm) scale. This method offers cost advantages over electron beam lithography and other processes and has the advantage of enabling the creation of metasurfaces using materials that are not available in conventional processes.
    Metasurfaces have recently been the focus of extensive research for their ability to control the polarization and emission direction of light from quantum dots. Quantum dots, which are nanoscale semiconductor particles, are highly efficient light emitters capable of emitting light at precise wavelengths. This makes them widely used in applications such as QLEDs and quantum computing. However, conventional processes cannot embed quantum dots within metasurfaces. As a result, research has often involved fabricating metasurfaces and quantum dots separately and then combining them, which imposes limitations on controlling the luminescence of the quantum dots.
    In this study, the researchers integrated quantum dots with titanium dioxide (TiO2), a material used in the NIL process, to create a metasurface. Unlike conventional methods, which involve separately fabricating the metasurface and quantum dots before combining them, this approach embeds the quantum dots directly within the metasurface during its creation.
    The resulting metasurface enhances the proportion of photons emitted from the quantum dots that couple with the resonance mode of the metasurface. This advancement allows for more effective control over the specific direction of light emitted from the quantum dots compared to previous methods.
    Experiments demonstrated that the more photons emitted from the quantum dots that were coupled to the resonant modes of the metasurface, the higher the luminescence efficiency. The team’s metasurface achieved up to 25 times greater luminescence efficiency compared to a simple coating of quantum dots.
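    The paper reports this enhancement empirically; as general background (a standard textbook relation, not taken from the study), the Purcell factor estimates how strongly a resonant mode can enhance an emitter’s spontaneous emission:

      F_P = \frac{3}{4\pi^{2}} \left(\frac{\lambda}{n}\right)^{3} \frac{Q}{V}

    where \lambda is the emission wavelength, n the refractive index, Q the quality factor of the resonance, and V its mode volume. Tightly confined, high-Q metasurface resonances therefore favor stronger emission into the designed mode.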
    Professor Junsuk Rho of POSTECH who led the research stated, “The use of luminescence-controlled metasurfaces will enable sharper, brighter displays and more precise, sensitive biosensing.” He added, “Further research will allow us to control luminescence more effectively, leading to advances in areas such as nano-optical sensors, optoelectronic devices, and quantum dot displays.”
    The research was conducted with support from POSCO N.EX.T IMPACT, the Samsung Future Technology Incubation Program, and the Mid-Career Researcher Program of the Ministry of Science and ICT and the National Research Foundation of Korea.

  • Towards a new era in flexible piezoelectric sensors for both humans and robots

    Flexible piezoelectric sensors are essential for monitoring the motions of both humans and humanoid robots. However, existing designs are either costly or have limited sensitivity. In a recent study, researchers from Japan tackled these issues by developing a novel piezoelectric composite material made from electrospun polyvinylidene fluoride nanofibers combined with dopamine. Sensors made from this material showed significant performance and stability improvements at low cost, promising advancements in medicine, healthcare, and robotics.
    The world is accelerating rapidly towards the intelligent era — a stage in history marked by increased automation and interconnectivity by leveraging technologies such as artificial intelligence and robotics. As a sometimes-overlooked foundational requirement in this transformation, sensors represent an essential interface between humans, machines, and their environment.
    However, now that robots are becoming more agile and wearable electronics are no longer confined to science fiction, traditional silicon-based sensors won’t make the cut in many applications. Thus, flexible sensors, which provide better comfort and higher versatility, have become a very active area of study. Piezoelectric sensors are particularly important in this regard, as they can convert mechanical stress and stretching into an electrical signal. Despite numerous promising approaches, there remains a lack of environmentally sustainable methods for mass-producing flexible, high-performance piezoelectric sensors at a low cost.
    Against this backdrop, a research team from Shinshu University, Japan, decided to step up to the challenge and improve flexible piezoelectric sensor design using a well-established manufacturing technique: electrospinning. Their latest study, led by Distinguished Professor Ick Soo Kim in association with Junpeng Xiong, Ling Wang, Mayakrishnan Gopiraman, and Jian Shi, was published on 2 May 2024 in the journal Nature Communications.
    The proposed flexible sensor design involves the stepwise electrospinning of a composite 2D nanofiber membrane. First, polyvinylidene fluoride (PVDF) nanofibers with diameters on the order of 200 nm are spun, forming a strong, uniform network that acts as the base of the piezoelectric sensor. Then, ultrafine PVDF nanofibers with diameters below 35 nm are spun onto this base. These fibers automatically interweave into the gaps of the base network, creating a distinctive 2D topology.
    After characterization via experiments, simulations, and theoretical analyses, the researchers found that the resulting composite PVDF network had enhanced beta crystal orientation. By enhancing this polar phase, which is responsible for the piezoelectric effect observed in PVDF materials, the piezoelectric performance of the sensors was significantly improved. To increase the stability of the material further, the researchers introduced dopamine (DA) during the electrospinning process, which created a protective core-shell structure.
    “Sensors fabricated using PVDF/DA composite membranes exhibited superb performance, including a wide response range of 1.5-40 N, high sensitivity of 7.29 V/N to weak forces in the range of 0-4 N, and excellent operational durability,” remarks Kim. These exceptional qualities were demonstrated practically using wearable sensors to measure a wide variety of human movements and actions. More specifically, the proposed sensors, when worn by a human, could produce an easily distinguishable voltage response to natural motions and physiological signals, including finger tapping, knee and elbow bending, foot stamping, and even speaking and wrist pulses.
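    As a toy illustration of how such a sensitivity figure is used (the 7.29 V/N value and 0-4 N weak-force range come from the quote above; the linear-response model is an assumption, since real devices are calibrated empirically):

      # Illustrative only: estimate applied force from a piezoelectric
      # sensor's peak output voltage, assuming a linear response in the
      # weak-force regime reported in the study (7.29 V/N for 0-4 N).
      SENSITIVITY_V_PER_N = 7.29

      def force_from_voltage(v_out: float) -> float:
          return v_out / SENSITIVITY_V_PER_N

      for v in (0.5, 7.29, 14.6):
          print(f"{v:5.2f} V -> {force_from_voltage(v):.2f} N")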
    Given the potential for low-cost mass production of these piezoelectric sensors, combined with their use of environmentally friendly organic materials instead of harmful inorganics, this study could have important technological implications not only for health monitoring and diagnostics but also for robotics. “Despite the current challenges, humanoid robots are poised to play an increasingly integral role in the very near future. For instance, the well-known Tesla robot ‘Optimus’ can already mimic human motions and walk like a human,” muses Kim. “Considering high-tech sensors are currently being used to monitor robot motions, our proposed nanofiber-based superior piezoelectric sensors hold much potential not only for monitoring human movements, but also in the field of humanoid robotics.”
    To make the adoption of these sensors easier, the research team will be focusing on improving the material’s electrical output properties so that flexible electronic components can be driven without the need for an external power source. Hopefully, further progress in this area will accelerate our stride towards the intelligent era, leading to more comfortable and sustainable lives.

  • AI detects prostate cancer on MRI better than radiologists

    AI detects prostate cancer more often than radiologists do, and it triggers false alarms half as often. That is the finding of an international study coordinated by Radboud university medical center and published in The Lancet Oncology, the first large-scale study in which an international team transparently evaluates and compares AI with radiologist assessments and clinical outcomes.
    Radiologists face an increasing workload as men with a higher risk of prostate cancer now routinely receive a prostate MRI. Diagnosing prostate cancer with MRI requires significant expertise, and there is a shortage of experienced radiologists. AI can assist with these challenges.
    AI expert Henkjan Huisman and radiologist Maarten de Rooij, project leaders of the PI-CAI study, organized a major competition between AI teams and radiologists together with an international team. Along with other centers in the Netherlands and Norway, they provided over 10,000 MRI scans. For each patient, they transparently determined whether prostate cancer was present. They then allowed various groups worldwide to develop AI for analyzing these images. The top five submissions were combined into a super-algorithm for analyzing MRI scans for prostate cancer. Finally, AI assessments were compared to those of a group of radiologists on four hundred prostate MRI scans.
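    The article doesn’t describe how the top five submissions were combined; a common ensembling approach, shown here as a hypothetical sketch, is to average the voxel-wise lesion-probability maps the individual models produce:

      import numpy as np

      # Hypothetical sketch: combine five detection models by averaging
      # their lesion-probability maps for one MRI volume.
      rng = np.random.default_rng(0)
      shape = (24, 128, 128)  # illustrative volume (slices, height, width)

      # Stand-ins for the five top models' outputs, values in [0, 1].
      model_outputs = [rng.random(shape) for _ in range(5)]

      ensemble_map = np.mean(model_outputs, axis=0)  # ensemble probability
      suspicious = ensemble_map > 0.5                # example threshold
      print(f"flagged voxels: {suspicious.sum()}")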
    Accurate Diagnosis
    The PI-CAI community brought together more than 200 AI teams and 62 radiologists from 20 countries. They compared the findings of AI and radiologists not only with each other but also with a gold standard, as they monitored the outcomes of the men from whom the scans originated. On average, the men were followed for five years.
    This first international study on AI in prostate diagnostics shows that AI detects nearly seven percent more significant prostate cancers than the group of radiologists. Additionally, AI flags suspicious areas that later turn out not to be cancer fifty percent less often, meaning the number of unnecessary biopsies could be halved. If these results are replicated in follow-up studies, AI could greatly assist radiologists and patients in the future: it could reduce radiologists’ workload, provide more accurate diagnoses, and minimize unnecessary prostate biopsies. The developed AI still needs to be validated and is not yet available for patients in clinical settings.
    Quality System
    Huisman observes that society has little trust in AI. ‘This is because manufacturers sometimes build AI that isn’t good enough’, he explains. He is working on two things. The first is a public and transparent test to fairly evaluate AI. The second is a quality management system, similar to what exists in the aviation industry. ‘If planes almost collide, a safety committee will look at how to improve the system so that it doesn’t happen in the future. I want the same for AI. I want to research and develop a system that learns from every mistake so that AI is monitored and can continue to improve. That way, we can build trust in AI for healthcare. Optimal, governed AI can help make healthcare better and more efficient.’

  • Breakthrough in next-generation memory technology!

    A research team led by Professor Jang-Sik Lee from the Department of Materials Science and Engineering and the Department of Semiconductor Engineering at Pohang University of Science and Technology (POSTECH) has significantly enhanced the data storage capacity of ferroelectric memory devices. By utilizing hafnia-based ferroelectric materials and an innovative device structure, their findings, published on June 7 in the international journal Science Advances, mark a substantial advancement in memory technology.
    With the exponential growth in data production and processing due to advancements in electronics and artificial intelligence (AI), the importance of data storage technologies has surged. NAND flash memory, one of the most prevalent technologies for mass data storage, can store more data in the same area by stacking cells in a three-dimensional structure rather than a planar one. However, this approach relies on charge traps to store data, which results in higher operating voltages and slower speeds.
    Recently, hafnia-based ferroelectric memory has emerged as a promising next-generation memory technology. Hafnia (hafnium oxide) enables ferroelectric memories to operate at low voltages and high speeds. However, a significant challenge has been the limited memory window available for multilevel data storage.
    Professor Jang-Sik Lee’s team at POSTECH has addressed this issue by introducing new materials and a novel device structure. They enhanced the performance of hafnia-based memory devices by doping the ferroelectric materials with aluminum, creating high-performance ferroelectric thin films. Additionally, they replaced the conventional metal-ferroelectric-semiconductor (MFS) structure, in which the metal and ferroelectric layers are simply stacked, with an innovative metal-ferroelectric-metal-ferroelectric-semiconductor (MFMFS) structure.
    The team successfully controlled the voltage across each layer by adjusting the capacitance of the ferroelectric layers, which involved fine-tuning factors such as the thickness and area ratio of the metal-to-metal and metal-to-channel ferroelectric layers. This efficient use of the applied voltage to switch the ferroelectric material improved the device’s performance and reduced energy consumption.
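    The underlying lever is standard series-capacitor circuit analysis (a textbook relation, not the paper’s full device model): two ferroelectric layers in series share the applied voltage in inverse proportion to their capacitances, and each capacitance follows from layer geometry:

      \frac{V_1}{V_2} = \frac{C_2}{C_1}, \qquad V_1 = V \cdot \frac{C_2}{C_1 + C_2}, \qquad C_i = \frac{\varepsilon_i A_i}{t_i}

    so tuning each layer’s thickness t_i and area A_i sets how the programming voltage divides across the stack.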
    Conventional hafnia-based ferroelectric devices typically have a memory window of around 2 volts (V). In contrast, the research team’s device achieved a memory window exceeding 10 V, enabling Quad-Level Cell (QLC) technology, which stores 16 levels of data (4 bits) per unit transistor. It also demonstrated high stability after more than one million cycles and operated at voltages of 10 V or less, significantly lower than the 18 V required for NAND flash memory. Furthermore, the team’s memory device exhibited stable characteristics in terms of data retention.
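    A rough way to see why a wider window enables QLC (an illustrative, evenly-spaced-levels simplification, not the paper’s analysis):

      # Illustrative arithmetic: spacing between adjacent storage levels
      # when a memory window is divided into 2**bits evenly spaced levels.
      def level_spacing(window_v: float, bits: int) -> float:
          levels = 2 ** bits
          return window_v / (levels - 1)

      for window in (2.0, 10.0):  # conventional ~2 V vs. the team's >10 V
          print(f"{window:4.1f} V window -> "
                f"{level_spacing(window, 4) * 1000:.0f} mV between QLC levels")

    With a ~2 V window, the 16 QLC levels sit only about 130 mV apart, leaving little margin for noise and drift; a window above 10 V spreads them roughly 670 mV apart.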
    NAND flash memory programs its memory states using Incremental Step Pulse Programming (ISPP), which leads to long programming times and complex circuitry. In contrast, the team’s device achieves rapid programming through one-shot programming by controlling ferroelectric polarization switching.
    Professor Jang-Sik Lee of POSTECH commented, “We have laid the technological foundation for overcoming the limitations of existing memory devices and provided a new research direction for hafnia-based ferroelectric memory.” He added, “Through follow-up research, we aim to develop low-power, high-speed, and high-density memory devices, contributing to solving power issues in data centers and artificial intelligence applications.”
    The research was conducted with support from the Project for Next-generation Intelligent Semiconductor Technology Development of the Ministry of Science and ICT (National Research Foundation of Korea) and Samsung Electronics.

  • An AI-powered wearable system tracks the 3D movement of smart pills in the gut

    Scientists at the University of Southern California have developed an artificial intelligence (AI)-powered system to track tiny devices that monitor markers of disease in the gut. Devices using the novel system may help at-risk individuals monitor their gastrointestinal (GI) tract health at home, without the need for invasive tests in hospital settings. This work appears June 12 in the journal Cell Reports Physical Science.
    “Ingestibles are like Fitbits for the gut,” says author Yasser Khan, assistant professor of electrical and computer engineering at the University of Southern California. “But tracking them once swallowed has been a significant challenge.”
    Gas that is formed in the intestines when bacteria break down food can offer insights into a person’s health. Currently, to measure GI tract gases, physicians either use direct methods such as flatus collection and intestinal tube collection, or indirect methods such as breath testing and stool analysis. Ingestible capsules — devices that a user swallows — offer a promising alternative, but no such technologies have been developed for precise gas sensing.
    To tackle this problem, Khan and colleagues developed a system that includes a wearable coil, which the user can conceal under a t-shirt or other clothing. The coil creates a magnetic field, which interacts with sensors embedded in an ingestible pill after it has been swallowed. AI analyzes the signals the pill receives, pinpointing the device’s location in the gut to within a few millimeters. In addition, the system monitors real-time 3D concentrations of ammonia, a proxy for a bacterium linked with ulcers and gastric cancer, via the device’s optical gas-sensing membranes.
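    The article doesn’t specify the model used; as a hypothetical sketch of the general idea, a small neural network can be trained to regress a capsule’s 3D position from multi-channel magnetic-sensor readings (the sensor count, toy physics, and network size below are all assumptions):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      n_samples, n_sensors = 5000, 9            # assumed signal channels
      A = rng.standard_normal((n_sensors, 3))   # fixed toy "physics" matrix

      def simulate_signals(pos):
          # Toy stand-in for the coil/sensor response: a smooth nonlinear,
          # position-dependent signal on each channel, plus a little noise.
          return np.tanh(3.0 * (A @ pos)) + 0.01 * rng.standard_normal(n_sensors)

      positions = rng.uniform(-0.15, 0.15, (n_samples, 3))  # meters
      X = np.stack([simulate_signals(p) for p in positions])

      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
      model.fit(X, positions)   # learn: sensor signals -> 3D position

      test = np.array([0.02, -0.05, 0.10])
      pred = model.predict(simulate_signals(test).reshape(1, -1))[0]
      print("true:", test, "predicted:", np.round(pred, 3))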
    While previous attempts to track ingestibles as they journey through the gut have relied on bulky desktop coils, the wearable coil can be used anywhere, says Khan. The technology may also have applications beyond measuring GI tract gases, such as identifying inflammation in the gut caused by Crohn’s disease and delivering drugs precisely to the affected regions.
    The researchers tested the system’s performance in a variety of mediums that mimic the GI tract, including a simulated cow intestine and liquids designed to replicate stomach and intestinal fluids.
    “During these tests, the device demonstrated its ability to pinpoint its location and measure levels of oxygen and ammonia gases,” says Khan. “Any ingestible device can utilize the technology we’ve developed.”
    However, there are still improvements to be made to the device, says Khan, such as designing it to be smaller and to use less power. Next, as they continue to hone the device, Khan and colleagues plan to test it in pigs in order to study its safety and effectiveness in an organism with human-like biology.
    “Successful outcomes from these trials will bring the device nearer to readiness for human clinical trials,” says Khan. “We are optimistic about the practicality of the system and believe it will soon be applicable for use in humans.”

  • AI-powered simulation training improves human performance in robotic exoskeletons

    Researchers at North Carolina State University have demonstrated a new method that leverages artificial intelligence (AI) and computer simulations to train robotic exoskeletons to autonomously help users save energy while walking, running and climbing stairs.
    “This work proposes and demonstrates a new machine-learning framework that bridges the gap between simulation and reality to autonomously control wearable robots to improve mobility and health of humans,” says Hao Su, corresponding author of a paper on the work, which will be published June 12 in the journal Nature.
    “Exoskeletons have enormous potential to improve human locomotive performance,” says Su, who is an associate professor of mechanical and aerospace engineering at North Carolina State University. “However, their development and broad dissemination are limited by the requirement for lengthy human tests and handcrafted control laws.
    “The key idea here is that the embodied AI in a portable exoskeleton is learning how to help people walk, run or climb in a computer simulation, without requiring any experiments,” says Su.
    Specifically, the researchers focused on improving autonomous control of embodied AI systems, in which an AI program is integrated into physical robot hardware. Here, that meant teaching robotic exoskeletons how to assist able-bodied people with various movements. Normally, users have to spend hours “training” an exoskeleton so that the technology knows how much force is needed, and when to apply that force, to help users walk, run or climb stairs. The new method allows users to utilize the exoskeletons immediately.
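    The paper’s actual pipeline (dynamics simulation plus learned controllers) is far richer than can be shown here; the toy sketch below only illustrates the core idea of learning an assistive controller entirely in simulation, with no human experiments in the loop (the one-joint “leg” model, linear policy, and random-search optimizer are all simplifying assumptions):

      import numpy as np

      rng = np.random.default_rng(42)
      dt, steps = 0.01, 400

      def rollout_cost(w):
          # Simulate a 1-DOF pendulum "leg" tracking a gait-like reference.
          # Policy torque: tau = w . [angle error, -velocity]. Cost rewards
          # accurate tracking with little effort (a crude energy proxy).
          theta = omega = cost = 0.0
          for t in range(steps):
              ref = 0.4 * np.sin(2 * np.pi * t * dt)  # reference trajectory
              err = np.array([ref - theta, -omega])
              tau = float(w @ err)                     # controller output
              omega += dt * (-9.8 * np.sin(theta) - 0.5 * omega + tau)
              theta += dt * omega
              cost += err[0] ** 2 + 1e-3 * tau ** 2
          return cost

      # Random search in simulation: no human-in-the-loop tuning required.
      best_w, best_c = np.zeros(2), rollout_cost(np.zeros(2))
      for _ in range(200):
          cand = best_w + rng.normal(0.0, 1.0, 2)
          c = rollout_cost(cand)
          if c < best_c:
              best_w, best_c = cand, c
      print("learned gains:", np.round(best_w, 2), "cost:", round(best_c, 2))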
    “This work is essentially making science fiction reality — allowing people to burn less energy while conducting a variety of tasks,” says Su.
    “We have developed a way to train and control wearable robots to directly benefit humans,” says Shuzhen Luo, first author of the paper and a former postdoctoral researcher at NC State. Luo is now an assistant professor at Embry-Riddle Aeronautical University.

    For example, in testing with human subjects, the researchers found that study participants used 24.3% less metabolic energy when walking in the robotic exoskeleton than without the exoskeleton. Participants used 13.1% less energy when running in the exoskeleton, and 15.4% less energy when climbing stairs.
    “It’s important to note that these energy reductions are comparing the performance of the robotic exoskeleton to that of a user who is not wearing an exoskeleton,” Su says. “That means it’s a true measure of how much energy the exoskeleton saves.”
    While this study focused on the researchers’ work with able-bodied people, the new method also applies to robotic exoskeleton applications aimed at helping people with mobility impairments.
    “Our framework may offer a generalizable and scalable strategy for the rapid development and widespread adoption of a variety of assistive robots for both able-bodied and mobility-impaired individuals,” Su says.
    “We are in the early stages of testing the new method’s performance in robotic exoskeletons being used by older adults and people with neurological conditions, such as cerebral palsy. And we are also interested in exploring how the method could improve the performance of robotic prosthetic devices for amputee populations.”
    This research was done with support from the National Science Foundation under awards 1944655 and 2026622; the National Institute on Disability, Independent Living, and Rehabilitation Research, under award 90DPGE0019 and Switzer Research Fellowship SFGE22000372; and the National Institutes of Health, under award 1R01EB035404.
    Shuzhen Luo and Hao Su are co-inventors on intellectual property related to the controller discussed in this work. Su is also a co-founder of, and has a financial interest in, Picasso Intelligence, LLC, which develops exoskeletons.

  • Hybrid work is a ‘win-win-win’ for companies, workers

    It is one of the most hotly debated topics in today’s workplace: Is allowing employees to log in from home a few days a week good for their productivity, careers, and job satisfaction?
    Nicholas Bloom, a Stanford economist and one of the foremost researchers on work-from-home policies, has uncovered compelling evidence that hybrid schedules are a boon to both employees and their bosses.
    In a study, newly published in the journal Nature, of an experiment on more than 1,600 workers at Trip.com — a Chinese company that is one of the world’s largest online travel agencies — Bloom finds that employees who work from home for two days a week are just as productive and as likely to be promoted as their fully office-based peers.
    On a third key measure, employee turnover, the results were also encouraging. Resignations fell by 33 percent among workers who shifted from working full-time in the office to a hybrid schedule. Women, non-managers, and employees with long commutes were the least likely to quit their jobs when their treks to the office were cut to three days a week. Trip.com estimates that reduced attrition saved the company millions of dollars.
    “The results are clear: Hybrid work is a win-win-win for employee productivity, performance, and retention,” says Bloom, who is the William D. Eberle Professor of Economics at the Stanford School of Humanities and Sciences and also a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
    The findings are especially significant given that, by Bloom’s count, about 100 million workers worldwide now spend a mix of days at home and in the office each week, more than four years after COVID-19 pandemic lockdowns upended how and where people do their jobs. Many of these hybrid workers are lawyers, accountants, marketers, software engineers and others with a college degree or higher.
    Over time, though, working outside the office has come under attack from high-profile business leaders like Elon Musk, the head of Tesla, SpaceX, and X (formerly Twitter), and Jamie Dimon, CEO of JPMorgan Chase, who argue that the costs of remote work outweigh any benefits. Opponents say that employee training and mentoring, innovation, and company culture suffer when workers are not on site five days a week.

    Bloom says that critics often confuse hybrid with fully remote work, in part because most of the research into working from home has focused on workers who aren’t required to come into an office at all and on specific types of jobs, like customer support or data entry. The results of these studies have been mixed, though they tend to skew negative. This suggests to Bloom that problems with fully remote work arise when it isn’t managed well.
    As one of the few randomized control trials to analyze hybrid arrangements — where workers are offsite two or three days a week and are in the office the rest of the time — Bloom says his findings offer important lessons for other multinationals, many of which share similarities with Trip.com.
    “This study offers powerful evidence for why 80 percent of U.S. companies now offer some form of remote work,” Bloom says, “and for why the remaining 20 percent of firms that don’t are likely paying a price.”
    The research is also the largest study to date of hybrid work among university-trained professionals to use the gold standard of research, the randomized controlled trial. This allowed Bloom and his co-authors to show that the benefits they identify resulted from Trip.com’s hybrid experiment and not something else.
    In addition to Bloom, the study’s authors are Ruobing Han, an assistant professor at The Chinese University of Hong Kong, and James Liang, an economics professor at Peking University and co-founder of Trip.com. Han and Liang both earned their PhDs in economics from Stanford.
    The hybrid approach: Only winners
    Trip.com didn’t have a hybrid work policy when it undertook the 6-month experiment starting in 2021 that is at the heart of the study. In all, 395 managers and 1,217 non-managers with undergraduate degrees — all of whom worked in engineering, marketing, accounting, and finance in the company’s Shanghai office — participated. Employees whose birthdays fell on an even-numbered day of the month were told to come to the office five days a week. Workers with odd-numbered birthdays were allowed to work from home two days a week.
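    The assignment rule is simple enough to state in a few lines of code (an illustrative sketch of the birthday-parity randomization described above, with hypothetical example dates):

      from datetime import date

      def trial_arm(birthday: date) -> str:
          # Even day of the month -> office five days a week (control);
          # odd day -> work from home two days a week (hybrid treatment).
          return "office" if birthday.day % 2 == 0 else "hybrid"

      print(trial_arm(date(1988, 3, 14)))  # even day -> office
      print(trial_arm(date(1990, 7, 5)))   # odd day  -> hybrid

    Because day-of-month parity is effectively random and unrelated to job performance, it serves as a clean randomization device for the controlled trial.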

    Of the study participants, 32 percent also had postgraduate degrees, mostly in computer science, accounting or finance. Most were in their mid-30s, half had children, and 65 percent were male.
    In finding that hybrid work not only helps employees, but also companies, the researchers relied on various company data and worker surveys, including performance reviews and promotions for up to two years after the experiment. Trip.com’s thorough performance review process includes evaluations of an employee’s contributions to innovation, leadership, and mentoring.
    The study authors also compared the quality and amount of computer code written by Trip.com software engineers who were hybrid against code produced by peers who were in the office full-time.
    In finding that hybrid work had zero effect on workers’ productivity or career advancement and dramatically boosted retention rates, the study authors highlight some important nuances. Resignations, for example, fell only among non-managers; managers were just as likely to quit whether they were hybrid or not.
    Bloom and his coauthors identify misconceptions held by workers and their bosses. Workers, especially women, were reluctant to sign up as volunteers for Trip.com’s hybrid trial — likely for fear that they would be judged negatively for not coming into the office five days a week, Bloom says. In addition, managers predicted on average that remote working would hurt productivity, only to change their minds by the time the experiment ended.
    For business leaders, Bloom says the study confirms that concerns about hybrid work doing more harm than good are overblown.
    “If managed right, letting employees work from home two or three days a week still gets you the level of mentoring, culture-building, and innovation that you want,” Bloom says. “From an economic policymaking standpoint, hybrid work is one of the few instances where there aren’t major trade-offs with clear winners and clear losers. There are almost only winners.”
    Trip.com, for one, was sold: it now allows hybrid work company-wide.