More stories

  • Researchers demonstrate noise-free communication with structured light

    Spatial patterns of light hold tremendous promise as a large encoding alphabet for optical communications, but progress has been hindered by their susceptibility to distortion, for example in atmospheric turbulence or in bent optical fibre. Now researchers at the University of the Witwatersrand (Wits) have outlined a new optical communication protocol that exploits spatial patterns of light for multi-dimensional encoding in a manner that does not require the patterns to be recognised, thus overcoming the prior limitation of modal distortion in noisy channels. The result is a new encoding state of the art: more than 50 vectorial patterns of light sent virtually noise-free across a turbulent atmosphere, opening a new approach to high-bit-rate optical communication.

    Published this week in Laser & Photonics Reviews, the Wits team from the Structured Light Laboratory in the Wits School of Physics used a new invariant property of vectorial light to encode information. This quantity, which the team call “vectorness”, scales from 0 to 1 and remains unchanged when passing through a noisy channel. Unlike traditional amplitude modulation, which is either 0 or 1 (only a two-letter alphabet), the team used this invariance to partition the 0-to-1 vectorness range into more than 50 parts (0, 0.02, 0.04 and so on up to 1) for a 50-letter alphabet. Because the channel over which the information is sent does not distort the vectorness, sender and receiver will always agree on the value, hence noise-free information transfer.

    The critical hurdle the team overcame was to use patterns of light in a manner that does not require them to be “recognised”, so that the natural distortion of noisy channels can be ignored. Instead, the invariant quantity simply “adds up” light in specialised measurements, revealing a quantity that does not see the distortion at all.

    “This is a very exciting advance because we can finally exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is,” says Professor Andrew Forbes from the Wits School of Physics. “In fact, the only limit to how big the alphabet can be is how good the detectors are, and it is not at all influenced by the noise of the channel.”

    Lead author and PhD candidate Keshaan Singh adds: “To create and detect the vectorness modulation requires nothing more than conventional communications technology, allowing our modal (pattern) based protocol to be deployed immediately in real-world settings.”

    The team have already started demonstrations in optical fibre and in fast links across free space, and believe that the approach can work in other noisy channels, including underwater. More
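    The encoding described above amounts to quantising a channel-invariant quantity into a fixed set of symbol levels and snapping each received measurement back to the nearest level. The Python sketch below illustrates that idea only; the 51 levels (steps of 0.02), the nearest-level decoder, and the Gaussian detector jitter are illustrative assumptions, not the Wits team's published implementation.

```python
import numpy as np

# Illustrative parameters: 51 vectorness levels from 0.0 to 1.0 in steps of 0.02,
# giving an alphabet of 51 symbols (the article quotes "more than 50 parts").
NUM_LEVELS = 51
LEVELS = np.linspace(0.0, 1.0, NUM_LEVELS)

def encode(symbol: int) -> float:
    """Map a symbol index (0..50) to its target vectorness level."""
    return float(LEVELS[symbol])

def decode(measured_vectorness: float) -> int:
    """Recover the symbol by snapping the measured vectorness to the nearest level.

    Because vectorness is (ideally) unchanged by the noisy channel, the measured
    value should sit close to the transmitted level even after turbulence.
    """
    return int(np.argmin(np.abs(LEVELS - measured_vectorness)))

# Toy round trip: detector imperfections modelled as small Gaussian jitter,
# well below the 0.02 spacing between levels, so errors should be rare or absent.
rng = np.random.default_rng(0)
message = rng.integers(0, NUM_LEVELS, size=20)
received = [decode(encode(s) + rng.normal(0, 0.003)) for s in message]
errors = sum(int(m) != r for m, r in zip(message, received))
print(f"{errors} symbol errors out of {len(message)}")
```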

  • MethaneMapper is poised to solve the problem of underreported methane emissions

    A central difficulty in controlling greenhouse gas emissions to slow down climate change is finding them in the first place.
    Such is the case with methane, a colorless, odorless gas that is the second most abundant greenhouse gas in the atmosphere today, after carbon dioxide. Although it has a shorter atmospheric lifetime than carbon dioxide, according to the U.S. Environmental Protection Agency it is more than 25 times as potent as CO2 at trapping heat, and it is estimated to trap 80 times more heat than CO2 over a 20-year period.
    For that reason, curbing methane has become a priority, said UC Santa Barbara researcher Satish Kumar, a doctoral student in the Vision Research Lab of computer scientist B.S. Manjunath.
    “Recently, at the 2022 International Climate Summit, methane was actually the highlight because everybody is struggling with it,” he said.
    Even with reporting requirements in the U.S., methane’s invisibility means that its emissions are likely going underreported. In some cases the discrepancies are vast, such as with the Permian Basin, an 86,000-square-mile oil and natural gas extraction field located in Texas and New Mexico that hosts tens of thousands of wells. Independent methane monitoring of the area has revealed that the site emits eight to 10 times more methane than reported by the field’s operators.
    In the wake of the COP27 meetings, the U.S. government is now seeking ways to tighten controls over these types of “super emitting” leaks, especially as oil and gas production is expected to increase in the country in the near future. To do so, however, there must be a way of gathering reliable fugitive emissions data in order to assess the oil and gas operators’ performance and levy appropriate penalties as needed.

    Enter MethaneMapper, an artificial intelligence-powered hyperspectral imaging tool that Kumar and colleagues have developed to detect real-time methane emissions and trace them to their sources. The tool works by processing hyperspectral data gathered during overhead, airborne scans of the target area.
    “We have 432 channels,” Kumar said. Using survey images from NASA’s Jet Propulsion Laboratory, the researchers capture data starting at a wavelength of 400 nanometers and at regular intervals up to 2,500 nanometers — a range that encompasses the spectral signatures of hydrocarbons, including that of methane. Each channel covers a narrow range of wavelengths, or “spectral band,” so each pixel in the image contains a full spectrum. From there, machine learning sifts through the huge amount of data to distinguish methane from the other hydrocarbons captured in the imaging process. The method also allows users to see not just the magnitude of a plume, but also its source.
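    As a rough illustration of what a spectrum per pixel looks like in practice, the sketch below builds the wavelength grid implied by 432 channels spanning 400-2,500 nm and computes a naive per-pixel methane indicator from absorption near 2,300 nm. The channel count and wavelength range come from the article; the synthetic data cube, the chosen bands, and the simple band-difference index are illustrative assumptions, not MethaneMapper's actual learned detection model.

```python
import numpy as np

# 432 spectral channels spanning 400-2,500 nm (values quoted in the article).
wavelengths = np.linspace(400.0, 2500.0, 432)          # nm, one entry per channel

# Hypothetical hyperspectral cube: height x width x channels (a synthetic stand-in
# for an airborne scan; real data would be loaded from survey files).
rng = np.random.default_rng(42)
cube = rng.uniform(0.2, 0.4, size=(64, 64, wavelengths.size))

# Methane has strong absorption features near 2,300 nm; a crude indicator is the
# depth of that region relative to nearby "continuum" channels.
absorb = (wavelengths > 2280) & (wavelengths < 2360)    # in-feature channels
contin = (wavelengths > 2100) & (wavelengths < 2180)    # reference channels

methane_index = cube[:, :, contin].mean(axis=2) - cube[:, :, absorb].mean(axis=2)

# Pixels with the deepest absorption (largest index) are candidate plume pixels;
# MethaneMapper's trained model replaces this hand-made threshold.
candidates = methane_index > np.percentile(methane_index, 99)
print(f"{candidates.sum()} candidate plume pixels out of {candidates.size}")
```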
    Hyperspectral imaging for methane detection is a hot field, with companies jumping into the fray with equipment and detection systems. What makes MethaneMapper stand out is the diversity and depth of data collected from various types of terrain that allows the machine learning model to pick out the presence of methane against a backdrop of different topographies, foliage and other backgrounds.
    “A very common problem with the remote sensing community is that whatever is designed for one place won’t work outside that place,” Kumar explained. Thus, a remote sensing program will often learn what methane looks like against a certain landscape — say, the dry desert of the American Southwest — but pit it against the rocky shale of Colorado or the flat expanses of the Midwest, and the system might not be as successful.
    “We curated our own data sets, which cover approximately 4,000 emissions sites,” Kumar said. “We have the dry states of California, Texas and Arizona. But we have the dense vegetation of the state of Virginia too. So it’s pretty diverse.” According to him, MethaneMapper’s performance accuracy currently stands at 91%.

    The current operating version of MethaneMapper relies on airplanes for the scanning component of the system. But the researchers are setting some ambitious sights for a satellite-enabled program, which has the potential to scan wider swaths of terrain repeatedly, without the greenhouse gasses that airplanes emit. The major tradeoff between using planes and using satellites is in the resolution, Kumar said.
    “You can detect emissions as small as 50 kg per hour from an airplane,” he said. With a satellite, the threshold increases to about 1000 kg or 1 ton per hour. But for the purpose of monitoring emissions from oil and gas operations, which tend to emit in the thousands of kilograms per hour, it’s a small price to pay for the ability to scan larger parts of the Earth, and in places that might not be on the radar, so to speak.
    “The most recent case, I think seven or eight months ago, were emissions from an oil rig off the coast somewhere toward Mexico,” Kumar said, “which was emitting methane at a rate of 7,610 kilograms per hour for six months. And nobody knew about it.
    “And methane is so dangerous,” he continued. “The amount of damage that carbon dioxide will do in a hundred years, methane can do in only 1.2 years.” Satellite detection could not only track methane emissions on a global scale, it could also be used to direct subsequent airplane-based scans for higher-resolution investigations.
    Ultimately, Kumar and colleagues want to bring the power of AI and hyperspectral methane imaging to the mainstream, making it available to a wide variety of users even without expertise in machine learning.
    “What we want to provide is an interface through a web platform such as BisQue, where anyone can click and upload their data and it can generate an analysis,” he said. “I want to provide a simple and effective interface that anyone can use.”
    The MethaneMapper project is funded by National Science Foundation award SI2-SSI #1664172. The project is part of the Center for Multimodal Big Data Science and Healthcare initiative at UC Santa Barbara, led by Prof. B.S. Manjunath. Additionally, MethaneMapper will be featured as a Highlight Paper at the 2023 Computer Vision and Pattern Recognition (CVPR) Conference — the premier event in the computer vision field — to be held June 18-22 in Vancouver, British Columbia. More

  • Schrödinger’s cat makes better qubits

    Quantum computing uses the principles of quantum mechanics to encode and process data, meaning that it could one day solve computational problems that are intractable with current computers. While classical computers work with bits, which represent either a 0 or a 1, quantum computers use quantum bits, or qubits — the fundamental units of quantum information.
    “With applications ranging from drug discovery to optimization and simulations of complex biological systems and materials, quantum computing has the potential to reshape vast areas of science, industry, and society,” says Professor Vincenzo Savona, director of the Center for Quantum Science and Engineering at EPFL.
    Unlike classical bits, qubits can exist in a “superposition” of both 0 and 1 states at the same time. This allows quantum computers to explore multiple solutions simultaneously, which could make them significantly faster in certain computational tasks. However, quantum systems are delicate and susceptible to errors caused by interactions with their environment.
    “Developing strategies to either protect qubits from this or to detect and correct errors once they have occurred is crucial for enabling the development of large-scale, fault-tolerant quantum computers,” says Savona. Together with EPFL physicists Luca Gravina and Fabrizio Minganti, Savona has made a significant breakthrough by proposing a “critical Schrödinger cat code” for advanced resilience to errors. The study introduces a novel encoding scheme that could revolutionize the reliability of quantum computers.
    What is a “critical Schrödinger cat code”?
    In 1935, physicist Erwin Schrödinger proposed a thought experiment as a critique of the prevailing understanding of quantum mechanics at the time — the Copenhagen interpretation. In Schrödinger’s experiment, a cat is placed in a sealed box with a flask of poison and a radioactive source. If a single atom of the radioactive source decays, the radioactivity is detected by a Geiger counter, which then shatters the flask. The poison is released, killing the cat.
    According to the Copenhagen view of quantum mechanics, if the atom is initially in superposition, the cat will inherit the same state and find itself in a superposition of alive and dead. “This state represents exactly the notion of a quantum bit, realized at the macroscopic scale,” says Savona.
    In past years, scientists have drawn inspiration from Schrödinger’s cat to build an encoding technique called the “Schrödinger’s cat code.” Here, the 0 and 1 states of the qubit are encoded onto two opposite phases of an oscillating electromagnetic field in a resonant cavity, analogous to the dead and alive states of the cat.
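    In the language of quantum optics, the two opposite phases are coherent states |α⟩ and |−α⟩, and the logical 0 and 1 of a cat qubit are their even and odd superpositions. The NumPy sketch below builds these states in a truncated Fock basis and checks that the logical states are orthogonal while the underlying coherent states barely overlap; it is a generic illustration of cat-state encoding, not the EPFL team's critical cat code, and the truncation dimension and amplitude are arbitrary choices.

```python
import numpy as np
from math import factorial

DIM = 30          # Fock-space truncation (illustrative)
ALPHA = 2.0       # coherent-state amplitude (illustrative)

def coherent(alpha: float, dim: int = DIM) -> np.ndarray:
    """Coherent state |alpha> in the number basis: c_n = e^{-|a|^2/2} a^n / sqrt(n!)."""
    n = np.arange(dim)
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt([factorial(k) for k in n])

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

plus, minus = coherent(ALPHA), coherent(-ALPHA)

# Even and odd "cat" states serve as the logical 0 and 1 of a cat qubit.
cat_even = normalize(plus + minus)   # logical 0 (only even photon numbers survive)
cat_odd  = normalize(plus - minus)   # logical 1 (only odd photon numbers survive)

# The two logical states are exactly orthogonal, while the underlying coherent
# states |alpha> and |-alpha> overlap only by about exp(-2|alpha|^2) ~ 3e-4 here.
print("overlap <0_L|1_L>      =", abs(np.vdot(cat_even, cat_odd)))
print("overlap <alpha|-alpha> =", abs(np.vdot(normalize(plus), normalize(minus))))
```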
    “Schrödinger cat codes have been realized in the past using two distinct approaches,” explains Savona. “One leverages anharmonic effects in the cavity, while the other relies on carefully engineered cavity losses. In our work, we bridged the two by operating in an intermediate regime, combining the best of both worlds. Although previously believed to be unfruitful, this hybrid regime results in enhanced error suppression capabilities.” The core idea is to operate close to the critical point of a phase transition, which is what the ‘critical’ in critical cat code refers to.
    The critical cat code has an additional advantage: it exhibits exceptional resistance to errors that result from random frequency shifts, which often pose significant challenges to operations involving multiple qubits. This solves a major problem and paves the way to the realization of devices with several mutually interacting qubits — the minimal requirement for building a quantum computer.
    “We are taming the quantum cat,” says Savona. “By operating in a hybrid regime, we have developed a system that surpasses its predecessors, which represents a significant leap forward for cat qubits and quantum computing as a whole.” The study is a milestone on the road towards building better quantum computers, and showcases EPFL’s dedication to advancing the field of quantum science and unlocking the true potential of quantum technologies. More

  • ChatGPT designs a robot

    Poems, essays and even books — is there anything the OpenAI chatbot ChatGPT can’t handle? These new AI developments inspired researchers at TU Delft and the Swiss technical university EPFL to dig a little deeper: can ChatGPT also design a robot? And is this a good thing for the design process, or are there risks? The researchers published their findings in Nature Machine Intelligence.
    What are the greatest future challenges for humanity? This was the first question that Cosimo Della Santina, assistant professor, and PhD student Francesco Stella, both from TU Delft, and Josie Hughes from EPFL, asked ChatGPT. “We wanted ChatGPT to design not just a robot, but one that is actually useful,” says Della Santina. In the end, they chose food supply as their challenge, and as they chatted with ChatGPT, they came up with the idea of creating a tomato-harvesting robot.
    Helpful suggestions
    The researchers followed all of ChatGPT’s design decisions. The input proved particularly valuable in the conceptual phase, according to Stella. “ChatGPT extends the designer’s knowledge to other areas of expertise. For example, the chat robot taught us which crop would be most economically valuable to automate.” But ChatGPT also came up with useful suggestions during the implementation phase: “Make the gripper out of silicone or rubber to avoid crushing tomatoes” and “a Dynamixel motor is the best way to drive the robot.” The result of this partnership between humans and AI is a robotic arm that can harvest tomatoes.
    ChatGPT as a researcher
    The researchers found the collaborative design process to be positive and enriching. “However, we did find that our role as engineers shifted towards performing more technical tasks,” says Stella. In Nature Machine Intelligence, the researchers explore the varying degrees of cooperation between humans and large language models (LLMs), of which ChatGPT is one. In the most extreme scenario, AI provides all the input to the robot design, and the human blindly follows it. In this case, the LLM acts as the researcher and engineer, while the human acts as the manager, in charge of specifying the design objectives.
    Risk of misinformation
    Such an extreme scenario is not yet possible with today’s LLMs. And the question is whether it is desirable. “In fact, LLM output can be misleading if it is not verified or validated. AI bots are designed to generate the ‘most probable’ answer to a question, so there is a risk of misinformation and bias in the robotic field,” Della Santina says. Working with LLMs also raises other important issues, such as plagiarism, traceability and intellectual property.
    Della Santina, Stella and Hughes will continue to use the tomato-harvesting robot in their research on robotics. They are also continuing their study of LLMs to design new robots. Specifically, they are looking at the autonomy of AIs in designing their own bodies. “Ultimately an open question for the future of our field is how LLMs can be used to assist robot developers without limiting the creativity and innovation needed for robotics to rise to the challenges of the 21st century,” Stella concludes. More

  • New study could help unlock ‘game-changing’ batteries for electric vehicles and aviation

    Significantly improved electric vehicle (EV) batteries could be a step closer thanks to a new study led by University of Oxford researchers, published today in Nature. Using advanced imaging techniques, the researchers revealed mechanisms that cause lithium metal solid-state batteries (Li-SSBs) to fail. If these can be overcome, solid-state batteries using lithium metal anodes could deliver a step-change improvement in EV battery range, safety and performance, and help advance electrically powered aviation.
    One of the co-lead authors of the study, Dominic Melvin, a PhD student in the University of Oxford’s Department of Materials, said: ‘Progressing solid-state batteries with lithium metal anodes is one of the most important challenges facing the advancement of battery technologies. While lithium-ion batteries of today will continue to improve, research into solid-state batteries has the potential to be a high-reward, game-changing technology.’
    Li-SSBs are distinct from other batteries because they replace the flammable liquid electrolyte in conventional batteries with a solid electrolyte and use lithium metal as the anode (negative electrode). The use of the solid electrolyte improves the safety, and the use of lithium metal means more energy can be stored. A critical challenge with Li-SSBs, however, is that they are prone to short circuit when charging due to the growth of ‘dendrites’: filaments of lithium metal that crack through the ceramic electrolyte. As part of the Faraday Institution’s SOLBAT project, researchers from the University of Oxford’s Departments of Materials, Chemistry and Engineering Science, have led a series of in-depth investigations to understand more about how this short-circuiting happens.
    In this latest study, the group used an advanced imaging technique called X-ray computed tomography at Diamond Light Source to visualise dendrite failure in unprecedented detail during the charging process. The new imaging study revealed that the initiation and propagation of the dendrite cracks are separate processes, driven by distinct underlying mechanisms. Dendrite cracks initiate when lithium accumulates in sub-surface pores. When the pores become full, further charging of the battery increases the pressure, leading to cracking. In contrast, propagation occurs with lithium only partially filling the crack, through a wedge-opening mechanism which drives the crack open from the rear.
    This new understanding points the way forward to overcoming the technological challenges of Li-SSBs. Dominic Melvin said: ‘For instance, while pressure at the lithium anode can be good to avoid gaps developing at the interface with the solid electrolyte on discharge, our results demonstrate that too much pressure can be detrimental, making dendrite propagation and short-circuit on charging more likely.’
    Sir Peter Bruce, Wolfson Chair, Professor of Materials at the University of Oxford, Chief Scientist of the Faraday Institution, and corresponding author of the study, said: ‘The process by which a soft metal such as lithium can penetrate a highly dense hard ceramic electrolyte has proved challenging to understand with many important contributions by excellent scientists around the world. We hope the additional insights we have gained will help the progress of solid-state battery research towards a practical device.’
    According to a recent report by the Faraday Institution, SSBs may satisfy 50% of global demand for batteries in consumer electronics, 30% in transportation, and over 10% in aircraft by 2040.
    Professor Pam Thomas, CEO, Faraday Institution, said: ‘SOLBAT researchers continue to develop a mechanistic understanding of solid-state battery failure — one hurdle that needs to be overcome before high-power batteries with commercially relevant performance could be realised for automotive applications. The project is informing strategies that cell manufacturers might use to avoid cell failure for this technology. This application-inspired research is a prime example of the type of scientific advances that the Faraday Institution was set up to drive.’ More

  • AI-generated academic science writing can be identified with over 99% accuracy

    The debut of artificial intelligence chatbot ChatGPT has set the world abuzz with its ability to churn out human-like text and conversations. Still, many telltale signs can help us distinguish AI chatbots from humans, according to a study published on June 7 in the journal Cell Reports Physical Science. Based on the signs, the researchers developed a tool to identify AI-generated academic science writing with over 99% accuracy.
    “We tried hard to create an accessible method so that with little guidance, even high school students could build an AI detector for different types of writing,” says first author Heather Desaire, a professor at the University of Kansas. “There is a need to address AI writing, and people don’t need a computer science degree to contribute to this field.”
    “Right now, there are some pretty glaring problems with AI writing,” says Desaire. “One of the biggest problems is that it assembles text from many sources and there isn’t any kind of accuracy check — it’s kind of like the game Two Truths and a Lie.”
    Although many AI text detectors are available online and perform fairly well, they weren’t built specifically for academic writing. To fill the gap, the team aimed to build a tool with better performance precisely for this purpose. They focused on a type of article called a perspective, in which scientists provide an overview of a specific research topic. The team selected 64 perspectives and created 128 ChatGPT-generated articles on the same research topics to train the model. When they compared the articles, they found an indicator of AI writing — predictability.
    Unlike AI, human writers have more complex paragraph structures, varying in the number of sentences and total words per paragraph, as well as in sentence length. Preferences in punctuation marks and vocabulary are also a giveaway: scientists gravitate towards words like “however,” “but” and “although,” while ChatGPT more often uses “others” and “researchers.” In all, the team tallied 20 characteristics for the model to look out for.
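    As a rough illustration of the kind of stylistic cues described, the sketch below computes a few such features from raw text: paragraph and sentence counts, sentence-length variability, and the rates of the marker words quoted in the article. The feature list and the naive sentence splitter are illustrative assumptions; the published model uses 20 hand-chosen features plus a trained classifier, which this sketch does not reproduce.

```python
import re
import statistics

HUMAN_MARKERS = ("however", "but", "although")   # words the article says humans favour
AI_MARKERS = ("others", "researchers")           # words the article says ChatGPT favours

def stylometric_features(text: str) -> dict:
    """Compute a few simple predictability-related features from raw text."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    # Naive sentence splitter; adequate for an illustration, not for production use.
    sentences = [s for p in paragraphs for s in re.split(r"(?<=[.!?])\s+", p) if s]
    sentence_lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "paragraphs": len(paragraphs),
        "sentences_per_paragraph": len(sentences) / max(len(paragraphs), 1),
        "mean_sentence_length": statistics.mean(sentence_lengths) if sentence_lengths else 0,
        "sentence_length_stdev": statistics.pstdev(sentence_lengths) if sentence_lengths else 0,
        "human_marker_rate": sum(words.count(w) for w in HUMAN_MARKERS) / max(len(words), 1),
        "ai_marker_rate": sum(words.count(w) for w in AI_MARKERS) / max(len(words), 1),
    }

sample = ("Researchers often hedge their claims. However, the evidence here is unusually "
          "strong, although more replication is needed.\n\nOthers disagree.")
print(stylometric_features(sample))
```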
    When tested, the model achieved 100% accuracy at distinguishing AI-generated full perspective articles from those written by humans. For identifying individual paragraphs within an article, its accuracy was 92%. The research team’s model also outperformed a commercially available AI text detector by a wide margin on similar tests.
    Next, the team plans to determine the scope of the model’s applicability. They want to test it on more extensive datasets and across different types of academic science writing. As AI chatbots advance and become more sophisticated, the researchers also want to know whether their model will hold up.
    “The first thing people want to know when they hear about the research is ‘Can I use this to tell if my students actually wrote their paper?'” said Desaire. While the model is highly skilled at distinguishing between AI and scientists, Desaire says it was not designed to catch AI-generated student essays for educators. However, she notes that people can easily replicate their methods to build models for their own purposes. More

  • Applying artificial intelligence for early risk forecasting of Alzheimer’s disease

    An international research team led by the Hong Kong University of Science and Technology (HKUST) has developed an artificial intelligence (AI)-based model that uses genetic information to predict an individual’s risk of developing Alzheimer’s disease (AD) well before symptoms occur. This groundbreaking study paves the way for using deep learning methods to predict the risks of diseases and uncover their molecular mechanisms; this could revolutionize the diagnosis of, interventions for, and clinical research on AD and other common diseases such as cardiovascular diseases.
    Researchers led by HKUST’s President, Prof. Nancy IP, in collaboration with the Chair Professor and Director of HKUST’s Big Data Institute, Prof. CHEN Lei, investigated whether AI — specifically deep learning models — can model AD risk using genetic information. The team established one of the first deep learning models for estimating AD polygenic risks in both European-descent and Chinese populations. Compared to other models, these deep learning models more accurately classify patients with AD and stratify individuals into distinct groups based on disease risks associated with alterations of various biological processes.
    In current practice, AD is diagnosed clinically, using various means including cognitive tests and brain imaging, but by the time patients show symptoms, it is often already well past the optimal intervention window. Therefore, early forecasting of AD risk can greatly aid diagnosis and the development of intervention strategies. By combining the new deep learning model with genetic testing, an individual’s lifetime risk of developing AD can be estimated with more than 70% accuracy.
    AD is a heritable disorder that can be attributed to genomic variants. As these variants are present from birth and remain constant throughout life, examining an individual’s DNA information can help predict their relative risk of developing AD, thereby enabling early intervention and timely management. While FDA-approved genetic testing for the APOE-ε4 genetic variant can estimate AD risk, it may be insufficient to identify high-risk individuals, because multiple genetic risks contribute to the disease. Therefore, it is essential to develop tests that integrate information from multiple AD risk genes to accurately determine an individual’s relative risk of developing AD over their lifetime.
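    The contrast drawn above, between testing a single variant and integrating many risk variants, is essentially the difference between reading off one genotype and computing a polygenic score. The sketch below shows a conventional weighted-sum polygenic risk score next to a tiny neural-network scorer of the kind that can, in principle, capture nonlinear interactions between variants; the variant count, weights, network sizes, and random genotypes are all made-up illustrations, not the HKUST team's trained model.

```python
import numpy as np

rng = np.random.default_rng(7)

N_VARIANTS = 100                       # illustrative number of AD-associated variants
genotypes = rng.integers(0, 3, size=N_VARIANTS).astype(float)  # 0/1/2 risk-allele counts

# Classical polygenic risk score: a weighted sum of risk-allele counts, where the
# weights would normally come from genome-wide association study effect sizes.
effect_sizes = rng.normal(0.0, 0.1, size=N_VARIANTS)
linear_prs = float(genotypes @ effect_sizes)

# A minimal one-hidden-layer network over the same genotype vector; unlike the
# linear score, it can in principle capture nonlinear interactions among variants.
W1 = rng.normal(0.0, 0.1, size=(16, N_VARIANTS))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, size=16)

hidden = np.maximum(0.0, W1 @ genotypes + b1)       # ReLU hidden layer
logit = float(W2 @ hidden)
risk_probability = 1.0 / (1.0 + np.exp(-logit))     # squash to a 0-1 risk estimate

print(f"linear PRS = {linear_prs:.3f}, neural-network risk estimate = {risk_probability:.3f}")
```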
    “Our study demonstrates the efficacy of deep learning methods for genetic research and risk prediction for Alzheimer’s disease. This breakthrough will greatly accelerate population-scale screening and staging of Alzheimer’s disease risk. Besides risk prediction, this approach supports the grouping of individuals according to their disease risk and provides insights into the mechanisms that contribute to the onset and progression of the disease,” said Prof. Nancy Ip.
    Meanwhile, Prof. Chen Lei remarked that, “this study exemplifies how the application of AI to the biological sciences can significantly benefit biomedical and disease-related studies. By utilizing a neural network, we effectively captured nonlinearity in high-dimensional genomic data, which improved the accuracy of Alzheimer’s disease risk prediction. In addition, through AI-based data analysis without human supervision, we categorized at-risk individuals into subgroups, which revealed insights into the underlying disease mechanisms. Our research also highlights how AI can elegantly, efficiently, and effectively address interdisciplinary challenges. I firmly believe that AI will play a vital role in various healthcare fields in the near future.”
    The study was conducted in collaboration with researchers at the Shenzhen Institute of Advanced Technology and University College London as well as clinicians at local Hong Kong hospitals including Prince of Wales Hospital and Queen Elizabeth Hospital. The findings were recently published in Communications Medicine. The research team is now refining the model and aims to ultimately incorporate it into standard screening workflows.
    AD, which affects over 50 million people worldwide, is a fatal disease that involves cognitive dysfunction and the loss of brain cells. Its symptoms include progressive memory loss as well as impaired movement, reasoning, and judgment. More

  • Autonomous products like robot vacuums make our lives easier. But do they deprive us of meaningful experiences?

    Researchers from the University of St. Gallen and Columbia Business School have published a new Journal of Marketing article that examines how the perceived meaning of manual labor can help predict the adoption of autonomous products.
    The study, forthcoming in the Journal of Marketing, is titled “Meaning of Manual Labor Impedes Consumer Adoption of Autonomous Products” and is authored by Emanuel de Bellis, Gita Venkataramani Johar, and Nicola Poletti.
    Whether it is cleaning homes or mowing lawns, consumers increasingly delegate manual tasks to autonomous products. These gadgets operate without human oversight and free consumers from mundane chores. However, anecdotal evidence suggests that people feel a sense of satisfaction when they complete household chores. Are autonomous products such as robot vacuums and cooking machines depriving consumers of meaningful experiences?
    This new research shows that, despite unquestionable benefits such as gains in efficiency and convenience, autonomous products strip away a source of meaning in life. As a result, consumers are hesitant to buy these products.
    The researchers argue that manual labor is an important source of meaning in life. This is in line with research showing that everyday tasks have value — chores such as cleaning may not make us happy, but they add meaning to our lives. As de Bellis explains, “Our studies show that ‘meaning of manual labor’ causes consumers to reject autonomous products. For example, these consumers have a more negative attitude toward autonomous products and are also more prone to believe in the disadvantages of autonomous products relative to their advantages.”
    Highlight Saving Time for Other Meaningful Tasks
    On one hand, autonomous products take over tasks from consumers, typically leading to a reduction in manual labor and hence in the ability to derive meaning from manual tasks. On the other hand, by taking over manual tasks, autonomous products provide consumers with the opportunity to spend time on other, potentially more meaningful, tasks and activities. “We suggest that companies highlight so-called alternative sources of meaning in life, which should reduce consumers’ need to derive meaning specifically from manual tasks. Highlighting other sources of meaning, such as through family or hobbies, at the time of the adoption decision should counteract the negative effect on autonomous product adoption,” says Johar.
    In fact, a key value proposition for many of these technologies is that they free up time. iRobot claims that its robotic vacuum cleaner Roomba saves owners as much as 110 hours of cleaning a year. Some companies go even a step further by suggesting what consumers could do with their freed-up time. For example, German home appliance company Vorwerk promotes its cooking machine Thermomix with “more family time” and “Thermomix does the work so you can make time for what matters most.” Instead of promoting the quality of task completion (i.e., cooking a delicious meal), the company emphasizes that consumers can spend time on other, arguably more meaningful, activities.
    This study demonstrates that the perceived meaning of manual labor (MML) — a novel concept introduced by the researchers — is key to predicting the adoption of autonomous products. Poletti says that “Consumers with a high MML tend to resist the delegation of manual tasks to autonomous products, irrespective of whether these tasks are central to one’s identity or not. Marketers can start by segmenting consumers into high and low MML consumers.” Unlike other personality variables that can only be reliably measured using complex psychometric scales, the extent of consumers’ MML might be assessed simply by observing their behavioral characteristics, such as whether consumers tend to do the dishes by hand, whether they prefer a manual car transmission, or what type of activities and hobbies they pursue. Activities like woodworking, cookery, painting, and fishing are likely predictors of high MML. Similarly, companies can measure likes on social media for specific activities and hobbies that involve manual labor. Finally, practitioners can ask consumers to rate the degree to which manual versus cognitive tasks are meaningful to them. Having segmented consumers according to their MML, marketers can better target and focus their messages and efforts.
    In promotions, firms can highlight the meaningful time consumers gain with the use of autonomous products (e.g., “this product allows you to spend time on more meaningful tasks and pursuits than cleaning”). Such an intervention can prevent the detrimental effects of meaning of manual labor on autonomous product adoption. More