More stories

  • Autonomous products like robot vacuums make our lives easier. But do they deprive us of meaningful experiences?

    Researchers from the University of St. Gallen and Columbia Business School have published a new Journal of Marketing article that examines how the perceived meaning of manual labor can help predict the adoption of autonomous products.
    The study, titled “Meaning of Manual Labor Impedes Consumer Adoption of Autonomous Products,” is authored by Emanuel de Bellis, Gita Venkataramani Johar, and Nicola Poletti.
    Whether it is cleaning homes or mowing lawns, consumers increasingly delegate manual tasks to autonomous products. These gadgets operate without human oversight and free consumers from mundane chores. However, anecdotal evidence suggests that people feel a sense of satisfaction when they complete household chores. Are autonomous products such as robot vacuums and cooking machines depriving consumers of meaningful experiences?
    This new research shows that, despite unquestionable benefits such as gains in efficiency and convenience, autonomous products strip away a source of meaning in life. As a result, consumers are hesitant to buy these products.
    The researchers argue that manual labor is an important source of meaning in life. This is in line with research showing that everyday tasks have value — chores such as cleaning may not make us happy, but they add meaning to our lives. As de Bellis explains, “Our studies show that ‘meaning of manual labor’ causes consumers to reject autonomous products. For example, these consumers have a more negative attitude toward autonomous products and are also more prone to believe in the disadvantages of autonomous products relative to their advantages.”
    Highlight Saving Time for Other Meaningful Tasks
    On one hand, autonomous products take over tasks from consumers, typically leading to a reduction in manual labor and hence in the ability to derive meaning from manual tasks. On the other hand, by taking over manual tasks, autonomous products provide consumers with the opportunity to spend time on other, potentially more meaningful, tasks and activities. “We suggest that companies highlight so-called alternative sources of meaning in life, which should reduce consumers’ need to derive meaning specifically from manual tasks. Highlighting other sources of meaning, such as through family or hobbies, at the time of the adoption decision should counteract the negative effect on autonomous product adoption,” says Johar.
    In fact, a key value proposition for many of these technologies is that they free up time. iRobot claims that its robotic vacuum cleaner Roomba saves owners as much as 110 hours of cleaning a year. Some companies go even a step further by suggesting what consumers could do with their freed-up time. For example, German home appliance company Vorwerk promotes its cooking machine Thermomix with “more family time” and “Thermomix does the work so you can make time for what matters most.” Instead of promoting the quality of task completion (i.e., cooking a delicious meal), the company emphasizes that consumers can spend time on other, arguably more meaningful, activities.
    This study demonstrates that the perceived meaning of manual labor (MML) — a novel concept introduced by the researchers — is key to predicting the adoption of autonomous products. Poletti says, “Consumers with a high MML tend to resist the delegation of manual tasks to autonomous products, irrespective of whether these tasks are central to one’s identity or not. Marketers can start by segmenting consumers into high and low MML consumers.” Unlike other personality variables that can only be reliably measured using complex psychometric scales, the extent of consumers’ MML might be assessed simply by observing their behavioral characteristics, such as whether consumers tend to do the dishes by hand, whether they prefer a manual car transmission, or what type of activities and hobbies they pursue. Activities like woodworking, cookery, painting, and fishing are likely predictors of high MML. Similarly, companies can measure likes on social media for specific activities and hobbies that involve manual labor. Finally, practitioners can ask consumers to rate the degree to which manual versus cognitive tasks are meaningful to them. Having segmented consumers according to their MML, marketers can better target and focus their messages and efforts.
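    As an illustration of the kind of segmentation described above, here is a minimal, purely hypothetical sketch: the behavioral signals come from the examples given in the study’s press materials, but the weights, threshold, and self-rating scale are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: flag consumers as high- or low-MML from simple
# behavioral signals plus a self-rating. Weights and threshold are invented.

MANUAL_SIGNALS = {
    "washes_dishes_by_hand": 1.0,
    "drives_manual_transmission": 1.0,
    "manual_hobby": 2.0,   # e.g. woodworking, cookery, painting, fishing
}

def mml_score(observed: dict, self_rating: float) -> float:
    """Combine observed behaviors with a 1-7 rating of how meaningful
    manual (versus cognitive) tasks feel to the consumer."""
    behavior = sum(w for key, w in MANUAL_SIGNALS.items() if observed.get(key))
    return behavior + self_rating

def segment(observed: dict, self_rating: float, threshold: float = 6.0) -> str:
    return "high-MML" if mml_score(observed, self_rating) >= threshold else "low-MML"

print(segment({"washes_dishes_by_hand": True, "manual_hobby": True}, self_rating=5))
# -> "high-MML": a candidate for the "free up time for what matters" framing below
```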
    In promotions, firms can highlight the meaningful time consumers gain with the use of autonomous products (e.g., “this product allows you to spend time on more meaningful tasks and pursuits than cleaning”). Such an intervention can counteract the detrimental effect of the meaning of manual labor on autonomous product adoption.

  • Sponge makes robotic device a soft touch

    A simple sponge has improved how robots grasp, scientists from the University of Bristol have found.
    This easy-to-make sponge-jamming device can help stiff robots handle delicate items carefully by mimicking the nuanced touch, or variable stiffness, of a human.
    Robots can skip, jump and do somersaults, but they’re too rigid to hold an egg easily. Variable-stiffness devices are a potential solution: they can add contact compliance to rigid robots to reduce damage, or improve the load capacity of soft robots.
    This study, presented at the IEEE International Conference on Robotics and Automation (ICRA) 2023, shows that variable stiffness can be achieved with a silicone sponge.
    Lead author Tianqi Yue from Bristol’s Department of Engineering Mathematics explained: “Stiffness, also known as softness, is important in contact scenarios.
    “Robotic arms are too rigid so they cannot make such a soft human-like grasp on delicate objects, for example, an egg.

    “What makes humans different from robotic arms is that we have soft tissues enclosing rigid bones, which act as a natural mitigating mechanism.
    “In this paper, we managed to develop a soft device with variable stiffness, to be mounted on the end robotic arm for making the robot-object contact safe.”
    Silicone sponge is a cheap and easy-to-fabricate material. It is a porous elastomer just like the cleaning sponge used in everyday tasks.
    Squeezing the sponge stiffens it, which is what allows it to be turned into a variable-stiffness device.
    This device could be used by industrial robots for tasks such as gripping jellies, eggs and other fragile objects. It could also be used in service robots to make human-robot interaction safer.
    Mr Yue added: “We managed to use a sponge to make a cheap and nimble but effective device that can help robots achieve soft contact with objects. The great potential comes from its low cost and light weight.
    “We believe this silicone-sponge based variable-stiffness device will provide a novel solution in industry and healthcare, for example, tunable-stiffness requirement on robotic polishing and ultrasound imaging.”
    The team will now look at making the device achieve variable stiffness in multiple directions, including rotation.

  • New superconducting diode could improve performance of quantum computers and artificial intelligence

    A University of Minnesota Twin Cities-led team has developed a new superconducting diode, a key component in electronic devices, that could help scale up quantum computers for industry use and improve the performance of artificial intelligence systems. Compared to other superconducting diodes, the researchers’ device is more energy efficient; can process multiple electrical signals at a time; and contains a series of gates to control the flow of energy, a feature that has never before been integrated into a superconducting diode.
    The paper is published in Nature Communications, a peer-reviewed scientific journal that covers the natural sciences and engineering.
    A diode allows current to flow one way but not the other in an electrical circuit. It’s essentially half of a transistor, the main element in computer chips. Diodes are typically made with semiconductors, but researchers are interested in making them with superconductors, which have the ability to transfer energy without losing any power along the way.
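    As general background (this is not a description of the Minnesota device), the “one-way” behaviour of a superconducting diode is usually characterised by a critical current that differs with direction: a current whose magnitude lies between the two critical values is carried without resistance in one direction but dissipates in the other. The toy sketch below uses made-up numbers purely to illustrate that asymmetry.

```python
# Toy illustration of the superconducting diode effect: the critical current
# differs by direction, so intermediate currents pass losslessly only one way.
# The numbers are illustrative, not measurements from the paper.

IC_FORWARD = 1.0   # critical current in the forward direction (arbitrary units)
IC_REVERSE = 0.6   # smaller critical current in the reverse direction

def is_dissipationless(current: float) -> bool:
    """Return True if the junction stays superconducting at this bias current."""
    if current >= 0:
        return current < IC_FORWARD
    return abs(current) < IC_REVERSE

for bias in (+0.8, -0.8):
    state = "superconducting" if is_dissipationless(bias) else "resistive"
    print(f"I = {bias:+.1f}: {state}")
# I = +0.8 flows without loss; I = -0.8 exceeds the reverse critical current.
```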
    “We want to make computers more powerful, but there are some hard limits we are going to hit soon with our current materials and fabrication methods,” said Vlad Pribiag, senior author of the paper and an associate professor in the University of Minnesota School of Physics and Astronomy. “We need new ways to develop computers, and one of the biggest challenges for increasing computing power right now is that they dissipate so much energy. So, we’re thinking of ways that superconducting technologies might help with that.”
    The University of Minnesota researchers created the device using three Josephson junctions, which are made by sandwiching pieces of non-superconducting material between superconductors. In this case, the researchers connected the superconductors with layers of semiconductors. The device’s unique design allows the researchers to use voltage to control the behavior of the device.
    Their device also has the ability to process multiple signal inputs, whereas typical diodes can only handle one input and one output. This feature could have applications in neuromorphic computing, a method of engineering electrical circuits to mimic the way neurons function in the brain to enhance the performance of artificial intelligence systems.
    “The device we’ve made has close to the highest energy efficiency that has ever been shown, and for the first time, we’ve shown that you can add gates and apply electric fields to tune this effect,” explained Mohit Gupta, first author of the paper and a Ph.D. student in the University of Minnesota School of Physics and Astronomy. “Other researchers have made superconducting devices before, but the materials they’ve used have been very difficult to fabricate. Our design uses materials that are more industry-friendly and deliver new functionalities.”
    The method the researchers used can, in principle, be used with any type of superconductor, making it more versatile and easier to use than other techniques in the field. Because of these qualities, their device is better suited to industry applications and could help scale up the development of quantum computers for wider use.
    “Right now, all the quantum computing machines out there are very basic relative to the needs of real-world applications,” Pribiag said. “Scaling up is necessary in order to have a computer that’s powerful enough to tackle useful, complex problems. A lot of people are researching algorithms and usage cases for computers or AI machines that could potentially outperform classical computers. Here, we’re developing the hardware that could enable quantum computers to implement these algorithms. This shows the power of universities seeding these ideas that eventually make their way to industry and are integrated into practical machines.”
    This research was funded primarily by the United States Department of Energy with partial support from Microsoft Research and the National Science Foundation.
    In addition to Pribiag and Gupta, the research team included University of Minnesota School of Physics and Astronomy graduate student Gino Graziano and University of California, Santa Barbara researchers Mihir Pendharkar, Jason Dong, Connor Dempsey, and Chris Palmstrøm.

  • New AI boosts teamwork training

    Researchers have developed a new artificial intelligence (AI) framework that is better than previous technologies at analyzing and categorizing dialogue between individuals, with the goal of improving team training technologies. The framework will enable training technologies to better understand how well individuals are coordinating with one another and working as part of a team.
    “There is a great deal of interest in developing AI-powered training technologies that can understand teamwork dynamics and modify their training to foster improved collaboration among team members,” says Wookhee Min, co-author of a paper on the work and a research scientist at North Carolina State University. “However, previous AI architectures have struggled to accurately assess the content of what team members are sharing with each other when they communicate.”
    “We’ve developed a new framework that significantly improves the ability of AI to analyze communication between team members,” says Jay Pande, first author of the paper and a Ph.D. student at NC State. “This is a significant step forward for the development of adaptive training technologies that aim to facilitate effective team communication and collaboration.”
    The new AI framework builds on a powerful deep learning model that was trained on a large, text-based language dataset. This model, called the Text-to-Text Transfer Transformer (T5), was then customized using data collected during squad-level training exercises conducted by the U.S. Army.
    “We modified the T5 model to use contextual features of the team — such as the speaker’s role — to more accurately analyze team communication,” Min says. “That context can be important. For example, something a team leader says may need to be viewed differently than something another team member says.”
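    As a rough sketch of how such a contextual feature can be folded into a text-to-text model, the snippet below prepends the speaker’s role to the utterance before feeding it to a public T5 checkpoint from the Hugging Face transformers library. The prompt format, label set, and checkpoint are assumptions for illustration; without fine-tuning on the team-training transcripts, the output is not a meaningful dialogue label.

```python
# Sketch: fold a contextual feature (speaker role) into a T5 input for
# dialogue-act classification. Prompt format and labels are illustrative only.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def classify_utterance(role: str, utterance: str) -> str:
    prompt = f"classify dialogue act: role: {role} utterance: {utterance}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# After fine-tuning, the decoded text would be a label such as
# "request information", "provide information", or "issue command".
print(classify_utterance("squad leader", "Move to the second building and hold."))
```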
    To test the performance of the new framework, the researchers compared it to two previous AI technologies. Specifically, the researchers tested the ability of all three AI technologies to understand the dialogue within a squad of six soldiers during a training exercise.

    The AI framework was tasked with two things: classify what sort of dialogue was taking place, and follow the flow of information within the squad. Classifying the dialogue refers to determining the purpose of what was being said. For example, was someone requesting information, providing information, or issuing a command? Following the flow of information refers to how information was being shared within the team. For example, was information being passed up or down the chain of command?
    “We found that the new framework performed substantially better than the previous AI technologies,” Pande says.
    “One of the things that was particularly promising was that we trained our framework using data from one training mission, but tested the model’s performance using data from a different training mission,” Min says. “And the boost in performance over the previous AI models was notable — even though we were testing the model in a new set of circumstances.”
    The researchers also note that they were able to achieve these results using a relatively small version of the T5 model. That’s important because it means the framework can return its analysis in fractions of a second without requiring a supercomputer.
    “One next step for this work includes exploring the extent to which the new framework can be applied to a variety of other training scenarios,” Pande says.
    “We tested the new framework with training data that was transcribed from audio files into text by humans,” Min says. “Another next step will involve integrating the framework with an AI model that transcribes audio data into text, so that we can assess the ability of this technology to analyze team communication data in real time. This will likely involve improving the framework’s ability to deal with noises and errors as the AI transcribes audio data.”
    The paper, “Robust Team Communication Analytics with Transformer-Based Dialogue Modeling,” will be presented at the 24th International Conference on Artificial Intelligence in Education (AIED 2023), which will be held July 3-7 in Tokyo, Japan. The paper was co-authored by Jason Saville, a former graduate student at NC State; James Lester, the Goodnight Distinguished University Professor in Artificial Intelligence and Machine Learning at NC State; and Randall Spain of the U.S. Army Combat Capabilities Development Command (DEVCOM) Soldier Center.
    This research was sponsored by the U.S. Army DEVCOM Soldier Center under cooperative agreement W912CG-19-2-0001.

  • Wireless olfactory feedback system to let users smell in the VR world

    A research team co-led by researchers from City University of Hong Kong (CityU) recently invented a novel, wireless, skin-interfaced olfactory feedback system that can release various odours with miniaturised odour generators (OGs). The new technology integrates odours into virtual reality (VR)/augmented reality (AR) to provide a more immersive experience, with broad applications ranging from 4D movie watching and medical treatment to online teaching.
    “Recent human machine interfaces highlight the importance of human sensation feedback, including vision, audio and haptics, associated with wide applications in entertainment, medical treatment and VR/AR. Olfaction also plays a significant role in human perceptual experiences,” said Dr Yu Xinge, Associate Professor in the Department of Biomedical Engineering at CityU, who co-led the study. “However, the current olfaction-generating technologies are associated mainly with big instruments to generate odours in a closed area or room, or an in-built bulky VR set.”
    In view of this, Dr Yu and his collaborators from Beihang University developed a new-generation, wearable, olfaction feedback system with wireless, programmable capabilities based on arrays of flexible and miniaturised odour generators.
    They created two designs that release odours on demand through the new olfaction feedback devices, which are made of soft, miniaturised, lightweight substrates. The first is a small, skin-integrated, patch-like device comprising two OGs, which can be mounted directly on the upper lip. With an extremely short distance between the OGs and the user’s nose, it can provide an ultra-fast olfactory response. The second is a flexible facemask with nine OGs of different odour types, which can provide hundreds of odour combinations.
    The magic of odour generators is based on a subtle heating platform and a mechanical thermal actuator. By heating and melting odorous paraffin wax on OGs to cause phase change, different odours of adjustable concentration are released. To stop the odour, the odour generators can cool down the temperature of the wax by controlling the motion of the thermal actuator.
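    Purely as a hypothetical sketch of the control idea described above (the class, temperatures, and method names are invented for illustration and are not from the paper), the release/stop cycle might be modelled like this:

```python
# Hypothetical sketch of one odour generator (OG): heating melts the paraffin
# wax to release odour; the thermal actuator cools it to stop. All names and
# temperatures are invented for illustration.

MELT_POINT_C = 50.0   # illustrative wax melting temperature
AMBIENT_C = 25.0

class OdourGenerator:
    def __init__(self, scent: str):
        self.scent = scent
        self.temperature_c = AMBIENT_C

    def heat(self, target_c: float) -> None:
        """Drive the heating platform toward a target temperature."""
        self.temperature_c = target_c

    def cool(self) -> None:
        """Engage the mechanical thermal actuator to pull heat away from the wax."""
        self.temperature_c = AMBIENT_C

    def releasing(self) -> bool:
        return self.temperature_c >= MELT_POINT_C

og = OdourGenerator("rosemary")
og.heat(60.0)                      # melt the wax: odour is released
print(og.scent, og.releasing())    # rosemary True
og.cool()                          # cool the wax: odour stops
print(og.scent, og.releasing())    # rosemary False
```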
    By using different paraffin waxes, the research team was able to create about 30 different scents in total, from herbal rosemary and fruity pineapple to sweet baked pancakes. Even less-than-pleasant scents, like stinky durian, can be produced. Eleven volunteers were able to recognise the scents generated by the OGs with an average success rate of 93 percent.

    The new system supports long-term use without frequent replacement or maintenance. Most importantly, the olfactory interface supports wireless, programmable operation and can interact with users in many different settings. It can rapidly release or suppress odours and control odour concentration accurately, and the odour sources are easily accessible and biocompatible.
    In their experiments, the team ran demonstrations spanning 4D movie watching, medical treatment, human emotion control and VR/AR-based online teaching, illustrating the broad potential of the new olfaction interfaces.
    For instance, the new wireless olfaction system can mediate interaction between the user and a virtual scene, releasing various fruit fragrances as the user “walks” through a virtual garden. The new technology also showed potential for helping amnesic patients recall lost memories, since odour perception is shaped by experience and can trigger the recall of emotional memories.
    “The new olfaction systems provide a new alternative option for users to realise the olfaction display in a virtual environment. The fast response rate in releasing odours, the high odour generator integration density, and two wearable designs ensure great potential for olfaction interfaces in various applications, ranging from entertainment and education to healthcare and human machine interfaces,” said Dr Yu.
    In the next step, he and his research team will focus on developing a next-generation olfaction system with a shorter response time, smaller size, and higher integration density for VR, AR and mixed reality (MR) applications.
    The findings were published in the scientific journal Nature Communications under the title “Soft, Miniaturized, Wireless Olfactory Interface for Virtual Reality”.
    The corresponding authors are Dr Yu and Dr Li Yuhang from the Institute of Solid Mechanics at Beihang University. The first co-authors are Dr Liu Yiming, a postdoc on Dr Yu’s research team; Mr Yiu Chunki and Mr Wooyoung Park, PhD students supervised by Dr Yu; and Dr Zhao Zhao, a postdoc on Dr Li’s research team.
    The research was supported mainly by the National Natural Science Foundation of China, CityU, and the Research Grants Council of the HKSAR.

  • Illuminating the molecular ballet in living cells

    Researchers at Kyoto University, Okinawa Institute of Science and Technology Graduate University (OIST), and Photron Limited in Japan have developed the world’s fastest camera capable of detecting fluorescence from single molecules. They describe the technology and examples of its power in two articles published in the same issue of the Journal of Cell Biology.
    “Our work with this camera will help scientists understand how cancer spreads and help develop new drugs for treating cancer,” says bio-imaging expert Takahiro Fujiwara, who led the research at the Institute for Integrated Cell-Material Sciences (iCeMS).
    Single fluorescent-molecule imaging (SFMI) uses a fluorescent molecule as a reporter tag that can bind to molecules of interest in a cell and reveal where they are and how they are moving and binding to each other. The team’s ultrafast camera achieves the highest time resolution ever attained by SFMI: it can track single-molecule movements at 1,000 times the normal video frame rate. Specifically, it can locate a molecule carrying a fluorescent tag every 33 microseconds with a positional precision of 34 nanometres, or every 100 microseconds with a precision of 20 nanometres.
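    For reference, the arithmetic behind the comparison with normal video (assuming a conventional frame rate of roughly 30 frames per second) works out as follows:

```python
# Worked arithmetic behind the "1,000 times faster than normal video" figure.
frame_interval_s = 33e-6            # one frame every 33 microseconds
ultrafast_fps = 1 / frame_interval_s
normal_video_fps = 30               # approximate conventional video frame rate
print(round(ultrafast_fps))                     # ~30,300 frames per second
print(round(ultrafast_fps / normal_video_fps))  # ~1,010, i.e. about 1,000x
```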
    “We can now observe how individual molecules dance within living cells, as if we are watching a ballet performance in a theatre,” says Fujiwara. He emphasises that previous SFMI techniques were like watching the ballet once every 30 seconds, so the audience had to guess the story from such sparse observations. It was extremely difficult and the guesses were often entirely wrong.
    Furthermore, the team’s ultrafast camera dramatically improves the time resolution of a previous super-resolution microscopy method, which was recognized with the Nobel Prize in Chemistry in 2014. In that method, the positions of individual molecules are recorded as small dots of approximately 20 nm, building up images like the pointillist paintings of the Neo-Impressionists led by Georges Seurat. The problem with this microscopic pointillism, however, has been that image formation is extremely slow, often taking more than 10 minutes to obtain a single image, so the specimens had to be chemically fixed, dead cells. With the new ultrafast camera, an image can be formed in about 10 seconds, roughly 60 times faster, allowing live cells to be observed.
    The team further demonstrated the power of their camera by examining the localization and movement of a receptor protein involved in cancers, and of a cellular structure called the focal adhesion. The focal adhesion is a complex of protein molecules that connects bundles of structural proteins inside cells to the material outside cells, the extracellular matrix. It plays a significant role in a cell’s mechanical interactions with its environment, allowing cancer cells to move and metastasize.
    “In one investigation we found that a cancer-promoting receptor that binds to signalling molecules is confined within a specific cellular compartment for a longer time when it is activated. In another, we revealed ultrafine structures and molecular movements within the focal adhesion that are involved in cancer cell activities,” says Akihiro Kusumi, the corresponding author, who is a professor at OIST and professor emeritus of Kyoto University. The results allowed the team to propose a refined model of focal adhesion structure and activity.
    Many research teams worldwide are interested in developing drugs that can interfere with the role of focal adhesions in cancer. The ultrafast camera, developed by the team in collaboration with Mr. Takeuchi of Photron Limited, a camera manufacturer in Japan, will assist these efforts by providing a deeper understanding of how these structures move and interact with other structures inside and outside of cells.

  • Robot ‘chef’ learns to recreate recipes from watching food videos

    Researchers have trained a robotic ‘chef’ to watch and learn from cooking videos, and recreate the dish itself.
    The researchers, from the University of Cambridge, programmed their robotic chef with a ‘cookbook’ of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it.
    In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their results, reported in the journal IEEE Access, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.
    Robotic chefs have been featured in science fiction for decades, but in reality, cooking is a challenging problem for a robot. Several commercial companies have built prototype robot chefs, although none of these are currently commercially available, and they lag well behind their human counterparts in terms of skill.
    Human cooks can learn new recipes through observation, whether that’s watching another person cook or watching a video on YouTube, but programming a robot to make a range of dishes is costly and time-consuming.
    “We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can — by identifying the ingredients and how they go together in the dish,” said Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author.

    Sochacki, a PhD candidate in Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory, and his colleagues devised eight simple salad recipes and filmed themselves making them. They then used a publicly available neural network to train their robot chef. The neural network had already been programmed to identify a range of different objects, including the fruits and vegetables used in the eight salad recipes (broccoli, carrot, apple, banana and orange).
    Using computer vision techniques, the robot analysed each frame of video and was able to identify the different objects and features, such as a knife and the ingredients, as well as the human demonstrator’s arms, hands and face. Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on the vectors to determine the similarity between a demonstration and each recipe.
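    The paper’s exact encoding is not given here, but a minimal sketch of this kind of vector comparison, assuming illustrative ingredient-count vectors and cosine similarity as the matching measure, looks like this:

```python
# Sketch: match a watched demonstration to a recipe by comparing
# ingredient-count vectors with cosine similarity. Vectors are illustrative,
# not the paper's actual encoding.
import math

INGREDIENTS = ["broccoli", "carrot", "apple", "banana", "orange"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

recipes = {
    "apple-carrot salad": [0, 2, 2, 0, 0],   # counts aligned with INGREDIENTS
    "banana-orange salad": [0, 0, 0, 2, 2],
}
demonstration = [0, 3, 3, 0, 0]              # three carrots and three apples seen

best_match = max(recipes, key=lambda name: cosine(recipes[name], demonstration))
print(best_match)  # "apple-carrot salad"
```

    Because cosine similarity ignores overall scale, a double portion of a known salad maps to the same recipe rather than a new one, which is consistent with the behaviour reported below.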
    By correctly identifying the ingredients and the actions of the human chef, the robot could determine which of the recipes was being prepared. The robot could infer that if the human demonstrator was holding a knife in one hand and a carrot in the other, the carrot would then get chopped up.
    Of the 16 videos it watched, the robot recognised the correct recipe 93% of the time, even though it detected only 83% of the human chef’s actions. The robot was also able to recognise that slight variations, such as making a double portion or normal human error, were variations on a known recipe and not a new recipe. It also correctly recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.
    “It’s amazing how much nuance the robot was able to detect,” said Sochacki. “These recipes aren’t complex — they’re essentially chopped fruits and vegetables, but it was really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”
    The videos used to train the robot chef are not like the food videos made by some social media influencers, which are full of fast cuts and visual effects, and quickly move back and forth between the person preparing the food and the dish they’re preparing. For example, the robot would struggle to identify a carrot if the human demonstrator had their hand wrapped around it — for the robot to identify the carrot, the human demonstrator had to hold up the carrot so that the robot could see the whole vegetable.
    “Our robot isn’t interested in the sorts of food videos that go viral on social media — they’re simply too hard to follow,” said Sochacki. “But as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.”
    The research was supported in part by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

  • The digital dark matter clouding AI

    Artificial intelligence has entered our daily lives. First, it was ChatGPT. Now, it’s AI-generated pizza and beer commercials. While we can’t trust AI to be perfect, it turns out that sometimes we can’t trust ourselves with AI either.
    Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo has found that scientists using popular computational tools to interpret AI predictions are picking up too much “noise,” or extra information, when analyzing DNA. And he’s found a way to fix this. Now, with just a couple new lines of code, scientists can get more reliable explanations out of powerful AIs known as deep neural networks. That means they can continue chasing down genuine DNA features. Those features might just signal the next breakthrough in health and medicine. But scientists won’t see the signals if they’re drowned out by too much noise.
    So, what causes the meddlesome noise? The source is mysterious and invisible, a kind of digital “dark matter.” Physicists and astronomers believe most of the universe is filled with dark matter, a material that exerts gravitational effects but that no one has yet seen. Similarly, Koo and his team discovered that the data AI is being trained on lacks critical information, leading to significant blind spots. Even worse, those blind spots get factored in when scientists interpret AI predictions of DNA function.
    Koo says: “The deep neural network is incorporating this random behavior because it learns a function everywhere. But DNA is only in a small subspace of that. And it introduces a lot of noise. And so we show that this problem actually does introduce a lot of noise across a wide variety of prominent AI models.”
    The digital dark matter is a result of scientists borrowing computational techniques from computer vision AI. Unlike images, DNA data is confined to combinations of four nucleotide letters: A, C, G, T. Image data, by contrast, consists of pixel values that are continuous and can take on a wide range of values. In other words, we’re feeding AI an input it doesn’t know how to handle properly.
    By applying Koo’s computational correction, scientists can interpret AI’s DNA analyses more accurately.
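    The article does not spell out the correction itself. The group’s published fix is often summarised as removing, at each sequence position, the gradient component that points off the space of valid one-hot DNA inputs, which amounts to subtracting the mean across the four nucleotide channels. A minimal sketch under that assumption:

```python
# Minimal sketch of the gradient correction, under the assumption described in
# the lead-in: attributions are input gradients of shape (sequence_length, 4)
# and the correction subtracts the per-position mean across the A, C, G, T
# channels, removing the off-subspace component.
import numpy as np

def correct_attributions(grad: np.ndarray) -> np.ndarray:
    """grad: (L, 4) gradient-based attribution map for a one-hot DNA sequence."""
    return grad - grad.mean(axis=-1, keepdims=True)

grad = np.random.randn(200, 4)                # stand-in for a model's input gradients
clean = correct_attributions(grad)
print(np.allclose(clean.mean(axis=-1), 0.0))  # True: channel-mean component removed
```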
    Koo says: “We end up seeing sites that become much more crisp and clean, and there is less spurious noise in other regions. One-off nucleotides that are deemed to be very important all of a sudden disappear.”
    Koo believes noise disturbance affects more than AI-powered DNA analyzers. He thinks it’s a widespread affliction among computational processes involving similar types of data. Remember, dark matter is everywhere. Thankfully, Koo’s new tool can help bring scientists out of the darkness and into the light.