More stories

  • Wireless olfactory feedback system to let users smell in the VR world

    A research team co-led by researchers from City University of Hong Kong (CityU) recently invented a novel, wireless, skin-interfaced olfactory feedback system that can release various odours with miniaturised odour generators (OGs). The new technology integrates odours into virtual reality (VR)/augmented reality (AR) to provide a more immersive experience, with broad applications ranging from 4D movie watching and medical treatment to online teaching.
    “Recent human machine interfaces highlight the importance of human sensation feedback, including vision, audio and haptics, associated with wide applications in entertainment, medical treatment and VR/AR. Olfaction also plays a significant role in human perceptual experiences,” said Dr Yu Xinge, Associate Professor in the Department of Biomedical Engineering at CityU, who co-led the study. “However, the current olfaction-generating technologies are associated mainly with big instruments to generate odours in a closed area or room, or an in-built bulky VR set.”
    In view of this, Dr Yu and his collaborators from Beihang University developed a new-generation, wearable, olfaction feedback system with wireless, programmable capabilities based on arrays of flexible and miniaturised odour generators.
    They created two designs that release odours on demand, both made of soft, miniaturised, lightweight substrates. The first is a small, skin-integrated, patch-like device comprising two OGs that can be mounted directly on the upper lip; with an extremely short distance between the OGs and the user’s nose, it provides an ultra-fast olfactory response. The second is a flexible facemask with nine OGs of different odour types, which can produce hundreds of odour combinations.
    The odour generators rely on a carefully designed heating platform combined with a mechanical thermal actuator. Heating melts odorous paraffin wax on the OGs, and the resulting phase change releases odours at adjustable concentrations. To stop an odour, the device cools the wax by controlling the motion of the thermal actuator.
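    As a purely illustrative sketch of that control scheme (the class and method names below are hypothetical stand-ins, not the team’s actual firmware interface), an on-demand odour pulse might look like this, with heater power standing in for odour concentration and the actuator providing the fast cut-off:

```python
import time

class OdourGenerator:
    """Hypothetical stand-in for one miniaturised odour generator (OG)."""
    def set_heater_power(self, fraction: float) -> None:
        print(f"heater at {fraction:.0%}")  # heating melts the paraffin wax when > 0

    def engage_cooling_actuator(self) -> None:
        print("thermal actuator engaged: wax cooling, odour stopped")

def pulse_odour(og: OdourGenerator, strength: float, duration_s: float) -> None:
    og.set_heater_power(strength)   # stronger heating -> higher odour concentration
    time.sleep(duration_s)          # hold the release for the requested time
    og.set_heater_power(0.0)
    og.engage_cooling_actuator()    # mechanically cool the wax for a rapid cut-off

pulse_odour(OdourGenerator(), strength=0.6, duration_s=2.0)
```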
    By using different paraffin waxes, the research team was able to make about 30 different scents in total, from herbal rosemary and fruity pineapple to sweet baked pancakes. Even less-than-pleasant scents, like stinky durian, can be created. In tests, 11 volunteers recognised the scents generated by the OGs with an average success rate of 93 percent.

    The new system supports long-term use without frequent replacement or maintenance. Most importantly, the olfactory interface supports wireless, programmable operation and can interact with users across applications: it responds rapidly to release or suppress odours and allows accurate control of odour concentration. The odour sources are also easily accessible and biocompatible.
    In the team’s experiments, demonstrations in 4D movie watching, medical treatment, human emotion control and VR/AR online teaching showed the broad potential of the new olfaction interfaces.
    For instance, the wireless olfaction system can mediate interaction between the user and a virtual scene, releasing various fruit fragrances as the user “walks” through a virtual garden. The technology also showed potential for helping amnesic patients recall lost memories, since odour perception is modulated by experience and can trigger the recall of emotional memories.
    “The new olfaction systems provide a new alternative option for users to realise the olfaction display in a virtual environment. The fast response rate in releasing odours, the high odour generator integration density, and two wearable designs ensure great potential for olfaction interfaces in various applications, ranging from entertainment and education to healthcare and human machine interfaces,” said Dr Yu.
    In the next step, he and his research team will focus on developing a next-generation olfaction system with a shorter response time, smaller size, and higher integration density for VR, AR and mixed reality (MR) applications.
    The findings were published in the scientific journal Nature Communications under the title “Soft, Miniaturized, Wireless Olfactory Interface for Virtual Reality”.
    The corresponding authors are Dr Yu and Dr Li Yuhang from the Institute of Solid Mechanics at Beihang University. The first co-authors are Dr Liu Yiming, a postdoc on Dr Yu’s research team, Mr Yiu Chunki and Mr Wooyoung Park, PhD students supervised by Dr Yu, and Dr Zhao Zhao, a postdoc on Dr Li’s research team.
    The research was supported mainly by the National Natural Science Foundation of China, CityU, and the Research Grants Council of the HKSAR.

  • Illuminating the molecular ballet in living cells

    Researchers at Kyoto University, Okinawa Institute of Science and Technology Graduate University (OIST), and Photron Limited in Japan have developed the world’s fastest camera capable of detecting fluorescence from single molecules. They describe the technology and examples of its power in two articles published in the same issue of the Journal of Cell Biology.
    “Our work with this camera will help scientists understand how cancer spreads and help develop new drugs for treating cancer,” says bio-imaging expert Takahiro Fujiwara, who led the research at the Institute for Integrated Cell-Material Sciences (iCeMS).
    Single fluorescent-molecule imaging (SFMI) uses a fluorescent molecule as a reporter tag that binds to molecules of interest in a cell, revealing where they are and how they move and bind to each other. The team’s ultra-fast camera achieves the highest time resolution ever attained by SFMI, detecting single-molecule movements 1,000 times faster than the normal video frame rate. Specifically, it can locate a molecule with an attached fluorescent tag every 33 microseconds with 34-nanometre precision, or every 100 microseconds with 20-nanometre precision.
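    A quick back-of-envelope check of those figures, using only the numbers quoted above:

```python
frame_interval_s = 33e-6              # one frame every 33 microseconds
fps = 1 / frame_interval_s            # ~30,300 frames per second
normal_video_fps = 30                 # conventional video frame rate
print(round(fps / normal_video_fps))  # ~1,000x the normal video rate, as reported
```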
    “We can now observe how individual molecules dance within living cells, as if we are watching a ballet performance in a theatre,” says Fujiwara. He emphasises that previous SFMI techniques were like watching the ballet once every 30 seconds, so the audience had to guess the story from such sparse observations. It was extremely difficult and the guesses were often entirely wrong.
    Furthermore, the team’s ultrafast camera tremendously improves the time resolution of an earlier super-resolution method, recognised with the 2014 Nobel Prize in Chemistry. In that method, the positions of individual molecules are recorded as small dots of approximately 20 nm, building up images like the pointillist paintings of the Neo-Impressionists led by Georges Seurat. The problem with this microscopic pointillism has been that image formation is extremely slow, often taking more than 10 minutes per image, so the specimens had to be chemically fixed, dead cells. With the new ultrafast camera, an image can be formed in 10 seconds, about 60 times faster, allowing observation of live cells.
    The team further demonstrated the power of their camera by examining the localization and movement of a receptor protein involved in cancers and of a cellular structure called the focal adhesion. The focal adhesion is a complex of protein molecules that connects bundles of structural proteins inside cells to the material outside cells, called the extracellular matrix. It plays a significant role in cells’ mechanical interactions with their environment, allowing cancer cells to move and metastasize.
    “In one investigation we found that a cancer-promoting receptor that binds to signalling molecules is confined within a specific cellular compartment for a longer time when it is activated. In another, we revealed ultrafine structures and molecular movements within the focal adhesion that are involved in cancer cell activities,” says Akihiro Kusumi, the corresponding author, who is a professor at OIST and professor emeritus of Kyoto University. The results allowed the team to propose a refined model of focal adhesion structure and activity.
    Many research teams worldwide are interested in developing drugs that can interfere with the role of focal adhesions in cancer. The ultrafast camera, developed by the team in collaboration with Mr. Takeuchi of Photron Limited, a camera manufacturer in Japan, will assist these efforts by providing a deeper understanding of how these structures move and interact with other structures inside and outside of cells.

  • Robot ‘chef’ learns to recreate recipes from watching food videos

    Researchers have trained a robotic ‘chef’ to watch and learn from cooking videos, and recreate the dish itself.
    The researchers, from the University of Cambridge, programmed their robotic chef with a ‘cookbook’ of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it.
    In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their results, reported in the journal IEEE Access, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.
    Robotic chefs have been featured in science fiction for decades, but in reality, cooking is a challenging problem for a robot. Several commercial companies have built prototype robot chefs, although none of these are currently commercially available, and they lag well behind their human counterparts in terms of skill.
    Human cooks can learn new recipes through observation, whether that’s watching another person cook or watching a video on YouTube, but programming a robot to make a range of dishes is costly and time-consuming.
    “We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can — by identifying the ingredients and how they go together in the dish,” said Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author.

    Sochacki, a PhD candidate in Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory, and his colleagues devised eight simple salad recipes and filmed themselves making them. They then used a publicly available neural network to train their robot chef. The neural network had already been trained to identify a range of different objects, including the fruits and vegetables used in the eight salad recipes (broccoli, carrot, apple, banana and orange).
    Using computer vision techniques, the robot analysed each frame of video, identifying objects and features such as a knife, the ingredients, and the human demonstrator’s arms, hands and face. Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on the vectors to determine the similarity between a demonstration and a recipe.
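    The paper’s exact vectorisation is not described here, but a minimal sketch of the idea, assuming a toy ingredient-count encoding and cosine similarity as the metric, might look like this:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy encoding: counts of [broccoli, carrot, apple, banana, orange] seen in a video.
cookbook = {
    "fruit salad":  np.array([0.0, 0.0, 2.0, 1.0, 1.0]),
    "carrot salad": np.array([1.0, 2.0, 1.0, 0.0, 0.0]),
}
demo = np.array([0.0, 0.0, 3.0, 1.0, 1.0])  # a demonstration with an extra apple

best = max(cookbook, key=lambda name: cosine_similarity(cookbook[name], demo))
print(best)  # "fruit salad"
```

    Because cosine similarity ignores overall scale, a double portion of the same ingredients naturally scores as the same recipe rather than as a new one.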
    By correctly identifying the ingredients and the actions of the human chef, the robot could determine which of the recipes was being prepared. The robot could infer that if the human demonstrator was holding a knife in one hand and a carrot in the other, the carrot would then get chopped up.
    Of the 16 videos it watched, the robot recognised the correct recipe 93% of the time, even though it detected only 83% of the human chef’s actions. It was also able to recognise slight variations in a recipe, such as a double portion or normal human error, as variations rather than as new recipes. And it correctly recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.
    “It’s amazing how much nuance the robot was able to detect,” said Sochacki. “These recipes aren’t complex — they’re essentially chopped fruits and vegetables, but it was really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”
    The videos used to train the robot chef are not like the food videos made by some social media influencers, which are full of fast cuts and visual effects, and quickly move back and forth between the person preparing the food and the dish they’re preparing. For example, the robot would struggle to identify a carrot if the human demonstrator had their hand wrapped around it — for the robot to identify the carrot, the human demonstrator had to hold up the carrot so that the robot could see the whole vegetable.
    “Our robot isn’t interested in the sorts of food videos that go viral on social media — they’re simply too hard to follow,” said Sochacki. “But as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.”
    The research was supported in part by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

  • The digital dark matter clouding AI

    Artificial intelligence has entered our daily lives. First, it was ChatGPT. Now, it’s AI-generated pizza and beer commercials. While we can’t trust AI to be perfect, it turns out that sometimes we can’t trust ourselves with AI either.
    Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo has found that scientists using popular computational tools to interpret AI predictions are picking up too much “noise,” or extra information, when analyzing DNA. And he’s found a way to fix this. Now, with just a couple new lines of code, scientists can get more reliable explanations out of powerful AIs known as deep neural networks. That means they can continue chasing down genuine DNA features. Those features might just signal the next breakthrough in health and medicine. But scientists won’t see the signals if they’re drowned out by too much noise.
    So, what causes the meddlesome noise? The source is mysterious and invisible, a kind of digital “dark matter.” Physicists and astronomers believe most of the universe is filled with dark matter, a material that exerts gravitational effects but that no one has yet seen. Similarly, Koo and his team discovered that the data AI is trained on lacks critical information, leading to significant blind spots. Even worse, those blind spots get factored in when AI predictions of DNA function are interpreted.
    Koo says: “The deep neural network is incorporating this random behavior because it learns a function everywhere. But DNA is only in a small subspace of that. And it introduces a lot of noise. And so we show that this problem actually does introduce a lot of noise across a wide variety of prominent AI models.”
    The digital dark matter is a result of scientists borrowing computational techniques from computer vision AI. Unlike images, whose pixel values are continuous, DNA data is confined to combinations of four discrete nucleotide letters: A, C, G and T. In other words, we’re feeding AI an input it doesn’t know how to handle properly.
    By applying Koo’s computational correction, scientists can interpret AI’s DNA analyses more accurately.
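    A minimal sketch of the kind of correction involved, following the approach published by Koo’s group (function and variable names here are ours): for a one-hot DNA input, a gradient-based attribution map of shape (L, 4) carries a spurious component that points off the subspace of valid sequences, and removing the mean across the four nucleotide channels at each position strips it out:

```python
import numpy as np

def correct_attribution(grad: np.ndarray) -> np.ndarray:
    """grad: gradient-based attribution map of shape (L, 4) for one-hot DNA.

    Subtracting the per-position mean over the A/C/G/T channels removes the
    gradient component pointing off the space of valid one-hot sequences;
    this is the "couple new lines of code" scale of fix the article describes.
    """
    return grad - grad.mean(axis=1, keepdims=True)
```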
    Koo says: “We end up seeing sites that become much more crisp and clean, and there is less spurious noise in other regions. One-off nucleotides that are deemed to be very important all of a sudden disappear.”
    Koo believes noise disturbance affects more than AI-powered DNA analyzers. He thinks it’s a widespread affliction among computational processes involving similar types of data. Remember, dark matter is everywhere. Thankfully, Koo’s new tool can help bring scientists out of the darkness and into the light.

  • Shining a light on neuromorphic computing

    AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system — both hardware and software combined — has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.
    Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.
    A new review article published in Nature Photonics, titled “Integrated Optical Memristors,” sheds light on the evolution of this technology — and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.
    “Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.”
    The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It surveys the current state of the art and highlights potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. Scalability, however, emerged as the most pressing issue for future research to address.
    “Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood.
    “One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”
    Using Light to Revolutionize Computing
    Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing.
    Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
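    For readers unfamiliar with the term, here is a minimal software sketch of a leaky integrate-and-fire neuron, the behaviour such spiking architectures implement (parameters are illustrative; an optical memristor version would realise the integration and threshold in photonic hardware rather than code):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input with leak, spike on threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x       # leaky integration of the incoming signal
        if v >= threshold:
            spikes.append(1)   # fire a spike...
            v = 0.0            # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.9, 0.3]))  # [0, 0, 1, 0, 0, 1]
```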
    Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence.
    “We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor: something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

  • Researchers demonstrate secure information transfer using spatial correlations in quantum entangled beams of light

    Researchers at the University of Oklahoma led a study recently published in Science Advances that proves the principle of using spatial correlations in quantum entangled beams of light to encode information and enable its secure transmission.
    Light can be used to encode information for high-data-rate transmission, long-distance communication and more. Secure communication, however, poses additional challenges: large amounts of information must be encoded in light while ensuring the privacy and integrity of the data being transferred.
    Alberto Marino, the Ted S. Webb Presidential Professor in the Homer L. Dodge College of Arts, led the research with OU doctoral student and the study’s first author Gaurav Nirala and co-authors Siva T. Pradyumna and Ashok Kumar. Marino also holds positions with OU’s Center for Quantum Research and Technology and with the Quantum Science Center, Oak Ridge National Laboratory.
    “The idea behind the project is to be able to use the spatial properties of the light to encode large amounts of information, just like how an image contains information, but in a way that is compatible with quantum networks for secure information transfer. When you consider an image, it can be constructed by combining basic spatial patterns known as modes, and depending on how you combine these modes, you can change the image or encoded information,” Marino said.
    “What we’re doing here that is new and different is that we’re not just using those modes to encode information; we’re using the correlations between them,” he added. “We’re using the additional information on how those modes are linked to encode the information.”
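    Schematically, in standard quantum-optics notation (our illustration, not the paper’s formalism): an image decomposes into spatial modes, and the message lives in the cross-correlations between the modes of the two entangled beams rather than in either beam’s mode coefficients alone.

```latex
% An image field decomposes into spatial modes u_n with coefficients c_n:
E(x,y) = \sum_{n} c_{n}\, u_{n}(x,y)
% The message is encoded in cross-correlations between mode operators
% \hat{a}_m of beam A and \hat{b}_n of beam B; measuring either beam alone
% yields no information about C:
C_{mn} = \langle \hat{a}_{m}^{\dagger}\, \hat{b}_{n} \rangle
```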
    The researchers used two entangled beams of light, meaning that the light waves are interconnected with correlations that are stronger than those that can be achieved with classical light and remain interconnected despite their distance apart.
    “The advantage of the approach we introduce is that you’re not able to recover the encoded information unless you perform joint measurements of the two entangled beams,” Marino said. “This has applications such as secure communication, given that if you were to measure each beam by itself, you would not be able to extract any information. You have to obtain the shared information between both of the beams and combine it in the right way to extract the encoded information.”
    Through a series of images and correlation measurements, the researchers demonstrated successful encoding of information in these quantum-entangled beams of light. Only when the two beams were combined in the intended way did the encoded information resolve into recognizable images.
    “The experimental result describes how one can transfer spatial patterns from one optical field to two new optical fields generated using a quantum mechanical process called four-wave mixing,” said Nirala. “The encoded spatial pattern can be retrieved solely by joint measurements of generated fields. One interesting aspect of this experiment is that it offers a novel method of encoding information in light by modifying the correlation between various spatial modes without impacting time-correlations.”
    “What this could enable, in principle, is the ability to securely encode and transmit a lot of information using the spatial properties of the light, just like how an image contains a lot more information than just turning the light on and off,” Marino said. “Using the spatial correlations is a new approach to encode information.”
    “Information encoding in the spatial correlations of entangled twin beams” was published in Science Advances on June 2, 2023.

  • Quantum computers are better at guessing, new study demonstrates

    Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and first author Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, achieved a quantum speedup advantage in the context of a “bitstring guessing game.” By effectively suppressing the errors typically seen at this scale, they managed strings up to 26 bits long, significantly longer than previously possible. (A bit is a binary digit, either zero or one.)
    Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.'” This noise-prone condition of current quantum computing is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speed advantage necessitates noise reduction.
    The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.
    In their study, the researchers replaced words with bitstrings. A classical computer would, on average, require approximately 33 million guesses to correctly identify a 26-bit string. In contrast, a perfectly functioning quantum computer, presenting guesses in quantum superposition, could identify the correct answer in just one guess. This efficiency comes from running a quantum algorithm developed more than 25 years ago by computer scientists Ethan Bernstein and Umesh Vazirani. However, noise can significantly hamper this exponential quantum advantage.
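    The Bernstein–Vazirani circuit itself is compact. Here is a minimal sketch in Qiskit (our choice of framework; the article does not specify the team’s tooling) that recovers a hidden string with a single oracle query on a noiseless machine:

```python
from qiskit import QuantumCircuit

def bernstein_vazirani(secret: str) -> QuantumCircuit:
    n = len(secret)
    qc = QuantumCircuit(n + 1, n)
    qc.x(n)                 # prepare the ancilla qubit in |1>
    qc.h(range(n + 1))      # superpose the register over all query strings
    for i, bit in enumerate(reversed(secret)):
        if bit == "1":      # oracle: CNOTs encode the secret via phase kickback
            qc.cx(i, n)
    qc.h(range(n))          # interference maps the register onto |secret>
    qc.measure(range(n), range(n))
    return qc

# One quantum query suffices; the guessing game costs ~2^(n-1) classical guesses.
qc = bernstein_vazirani("10110")
```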
    Lidar and Pokharel achieved their quantum speedup by adapting a noise suppression technique called dynamical decoupling. They spent a year experimenting, with Pokharel working as a doctoral candidate under Lidar at USC. Initially, applying dynamical decoupling seemed to degrade performance. However, after numerous refinements, the quantum algorithm functioned as intended. The time to solve problems then grew more slowly than with any classical computer, with the quantum advantage becoming increasingly evident as the problems became more complex.
    Lidar notes that “currently, classical computers can still solve the problem faster in absolute terms.” In other words, the reported advantage is measured in terms of the time-scaling it takes to find the solution, not the absolute time. This means that for sufficiently long bitstrings, the quantum solution will eventually be quicker.
    The study conclusively demonstrates that with proper error control, quantum computers can execute complete algorithms with better scaling of the time it takes to find the solution than conventional computers, even in the NISQ era.

  • Unveiling the nanoscale frontier: innovating with nanoporous model electrodes

    Researchers at Tohoku University and Tsinghua University have introduced a next-generation model membrane electrode that promises to revolutionize fundamental electrochemical research. This innovative electrode, fabricated through a meticulous process, showcases an ordered array of hollow giant carbon nanotubes (gCNTs) within a nanoporous membrane, unlocking new possibilities for energy storage and electrochemical studies.
    The key breakthrough lies in the construction of this novel electrode. The researchers developed a uniform carbon-coating technique on anodic aluminum oxide (AAO) formed on an aluminum substrate, with the barrier layer eliminated. The resulting conformally carbon-coated layer exhibits vertically aligned gCNTs with nanopores ranging from 10 to 200 nm in diameter and from 2 μm to 90 μm in length, spanning scales from small electrolyte molecules to large biological species such as enzymes and exosomes. Unlike traditional composite electrodes, this self-standing model electrode eliminates inter-particle contact, ensuring minimal contact resistance — something essential for interpreting the corresponding electrochemical behaviors.
    “The potential of this model electrode is immense,” stated Dr. Zheng-Ze Pan, one of the corresponding authors of the study. “By employing the model membrane electrode with its extensive range of nanopore dimensions, we can attain profound insights into the intricate electrochemical processes transpiring within porous carbon electrodes, along with their inherent correlations to the nanopore dimensions.”
    Moreover, the gCNTs are composed of low-crystalline stacked graphene sheets, offering unparalleled access to the electrical conductivity within low-crystalline carbon walls. Through experimental measurements and the utilization of an in-house temperature-programmed desorption system, the researchers constructed an atomic-scale structural model of the low-crystalline carbon walls, enabling detailed theoretical simulations. Dr. Alex Aziz, who carried out the simulation part for this research, points out, “Our advanced simulations provide a unique lens to estimate electron transitions within amorphous carbons, shedding light on the intricate mechanisms governing their electrical behavior.”
    This project was led by Prof. Dr. Hirotomo Nishihara, the Principal Investigator of the Device/System Group at the Advanced Institute for Materials Research (WPI-AIMR). The findings are detailed in one of materials science’s top journals, Advanced Functional Materials.
    Ultimately, the study represents a significant step forward in our understanding of amorphous porous carbon materials and their applications in probing various electrochemical systems.