More stories

  • Let me check my phone again

    New research conducted by students and a professor at the University of Cincinnati Blue Ash College finds that smartphone use can escalate and even become unhealthy for people who have obsessive-compulsive disorder (OCD), a psychiatric disorder marked by unwanted, distressing thoughts that can lead to repetitive and disruptive behaviors.
    UC Blue Ash undergraduate students Kaley Aukerman, Madi Kenna and Ryan Padgett recently co-authored the research, which was published online in Current Psychology. It evaluates how well OCD symptoms predict scores on a measure of Problematic Smartphone Use (PSU).
    The students worked on the project with Alex Holte, PhD, assistant professor of psychology at UC Blue Ash. They surveyed more than 400 people and asked them to complete multiple measures assessing various levels of obsessive-compulsive behavior, fear of missing out, inhibitory anxiety, boredom proneness and PSU.
    The research found that individuals with clinically significant levels of OCD are more prone to have PSU in comparison to those with non-clinical levels of OCD. The group also documented that fear of missing out and boredom influenced the relationship between OCD and PSU.
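    The mediation finding described above (fear of missing out carrying part of the association between OCD and PSU) can be illustrated with a simple difference-of-coefficients sketch. All data below are synthetic and the variable names are hypothetical; this is not the study's actual analysis.

```python
import numpy as np

# Synthetic illustration of a mediation analysis: does fear of missing
# out (fomo) carry part of the association between OCD symptom scores
# (ocd) and problematic smartphone use (psu)?
rng = np.random.default_rng(0)
n = 400
ocd = rng.normal(size=n)
fomo = 0.6 * ocd + rng.normal(size=n)          # mediator influenced by OCD
psu = 0.3 * ocd + 0.5 * fomo + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = slope(ocd, psu)                        # total effect of OCD on PSU
# direct effect: slope of psu on ocd, controlling for the mediator
X = np.column_stack([np.ones(n), ocd, fomo])
direct = np.linalg.lstsq(X, psu, rcond=None)[0][1]
indirect = total - direct                      # portion carried via fomo
print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
```

    The nonzero indirect effect is what "fear of missing out and boredom influenced the relationship" means operationally: the total OCD-to-PSU association shrinks once the mediator is controlled for.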
    “There is a theoretical model known as compensatory internet use theory, and it suggests that people will compensate for negative emotions by using technology,” says Holte. “Individuals who have OCD desire certainty. So, they might have a fear related to their OCD and use their phone to check and either confirm or rule out that fear.”
    The study also theorizes the chain of effects that can occur: OCD predicted boredom proneness, fear of missing out and inhibitory anxiety, factors that can lead someone with OCD to check and re-check their phone over and over.
    The research conducted by the students opens new doors in studying how people with OCD can be impacted by their smartphone use and how using a smartphone can become a behavioral addiction. Holte said he felt the findings were important enough to submit for publication and was pleasantly surprised when Current Psychology responded favorably, and quickly. The research was published online this past fall, just months after it was submitted.

    “It is really rare for undergrad students to get published, just because the publication process typically takes a long time,” said Holte. “I think my first publication took two or three years after I submitted.”
    For Aukerman, Kenna and Padgett, having the research published was a nice surprise, but being part of the research process and learning how to document their findings has been a valuable learning experience.
    “The really big jump is getting used to scientific literature and being able to write and format it, because it’s not your simple conversation,” Padgett said. “Professor Holte was really good in helping us take our research and describe it in detail, in the sort of way that is expected for scientific research.”
    Padgett is studying neuroscience with plans to eventually pursue a master’s degree in psychology or neuropsychology. He said he appreciates the mentoring that Professor Holte has provided and is excited to continue learning about the research process.
    Next steps for the students and professor will be to study how some people look at smartphones as a refuge that takes them away from their troubles, while others consider it a burden that requires their frequent attention.

  • In the driver’s seat: Study explores how we interact with remote drivers

    Newcastle University research is helping shed light on the important interaction between users and remote drivers that oversee the operation of automated vehicles.
    Automated vehicles (AVs), also known as driverless vehicles, hold the promise of transforming mobility, offering numerous benefits such as safer roads, increased accessibility, enhanced productivity, economic growth, and contributions to decarbonisation.
    While lower-level automation systems provide assistance to drivers, higher-level automation (SAE Level 4) allows vehicles to operate without on-board driver input. A crucial failsafe mechanism for Level 4 Automated Vehicles (L4 AV) involves remote driving through a teleoperation system controlled by a remote driver. However, understanding end-users’ needs and requirements in this context remains a significant research gap.
    Publishing their findings in the journal Transportation Research Part F: Traffic Psychology and Behaviour, an international research team led by Newcastle University studied the preferences of potential end-users for a 5G-enabled L4 AV with a remote teleoperation system as a failsafe mechanism.
    The researchers conducted qualitative semi-structured interviews with 29 potential end-users to explore the interaction between drivers, automation, and remote drivers in L4 AVs.
    The results show that end-users support the failsafe feature of remote driving, envisioning positive applications for night driving, long distances, motorways, and more. The idea of using an L4 AV as a ‘designated driver’ to reduce alcohol-impaired driving garnered interest, but concerns were raised about the reliability of the teleoperation system, remote driver performance, the 5G network connection, cybersecurity, and privacy.
    The findings reveal that end-users expressed a desire to understand how remote teleoperator drivers operate the vehicle remotely, highlighting the importance of clear communication.

    The study participants also indicated that they prefer drivers to be focused and not multitasking during teleoperation. In addition, they require remote drivers based in the same country as the L4 AV to prevent issues such as unfamiliar road layouts, different traffic rules, cultural driving style variations, liability concerns, and time differences from affecting performance.
    Study lead author, Dr Shuo Li, Research Associate at Newcastle University’s School of Engineering, said: “As we journey into the realm of connected and automated vehicles, our research provides comprehensive insights and highlights key aspects of the new driver-automation-remote driver interaction in 5G-enabled Level 4 Automated Vehicles. Offering end-users a transparent, qualified, and location-aware remote driving experience is not only an added feature but also crucial for safety and acceptance of automated mobility.”
    Study co-author, Professor Phil Blythe CBE, Professor of Intelligent Transport Systems and head of the Future Mobility Group at Newcastle University, added: “Newcastle University and its regional partners are at the leading edge of investigating what is needed to practically and safely introduce automated vehicles, and in particular the challenge of Connected and Automated Logistics, which will deliver significant benefits to the region and the sector in general. These research findings on the use of remote teleoperation to supervise driverless AVs are a critical cog in the automation machine and will, through our ongoing work, also inform on workload and thus potentially how many vehicles an individual teleoperator can safely handle. Overall, this is part of our wider objective to ensure that Newcastle University and the North East remain at the forefront of automation and future logistics.”
    The experts recommend future research that explores the potential role of the L4 AV as a ‘designated driver’ and its impact on road safety. This work was funded through the DCMS 5G CAL and the CCAV and Innovate UK V-CAL projects, run by a regional consortium developing the concept and technologies for Connected and Autonomous Logistics and demonstrating them on routes between VANTEC logistics and Nissan. The work is also supported by the CCAV and Innovate UK SAMS project, which aims to redefine urban mobility by deploying and testing autonomous zero-emission shuttles in a real-world setting.

  • Clinical predictive models created by AI are accurate but study-specific, researchers find

    Scientists from Yale and the University of Cologne have shown that statistical models created by artificial intelligence (AI) can predict very accurately whether people with schizophrenia will respond to a medication. However, the models are highly context-dependent and do not generalize.
    In a recent study, scientists have been investigating the accuracy of AI models that predict whether people with schizophrenia will respond to antipsychotic medication.
    Statistical models from the field of artificial intelligence (AI) have great potential to improve decision-making related to medical treatment. However, medical treatment data that can be used for training these models are not only rare, but also expensive to obtain. Therefore, the predictive accuracy of statistical models has so far only been demonstrated in a few data sets of limited size. In the current work, the scientists investigated the potential of AI models, testing the accuracy of predictions of treatment response to antipsychotic medication for schizophrenia across several independent clinical trials.
    The results of the new study, which involved researchers from the Faculty of Medicine of the University of Cologne and Yale, show that the models were able to predict patient outcomes with high accuracy within the trial in which they were developed. However, when used outside the original trial, they did not perform better than random predictions. Pooling data across trials did not improve predictions either. The study ‘Illusory generalizability of clinical prediction models’ was published in Science.
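    The within-trial versus cross-trial contrast at the heart of this finding can be sketched with synthetic data. Everything below is illustrative, not the study's actual pipeline: two "trials" are simulated with different feature-outcome relationships, and a model that cross-validates well inside one trial degrades on the other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Two synthetic "trials" whose feature-outcome relationships differ,
# standing in for context-dependence across clinical studies.
rng = np.random.default_rng(1)

def make_trial(w, n=300, d=10):
    X = rng.normal(size=(n, d))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

w_a = rng.normal(size=10)
w_b = rng.normal(size=10)          # independent "context" for trial B
Xa, ya = make_trial(w_a)
Xb, yb = make_trial(w_b)

model = LogisticRegression(max_iter=1000)
within = cross_val_score(model, Xa, ya, cv=5).mean()  # within-trial CV
model.fit(Xa, ya)
across = model.score(Xb, yb)                          # external trial
print(f"within-trial accuracy={within:.2f}, cross-trial accuracy={across:.2f}")
```

    The gap between the two numbers mirrors the paper's "illusory generalizability": cross-validation inside one data set cannot reveal that the learned relationship is study-specific.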
    The study was led by prominent scientists in the field of precision psychiatry, an area of psychiatry in which data-driven models are used to determine targeted therapies and suitable medications for individuals or patient groups.
    “Our goal is to use novel models from the field of AI to treat patients with mental health problems in a more targeted manner,” says Dr Joseph Kambeitz, Professor of Biological Psychiatry at the Faculty of Medicine of the University of Cologne and the University Hospital Cologne. “Although numerous initial studies prove the success of such AI models, a demonstration of the robustness of these models has not yet been made.”
    This robustness is of great importance for everyday clinical use.
    “We have strict quality requirements for clinical models and we also have to ensure that models in different contexts provide good predictions,” says Kambeitz. The models should provide equally good predictions, whether they are used in a hospital in the USA, Germany or Chile.
    The results of the study show that generalization of AI model predictions across different study centres cannot be ensured at the moment. This is an important signal for clinical practice and shows that further research is needed to actually improve psychiatric care. In ongoing studies, the researchers hope to overcome these obstacles. In cooperation with partners from the USA, England and Australia, they are working both to examine large patient groups and data sets in order to improve the accuracy of AI models, and to use other data modalities such as biological samples or new digital markers such as language, motion profiles and smartphone usage.

  • Bridging light and electrons

    When light goes through a material, it often behaves in unpredictable ways. This phenomenon is the subject of an entire field of study called “nonlinear optics,” which is now integral to technological and scientific advances from laser development and optical frequency metrology, to gravitational wave astronomy and quantum information science.
    In addition, recent years have seen nonlinear optics applied in optical signal processing, telecommunications, sensing, spectroscopy, and light detection and ranging. All these applications involve miniaturizing devices that manipulate light in nonlinear ways onto a small chip, enabling complex light interactions at the chip scale.
    Now, a team of scientists at EPFL and the Max Planck Institute has brought nonlinear optical phenomena into a transmission electron microscope (TEM), a type of microscope that uses electrons for imaging instead of light. The study was led by Professor Tobias J. Kippenberg at EPFL and Professor Claus Ropers, Director of the Max Planck Institute for Multidisciplinary Sciences. It is now published in Science.
    At the heart of the study are “Kerr solitons,” waves of light that hold their shape and energy as they move through a material, like a perfectly formed surf wave traveling across the ocean. This study used a particular type of Kerr solitons called “dissipative,” which are stable, localized pulses of light that last tens of femtoseconds (a quadrillionth of a second) and form spontaneously in the microresonator. Dissipative Kerr solitons can also interact with electrons, which made them crucial for this study.
    The researchers formed dissipative Kerr solitons inside a photonic microresonator, a tiny chip that traps and circulates light inside a reflective cavity, creating the perfect conditions for these waves. “We generated various nonlinear spatiotemporal light patterns in the microresonator driven by a continuous-wave laser,” explains EPFL researcher Yujia Yang, who led the study. “These light patterns interacted with a beam of electrons passing by the photonic chip, and left fingerprints in the electron spectrum.”
    Specifically, the approach demonstrated the coupling between free electrons and dissipative Kerr solitons, which allowed the researchers to probe soliton dynamics in the microresonator cavity and perform ultrafast modulation of electron beams.
    “Our ability to generate dissipative Kerr solitons [DKS] in a TEM extends the use of microresonator-based frequency combs to unexplored territories,” says Kippenberg. “The electron-DKS interaction could enable high repetition-rate ultrafast electron microscopy and particle accelerators empowered by a small photonic chip.”
    Ropers adds: “Our results show electron microscopy could be a powerful technique for probing nonlinear optical dynamics at the nanoscale. This technique is non-invasive and able to directly access the intracavity field, key to understanding nonlinear optical physics and developing nonlinear photonic devices.”
    The photonic chips were fabricated in the Center of MicroNanoTechnology (CMi) and the Institute of Physics cleanroom at EPFL. The experiments were conducted at the Göttingen Ultrafast Transmission Electron Microscopy (UTEM) Lab.

  • Artificial muscle device produces force 34 times its weight

    Soft robots, medical devices, and wearable devices have permeated our daily lives. KAIST researchers have developed a fluid switch using ionic polymer artificial muscles that operates at ultra-low power and produces a force 34 times greater than its weight. Fluid switches control fluid flow, causing the fluid to flow in a specific direction to invoke various movements.
    KAIST (President Kwang-Hyung Lee) announced on the 4th of January that a research team under Professor IlKwon Oh from the Department of Mechanical Engineering has developed a soft fluidic switch that operates at ultra-low voltage and can be used in narrow spaces.
    Artificial muscles imitate human muscles and provide flexible and natural movements compared to traditional motors, making them one of the basic elements used in soft robots, medical devices, and wearable devices. These artificial muscles create movements in response to external stimuli such as electricity, air pressure, and temperature changes, and in order to utilize artificial muscles, it is important to control these movements precisely.
    Switches based on existing motors were difficult to use within limited spaces due to their rigidity and large size. In order to address these issues, the research team developed an electro-ionic soft actuator that can control fluid flow while producing large amounts of force, even in a narrow pipe, and used it as a soft fluidic switch.
    The ionic polymer artificial muscle developed by the research team is composed of metal electrodes and ionic polymers, and it generates force and movement in response to electricity. A polysulfonated covalent organic framework (pS-COF), made by combining organic molecules on the surface of the artificial muscle electrode, was used to generate an impressive amount of force relative to its weight at an ultra-low voltage (~0.01 V).
    As a result, the artificial muscle, which was manufactured to be as thin as a hair with a thickness of 180 µm, produced a force more than 34 times greater than its light weight of 10 mg to initiate smooth movement. Through this, the research team was able to precisely control the direction of fluid flow with low power.
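    A quick back-of-envelope check makes the figures concrete, assuming "34 times its weight" means 34 × m × g for the 10 mg actuator (an interpretation, not stated explicitly in the article):

```python
# Sanity check of the reported actuator figures.
m = 10e-6            # actuator mass in kg (10 mg)
g = 9.81             # gravitational acceleration, m/s^2
weight = m * g       # gravitational force on the actuator, in newtons
force = 34 * weight  # "34 times its weight"
print(f"weight ≈ {weight*1e3:.3f} mN, output force ≈ {force*1e3:.2f} mN")
```

    So the hair-thin strip would be exerting on the order of a few millinewtons, enough to redirect flow in a narrow channel.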
    Professor IlKwon Oh, who led this research, said, “The electrochemical soft fluidic switch that operates at ultra-low power can open up many possibilities in the fields of soft robots, soft electronics, and microfluidics based on fluid control.” He added, “From smart fibers to biomedical devices, this technology has the potential to be immediately put to use in a variety of industrial settings as it can be easily applied to ultra-small electronic systems in our daily lives.”
    The results of this study, in which Dr. Manmatha Mahato, a research professor in the Department of Mechanical Engineering at KAIST, participated as the first author, were published in the international academic journal Science Advances on December 13, 2023. (Paper title: Polysulfonated Covalent Organic Framework as Active Electrode Host for Mobile Cation Guests in Electrochemical Soft Actuator)
    This research was conducted with support from the National Research Foundation of Korea’s Leader Scientist Support Project (Creative Research Group) and Future Convergence Pioneer Project.

  • Transparent brain implant can read deep neural activity from the surface

    Researchers at the University of California San Diego have developed a neural implant that provides information about activity deep inside the brain while sitting on its surface. The implant is made up of a thin, transparent and flexible polymer strip that is packed with a dense array of graphene electrodes. The technology, tested in transgenic mice, brings the researchers a step closer to building a minimally invasive brain-computer interface (BCI) that provides high-resolution data about deep neural activity by using recordings from the brain surface.
    The work was published on Jan. 11 in Nature Nanotechnology.
    “We are expanding the spatial reach of neural recordings with this technology,” said study senior author Duygu Kuzum, a professor in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering. “Even though our implant resides on the brain’s surface, its design goes beyond the limits of physical sensing in that it can infer neural activity from deeper layers.”
    This work overcomes the limitations of current neural implant technologies. Existing surface arrays, for example, are minimally invasive, but they lack the ability to capture information beyond the brain’s outer layers. In contrast, electrode arrays with thin needles that penetrate the brain are capable of probing deeper layers, but they often lead to inflammation and scarring, compromising signal quality over time.
    The new neural implant developed at UC San Diego offers the best of both worlds.
    The implant is a thin, transparent and flexible polymer strip that conforms to the brain’s surface. The strip is embedded with a high-density array of tiny, circular graphene electrodes, each measuring 20 micrometers in diameter. Each electrode is connected by a micrometers-thin graphene wire to a circuit board.
    In tests on transgenic mice, the implant enabled the researchers to capture high-resolution information about two types of neural activity, electrical activity and calcium activity, at the same time. When placed on the surface of the brain, the implant recorded electrical signals from neurons in the outer layers. At the same time, the researchers used a two-photon microscope to shine laser light through the implant to image calcium spikes from neurons located as deep as 250 micrometers below the surface. The researchers found a correlation between surface electrical signals and calcium spikes in deeper layers. This correlation enabled the researchers to use surface electrical signals to train neural networks to predict calcium activity, not only for large populations of neurons but also individual neurons, at various depths.

    “The neural network model is trained to learn the relationship between the surface electrical recordings and the calcium ion activity of the neurons at depth,” said Kuzum. “Once it learns that relationship, we can use the model to predict the depth activity from the surface.”
    An advantage of being able to predict calcium activity from electrical signals is that it overcomes the limitations of imaging experiments. When imaging calcium spikes, the subject’s head must be fixed under a microscope. Also, these experiments can only last for an hour or two at a time.
    “Since electrical recordings do not have these limitations, our technology makes it possible to conduct longer duration experiments in which the subject is free to move around and perform complex behavioral tasks,” said study co-first author Mehrdad Ramezani, an electrical and computer engineering Ph.D. student in Kuzum’s lab. “This can provide a more comprehensive understanding of neural activity in dynamic, real-world scenarios.”
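    The train-on-surface, predict-at-depth structure Kuzum describes can be sketched in a few lines. The paper uses neural networks; as a hedged stand-in, the sketch below fits a plain ridge regression from simulated surface potentials to simulated deep calcium traces, just to show the shape of the pipeline (all sizes and data are invented).

```python
import numpy as np

# Learn a mapping from surface electrical recordings to deep calcium
# activity on synthetic data (linear ground truth plus noise).
rng = np.random.default_rng(42)
n_samples, n_electrodes, n_neurons = 2000, 16, 4

surface = rng.normal(size=(n_samples, n_electrodes))   # surface potentials
W_true = rng.normal(size=(n_electrodes, n_neurons))
calcium = surface @ W_true + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Ridge regression: W = (S^T S + lam I)^{-1} S^T C
lam = 1.0
S, C = surface[:1500], calcium[:1500]                  # training split
W = np.linalg.solve(S.T @ S + lam * np.eye(n_electrodes), S.T @ C)

pred = surface[1500:] @ W                              # held-out prediction
r = np.corrcoef(pred.ravel(), calcium[1500:].ravel())[0, 1]
print(f"held-out correlation: {r:.3f}")
```

    In the real system the relationship is nonlinear and learned by neural networks, but the payoff is the same: once trained, depth activity can be inferred from surface recordings alone, without the head-fixed imaging rig.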
    Designing and fabricating the neural implant
    The technology owes its success to several innovative design features: transparency and high electrode density combined with machine learning methods.
    “This new generation of transparent graphene electrodes embedded at high density enables us to sample neural activity with higher spatial resolution,” said Kuzum. “As a result, the quality of signals improves significantly. What makes this technology even more remarkable is the integration of machine learning methods, which make it possible to predict deep neural activity from surface signals.”
    This study was a collaborative effort among multiple research groups at UC San Diego. The team, led by Kuzum, one of the world leaders in developing multimodal neural interfaces, includes nanoengineering professor Ertugrul Cubukcu, who specializes in advanced micro- and nanofabrication techniques for graphene materials; electrical and computer engineering professor Vikash Gilja, whose lab integrates domain-specific knowledge from the fields of basic neuroscience, signal processing, and machine learning to decode neural signals; and neurobiology and neurosciences professor Takaki Komiyama, whose lab focuses on investigating neural circuit mechanisms that underlie flexible behaviors.

    Transparency is one of the key features of this neural implant. Traditional implants use opaque metal materials for their electrodes and wires, which block the view of neurons beneath the electrodes during imaging experiments. In contrast, an implant made using graphene is transparent, which provides a completely clear field of view for a microscope during imaging experiments.
    “Seamless integration of recording electrical signals and optical imaging of the neural activity at the same time is only possible with this technology,” said Kuzum. “Being able to conduct both experiments at the same time gives us more relevant data because we can see how the imaging experiments are time-coupled to the electrical recordings.”
    To make the implant completely transparent, the researchers used super thin, long graphene wires instead of traditional metal wires to connect the electrodes to the circuit board. However, fabricating a single layer of graphene as a thin, long wire is challenging because any defect will render the wire nonfunctional, explained Ramezani. “There may be a gap in the graphene wire that prevents the electrical signal from flowing through, so you basically end up with a broken wire.”
    The researchers addressed this issue using a clever technique. Instead of fabricating the wires as a single layer of graphene, they fabricated them as a double layer doped with nitric acid in the middle. “By having two layers of graphene on top of one another, there’s a good chance that defects in one layer will be masked by the other layer, ensuring the creation of fully functional, thin and long graphene wires with improved conductivity,” said Ramezani.
    According to the researchers, this study demonstrates the most densely packed transparent electrode array on a surface-sitting neural implant to date. Achieving high density required fabricating extremely small graphene electrodes. This presented a considerable challenge, as shrinking graphene electrodes in size increases their impedance — this hinders the flow of electrical current needed for recording neural activity. To overcome this obstacle, the researchers used a microfabrication technique developed by Kuzum’s lab that involves depositing platinum nanoparticles onto the graphene electrodes. This approach significantly improved electron flow through the electrodes while keeping them tiny and transparent.
    Next steps
    The team will next focus on testing the technology in different animal models, with the ultimate goal of human translation in the future.
    Kuzum’s research group is also dedicated to using the technology to advance fundamental neuroscience research. In that spirit, they are sharing the technology with labs across the U.S. and Europe, contributing to diverse studies ranging from understanding how vascular activity is coupled to electrical activity in the brain to investigating how place cells in the brain are so efficient at creating spatial memory. To make this technology more widely available, Kuzum’s team has applied for a National Institutes of Health (NIH) grant to fund efforts in scaling up production and facilitating its adoption by researchers worldwide.
    “This technology can be used for so many different fundamental neuroscience investigations, and we are eager to do our part to accelerate progress in better understanding the human brain,” said Kuzum.
    Paper title: “High-density Transparent Graphene Arrays for Predicting Cellular Calcium Activity at Depth from Surface Potential Recordings.” Co-authors include Jeong-Hoon Kim*, Xin Liu, Chi Ren, Abdullah Alothman, Chawina De-Eknamkul and Madison N. Wilson, all at UC San Diego.
    *Study co-first author
    This research was supported by the Office of Naval Research (N000142012405, N000142312163 and N000141912545), the National Science Foundation (ECCS-2024776, ECCS-1752241 and ECCS-1734940), the National Institutes of Health (R21 EY029466, R21 EB026180, DP2 EB030992, R01 NS091010A, R01 EY025349, R01 DC014690, R21 NS109722 and P30 EY022589), the Pew Charitable Trusts, and the David and Lucile Packard Foundation. This work was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) at UC San Diego, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-1542148).

  • Revolutionizing real-time data processing with edge computing and reservoir technology

    Traditional cloud computing faces various challenges when processing large amounts of data in real time. “Edge” computing is a promising alternative and can benefit from devices known as physical reservoirs. Researchers have now developed a novel memristor device for this purpose. It responds to electrical and optical signals and overcomes real-time processing limitations. When tested, it achieved up to 90.2% accuracy in digit identification, demonstrating its potential for applications in artificial intelligence systems and beyond.
    Every day, a significant amount of data related to weather, traffic, and social media undergo real-time processing. In traditional cloud computing, this processing occurs on the cloud, raising concerns about issues such as leaks, communication delays, slow speeds, and higher power consumption. Against this backdrop, “edge computing” presents a promising alternative solution. Located near users, it aims to distribute computations, thereby reducing the load and speeding up data processing. Specifically, edge AI, which involves AI processing at the edge, is expected to find applications in, for example, self-driving cars and machine anomaly prediction in factories.
    However, for effective edge computing, efficient and computationally cost-effective technology is needed. One promising option is reservoir computing, a computational method designed for processing signals that are recorded over time. It can transform these signals into complex patterns using reservoirs that respond nonlinearly to them. In particular, physical reservoirs, which use the dynamics of physical systems, are both computationally cost-effective and efficient. However, their ability to process signals in real time is limited by the natural relaxation time of the physical system, which also requires adjustment for best learning performance.
    Recently, Professor Kentaro Kinoshita, a member of the Faculty of Advanced Engineering and the Department of Applied Physics at the Tokyo University of Science (TUS), and Mr. Yutaro Yamazaki from the Graduate School of Science and the same department at TUS developed an optical device with features that support physical reservoir computing and allow real-time signal processing across a broad range of timescales within a single device. Their findings were published in Advanced Science on 20 November 2023.
    Speaking of their motivation for the study, Prof. Kinoshita explains: “The devices developed in this research will enable a single device to process time-series signals with various timescales generated in our living environment in real time. In particular, we hope to realize an AI device to utilize in the edge domain.”
    In their study, the duo created a special device using Sn-doped In2O3 and Nb-doped SrTiO3 (denoted as ITO/Nb:STO), which responds to both electrical and optical signals. They tested the electrical features of the device to confirm that it functions as a memristor (a memory device that can change its electrical resistance). The team also explored the influence of ultraviolet light on ITO/Nb:STO by varying the voltage and observing changes in the current. The results suggested that this device can modify the relaxation time of the photo-induced current according to the voltage, making it a potential candidate for a physical reservoir.
    Furthermore, the team tested the effectiveness of ITO/Nb:STO as a physical reservoir by using it for classifying handwritten digit images in the MNIST (Modified National Institute of Standards and Technology) dataset. To their delight, the device achieved a classification accuracy of up to 90.2%. Additionally, to understand the role of the physical reservoir, the team ran experiments without it, which resulted in a relatively lower classification accuracy of 85.1%. These findings show that the ITO/Nb:STO junction device improves classification accuracy while keeping computational costs lower, proving its value as a physical reservoir.
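    The with-reservoir versus without-reservoir comparison can be sketched in software. In the sketch below, a fixed random tanh map stands in for the physical ITO/Nb:STO reservoir, and only a linear ridge-regression readout is trained, which is the defining trait of reservoir computing. The task, sizes, and seed are invented stand-ins for the MNIST experiment.

```python
import numpy as np

# Reservoir computing in miniature: a fixed nonlinear expansion plus a
# trained linear readout, on a toy task a linear model cannot solve.
rng = np.random.default_rng(7)
n, d, res_dim = 1000, 2, 100

X = rng.normal(size=(n, d))
y = (X[:, 0] * X[:, 1] > 0).astype(int)        # linearly inseparable rule

W_res = rng.normal(size=(d, res_dim))           # fixed, never trained
b_res = rng.normal(size=res_dim)
states = np.tanh(X @ W_res + b_res)             # nonlinear reservoir states

def readout_acc(feats, lam=1e-2):
    """Ridge-regression readout: train on 800 samples, test on 200."""
    F = np.hstack([feats, np.ones((len(feats), 1))])    # bias column
    t = 2.0 * y - 1.0                                   # targets in {-1, +1}
    w = np.linalg.solve(F[:800].T @ F[:800] + lam * np.eye(F.shape[1]),
                        F[:800].T @ t[:800])
    return float(((F[800:] @ w > 0).astype(int) == y[800:]).mean())

acc_res = readout_acc(states)   # reservoir + linear readout
acc_raw = readout_acc(X)        # linear readout on raw input only
print(f"with reservoir: {acc_res:.2f}, raw input: {acc_raw:.2f}")
```

    The accuracy gap plays the same role as the 90.2% versus 85.1% comparison in the study: the nonlinear reservoir does the expensive transformation, so the trained part stays cheap.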
    “In the past, our research group has focused on research and development of materials applicable to physical reservoir computing. Accordingly, we fabricated these devices with the aim to realize a physical reservoir in which the relaxation time of photo-induced current can be arbitrarily controlled by voltage,” says Prof. Kinoshita.
    In summary, this study presents a novel memristor device that can adjust its response timescale through voltage variation and exhibits enhanced learning capabilities, making it a promising candidate for AI devices in edge computing. This, in turn, could pave the way for single devices that can effectively handle signals of varied durations found in real-world environments.

    Generating stable qubits at room temperature

    In a study published in Science Advances, a group of researchers led by Associate Professor Nobuhiro Yanai from Kyushu University’s Faculty of Engineering, in collaboration with Associate Professor Kiyoshi Miyata from Kyushu University and Professor Yasuhiro Kobori of Kobe University, reports that they have achieved quantum coherence at room temperature: the ability of a quantum system to maintain a well-defined state over time without being affected by surrounding disturbances.
    This breakthrough was made possible by embedding a chromophore, a dye molecule that absorbs light and emits color, in a metal-organic framework, or MOF, a nanoporous crystalline material composed of metal ions and organic ligands.
    Their findings mark a crucial advance for quantum computing and sensing technologies. While quantum computing is positioned as the next major leap in computing technology, quantum sensing is a technology that exploits the quantum mechanical properties of qubits (quantum analogs of bits in classical computing that can exist in a superposition of 0 and 1).
    Various systems can be employed to implement qubits, with one approach being the utilization of intrinsic spin — a quantum property related to a particle’s magnetic moment — of an electron. Electrons have two spin states: spin up and spin down. Qubits based on spin can exist in a combination of these states and can be “entangled,” allowing the state of one qubit to be inferred from another.
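The superposition and entanglement described above can be written out concretely for two spins. Below is a minimal numpy sketch of a Bell state, the textbook two-qubit entangled state (illustrative only; the paper concerns a more complex quintet multiexciton state):

```python
import numpy as np

# Single-spin basis states: |0> = spin up, |1> = spin down.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# A Bell state: an equal superposition of |up,up> and |down,down>.
# Measuring one spin immediately determines the other's state.
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Probabilities of the four joint outcomes |00>, |01>, |10>, |11>:
probs = np.abs(bell) ** 2
print(probs)  # → [0.5 0.  0.  0.5]: only the correlated outcomes occur
```

The mixed outcomes (one spin up, one down) have zero probability, which is exactly the correlation that lets the state of one qubit be inferred from the other.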
    Because quantum entangled states are extremely sensitive to environmental noise, quantum sensing technology is expected to enable sensing with higher resolution and sensitivity than traditional techniques. So far, however, it has been challenging to entangle four electrons and make them respond to external molecules, that is, to achieve quantum sensing using a nanoporous MOF.
    Notably, chromophores can be used to excite electrons into desirable spin states at room temperature through a process called singlet fission. However, molecular motion at room temperature causes the quantum information stored in qubits to lose superposition and entanglement. As a result, quantum coherence has usually only been achievable at liquid-nitrogen temperatures.
    To suppress the molecular motion and achieve room-temperature quantum coherence, the researchers introduced a chromophore based on pentacene (polycyclic aromatic hydrocarbon consisting of five linearly fused benzene rings) in a UiO-type MOF. “The MOF in this work is a unique system that can densely accumulate chromophores. Additionally, the nanopores inside the crystal enable the chromophore to rotate, but at a very restrained angle,” says Yanai.
    The MOF structure facilitated enough motion in the pentacene units to allow the electrons to transition from the triplet state to a quintet state, while also sufficiently suppressing motion at room temperature to maintain quantum coherence of the quintet multiexciton state. Upon photoexciting electrons with microwave pulses, the researchers could observe the quantum coherence of the state for over 100 nanoseconds at room temperature. “This is the first room-temperature quantum coherence of entangled quintets,” remarks an excited Kobori.
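To get a feel for what a 100-nanosecond coherence window means, coherence loss is often described by an exponential decay exp(-t/T2). This is a generic toy model, and T2 below is an assumed round number, not a value fitted from the paper's data:

```python
import numpy as np

# Toy decoherence model: coherence ~ exp(-t / T2).
# T2 is an assumed illustrative value matching the ~100 ns
# room-temperature coherence window reported in the article.
T2 = 100e-9
t = np.linspace(0, 500e-9, 6)       # 0 to 500 ns in 100 ns steps
coherence = np.exp(-t / T2)

# After one T2 (100 ns) the coherent signal has dropped to 1/e
# (about 37%) of its initial value, and it keeps shrinking after that.
print(coherence[1])  # value at t = 100 ns, roughly 1/e
```

Extending this window, for example through the suppressed molecular motions Yanai describes, is what would make multi-qubit operations practical at room temperature.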
    While the coherence was observed only for nanoseconds, the findings will pave the way for designing materials for the generation of multiple qubits at room temperature. “It will be possible to generate quintet multiexciton state qubits more efficiently in the future by searching for guest molecules that can induce more such suppressed motions and by developing suitable MOF structures,” speculates Yanai. “This can open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds.”