More stories

  • Breakthrough in organic semiconductor synthesis paves the way for advanced electronic devices

    A team of researchers led by Professor Young S. Park at UNIST’s Department of Chemistry has achieved a significant breakthrough in the field of organic semiconductors. Their successful synthesis and characterization of a novel molecule called “BNBN anthracene” has opened up new possibilities for the development of advanced electronic devices.
    Organic semiconductors play a crucial role in controlling how electrons move and emit light in carbon-based organic electronic devices. The team’s research focused on enhancing the chemical diversity of these semiconductors by replacing carbon-carbon (C−C) bonds with isoelectronic boron-nitrogen (B−N) bonds. This substitution allows for precise modulation of the electronic properties without significant structural changes.
    The researchers successfully synthesized the BNBN anthracene derivative, which contains a continuous BNBN unit formed by converting the BOBN unit at the zigzag edge. Compared with conventional anthracene derivatives composed solely of carbon, the BNBN anthracene exhibited significant variations in C−C bond length and a larger gap between its highest occupied and lowest unoccupied molecular orbitals (HOMO-LUMO gap).
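    A wider gap generally corresponds to higher-energy, bluer emission. As a rough back-of-envelope illustration, the gap energy maps to an emission wavelength via E (eV) ≈ 1240 / wavelength (nm); the 2.8 eV value below is an assumed example, not a figure reported in the study:

        # Rough link between an emitter's energy gap and its emission colour.
        # The 2.8 eV gap is an illustrative assumption, not a measured value.
        gap_eV = 2.8
        wavelength_nm = 1239.84 / gap_eV  # E (eV) = 1239.84 / wavelength (nm)
        print(f"{wavelength_nm:.0f} nm")  # ~443 nm, i.e. deep blue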
    In addition to its unique properties, the BNBN anthracene derivative demonstrated promising potential for application in organic electronics. When used as the blue host in an organic light-emitting diode (OLED), the BNBN anthracene exhibited a remarkably low driving voltage of 3.1 V, along with higher current efficiency, power efficiency, and light-emission efficiency.
    The research team further confirmed the properties of the BNBN anthracene derivative by studying its crystal structure using an X-ray diffractometer. This analysis revealed structural changes, such as bond lengths and angles, resulting from the boron-nitrogen (BN) bonding.
    “Our study on anthracene, a type of acene widely recognized as an organic semiconductor, has laid the groundwork for future advancements in the field,” commented Songhua Jeong (Combined MS/Ph.D. Program of Chemistry, UNIST), the first author of this study. “The continuous BN bonding synthesized through this research holds great potential for applications in organic semiconductors.”
    Professor Park emphasized the significance of this breakthrough, stating, “The synthesis and characterization of compounds with continuous boron-nitrogen (BN) bonds contribute to fundamental research in chemistry. It provides a valuable tool for synthesizing new compounds and controlling their electronic properties.”
    The research findings, which also involve contributions from Professor Joonghan Kim’s team at the Catholic University of Korea, Professor Wonyoung Choe’s team in the Department of Chemistry at UNIST, and a research team from SFC Co., Ltd., were published online on December 11 in the journal Angewandte Chemie International Edition. The study was supported by the mid-sized research enterprise SFC and by the National Research Foundation (NRF) under the Ministry of Science and ICT, as well as projects of the Ministry of Trade, Industry, and Energy.

  • Piezo composites with carbon fibers for motion sensors

    An international research group has engineered a novel high-strength flexible device by combining piezoelectric composites with unidirectional carbon fiber (UDCF), an anisotropic material that provides strength only in the direction of the fibers. The new device transforms kinetic energy from human motion into electricity, providing an efficient and reliable basis for high-strength, self-powered sensors.
    Details of the group’s research were published in the journal Small on December 14, 2023.
    Motion detection involves converting energy from human motion into measurable electrical signals, and it is crucial for ensuring a sustainable future.
    “Everyday items, from protective gear to sports equipment, are connected to the internet as part of the Internet of Things (IoT), and many of them are equipped with sensors that collect data,” says Fumio Narita, co-author of the study and professor at Tohoku University’s Graduate School of Environmental Studies. “And effective integration of these IoT devices into personal gear requires innovative solutions in power management and material design to ensure durability and flexibility.”
    Mechanical energy can be utilized thanks to piezoelectric materials’ ability to generate electricity when physically stressed. Meanwhile, carbon fiber lends itself to applications in the aerospace and automotive industries, sports equipment, and medical equipment because of its durability and lightness.
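    As a rough sketch of that sensing principle, stress applied across a poled piezoelectric layer produces an open-circuit voltage of roughly V = g33 × stress × thickness; the material values below are illustrative assumptions, not measured properties of the team’s composite:

        # Back-of-envelope piezoelectric sensing estimate (illustrative values
        # only, not measured properties of the UDCF/KNN-EP device).
        EPS0 = 8.854e-12                 # vacuum permittivity, F/m
        d33 = 100e-12                    # assumed piezoelectric charge constant, C/N
        eps_r = 500                      # assumed relative permittivity
        g33 = d33 / (EPS0 * eps_r)       # piezoelectric voltage constant, V*m/N

        stress = 1e6                     # 1 MPa along the poling direction
        thickness = 100e-6               # 100 micrometre active layer
        voltage = g33 * stress * thickness
        print(f"~{voltage:.1f} V open-circuit")   # roughly 2 V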
    “We wondered if personal protective equipment, made flexible using a combination of carbon fiber and a piezoelectric composite, could offer comfort, more durability, and sensing capabilities,” says Narita.
    The group fabricated the device using a combination of unidirectional carbon fiber fabric (UDCF) and potassium sodium niobate (KNN) nanoparticles mixed with epoxy (EP) resin. The UDCF served as both an electrode and a directional reinforcement.
    The so-called UDCF/KNN-EP device lived up to its expectations. Tests revealed that it could maintain high performance even after being stretched more than 1,000 times, and that it withstands a much higher load when pulled along the fiber direction than other flexible materials. Additionally, when subjected to impacts and stretching perpendicular to the fiber direction, it surpasses other piezoelectric polymers in terms of energy output density. Notably, the mechanical and piezoelectric responses of UDCF/KNN-EP were analyzed using multiscale simulations in collaboration with Professor Uetsuji’s group at the Osaka Institute of Technology.
    The UDCF/KNN-EP will help propel the development of flexible self-powered IoT sensors, leading to advanced multifunctional IoT devices.
    Narita and his colleagues are also excited about the technological potential of their breakthrough. “CF/KNN-EP was integrated into sports equipment and accurately detected the impact from catching a baseball and a person’s step frequency. In our work, the high strength of CFs was leveraged to improve the sustainability and reliability of battery-free sensors while maintaining their directional stretchability, providing valuable insights and guidance for future research in the field of motion detection.”

  • Inside the matrix: Nanoscale patterns revealed within model research organism

    Species throughout the animal kingdom feature vital interfaces between the outermost layers of their bodies and the environment. Intricate microscopic structures — featured on the outer skin layers of humans, as one example — are known to assemble in matrix patterns.
    But how these complex structures, known as apical extracellular matrices (aECMs), are assembled into elaborately woven architectures has remained an elusive question.
    Now, following years of research and the power of a technologically advanced instrument, University of California San Diego scientists have unraveled the underpinnings of such matrices in a tiny nematode. The roundworm Caenorhabditis elegans has been studied extensively for decades due to its transparent structure that allows researchers to peer inside its body and examine its skin.
    In work described in the journal Nature Communications, School of Biological Sciences researchers have now deciphered the assembly of aECM patterns in roundworms at the nanoscale. A powerful super-resolution microscope helped reveal previously unseen patterns related to columns, known as struts, that are key to the proper development and functioning of aECMs.
    “Struts are like tiny pillars that connect the different layers of the matrix and serve as a type of scaffolding,” said Andrew Chisholm, a professor in the School of Biological Sciences and the paper’s senior author.
    Although roundworms serve as a model organism for laboratory studies due to their simple, transparent bodies, below the surface they feature intricate architectures. They also have nearly 20,000 genes, not unlike the number of human genes, and therefore provide lessons on structure and function of more advanced organisms.
    Focusing on the roundworm exoskeleton, known as the cuticle, the researchers found that defects in struts result in unnatural layer swelling, or “blistering.” Within the cuticle layer, the study focused on collagens, which are the most abundant family of proteins in our bodies and help hold bodily structures together.
    “The struts hold the critical layers together,” said Chisholm. “Without them, the layers separate and cause disorders such as blistering. In blistering mutants you don’t see any struts.”
    Conventional laboratory instruments had previously imaged struts without detail, often resulting in undefined blobs. But through the laboratory of Biological Sciences Assistant Professor Andreas Ernst, the team accessed advanced instrumentation, known as 3D structured illumination super-resolution microscopy (3D-SIM), which brought the struts into stunning focus and allowed their functions to be more easily defined. The researchers were then able to resolve the nanoscale organization of struts and previously undocumented levels of patterning in the cuticle layer.
    “We could see exactly where these proteins were going in the matrix,” said Chisholm. “This is potentially a paradigm for how the matrix assembles into very complex structures and very intricate patterning.”

  • Wireless tracking system could help improve the XR experience

    A new technology developed by engineers at the University of California San Diego has the potential to make the extended reality (XR) experience smoother and more seamless. The technology consists of an asset localization system that uses wireless signals to track physical objects with centimeter-level accuracy in real time, and then generates a virtual representation of these objects. Applications of this technology range from enhancing virtual gaming experiences to improving workplace safety.
    The team, led by Dinesh Bharadia, a professor in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering, presented the technology at the ACM Conference on Embedded Networked Sensor Systems (SenSys 2023) held in Istanbul, Turkey.
    Existing localization methods encounter significant limitations. For example, many XR applications use cameras to localize objects, whether it be through virtual reality (VR) devices, augmented reality (AR) glasses or smartphone cameras, said study co-first author Aditya Arun, who is an electrical and computer engineering Ph.D. student in Bharadia’s lab.
    “However, these camera-based methods are unreliable in highly dynamic scenarios with visual obstructions, rapidly changing environments or poor lighting conditions,” said Arun. Meanwhile, wireless technologies such as WiFi and Bluetooth Low Energy (BLE) often fall short of providing the required accuracy, and ultra-wideband (UWB) technology involves complex setup and configuration.
    The new asset localization system developed by Bharadia’s team at UC San Diego, in collaboration with Shunsuke Saruwatari at Osaka University, Japan, overcomes these limitations by providing accurate, real-time localization of objects with centimeter-level accuracy, even in dynamic and poorly lit environments. The system is also packaged in an easily deployable and compact module, measuring one meter in size, that could be incorporated into electronic devices like televisions or sound bars with minimal setup.
    The researchers built their system by harnessing the power of wireless signals in the sub-6 GHz regime. “Unlike camera-based methods, these wireless signals are less affected by visual blockages and continue to operate even in non-line-of-sight conditions,” said Arun.
    The system uses wireless signals to pinpoint battery-operated UWB tags that are attached to objects. It consists of two main components. One is a UWB tag that transmits a beacon signal for localization. The other component is a localization module equipped with six UWB receivers that are time and phase-synchronized to receive the beacon signal. As this signal travels, it reaches each receiver at a slightly different phase and time. The system combines these differences in a clever way to accurately measure the tag’s location in 2D space.
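    A minimal sketch of that idea, time-difference-of-arrival localization solved by least squares, is shown below; the receiver layout, tag position and solver are illustrative assumptions rather than the team’s actual algorithm:

        # Time-difference-of-arrival (TDoA) localization sketch in 2D.
        # Receiver positions, tag position and solver are illustrative assumptions.
        import numpy as np
        from scipy.optimize import least_squares

        C = 299_792_458.0  # speed of light, m/s

        # Six hypothetical receivers along a ~1 m module (coordinates in metres).
        receivers = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0],
                              [0.6, 0.0], [0.8, 0.0], [1.0, 0.0]])

        def arrival_times(tag_xy):
            """Ideal time for the beacon to reach each receiver."""
            return np.linalg.norm(receivers - tag_xy, axis=1) / C

        def residuals(tag_xy, measured):
            """Mismatch in arrival-time differences relative to the first
            receiver; differencing removes the tag's unknown transmit time."""
            predicted = arrival_times(tag_xy)
            return (measured - measured[0]) - (predicted - predicted[0])

        true_tag = np.array([0.45, 1.20])    # hypothetical tag location
        measured = arrival_times(true_tag)   # stand-in for real measurements

        estimate = least_squares(residuals, x0=[0.5, 1.0], args=(measured,)).x
        print(estimate)                      # ~[0.45, 1.20]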
    In tests, the researchers used their system to play a life-size chess game using everyday objects. They retrofitted mugs with off-the-shelf UWB tags, transforming them into virtual chess pieces. As the pieces were moved around on a table, the system was able to smoothly track their movements in real time with centimeter-level accuracy.
    “We found that our system achieves 90th percentile accuracy in dynamic scenarios and performs at least eight times better than state-of-the-art localization systems,” said Arun.
    The team is currently refining the system. Next steps include improving the PCB design to make the system more robust, reducing the number of receivers to improve energy efficiency, and adding antennas along the vertical axis to support full 3D localization.

  • Blue PHOLEDs: Final color of efficient OLEDs finally viable in lighting

    Lights could soon use the full color suite of perfectly efficient organic light-emitting diodes, or OLEDs, that last tens of thousands of hours, thanks to an innovation from physicists and engineers at the University of Michigan.
    The U-M team’s new phosphorescent OLEDs, commonly referred to as PHOLEDs, can maintain 90% of the blue light intensity for 10-14 times longer than other designs that emit similar deep blue colors. That kind of lifespan could finally make blue PHOLEDs hardy enough to be commercially viable in lights that meet the Department of Energy’s 50,000-hour lifetime target. Without a stable blue PHOLED, OLED lights need to use less-efficient technology to create white light.
    The lifetime of the new blue PHOLEDs is currently only long enough for use in lighting, but the same design principle could be combined with other light-emitting materials to create blue PHOLEDs hardy enough for TVs, phone screens and computer monitors. Display screens with blue PHOLEDs could potentially increase a device’s battery life by 30%.
    “Achieving long-lived blue PHOLEDs has been a focus of the display and lighting industries for over 20 years. It is probably the most important and urgent challenge facing the field of organic electronics,” said Stephen Forrest, the Peter A. Franken Distinguished University Professor of Electrical and Computer Engineering at the University of Michigan. He is also the corresponding author of the study published today in Nature.
    PHOLEDs have nearly 100% internal quantum efficiency, meaning essentially every electron injected into the device produces a photon. As a result, lights and display screens equipped with PHOLEDs can run brighter colors for longer periods of time with less power and lower carbon emissions.
    Before the U-M team’s research, the best blue PHOLEDs weren’t durable enough to be used in either lighting or displays. Only red and green PHOLEDs are stable enough to use in devices today, but blue is needed to complete the trio of colors in OLED “RGB” displays and white OLED lights. Red, green and blue light can be combined at different relative brightness to produce any color desired in display pixels and light panels.
    So far, the workaround in OLED displays has been to use older, fluorescent OLEDs to produce the blue colors, but the internal quantum efficiency of that technology is much lower. Only a quarter of the electric current entering the fluorescent blue device produces light.

    “A lot of the display industry’s solutions are upgrades to fluorescent OLEDs, which is still an alternative solution,” said study first author Haonan Zhao, a doctoral student in physics and electrical and computer engineering. “I think a lot of companies would prefer to use blue PHOLEDs, if they had the choice.”
    To make blue light, electricity excites heavy metal-containing phosphorescent organic molecules. Sometimes, the excited molecules come into contact before emitting the light, transferring all of the pair’s stored energy into one molecule. Because the energy of blue light is so high, the transferred energy, which is double that of the single excited molecule, can break chemical bonds and degrade the organic material.
    One way around this problem is to use materials that emit a broader spectrum of colors, which lowers the total amount of energy in the excited states. But such materials appear cyan or even green, rather than a deep blue.
    The U-M team got around this issue by sandwiching cyan material between two mirrors. By perfectly tuning the space between the mirrors, only the deepest blue light waves can persist and eventually emit from the mirror chamber.
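    The principle at work is a simple optical resonance: only wavelengths that satisfy lambda = 2nL/m fit between the mirrors. A minimal sketch with assumed values follows; the refractive index, target wavelength and cavity order are illustrative, not figures from the paper:

        # Optical microcavity resonance sketch: the mirror spacing L that favours
        # a chosen wavelength follows from lambda = 2 * n * L / m.
        # Refractive index, wavelength and cavity order are illustrative assumptions.
        n = 1.8           # assumed effective refractive index of the organic stack
        target_nm = 460   # deep-blue wavelength the cavity should favour
        m = 1             # assumed cavity order
        L_nm = m * target_nm / (2 * n)
        print(f"Mirror spacing of ~{L_nm:.0f} nm favours {target_nm} nm emission")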
    Further tuning the optical properties of the organic, light-emitting layer to an adjacent metal electrode introduced a new quantum mechanical state called a plasmon-exciton-polariton, or PEP. This new state allows the organic material to emit light very fast, thus further decreasing the opportunity for excited states to collide and destroy the light-emitting material.
    “In our device, the PEP is introduced because the excited states in the electron transporting material are synchronized with the light waves and the electron vibrations in the metal cathode,” said study co-author Claire Arneson, a doctoral student in physics and electrical and computer engineering.
    The research was funded by the U.S. Department of Energy and Universal Display Corp., in which Forrest has an equity interest. U-M also has a royalty-bearing license agreement with, and a financial interest in, Universal Display Corp. Forrest is also the Paul G. Goebel Professor of Engineering and a professor of physics. Dejiu Fan, the other author on the paper, is an alumnus of electrical and computer engineering.

  • 360-degree head-up display view could warn drivers of road obstacles in real time

    Researchers have developed an augmented reality head-up display that could improve road safety by displaying potential hazards as high-resolution three-dimensional holograms directly in a driver’s field of vision in real time.
    Current head-up display systems are limited to two-dimensional projections onto the windscreen of a vehicle, but researchers from the Universities of Cambridge, Oxford and University College London (UCL) developed a system using 3D laser scanner and LiDAR data to create a fully 3D representation of London streets.
    The system they developed can effectively ‘see through’ objects to project holographic representations of road obstacles that are hidden from the driver’s field of view, aligned with the real object in both size and distance. For example, a road sign blocked from view by a large truck would appear as a 3D hologram so that the driver knows exactly where the sign is and what information it displays.
    The 3D holographic projection technology keeps the driver’s focus on the road instead of the windscreen, and could improve road safety by projecting road obstacles and potential hazards in real time from any angle. The results are reported in the journal Advanced Optical Materials.
    Every day, around 16,000 people are killed in traffic accidents caused by human error. Technology could be used to reduce this number and improve road safety, in part by providing information to drivers about potential hazards. Currently, this is mostly done using head-up displays, which can provide information such as current speed or driving directions.
    “The idea behind a head-up display is that it keeps the driver’s eyes up, because even a fraction of a second not looking at the road is enough time for a crash to happen,” said Jana Skirnewskaja from Cambridge’s Department of Engineering, the study’s first author. “However, because these are two-dimensional images, projected onto a small area of the windscreen, the driver can be looking at the image, and not actually looking at the road ahead of them.”
    For several years, Skirnewskaja and her colleagues have been working to develop alternatives to head-up displays (HUDs) that could improve road safety by providing more accurate information to drivers while keeping their eyes on the road.

    “We want to project information anywhere in the driver’s field of view, but in a way that isn’t overwhelming or distracting,” said Skirnewskaja. “We don’t want to provide any information that isn’t directly related to the driving task at hand.”
    The team developed an augmented reality holographic point cloud video projection system to display objects aligned with real-life objects in size and distance within the driver’s field of view. The system combines data from a 3D holographic setup with LiDAR (light detection and ranging) data. LiDAR uses a pulsed light source to illuminate an object and the reflected light pulses are then measured to calculate how far the object is from the light source.
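    The ranging step itself is simple: distance equals the speed of light times the round-trip time of the pulse, divided by two. The pulse timing below is an illustrative value, not data from the study:

        # LiDAR ranging in one step: a reflected pulse's round-trip time gives range.
        # The 200 ns echo below is an illustrative value.
        C = 299_792_458.0        # speed of light, m/s
        round_trip_s = 200e-9    # hypothetical 200 ns round trip
        distance_m = C * round_trip_s / 2
        print(f"{distance_m:.1f} m")   # ~30 m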
    The researchers tested the system by scanning Malet Street on the UCL campus in central London. Information from the LiDAR point cloud was transformed into layered 3D holograms, consisting of as many as 400,000 data points. The concept of projecting a 360° obstacle assessment for drivers stemmed from meticulous data processing, ensuring clear visibility of each object’s depth.
    The researchers sped up the scanning process so that the holograms were generated and projected in real-time. Importantly, the scans can provide dynamic information, since busy streets change from one moment to the next.
    “The data we collected can be shared and stored in the cloud, so that any drivers passing by would have access to it — it’s like a more sophisticated version of the navigation apps we use every day to provide real-time traffic information,” said Skirnewskaja. “This way, the system is dynamic and can adapt to changing conditions, as hazards or obstacles move on or off the street.”
    While more data collection from diverse locations enhances accuracy, the researchers say the unique contribution of their study lies in enabling a 360° view by judiciously choosing data points from single scans of specific objects, such as trucks or buildings, enabling a comprehensive assessment of road hazards.

    “We can scan up to 400,000 data points for a single object, but obviously that is quite data-heavy and makes it more challenging to scan, extract and project data about that object in real time,” said Skirnewskaja. “With as little as 100 data points, we can know what the object is and how big it is. We need to get just enough information so that the driver knows what’s around them.”
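    The intuition behind that claim can be sketched with a toy point cloud: even a 100-point random subsample preserves an object’s rough extent. The synthetic “truck” below is an illustrative stand-in, not the researchers’ data:

        # Toy illustration: a small random subsample of a dense point cloud still
        # recovers an object's approximate bounding box. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(0)
        truck = rng.uniform([0.0, 0.0, 0.0], [8.0, 2.5, 3.0], size=(400_000, 3))  # metres

        sample = truck[rng.choice(len(truck), size=100, replace=False)]
        extent = sample.max(axis=0) - sample.min(axis=0)
        print(extent)   # close to the full 8.0 x 2.5 x 3.0 m box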
    Earlier this year, Skirnewskaja and her colleagues conducted a virtual demonstration with virtual reality headsets loaded with the LiDAR data of the system at the Science Museum in London. User feedback from the sessions helped the researchers improve the system to make the design more inclusive and user-friendly. For example, they have fine-tuned the system to reduce eye strain, and have accounted for visual impairments.
    “We want a system that is accessible and inclusive, so that end users are comfortable with it,” said Skirnewskaja. “If the system is a distraction, then it doesn’t work. We want something that is useful to drivers, and improves safety for all road users, including pedestrians and cyclists.”
    The researchers are currently collaborating with Google to develop the technology so that it can be tested in real cars. They are hoping to carry out road tests, either on public or private roads, in 2024.
    The research was supported in part by Stiftung der Deutschen Wirtschaft and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

  • Could an electric nudge to the head help your doctor operate a surgical robot?

    People who received gentle electric currents on the back of their heads learned to maneuver a robotic surgery tool in virtual reality and then in a real setting much more easily than people who didn’t receive those nudges, a new study shows.
    The findings offer the first glimpse of how stimulating a specific part of the brain called the cerebellum could help health care professionals take what they learn in virtual reality to real operating rooms, a much-needed transition in a field that increasingly relies on digital simulation training, said author and Johns Hopkins University roboticist Jeremy D. Brown.
    “Training in virtual reality is not the same as training in a real setting, and we’ve shown with previous research that it can be difficult to transfer a skill learned in a simulation into the real world,” said Brown, the John C. Malone Associate Professor of Mechanical Engineering. “It’s very hard to claim statistical exactness, but we concluded people in the study were able to transfer skills from virtual reality to the real world much more easily when they had this stimulation.”
    The work appears today in Scientific Reports.
    Participants drove a surgical needle through three small holes, first in a virtual simulation and then in a real scenario using the da Vinci Research Kit, an open-source research robot. The exercises mimicked moves needed during surgical procedures on organs in the belly, the researchers said.
    Participants received a subtle flow of electricity through electrodes or small pads placed on their scalps meant to stimulate their brain’s cerebellum. While half the group received steady flows of electricity during the entire test, the rest of the participants received a brief stimulation only at the beginning and nothing at all for the rest of the tests.
    People who received the steady currents showed a notable boost in dexterity. None of them had prior training in surgery or robotics.

    “The group that didn’t receive stimulation struggled a bit more to apply the skills they learned in virtual reality to the actual robot, especially the most complex moves involving quick motions,” said Guido Caccianiga, a former Johns Hopkins roboticist, now at Max Planck Institute for Intelligent Systems, who designed and led the experiments. “The groups that received brain stimulation were better at those tasks.”
    Noninvasive brain stimulation is a way to influence certain parts of the brain from outside the body, and scientists have shown how it can benefit motor learning in rehabilitation therapy, the researchers said. With their work, the team is taking the research to a new level by testing how stimulating the brain can help surgeons gain skills they might need in real-world situations, said co-author Gabriela Cantarero, a former assistant professor of physical medicine and rehabilitation at Johns Hopkins.
    “It was really cool that we were actually able to influence behavior using this setup, where we could really quantify every little aspect of people’s movements, deviations, and errors,” Cantarero said.
    Robotic surgery systems provide significant benefits for clinicians by enhancing human skill. They can help surgeons minimize hand tremors and perform fine and precise tasks with enhanced vision.
    Besides influencing how surgeons of the future might learn new skills, this type of brain stimulation also offers promise for skill acquisition in other industries that rely on virtual reality training, particularly work in robotics.
    Even outside of virtual reality, the stimulation can also likely help people learn more generally, the researchers said.
    “What if we could show that with brain stimulation you can learn new skills in half the time?” Caccianiga said. “That’s a huge margin on the costs because you’d be training people faster; you could save a lot of resources to train more surgeons or engineers who will deal with these technologies frequently in the future.”
    Other authors include Ronan A. Mooney of the Johns Hopkins University School of Medicine, and Pablo A. Celnik of the Shirley Ryan AbilityLab.

  • AI alters middle managers’ work

    The introduction of artificial intelligence is a significant part of the digital transformation, bringing challenges and changes to managers’ job descriptions. A study conducted at the University of Eastern Finland shows that integrating artificial intelligence systems into service teams increases the demands imposed on middle management in the financial services field. In that sector, the advent of artificial intelligence has been fast, and AI applications can now carry out a large proportion of the routine work that was previously done by people. Many professionals in the service sector work in teams that include both humans and artificial intelligence systems, which sets new expectations for interaction, human relations, and leadership.
    The study analysed how middle management had experienced the effects of integration of artificial intelligence systems on their job descriptions in financial services. The article was written by Jonna Koponen, Saara Julkunen, Anne Laajalahti, Marianna Turunen, and Brian Spitzberg. The study was funded by the Academy of Finland and was published in the Journal of Service Research.
    Integrating AI into service teams is a complex phenomenon
    Interviewed in the study were 25 experienced managers employed by a leading Scandinavian financial services company. Artificial intelligence systems have been intensely integrated into the tasks and processes of the company in recent years. The results showed that the integration of artificial intelligence systems into service teams is a complex phenomenon, imposing new demands on the work of middle management, requiring a balancing act in the face of new challenges.
    “The productivity of work grows when routine tasks can be passed on to artificial intelligence. On the other hand, a fast pace of change makes work more demanding, and the integration of artificial intelligence makes it necessary to learn new things constantly. Variation in work assignments increases and managers can focus their time better on developing the work and on innovations. Surprisingly, new kinds of routine work also increase, because the operations of artificial intelligence need to be monitored and checked,” says Assistant Professor Jonna Koponen.
    Is AI a tool or a colleague?
    According to the results of the research, the social features of middle managers’ work also changed, because the artificial intelligence systems used at work were seen either as technical tools or as colleagues, depending on the type of AI in use. More advanced types of artificial intelligence, such as chatbots, were especially likely to be seen as colleagues.
    “Artificial intelligence was sometimes given a name, and some teams even discussed who might be the mother or father of artificial intelligence. This led to different types of relationships between people and artificial intelligence, which should be considered when introducing or applying artificial intelligence systems in the future. In addition, the employees were concerned about their continued employment, and did not always take an exclusively positive view of the introduction of new artificial intelligence solutions,” Professor Saara Julkunen explains.
    Integrating artificial intelligence also poses ethical challenges, and managers devoted more of their time to ethical considerations. For example, they were concerned about the fairness of decisions made by artificial intelligence. The study also showed that managing service teams with integrated artificial intelligence requires new skills and knowledge from middle management, such as technological understanding and skills, interaction skills and emotional intelligence, problem-solving skills, and the ability to manage and adapt to continuous change.
    “Artificial intelligence systems cannot yet take over all human management in areas such as the motivation and inspiration of team members. This is why skills in interaction and empathy should be emphasised when selecting new employees for managerial positions which emphasise the management of teams integrated with artificial intelligence,” Koponen observes.