More stories

  • Blue PHOLEDs: Final color of efficient OLEDs finally viable in lighting

    Lights could soon use the full color suite of perfectly efficient organic light-emitting diodes, or OLEDs, that last tens of thousands of hours, thanks to an innovation from physicists and engineers at the University of Michigan.
    The U-M team’s new phosphorescent OLEDs, commonly referred to as PHOLEDs, can maintain 90% of the blue light intensity for 10-14 times longer than other designs that emit similar deep blue colors. That kind of lifespan could finally make blue PHOLEDs hardy enough to be commercially viable in lights that meet the Department of Energy’s 50,000-hour lifetime target. Without a stable blue PHOLED, OLED lights need to use less-efficient technology to create white light.
    The lifetime of the new blue PHOLEDs is currently only long enough for use in lighting, but the same design principle could be combined with other light-emitting materials to create blue PHOLEDs durable enough for TVs, phone screens and computer monitors. Display screens with blue PHOLEDs could potentially increase a device’s battery life by 30%.
    “Achieving long-lived blue PHOLEDs has been a focus of the display and lighting industries for over 20 years. It is probably the most important and urgent challenge facing the field of organic electronics,” said Stephen Forrest, the Peter A. Franken Distinguished University Professor of Electrical and Computer Engineering at the University of Michigan. He is also the corresponding author of the study published today in Nature.
    PHOLEDs have nearly 100% internal quantum efficiency, meaning all of the electricity entering the device is used to create light. As a result, lights and display screens equipped with PHOLEDs can run brighter colors for longer periods of time with less power and carbon emissions.
    Before the U-M team’s research, the best blue PHOLEDs weren’t durable enough to be used in either lighting or displays. Only red and green PHOLEDs are stable enough to use in devices today, but blue is needed to complete the trio of colors in OLED “RGB” displays and white OLED lights. Red, green and blue light can be combined at different relative brightness to produce any color desired in display pixels and light panels.
    So far, the workaround in OLED displays has been to use older, fluorescent OLEDs to produce the blue colors, but the internal quantum efficiency of that technology is much lower. Only a quarter of the electric current entering the fluorescent blue device produces light.

    “A lot of the display industry’s solutions are upgrades to fluorescent OLEDs, which is still an alternative solution,” said study first author Haonan Zhao, a doctoral student in physics and electrical and computer engineering. “I think a lot of companies would prefer to use blue PHOLEDs, if they had the choice.”
    To make blue light, electricity excites heavy metal-containing phosphorescent organic molecules. Sometimes, the excited molecules come into contact before emitting the light, transferring all of the pair’s stored energy into one molecule. Because the energy of blue light is so high, the transferred energy, which is double that of the single excited molecule, can break chemical bonds and degrade the organic material.
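    As a rough, back-of-the-envelope illustration of why that pooled energy is destructive (the wavelength and bond energies below are approximate, illustrative values, not figures from the study):

    ```python
    # Energy of a deep-blue photon vs. typical organic bond strengths (approximate values).
    HC = 1240.0  # eV * nm, so E[eV] ~ 1240 / wavelength[nm]

    deep_blue_nm = 460                       # assumed deep-blue emission wavelength
    single_exciton_eV = HC / deep_blue_nm    # ~2.7 eV stored in one excited molecule
    pooled_eV = 2 * single_exciton_eV        # ~5.4 eV when two excited molecules pool their energy

    typical_bond_eV = {"C-C": 3.6, "C-N": 3.0}   # rough bond dissociation energies
    print(f"single exciton: {single_exciton_eV:.1f} eV, pooled: {pooled_eV:.1f} eV")
    for bond, energy in typical_bond_eV.items():
        print(f"pooled energy can break a {bond} bond (~{energy} eV): {pooled_eV > energy}")
    ```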
    One way around this problem is to use materials that emit a broader spectrum of colors, which lowers the total amount of energy in the excited states. But such materials appear cyan or even green, rather than a deep blue.
    The U-M team got around this issue by sandwiching cyan material between two mirrors. By perfectly tuning the space between the mirrors, only the deepest blue light waves can persist and eventually emit from the mirror chamber.
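    The spacing works like a standard optical microcavity: only wavelengths whose half-waves fit the gap are reinforced, roughly following resonant wavelength = 2nL/m. A minimal sketch with assumed numbers (not the device’s actual dimensions or refractive index):

    ```python
    # Fabry-Perot-style resonance: resonant wavelength = 2 * n * L / m
    n_eff = 1.8       # assumed effective refractive index of the organic layers
    target_nm = 460   # assumed deep-blue wavelength to select
    m = 1             # lowest-order cavity mode

    spacing_nm = m * target_nm / (2 * n_eff)
    print(f"mirror spacing for {target_nm} nm resonance: {spacing_nm:.0f} nm")

    # Detuning the spacing shifts which wavelength the cavity reinforces.
    for L in (0.9 * spacing_nm, spacing_nm, 1.1 * spacing_nm):
        print(f"spacing {L:.0f} nm -> resonant wavelength {2 * n_eff * L / m:.0f} nm")
    ```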
    Further tuning the optical properties of the organic, light-emitting layer to an adjacent metal electrode introduced a new quantum mechanical state called a plasmon-exciton-polariton, or PEP. This new state allows the organic material to emit light very fast, thus further decreasing the opportunity for excited states to collide and destroy the light-emitting material.
    “In our device, the PEP is introduced because the excited states in the electron transporting material are synchronized with the light waves and the electron vibrations in the metal cathode,” said study co-author Claire Arneson, a doctoral student in physics and electrical and computer engineering.
    The research was funded by the U.S. Department of Energy and Universal Display Corp., in which Forrest has an equity interest. U-M also has a royalty-bearing license agreement with, and a financial interest in, Universal Display Corp. Forrest is also the Paul G. Goebel Professor of Engineering and a professor of physics. Dejiu Fan, the other author on the paper, is an alumnus of electrical and computer engineering.

  • 360-degree head-up display view could warn drivers of road obstacles in real time

    Researchers have developed an augmented reality head-up display that could improve road safety by displaying potential hazards as high-resolution three-dimensional holograms directly in a driver’s field of vision in real time.
    Current head-up display systems are limited to two-dimensional projections onto the windscreen of a vehicle, but researchers from the University of Cambridge, the University of Oxford and University College London (UCL) developed a system that uses 3D laser scanning and LiDAR data to create a fully 3D representation of London streets.
    The system they developed can effectively ‘see through’ objects to project holographic representations of road obstacles that are hidden from the driver’s field of view, aligned with the real object in both size and distance. For example, a road sign blocked from view by a large truck would appear as a 3D hologram so that the driver knows exactly where the sign is and what information it displays.
    The 3D holographic projection technology keeps the driver’s focus on the road instead of the windscreen, and could improve road safety by projecting road obstacles and potential hazards in real time from any angle. The results are reported in the journal Advanced Optical Materials.
    Every day, around 16,000 people are killed in traffic accidents caused by human error. Technology could be used to reduce this number and improve road safety, in part by providing information to drivers about potential hazards. Currently, this is mostly done using head-up displays, which can provide information such as current speed or driving directions.
    “The idea behind a head-up display is that it keeps the driver’s eyes up, because even a fraction of a second not looking at the road is enough time for a crash to happen,” said Jana Skirnewskaja from Cambridge’s Department of Engineering, the study’s first author. “However, because these are two-dimensional images, projected onto a small area of the windscreen, the driver can be looking at the image, and not actually looking at the road ahead of them.”
    For several years, Skirnewskaja and her colleagues have been working to develop alternatives to head-up displays (HUDs) that could improve road safety by providing more accurate information to drivers while keeping their eyes on the road.

    “We want to project information anywhere in the driver’s field of view, but in a way that isn’t overwhelming or distracting,” said Skirnewskaja. “We don’t want to provide any information that isn’t directly related to the driving task at hand.”
    The team developed an augmented reality holographic point cloud video projection system to display objects aligned with real-life objects in size and distance within the driver’s field of view. The system combines data from a 3D holographic setup with LiDAR (light detection and ranging) data. LiDAR uses a pulsed light source to illuminate an object and the reflected light pulses are then measured to calculate how far the object is from the light source.
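    The distance calculation itself is a simple time-of-flight relationship; here is a minimal sketch (the pulse timing is an invented example):

    ```python
    # Time-of-flight ranging: the pulse travels out and back, so distance = c * t / 2.
    C = 299_792_458  # speed of light, m/s

    def lidar_distance_m(round_trip_seconds: float) -> float:
        return C * round_trip_seconds / 2.0

    # A return detected 200 nanoseconds after the pulse left the scanner
    # corresponds to an object roughly 30 m away.
    print(f"{lidar_distance_m(200e-9):.1f} m")
    ```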
    The researchers tested the system by scanning Malet Street on the UCL campus in central London. Information from the LiDAR point cloud was transformed into layered 3D holograms, consisting of as many as 400,000 data points. The concept of projecting a 360° obstacle assessment for drivers stemmed from meticulous data processing, ensuring clear visibility of each object’s depth.
    The researchers sped up the scanning process so that the holograms were generated and projected in real-time. Importantly, the scans can provide dynamic information, since busy streets change from one moment to the next.
    “The data we collected can be shared and stored in the cloud, so that any drivers passing by would have access to it — it’s like a more sophisticated version of the navigation apps we use every day to provide real-time traffic information,” said Skirnewskaja. “This way, the system is dynamic and can adapt to changing conditions, as hazards or obstacles move on or off the street.”
    While more data collection from diverse locations enhances accuracy, the researchers say the unique contribution of their study lies in enabling a 360° view by judiciously choosing data points from single scans of specific objects, such as trucks or buildings, which allows a comprehensive assessment of road hazards.

    “We can scan up to 400,000 data points for a single object, but obviously that is quite data-heavy and makes it more challenging to scan, extract and project data about that object in real time,” said Skirnewskaja. “With as little as 100 data points, we can know what the object is and how big it is. We need to get just enough information so that the driver knows what’s around them.”
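    A minimal sketch of that trade-off, randomly thinning a dense scan of a single object down to about 100 points and still recovering its rough size (the synthetic “truck-sized” point cloud below is invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a dense LiDAR scan of one truck-sized object (metres).
    dense_cloud = rng.uniform(low=[0, 0, 0], high=[8.0, 2.5, 3.5], size=(400_000, 3))

    # Keep only ~100 points, as in the quoted example.
    keep = rng.choice(len(dense_cloud), size=100, replace=False)
    sparse_cloud = dense_cloud[keep]

    # Even the sparse sample recovers the object's approximate extent (bounding box).
    extent = sparse_cloud.max(axis=0) - sparse_cloud.min(axis=0)
    print("approx. length x width x height (m):", extent.round(1))
    ```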
    Earlier this year, Skirnewskaja and her colleagues conducted a virtual demonstration at the Science Museum in London, using virtual reality headsets loaded with the system’s LiDAR data. User feedback from the sessions helped the researchers improve the system to make the design more inclusive and user-friendly. For example, they have fine-tuned the system to reduce eye strain, and have accounted for visual impairments.
    “We want a system that is accessible and inclusive, so that end users are comfortable with it,” said Skirnewskaja. “If the system is a distraction, then it doesn’t work. We want something that is useful to drivers, and improves safety for all road users, including pedestrians and cyclists.”
    The researchers are currently collaborating with Google to develop the technology so that it can be tested in real cars. They are hoping to carry out road tests, either on public or private roads, in 2024.
    The research was supported in part by Stiftung der Deutschen Wirtschaft and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

  • Could an electric nudge to the head help your doctor operate a surgical robot?

    People who received gentle electric currents on the back of their heads learned to maneuver a robotic surgery tool in virtual reality and then in a real setting much more easily than people who didn’t receive those nudges, a new study shows.
    The findings offer the first glimpse of how stimulating a specific part of the brain called the cerebellum could help health care professionals take what they learn in virtual reality to real operating rooms, a much-needed transition in a field that increasingly relies on digital simulation training, said author and Johns Hopkins University roboticist Jeremy D. Brown.
    “Training in virtual reality is not the same as training in a real setting, and we’ve shown with previous research that it can be difficult to transfer a skill learned in a simulation into the real world,” said Brown, the John C. Malone Associate Professor of Mechanical Engineering. “It’s very hard to claim statistical exactness, but we concluded people in the study were able to transfer skills from virtual reality to the real world much more easily when they had this stimulation.”
    The work appears today in Scientific Reports.
    Participants drove a surgical needle through three small holes, first in a virtual simulation and then in a real scenario using the da Vinci Research Kit, an open-source research robot. The exercises mimicked moves needed during surgical procedures on organs in the belly, the researchers said.
    Participants received a subtle flow of electricity through electrodes, small pads placed on their scalps, meant to stimulate the brain’s cerebellum. While half the group received steady flows of electricity during the entire test, the rest of the participants received a brief stimulation only at the beginning and nothing at all for the rest of the tests.
    People who received the steady currents showed a notable boost in dexterity. None of them had prior training in surgery or robotics.

    “The group that didn’t receive stimulation struggled a bit more to apply the skills they learned in virtual reality to the actual robot, especially the most complex moves involving quick motions,” said Guido Caccianiga, a former Johns Hopkins roboticist, now at Max Planck Institute for Intelligent Systems, who designed and led the experiments. “The groups that received brain stimulation were better at those tasks.”
    Noninvasive brain stimulation is a way to influence certain parts of the brain from outside the body, and scientists have shown how it can benefit motor learning in rehabilitation therapy, the researchers said. With their work, the team is taking the research to a new level by testing how stimulating the brain can help surgeons gain skills they might need in real-world situations, said co-author Gabriela Cantarero, a former assistant professor of physical medicine and rehabilitation at Johns Hopkins.
    “It was really cool that we were actually able to influence behavior using this setup, where we could really quantify every little aspect of people’s movements, deviations, and errors,” Cantarero said.
    Robotic surgery systems provide significant benefits for clinicians by enhancing human skill. They can help surgeons minimize hand tremors and perform fine and precise tasks with enhanced vision.
    Besides influencing how surgeons of the future might learn new skills, this type of brain stimulation also offers promise for skill acquisition in other industries that rely on virtual reality training, particularly work in robotics.
    Even outside of virtual reality, the stimulation could likely help people learn more generally, the researchers said.
    “What if we could show that with brain stimulation you can learn new skills in half the time?” Caccianiga said. “That’s a huge margin on the costs because you’d be training people faster; you could save a lot of resources to train more surgeons or engineers who will deal with these technologies frequently in the future.”
    Other authors include Ronan A. Mooney of the Johns Hopkins University School of Medicine, and Pablo A. Celnik of the Shirley Ryan AbilityLab.

  • AI alters middle managers’ work

    The introduction of artificial intelligence is a significant part of the digital transformation, bringing challenges and changes to managers’ job descriptions. A study conducted at the University of Eastern Finland shows that integrating artificial intelligence systems into service teams increases the demands placed on middle management in the financial services field. In that sector, the advent of artificial intelligence has been fast, and AI applications can now handle a large proportion of the routine work previously done by people. Many professionals in the service sector work in teams that include both humans and artificial intelligence systems, which sets new expectations for interaction, human relations, and leadership.
    The study analysed how middle management had experienced the effects of integration of artificial intelligence systems on their job descriptions in financial services. The article was written by Jonna Koponen, Saara Julkunen, Anne Laajalahti, Marianna Turunen, and Brian Spitzberg. The study was funded by the Academy of Finland and was published in the Journal of Service Research.
    Integrating AI into service teams is a complex phenomenon
    Interviewed in the study were 25 experienced managers employed by a leading Scandinavian financial services company, where artificial intelligence systems have been intensively integrated into tasks and processes in recent years. The results showed that the integration of artificial intelligence systems into service teams is a complex phenomenon that imposes new demands on the work of middle management and requires a balancing act in the face of new challenges.
    “The productivity of work grows when routine tasks can be passed on to artificial intelligence. On the other hand, a fast pace of change makes work more demanding, and the integration of artificial intelligence makes it necessary to learn new things constantly. Variation in work assignments increases and managers can focus their time better on developing the work and on innovations. Surprisingly, new kinds of routine work also increase, because the operations of artificial intelligence need to be monitored and checked,” says Assistant Professor Jonna Koponen.
    Is AI a tool or a colleague?
    According to the results, the social features of middle management work also changed, because the artificial intelligence systems used at work were seen either as technical tools or as colleagues, depending on the type of AI in use. More advanced types of artificial intelligence, such as chatbots, were especially likely to be seen as colleagues.
    “Artificial intelligence was sometimes given a name, and some teams even discussed who might be the mother or father of artificial intelligence. This led to different types of relationships between people and artificial intelligence, which should be considered when introducing or applying artificial intelligence systems in the future. In addition, the employees were concerned about their continued employment, and did not always take an exclusively positive view of the introduction of new artificial intelligence solutions,” Professor Saara Julkunen explains.
    Integrating artificial intelligence also poses ethical challenges, and managers devoted more of their time to ethical considerations. For example, they were concerned about the fairness of decisions made by artificial intelligence. The study showed that managing service teams with integrated artificial intelligence requires new skills and knowledge from middle management, such as technological understanding and skills, interaction skills and emotional intelligence, problem-solving skills, and the ability to manage and adapt to continuous change.
    “Artificial intelligence systems cannot yet take over all human management in areas such as the motivation and inspiration of team members. This is why skills in interaction and empathy should be emphasised when selecting new employees for managerial positions which emphasise the management of teams integrated with artificial intelligence,” Koponen observes.

  • AI risks turning organizations into self-serving organisms if humans removed

    With human bias removed, organizations looking to improve performance by harnessing digital technology can expect changes to how information is scrutinized.
    The proliferation of digital technologies like Artificial Intelligence (AI) within organizations risks removing human oversight and could lead institutions to autonomously enact information to create the environment of their choosing, a new study has found.
    New research from the University of Ottawa’s Telfer School of Management delves into the consequences of removing human scrutiny and measured bias from core organizational processes, identifying concerns that digital technologies could significantly transform organizations if humans are removed.
    The study examined the possibility of a systematic replacement of humans by digital technologies for the crucial tasks of interpreting organizational environments and learning. The researchers found that such organizations would no longer function as human systems of interpretation but would instead become systems of digital enactment that create those very environments, with bits of information serving as building blocks.
    “This is highly significant because it may limit or entirely prevent organizational members from recognizing automation biases, noticing environmental shifts, and taking appropriate action,” says study co-author Mayur Joshi, an Assistant Professor at Telfer.
    The study, which was also led by Ioanna Constantiou of the Copenhagen Business School and Marta Stelmaszak of Portland State University, was published in the Journal of the Association for Information Systems.
    The authors found that replacing humans with digital technologies could:
    • Increase efficiency and precision in scanning, interpreting, and learning, but constrain the organization’s ability to function effectively.
    • Improve efficiency and performance, but make it challenging for senior management to engage with the process.
    • Leave organizations without human interpretation, allowing digital technology systems to interpret information and digitally enact environments through the autonomous creation of information.
    There would be implications for practitioners, and for those looking to become practitioners, in the face of a reshaped role for humans in organizations, including the nature of human expertise and the strategic functions of senior managers. Practitioners here are domain experts across industries, including medical professionals, business consultants, accountants, lawyers and investment bankers.
    “Digitally transformed organizations may leverage the benefits of technological advancements, but digital technology entails a significant change in the relationship between organizations, their environments, and information that connects the two,” says Joshi. “Organizations no longer function as human systems of interpretation, but instead, become systems of digital enactment that create those very environments with bits of information serving as building blocks.”

  • Using AI, researchers identify a new class of antibiotic candidates

    CAMBRIDGE, MA — Using a type of artificial intelligence known as deep learning, MIT researchers have discovered a class of compounds that can kill a drug-resistant bacterium that causes more than 10,000 deaths in the United States every year.
    In a study appearing today in Nature, the researchers showed that these compounds could kill methicillin-resistant Staphylococcus aureus (MRSA) grown in a lab dish and in two mouse models of MRSA infection. The compounds also show very low toxicity against human cells, making them particularly good drug candidates.
    A key innovation of the new study is that the researchers were also able to figure out what kinds of information the deep-learning model was using to make its antibiotic potency predictions. This knowledge could help researchers to design additional drugs that might work even better than the ones identified by the model.
    “The insight here was that we could see what was being learned by the models to make their predictions that certain molecules would make for good antibiotics. Our work provides a framework that is time-efficient, resource-efficient, and mechanistically insightful, from a chemical-structure standpoint, in ways that we haven’t had to date,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.
    Felix Wong, a postdoc at IMES and the Broad Institute of MIT and Harvard, and Erica Zheng, a former Harvard Medical School graduate student who was advised by Collins, are the lead authors of the study, which is part of the Antibiotics-AI Project at MIT. The mission of this project, led by Collins, is to discover new classes of antibiotics against seven types of deadly bacteria, over seven years.
    Explainable predictions
    MRSA, which infects more than 80,000 people in the United States every year, often causes skin infections or pneumonia. Severe cases can lead to sepsis, a potentially fatal bloodstream infection.

    Over the past several years, Collins and his colleagues in MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) have begun using deep learning to try to find new antibiotics. Their work has yielded potential drugs against Acinetobacter baumannii, a bacterium that is often found in hospitals, and many other drug-resistant bacteria.
    These compounds were identified using deep learning models that can learn to identify chemical structures that are associated with antimicrobial activity. These models then sift through millions of other compounds, generating predictions of which ones may have strong antimicrobial activity.
    These types of searches have proven fruitful, but one limitation to this approach is that the models are “black boxes,” meaning that there is no way of knowing what features the model based its predictions on. If scientists knew how the models were making their predictions, it could be easier for them to identify or design additional antibiotics.
    “What we set out to do in this study was to open the black box,” Wong says. “These models consist of very large numbers of calculations that mimic neural connections, and no one really knows what’s going on underneath the hood.”
    First, the researchers trained a deep learning model using substantially expanded datasets. They generated this training data by testing about 39,000 compounds for antibiotic activity against MRSA, and then fed this data, plus information on the chemical structures of the compounds, into the model.
    “You can represent basically any molecule as a chemical structure, and also you tell the model if that chemical structure is antibacterial or not,” Wong says. “The model is trained on many examples like this. If you then give it any new molecule, a new arrangement of atoms and bonds, it can tell you a probability that that compound is predicted to be antibacterial.”
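    A minimal sketch of that “structure in, probability out” idea, assuming a simple fingerprint-plus-random-forest classifier rather than the deep learning model the MIT team actually used; the molecules and activity labels below are placeholders, not data from the study:

    ```python
    # Toy molecular-activity classifier: SMILES -> fingerprint -> predicted probability.
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def featurize(smiles: str) -> np.ndarray:
        """Turn a SMILES string into a fixed-length bit-vector fingerprint."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        return np.array([int(bit) for bit in fp.ToBitString()])

    # Placeholder training set: (SMILES, 1 = inhibited growth, 0 = did not). Labels are invented.
    training = [
        ("CC(=O)Oc1ccccc1C(=O)O", 0),           # aspirin
        ("CC(C)Cc1ccc(cc1)C(C)C(=O)O", 0),       # ibuprofen
        ("CC(=O)Nc1ccc(O)cc1", 0),               # acetaminophen
        ("CN1C=NC2=C1C(=O)N(C(=O)N2C)C", 1),     # caffeine, labelled "active" purely as a placeholder
    ]
    X = np.array([featurize(s) for s, _ in training])
    y = np.array([label for _, label in training])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Any new arrangement of atoms and bonds can now be scored.
    query = "CC(C)(C)c1ccc(O)cc1"  # placeholder query molecule
    print("predicted probability of activity:", model.predict_proba([featurize(query)])[0][1])
    ```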
    To figure out how the model was making its predictions, the researchers adapted an algorithm known as Monte Carlo tree search, which has been used to help make other deep learning models, such as AlphaGo, more explainable. This search algorithm allows the model to generate not only an estimate of each molecule’s antimicrobial activity, but also a prediction for which substructures of the molecule likely account for that activity.
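    To give a feel for what a substructure search means in practice, here is a much cruder, greedy stand-in for that idea (the researchers used Monte Carlo tree search; the pruning loop and toy activity scorer below are illustrative only):

    ```python
    # Greedy rationale search: prune peripheral atoms while a (toy) model still calls the molecule active.
    from rdkit import Chem

    def toy_activity(mol) -> float:
        """Placeholder scorer: 'active' if the molecule still contains an amide group."""
        return 1.0 if mol.HasSubstructMatch(Chem.MolFromSmarts("C(=O)N")) else 0.1

    def greedy_rationale(smiles: str, min_prob: float = 0.5, min_atoms: int = 4) -> str:
        keep = Chem.MolFromSmiles(smiles)
        shrunk = True
        while shrunk and keep.GetNumAtoms() > min_atoms:
            shrunk = False
            for atom in keep.GetAtoms():
                if atom.GetDegree() != 1:          # only try removing peripheral atoms
                    continue
                trimmed = Chem.RWMol(keep)
                trimmed.RemoveAtom(atom.GetIdx())
                candidate = trimmed.GetMol()
                try:
                    Chem.SanitizeMol(candidate)
                except Exception:
                    continue                        # skip chemically invalid prunes
                if toy_activity(candidate) >= min_prob:
                    keep = candidate                # smaller substructure, still "active"
                    shrunk = True
                    break
        return Chem.MolToSmiles(keep)

    # The long alkyl tail gets pruned away; the amide-containing core survives.
    print(greedy_rationale("CCCCCC(=O)NCCc1ccccc1"))
    ```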

    Potent activity
    To further narrow down the pool of candidate drugs, the researchers trained three additional deep learning models to predict whether the compounds were toxic to three different types of human cells. By combining this information with the predictions of antimicrobial activity, the researchers discovered compounds that could kill microbes while having minimal adverse effects on the human body.
    Using this collection of models, the researchers screened about 12 million compounds, all of which are commercially available. From this collection, the models identified compounds from five different classes, based on chemical substructures within the molecules, that were predicted to be active against MRSA.
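    A minimal sketch of how such a combined filter can work, with invented thresholds and stand-in scoring functions (the study’s actual models and cut-offs are not shown here):

    ```python
    # Keep only compounds the activity model favours and all toxicity models consider safe.
    def select_candidates(compounds, activity_model, toxicity_models,
                          min_activity=0.9, max_toxicity=0.2):
        hits = []
        for smiles in compounds:
            if activity_model(smiles) < min_activity:
                continue  # not predicted to kill the microbe
            if any(tox(smiles) > max_toxicity for tox in toxicity_models):
                continue  # predicted harmful to at least one human cell type
            hits.append(smiles)
        return hits

    # Usage with toy stand-in scorers (real ones would be trained models):
    activity = lambda s: 0.95 if "N" in s else 0.1
    toxicity = [lambda s: 0.05, lambda s: 0.10, lambda s: 0.15]
    print(select_candidates(["CCO", "CCN", "c1ccccc1N"], activity, toxicity))
    ```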
    The researchers purchased about 280 compounds and tested them against MRSA grown in a lab dish, allowing them to identify two, from the same class, that appeared to be very promising antibiotic candidates. In tests in two mouse models, one of MRSA skin infection and one of MRSA systemic infection, each of those compounds reduced the MRSA population by a factor of 10.
    Experiments revealed that the compounds appear to kill bacteria by disrupting their ability to maintain an electrochemical gradient across their cell membranes. This gradient is needed for many critical cell functions, including the ability to produce ATP (molecules that cells use to store energy). An antibiotic candidate that Collins’ lab discovered in 2020, halicin, appears to work by a similar mechanism but is specific to Gram-negative bacteria (bacteria with thin cell walls). MRSA is a Gram-positive bacterium, with thicker cell walls.
    “We have pretty strong evidence that this new structural class is active against Gram-positive pathogens by selectively dissipating the proton motive force in bacteria,” Wong says. “The molecules are attacking bacterial cell membranes selectively, in a way that does not incur substantial damage in human cell membranes. Our substantially augmented deep learning approach allowed us to predict this new structural class of antibiotics and enabled the finding that it is not toxic against human cells.”
    The researchers have shared their findings with Phare Bio, a nonprofit started by Collins and others as part of the Antibiotics-AI Project. The nonprofit now plans to do more detailed analysis of the chemical properties and potential clinical use of these compounds. Meanwhile, Collins’ lab is working on designing additional drug candidates based on the findings of the new study, as well as using the models to seek compounds that can kill other types of bacteria.
    “We are already leveraging similar approaches based on chemical substructures to design compounds de novo, and of course, we can readily adopt this approach out of the box to discover new classes of antibiotics against different pathogens,” Wong says.
    In addition to MIT, Harvard, and the Broad Institute, the paper’s contributing institutions are Integrated Biosciences, Inc., the Wyss Institute for Biologically Inspired Engineering, and the Leibniz Institute of Polymer Research in Dresden, Germany.

  • New brain-like transistor mimics human intelligence

    Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.
    Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.
    Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperature. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.
    The study will be published on Wednesday (Dec. 20) in the journal Nature.
    “The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”
    Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology. Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT.
    Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. With smart devices continuously collecting vast quantities of data, researchers are scrambling to uncover new ways to process it all without consuming an increasing amount of power. Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy costly switching.

    “For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits. You cannot deny the success of that strategy, but it comes at the cost of high power consumption, especially in the current era of big data where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”
    To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.
    For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.
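    To illustrate how strongly the twist angle sets the length scale of the pattern, the standard small-angle relation for two identical lattices gives a moiré period of roughly a / (2 sin(θ/2)); the sketch below uses graphene’s lattice constant and is illustrative only (the actual device pairs graphene with hexagonal boron nitride, whose slight lattice mismatch also contributes):

    ```python
    import math

    A_GRAPHENE_NM = 0.246  # graphene lattice constant, nm

    def moire_period_nm(twist_deg: float, a_nm: float = A_GRAPHENE_NM) -> float:
        # Two identical lattices twisted by theta form a superlattice of period ~ a / (2 sin(theta/2)).
        theta = math.radians(twist_deg)
        return a_nm / (2 * math.sin(theta / 2))

    for angle in (0.5, 1.1, 2.0, 5.0):
        print(f"twist {angle:>3}° -> moiré period ~ {moire_period_nm(angle):5.1f} nm")
    ```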
    “With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”
    To test the transistor, Hersam and his team trained it to recognize similar — but not identical — patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.
    “If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”
    First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition known as associative learning.”
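    A tiny illustration of why that ranking is non-trivial: a naive bit-by-bit (Hamming) comparison actually rates 101 as closer to 000 than 111 is, so the association has to rest on the higher-level “all digits alike” structure instead. The snippet below only illustrates that distinction; it is not a model of the transistor.

    ```python
    def hamming(a: str, b: str) -> int:
        # Number of positions where two patterns differ.
        return sum(x != y for x, y in zip(a, b))

    def all_alike(pattern: str) -> bool:
        # Higher-level feature: are all digits in the pattern identical?
        return len(set(pattern)) == 1

    trained = "000"
    for candidate in ("111", "101"):
        print(candidate,
              "| Hamming distance to 000:", hamming(trained, candidate),
              "| shares the 'all digits alike' structure:", all_alike(candidate) == all_alike(trained))
    # Hamming distance says 101 (2) is closer than 111 (3), but only 111 shares the
    # 'three identical digits in a row' structure that the device was trained on.
    ```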

    In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns — it still successfully demonstrated associative learning.
    “Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”
    The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

  • Meet ‘Coscientist,’ your AI lab partner

    In less time than it will take you to read this article, an artificial intelligence-driven system was able to autonomously learn about certain Nobel Prize-winning chemical reactions and design a successful laboratory procedure to make them. The AI did all that in just a few minutes — and nailed it on the first try.
    “This is the first time that a non-organic intelligence planned, designed and executed this complex reaction that was invented by humans,” says Carnegie Mellon University chemist and chemical engineer Gabe Gomes, who led the research team that assembled and tested the AI-based system. They dubbed their creation “Coscientist.”
    The most complex reactions Coscientist pulled off are known in organic chemistry as palladium-catalyzed cross couplings, which earned their human inventors the 2010 Nobel Prize for chemistry in recognition of the outsize role those reactions came to play in the pharmaceutical development process and other industries that use finicky, carbon-based molecules.
    Published in the journal Nature, the demonstrated abilities of Coscientist show the potential for humans to productively use AI to increase the pace and number of scientific discoveries, as well as improve the replicability and reliability of experimental results. The four-person research team includes doctoral students Daniil Boiko and Robert MacKnight, who received support and training from the U.S. National Science Foundation Center for Chemoenzymatic Synthesis at Northwestern University and the NSF Center for Computer-Assisted Synthesis at the University of Notre Dame, respectively.
    “Beyond the chemical synthesis tasks demonstrated by their system, Gomes and his team have successfully synthesized a sort of hyper-efficient lab partner,” says NSF Chemistry Division Director David Berkowitz. “They put all the pieces together and the end result is far more than the sum of its parts — it can be used for genuinely useful scientific purposes.”
    Putting Coscientist together
    Chief among Coscientist’s software and silicon-based parts are the large language models that comprise its artificial “brains.” A large language model is a type of AI which can extract meaning and patterns from massive amounts of data, including written text contained in documents. Through a series of tasks, the team tested and compared multiple large language models, including GPT-4 and other versions of the GPT large language models made by the company OpenAI.

    Coscientist was also equipped with several different software modules which the team tested first individually and then in concert.
    “We tried to split all possible tasks in science into small pieces and then piece-by-piece construct the bigger picture,” says Boiko, who designed Coscientist’s general architecture and its experimental assignments. “In the end, we brought everything together.”
    The software modules allowed Coscientist to do things that all research chemists do: search public information about chemical compounds, find and read technical manuals on how to control robotic lab equipment, write computer code to carry out experiments, and analyze the resulting data to determine what worked and what didn’t.
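    A highly simplified, hypothetical sketch of what wiring such modules together can look like in general; the module names, dispatch loop and placeholder planner below are invented for illustration and are not Coscientist’s actual code or interfaces:

    ```python
    # Hypothetical tool-dispatch loop: a planner picks a module, the module runs,
    # and its output is appended to the working history.
    from typing import Callable

    def web_search(query: str) -> str:      # stand-in for the literature-search module
        return f"(search results for: {query})"

    def read_docs(instrument: str) -> str:  # stand-in for the documentation module
        return f"(API notes for: {instrument})"

    def run_code(source: str) -> str:       # stand-in for the experiment-execution module
        return "(execution log)"

    TOOLS: dict[str, Callable[[str], str]] = {"SEARCH": web_search, "DOCS": read_docs, "RUN": run_code}

    def plan_next_step(history: list[str]) -> tuple[str, str]:
        """Placeholder for the language model choosing the next module and its input."""
        return ("SEARCH", "Suzuki coupling conditions") if not history else ("RUN", "protocol.py")

    history: list[str] = []
    for _ in range(2):  # a couple of illustrative iterations
        tool, argument = plan_next_step(history)
        history.append(TOOLS[tool](argument))
    print(history)
    ```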
    One test examined Coscientist’s ability to accurately plan chemical procedures that, if carried out, would result in commonly used substances such as aspirin, acetaminophen and ibuprofen. The large language models were individually tested and compared, including two versions of GPT paired with a software module allowing them to use Google to search the internet for information as a human chemist might. The resulting procedures were then examined and scored based on whether they would have led to the desired substance, how detailed the steps were, and other factors. Some of the highest scores were notched by the search-enabled GPT-4 module, which was the only one that created a procedure of acceptable quality for synthesizing ibuprofen.
    Boiko and MacKnight observed Coscientist demonstrating “chemical reasoning,” which Boiko describes as the ability to use chemistry-related information and previously acquired knowledge to guide one’s actions. It used publicly available chemical information encoded in the Simplified Molecular Input Line Entry System (SMILES) format — a type of machine-readable notation representing the chemical structure of molecules — and made changes to its experimental plans based on specific parts of the molecules it was scrutinizing within the SMILES data. “This is the best version of chemical reasoning possible,” says Boiko.
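    For reference, a SMILES string is just a line of text encoding a molecule’s atoms and bonds; the well-known drug molecules below (not compounds from the study) show what that notation looks like when parsed:

    ```python
    # A few common molecules written as SMILES strings, parsed with RDKit.
    from rdkit import Chem
    from rdkit.Chem import rdMolDescriptors

    examples = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
        "acetaminophen": "CC(=O)Nc1ccc(O)cc1",
    }
    for name, smiles in examples.items():
        mol = Chem.MolFromSmiles(smiles)
        print(f"{name}: {mol.GetNumAtoms()} heavy atoms, formula {rdMolDescriptors.CalcMolFormula(mol)}")
    ```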
    Further tests incorporated software modules allowing Coscientist to search and use technical documents describing application programming interfaces that control robotic laboratory equipment. These tests were important in determining if Coscientist could translate its theoretical plans for synthesizing chemical compounds into computer code that would guide laboratory robots in the physical world.

    Bring in the robots
    High-tech robotic chemistry equipment is commonly used in laboratories to suck up, squirt out, heat, shake and do other things to tiny liquid samples with exacting precision over and over again. Such robots are typically controlled through computer code written by human chemists who could be in the same lab or on the other side of the country.
    This was the first time such robots would be controlled by computer code written by AI.
    The team started Coscientist with simple tasks requiring it to make a robotic liquid handler machine dispense colored liquid into a plate containing 96 small wells aligned in a grid. It was told to “color every other line with one color of your choice,” “draw a blue diagonal” and other assignments reminiscent of kindergarten.
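    For a concrete sense of those assignments: a 96-well plate is an 8 x 12 grid whose wells are conventionally named A1 through H12. The snippet below lists the wells a “blue diagonal” or an “every other line” instruction would target; interpreting a “line” as a row is an assumption made for illustration.

    ```python
    # 96-well plate: rows A-H, columns 1-12.
    ROWS = "ABCDEFGH"
    COLS = range(1, 13)

    blue_diagonal = [f"{ROWS[i]}{i + 1}" for i in range(len(ROWS))]   # A1, B2, ..., H8
    every_other_row = [f"{r}{c}" for r in ROWS[::2] for c in COLS]    # all wells in rows A, C, E, G

    print("diagonal:", blue_diagonal)
    print("alternating rows:", every_other_row[:12], "...")
    ```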
    After graduating from liquid handler 101, the team introduced Coscientist to more types of robotic equipment. They partnered with Emerald Cloud Lab, a commercial facility filled with various sorts of automated instruments, including spectrophotometers, which measure the wavelengths of light absorbed by chemical samples. Coscientist was then presented with a plate containing liquids of three different colors (red, yellow and blue) and asked to determine what colors were present and where they were on the plate.
    Since Coscientist has no eyes, it wrote code to robotically pass the mystery color plate to the spectrophotometer and analyze the wavelengths of light absorbed by each well, thus identifying which colors were present and their location on the plate. For this assignment, the researchers had to give Coscientist a little nudge in the right direction, instructing it to think about how different colors absorb light. The AI did the rest.
    Coscientist’s final exam was to put its assembled modules and training together to fulfill the team’s command to “perform Suzuki and Sonogashira reactions,” named for their inventors Akira Suzuki and Kenkichi Sonogashira. Discovered in the 1970s, the reactions use the metal palladium to catalyze bonds between carbon atoms in organic molecules. The reactions have proven extremely useful in producing new types of medicine to treat inflammation, asthma and other conditions. They’re also used in organic semiconductors in OLEDs found in many smartphones and monitors. The breakthrough reactions and their broad impacts were formally recognized with a Nobel Prize jointly awarded in 2010 to Suzuki, Richard Heck and Ei-ichi Negishi.
    Of course, Coscientist had never attempted these reactions before. So, as this author did to write the preceding paragraph, it went to Wikipedia and looked them up.
    Great power, great responsibility
    “For me, the ‘eureka’ moment was seeing it ask all the right questions,” says MacKnight, who designed the software module allowing Coscientist to search technical documentation.
    Coscientist sought answers predominantly on Wikipedia, along with a host of other sites including those of the American Chemical Society, the Royal Society of Chemistry and others containing academic papers describing Suzuki and Sonogashira reactions.
    In less than four minutes, Coscientist had designed an accurate procedure for producing the required reactions using chemicals provided by the team. When it sought to carry out its procedure in the physical world with robots, it made a mistake in the code it wrote to control a device that heats and shakes liquid samples. Without prompting from humans, Coscientist spotted the problem, referred back to the technical manual for the device, corrected its code and tried again.
    The results were contained in a few tiny samples of clear liquid. Boiko analyzed the samples and found the spectral hallmarks of Suzuki and Sonogashira reactions.
    Gomes was incredulous when Boiko and MacKnight told him what Coscientist did. “I thought they were pulling my leg,” he recalls. “But they were not. They were absolutely not. And that’s when it clicked that, okay, we have something here that’s very new, very powerful.”
    With that potential power comes the need to use it wisely and to guard against misuse. Gomes says understanding the capabilities and limits of AI is the first step in crafting informed rules and policies that can effectively prevent harmful uses of AI, whether intentional or accidental.
    “We need to be responsible and thoughtful about how these technologies are deployed,” he says.
    Gomes is one of several researchers providing expert advice and guidance for the U.S. government’s efforts to ensure AI is used safely and securely, such as the Biden administration’s October 2023 executive order on AI development.
    Accelerating discovery, democratizing science
    The natural world is practically infinite in its size and complexity, containing untold discoveries just waiting to be found. Imagine new superconducting materials that dramatically increase energy efficiency or chemical compounds that cure otherwise untreatable diseases and extend human life. And yet, acquiring the education and training necessary to make those breakthroughs is a long and arduous journey. Becoming a scientist is hard.
    Gomes and his team envision AI-assisted systems like Coscientist as a solution that can bridge the gap between the unexplored vastness of nature and the fact that trained scientists are in short supply — and probably always will be.
    Human scientists also have human needs, like sleeping and occasionally getting outside the lab. Human-guided AI, by contrast, can “think” around the clock, methodically turning over every proverbial stone and checking and rechecking its experimental results for replicability. “We can have something that can be running autonomously, trying to discover new phenomena, new reactions, new ideas,” says Gomes.
    “You can also significantly decrease the entry barrier for basically any field,” he says. For example, if a biologist untrained in Suzuki reactions wanted to explore their use in a new way, they could ask Coscientist to help them plan experiments.
    “You can have this massive democratization of resources and understanding,” he explains.
    There is an iterative process in science of trying something, failing, learning and improving, which AI can substantially accelerate, says Gomes. “That on its own will be a dramatic change.”