More stories

  • in

    Transparent brain implant can read deep neural activity from the surface

    Researchers at the University of California San Diego have developed a neural implant that provides information about activity deep inside the brain while sitting on its surface. The implant is made up of a thin, transparent and flexible polymer strip that is packed with a dense array of graphene electrodes. The technology, tested in transgenic mice, brings the researchers a step closer to building a minimally invasive brain-computer interface (BCI) that provides high-resolution data about deep neural activity by using recordings from the brain surface.
    The work was published on Jan. 11 in Nature Nanotechnology.
    “We are expanding the spatial reach of neural recordings with this technology,” said study senior author Duygu Kuzum, a professor in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering. “Even though our implant resides on the brain’s surface, its design goes beyond the limits of physical sensing in that it can infer neural activity from deeper layers.”
    This work overcomes the limitations of current neural implant technologies. Existing surface arrays, for example, are minimally invasive, but they lack the ability to capture information beyond the brain’s outer layers. In contrast, electrode arrays with thin needles that penetrate the brain are capable of probing deeper layers, but they often lead to inflammation and scarring, compromising signal quality over time.
    The new neural implant developed at UC San Diego offers the best of both worlds.
    The implant is a thin, transparent and flexible polymer strip that conforms to the brain’s surface. The strip is embedded with a high-density array of tiny, circular graphene electrodes, each measuring 20 micrometers in diameter. Each electrode is connected by a micrometers-thin graphene wire to a circuit board.
    In tests on transgenic mice, the implant enabled the researchers to capture high-resolution information about two types of neural activity, electrical activity and calcium activity, at the same time. When placed on the surface of the brain, the implant recorded electrical signals from neurons in the outer layers. At the same time, the researchers used a two-photon microscope to shine laser light through the implant to image calcium spikes from neurons located as deep as 250 micrometers below the surface. The researchers found a correlation between surface electrical signals and calcium spikes in deeper layers. This correlation enabled the researchers to use surface electrical signals to train neural networks to predict calcium activity — not only for large populations of neurons, but also for individual neurons — at various depths.

    “The neural network model is trained to learn the relationship between the surface electrical recordings and the calcium ion activity of the neurons at depth,” said Kuzum. “Once it learns that relationship, we can use the model to predict the depth activity from the surface.”
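The idea of learning a mapping from surface recordings to depth activity can be sketched in a few lines. This is a hypothetical illustration, not the authors' architecture: the study used neural networks, while this sketch uses the simplest possible stand-in, a linear least-squares readout on synthetic data where a hidden nonlinear coupling links "surface" signals to "deep" activity.

```python
import numpy as np

# Hypothetical illustration: learn a mapping from multi-channel surface
# recordings to deep-layer calcium activity. Synthetic data; a linear
# readout stands in for the trained neural network described in the study.
rng = np.random.default_rng(0)

n_samples, n_electrodes, n_neurons = 2000, 64, 10
surface = rng.standard_normal((n_samples, n_electrodes))  # surface potentials
mixing = rng.standard_normal((n_electrodes, n_neurons))   # unknown coupling
calcium = np.tanh(surface @ mixing)                       # "deep" activity

# Fit the readout on a training split, evaluate on held-out data.
train, test = slice(0, 1500), slice(1500, None)
W, *_ = np.linalg.lstsq(surface[train], calcium[train], rcond=None)
pred = surface[test] @ W

corr = np.corrcoef(pred.ravel(), calcium[test].ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

Even this crude linear model recovers much of the deep activity once the relationship has been learned, which is the core of the approach: the heavy lifting is done offline during training, after which surface signals alone suffice.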
    An advantage of being able to predict calcium activity from electrical signals is that it overcomes the limitations of imaging experiments. When imaging calcium spikes, the subject’s head must be fixed under a microscope. Also, these experiments can only last for an hour or two at a time.
    “Since electrical recordings do not have these limitations, our technology makes it possible to conduct longer duration experiments in which the subject is free to move around and perform complex behavioral tasks,” said study co-first author Mehrdad Ramezani, an electrical and computer engineering Ph.D. student in Kuzum’s lab. “This can provide a more comprehensive understanding of neural activity in dynamic, real-world scenarios.”
    Designing and fabricating the neural implant
    The technology owes its success to several innovative design features: transparency and high electrode density combined with machine learning methods.
    “This new generation of transparent graphene electrodes embedded at high density enables us to sample neural activity with higher spatial resolution,” said Kuzum. “As a result, the quality of signals improves significantly. What makes this technology even more remarkable is the integration of machine learning methods, which make it possible to predict deep neural activity from surface signals.”
    This study was a collaborative effort among multiple research groups at UC San Diego. The team, led by Kuzum, one of the world leaders in developing multimodal neural interfaces, includes nanoengineering professor Ertugrul Cubukcu, who specializes in advanced micro- and nanofabrication techniques for graphene materials; electrical and computer engineering professor Vikash Gilja, whose lab integrates domain-specific knowledge from the fields of basic neuroscience, signal processing, and machine learning to decode neural signals; and neurobiology and neurosciences professor Takaki Komiyama, whose lab focuses on investigating neural circuit mechanisms that underlie flexible behaviors.

    Transparency is one of the key features of this neural implant. Traditional implants use opaque metal materials for their electrodes and wires, which block the view of neurons beneath the electrodes during imaging experiments. In contrast, an implant made using graphene is transparent, which provides a completely clear field of view for a microscope during imaging experiments.
    “Seamless integration of recording electrical signals and optical imaging of the neural activity at the same time is only possible with this technology,” said Kuzum. “Being able to conduct both experiments at the same time gives us more relevant data because we can see how the imaging experiments are time-coupled to the electrical recordings.”
    To make the implant completely transparent, the researchers used super thin, long graphene wires instead of traditional metal wires to connect the electrodes to the circuit board. However, fabricating a single layer of graphene as a thin, long wire is challenging because any defect will render the wire nonfunctional, explained Ramezani. “There may be a gap in the graphene wire that prevents the electrical signal from flowing through, so you basically end up with a broken wire.”
    The researchers addressed this issue using a clever technique. Instead of fabricating the wires as a single layer of graphene, they fabricated them as a double layer doped with nitric acid in the middle. “By having two layers of graphene on top of one another, there’s a good chance that defects in one layer will be masked by the other layer, ensuring the creation of fully functional, thin and long graphene wires with improved conductivity,” said Ramezani.
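The yield benefit of the double-layer trick follows from simple probability: a point on the wire fails only if both layers are defective there. A back-of-the-envelope sketch with hypothetical defect rates (the 1% figure and segment count are illustrative assumptions, not from the study):

```python
# Hypothetical numbers: why stacking two graphene layers rescues long wires.
# Assume each layer has an independent 1% chance of a conduction-breaking
# defect in any given wire segment.
p_defect = 0.01
n_segments = 200  # a long, thin wire spans many defect-sized segments

# Single layer: the wire conducts only if EVERY segment is defect-free.
yield_single = (1 - p_defect) ** n_segments

# Double layer: a segment fails only if BOTH layers are defective there.
yield_double = (1 - p_defect**2) ** n_segments

print(f"single-layer wire yield: {yield_single:.1%}")
print(f"double-layer wire yield: {yield_double:.1%}")
```

Squaring the per-segment failure probability is what turns a mostly-broken batch of long wires into a mostly-working one.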
    According to the researchers, this study demonstrates the most densely packed transparent electrode array on a surface-sitting neural implant to date. Achieving high density required fabricating extremely small graphene electrodes. This presented a considerable challenge, as shrinking graphene electrodes in size increases their impedance — this hinders the flow of electrical current needed for recording neural activity. To overcome this obstacle, the researchers used a microfabrication technique developed by Kuzum’s lab that involves depositing platinum nanoparticles onto the graphene electrodes. This approach significantly improved electron flow through the electrodes while keeping them tiny and transparent.
    Next steps
    The team will next focus on testing the technology in different animal models, with the ultimate goal of human translation in the future.
    Kuzum’s research group is also dedicated to using the technology to advance fundamental neuroscience research. In that spirit, they are sharing the technology with labs across the U.S. and Europe, contributing to diverse studies ranging from understanding how vascular activity is coupled to electrical activity in the brain to investigating how place cells in the brain are so efficient at creating spatial memory. To make this technology more widely available, Kuzum’s team has applied for a National Institutes of Health (NIH) grant to fund efforts in scaling up production and facilitating its adoption by researchers worldwide.
    “This technology can be used for so many different fundamental neuroscience investigations, and we are eager to do our part to accelerate progress in better understanding the human brain,” said Kuzum.
    Paper title: “High-density Transparent Graphene Arrays for Predicting Cellular Calcium Activity at Depth from Surface Potential Recordings.” Co-authors include Jeong-Hoon Kim*, Xin Liu, Chi Ren, Abdullah Alothman, Chawina De-Eknamkul and Madison N. Wilson, all at UC San Diego.
    *Study co-first author
    This research was supported by the Office of Naval Research (N000142012405, N000142312163 and N000141912545), the National Science Foundation (ECCS-2024776, ECCS-1752241 and ECCS-1734940), the National Institutes of Health (R21 EY029466, R21 EB026180, DP2 EB030992, R01 NS091010A, R01 EY025349, R01 DC014690, R21 NS109722 and P30 EY022589), the Pew Charitable Trusts, and the David and Lucile Packard Foundation. This work was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) at UC San Diego, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-1542148).

  • in

    Revolutionizing real-time data processing with edge computing and reservoir technology

    Traditional cloud computing faces various challenges when processing large amounts of data in real time. “Edge” computing is a promising alternative and can benefit from devices known as physical reservoirs. Researchers have now developed a novel memristor device for this purpose. It responds to electrical and optical signals and overcomes real-time processing limitations. When tested, it achieved up to 90.2% accuracy in digit identification, demonstrating its potential for applications in artificial intelligence systems and beyond.
    Every day, a significant amount of data related to weather, traffic, and social media undergoes real-time processing. In traditional cloud computing, this processing occurs on the cloud, raising concerns about issues such as data leaks, communication delays, slow speeds, and higher power consumption. Against this backdrop, “edge computing” presents a promising alternative. Located near users, it aims to distribute computations, thereby reducing the load and speeding up data processing. Specifically, edge AI, which involves AI processing at the edge, is expected to find applications in, for example, self-driving cars and machine anomaly prediction in factories.
    However, effective edge computing requires efficient and computationally inexpensive technology. One promising option is reservoir computing, a computational method designed for processing signals recorded over time. It transforms these signals into complex patterns using reservoirs that respond to them nonlinearly. In particular, physical reservoirs, which use the dynamics of physical systems, are both computationally inexpensive and efficient. However, their ability to process signals in real time is limited by the natural relaxation time of the physical system, which must typically be tuned to the task at hand to achieve the best learning performance.
    Recently, Professor Kentaro Kinoshita, a member of the Faculty of Advanced Engineering and the Department of Applied Physics at the Tokyo University of Science (TUS), and Mr. Yutaro Yamazaki from the Graduate School of Science and the same department at TUS developed an optical device with features that support physical reservoir computing and allow real-time signal processing across a broad range of timescales within a single device. Their findings were published in Advanced Science on 20 November 2023.
    Speaking of their motivation for the study, Prof. Kinoshita explains: “The devices developed in this research will enable a single device to process time-series signals with various timescales generated in our living environment in real time. In particular, we hope to realize an AI device to utilize in the edge domain.”
    In their study, the duo created a special device using Sn-doped In2O3 and Nb-doped SrTiO3 (denoted as ITO/Nb:STO), which responds to both electrical and optical signals. They tested the electrical features of the device to confirm that it functions as a memristor (a memory device that can change its electrical resistance). The team also explored the influence of ultraviolet light on ITO/Nb:STO by varying the voltage and observing changes in the current. The results suggested that this device can modify the relaxation time of the photo-induced current according to the voltage, making it a potential candidate for a physical reservoir.
    Furthermore, the team tested the effectiveness of ITO/Nb:STO as a physical reservoir by using it for classifying handwritten digit images in the MNIST (Modified National Institute of Standards and Technology) dataset. To their delight, the device achieved a classification accuracy of up to 90.2%. Additionally, to understand the role of the physical reservoir, the team ran experiments without it, which resulted in a relatively lower classification accuracy of 85.1%. These findings show that the ITO/Nb:STO junction device improves classification accuracy while keeping computational costs lower, proving its value as a physical reservoir.
    “In the past, our research group has focused on research and development of materials applicable to physical reservoir computing. Accordingly, we fabricated these devices with the aim to realize a physical reservoir in which the relaxation time of photo-induced current can be arbitrarily controlled by voltage,” says Prof. Kinoshita.
    In summary, this study presents a novel memristor device that can adjust its response timescale through voltage variation and exhibits enhanced learning capabilities, making it a promising AI device for edge computing. This, in turn, could pave the way for single devices that can effectively handle signals of varied durations found in real-world environments.

  • in

    Generating stable qubits at room temperature

    In a study published in Science Advances, a group of researchers led by Associate Professor Nobuhiro Yanai from Kyushu University’s Faculty of Engineering, in collaboration with Associate Professor Kiyoshi Miyata from Kyushu University and Professor Yasuhiro Kobori of Kobe University, reports that they have achieved quantum coherence at room temperature: the ability of a quantum system to maintain a well-defined state over time without being affected by surrounding disturbances.
    This breakthrough was made possible by embedding a chromophore, a dye molecule that absorbs light and emits color, in a metal-organic framework, or MOF, a nanoporous crystalline material composed of metal ions and organic ligands.
    Their findings mark a crucial advancement for quantum computing and sensing technologies. While quantum computing is positioned as the next major advancement of computing technology, quantum sensing is a sensing technology that utilizes the quantum mechanical properties of qubits (quantum analogs of bits in classical computing that can exist in a superposition of 0 and 1).
    Various systems can be employed to implement qubits, with one approach being the utilization of intrinsic spin — a quantum property related to a particle’s magnetic moment — of an electron. Electrons have two spin states: spin up and spin down. Qubits based on spin can exist in a combination of these states and can be “entangled,” allowing the state of one qubit to be inferred from another.
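The notions of superposition and entanglement sketched above can be illustrated with a toy two-qubit state. This is a generic textbook example (a Bell state), not the spin system from the study:

```python
import numpy as np

# Toy illustration of the qubit ideas above: a qubit state is a normalized
# 2-vector, and entanglement correlates two qubits so that measuring one
# determines the other.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Superposition: an equal mix of up and down, still a valid (unit) state.
plus = (up + down) / np.sqrt(2)
assert np.isclose(np.linalg.norm(plus), 1.0)

# Entangled Bell state on two qubits: (|up,up> + |down,down>) / sqrt(2).
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Probabilities of the four joint measurement outcomes.
probs = bell ** 2
print(dict(zip(["uu", "ud", "du", "dd"], probs.round(2))))
# Only the correlated outcomes (uu, dd) ever occur, so observing one
# qubit's spin immediately tells you the other's.
```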
    By leveraging the extreme sensitivity of quantum entangled states to environmental noise, quantum sensing technology is expected to enable sensing with higher resolution and sensitivity than traditional techniques. However, it has so far been challenging to entangle four electrons and make them respond to external molecules, that is, to achieve quantum sensing using a nanoporous MOF.
    Notably, chromophores can be used to excite electrons with desirable electron spins at room temperature through a process called singlet fission. However, molecular motion at room temperature causes the quantum information stored in qubits to lose superposition and entanglement. As a result, quantum coherence has usually only been achievable at liquid nitrogen temperatures.
    To suppress the molecular motion and achieve room-temperature quantum coherence, the researchers introduced a chromophore based on pentacene (polycyclic aromatic hydrocarbon consisting of five linearly fused benzene rings) in a UiO-type MOF. “The MOF in this work is a unique system that can densely accumulate chromophores. Additionally, the nanopores inside the crystal enable the chromophore to rotate, but at a very restrained angle,” says Yanai.
    The MOF structure facilitated enough motion in the pentacene units to allow the electrons to transition from the triplet state to a quintet state, while also sufficiently suppressing motion at room temperature to maintain quantum coherence of the quintet multiexciton state. Upon photoexciting electrons with microwave pulses, the researchers could observe the quantum coherence of the state for over 100 nanoseconds at room temperature. “This is the first room-temperature quantum coherence of entangled quintets,” remarks an excited Kobori.
    While the coherence was observed only for nanoseconds, the findings will pave the way for designing materials for the generation of multiple qubits at room temperature. “It will be possible to generate quintet multiexciton state qubits more efficiently in the future by searching for guest molecules that can induce more such suppressed motions and by developing suitable MOF structures,” speculates Yanai. “This can open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds.”

  • in

    First direct imaging of small noble gas clusters at room temperature

    For the first time, scientists have succeeded in the stabilisation and direct imaging of small clusters of noble gas atoms at room temperature. This achievement opens up exciting possibilities for fundamental research in condensed matter physics and applications in quantum information technology. The key to this breakthrough, achieved by scientists at the University of Vienna in collaboration with colleagues at the University of Helsinki, was the confinement of noble gas atoms between two layers of graphene.
    This method overcomes the difficulty that noble gases do not form stable structures under experimental conditions at ambient temperatures. Details of the method and the first ever electron microscopy images of noble gas structures (krypton and xenon) have now been published in Nature Materials.
    A Noble Trap
    Jani Kotakoski’s group at the University of Vienna was investigating the use of ion irradiation to modify the properties of graphene and other two-dimensional materials when they noticed something unusual: noble gas ions used for irradiation can get trapped between two sheets of graphene. This happens when the ions are fast enough to pass through the first graphene layer but not the second. Once trapped between the layers, the noble gas atoms are free to move, because they do not form chemical bonds. However, to accommodate the noble gas atoms, the graphene bends to form tiny pockets, where two or more atoms can meet and form regular, densely packed, two-dimensional noble gas nanoclusters.
    Fun with Microscope
    “We used scanning transmission electron microscopy to observe these clusters, and they are really fascinating and a lot of fun to watch. They rotate, jump, grow and shrink as we image them,” says Manuel Längle, lead author of the study. “Getting the atoms between the layers was the hardest part of the work. Now that we have achieved this, we have a simple system for studying fundamental processes related to material growth and behavior,” he adds. Commenting on the group’s future work, Jani Kotakoski says: “The next steps are to study the properties of clusters with different noble gases and how they behave at low and high temperatures. Due to the use of noble gases in light sources and lasers, these new structures may in future enable applications, for example, in quantum information technology.”

  • in

    New study pinpoints the weaknesses in AI

    ChatGPT and other solutions built on Machine Learning are surging. But even the most successful algorithms have limitations. Researchers from the University of Copenhagen have become the first in the world to prove mathematically that, apart from simple problems, it is not possible to create AI algorithms that will always be stable. The study may lead to guidelines on how to better test algorithms, and it reminds us that machines do not have human intelligence after all.
    Machines interpret medical scanning images more accurately than doctors, they translate foreign languages, and may soon be able to drive cars more safely than humans. However, even the best algorithms have weaknesses. A research team at the Department of Computer Science, University of Copenhagen, is trying to reveal them.
    Take an automated vehicle reading a road sign as an example. If someone has placed a sticker on the sign, this will not distract a human driver. But a machine may easily be put off because the sign is now different from the ones it was trained on.
    “We would like algorithms to be stable in the sense that if the input is changed slightly, the output will remain almost the same. Real life involves all kinds of noise which humans are used to ignoring, while machines can get confused,” says Professor Amir Yehudayoff, who heads the group.
    A language for discussing weaknesses
    As the first in the world, the group, together with researchers from other countries, has proven mathematically that, apart from simple problems, it is not possible to create algorithms for Machine Learning that will always be stable. The scientific article describing the result was approved for publication at one of the leading international conferences on theoretical computer science, Foundations of Computer Science (FOCS).
    “I would like to note that we have not worked directly on automated car applications. Still, this seems like a problem too complex for algorithms to always be stable,” says Amir Yehudayoff, adding that this does not necessarily imply major consequences in relation to development of automated cars:
    “If the algorithm only errs under a few very rare circumstances this may well be acceptable. But if it does so under a large collection of circumstances, it is bad news.”

    The scientific article cannot be applied by industry for identifying bugs in its algorithms. This wasn’t the intention, the professor explains:
    “We are developing a language for discussing the weaknesses in Machine Learning algorithms. This may lead to development of guidelines that describe how algorithms should be tested. And in the long run this may again lead to development of better and more stable algorithms.”
    From intuition to mathematics
    A possible application could be for testing algorithms for protection of digital privacy.
    “Some company might claim to have developed an absolutely secure solution for privacy protection. Firstly, our methodology might help to establish that the solution cannot be absolutely secure. Secondly, it will be able to pinpoint points of weakness,” says Amir Yehudayoff.
    First and foremost, though, the scientific article contributes to theory. Especially the mathematical content is groundbreaking, he adds:
    “We understand intuitively, that a stable algorithm should work almost as well as before when exposed to a small amount of input noise. Just like the road sign with a sticker on it. But as theoretical computer scientists we need a firm definition. We must be able to describe the problem in the language of mathematics. Exactly how much noise must the algorithm be able to withstand, and how close to the original output should the output be if we are to accept the algorithm to be stable? This is what we have suggested an answer to.”
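The intuition in the quote above can be phrased as an empirical probe, shown below as a toy sketch. This is not the paper's formal definition, just an illustration of what "small input noise should not flip the output" means operationally, using an invented two-dimensional classifier:

```python
import numpy as np

def predict(x):
    # Toy linear classifier on 2D points (hypothetical, for illustration).
    return int(x[0] + x[1] > 0)

def stable_at(x, delta, trials=1000, seed=0):
    # Empirical stability probe: does any perturbation within +/- delta
    # change the prediction at x?
    rng = np.random.default_rng(seed)
    base = predict(x)
    for _ in range(trials):
        noise = rng.uniform(-delta, delta, size=2)
        if predict(x + noise) != base:
            return False
    return True

print(stable_at(np.array([2.0, 2.0]), delta=0.5))   # far from the boundary
print(stable_at(np.array([0.05, 0.0]), delta=0.5))  # near the decision boundary
```

The same classifier is stable at one point and unstable at another, which is why a formal definition must quantify both the size of the allowed noise and where the guarantee is required to hold.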

    Important to keep limitations in mind
    The scientific article has received large interest from colleagues in the theoretical computer science world, but not from the tech industry. Not yet at least.
    “You should always expect some delay between a new theoretical development and interest from people working in applications,” says Amir Yehudayoff while adding smilingly:
    “And some theoretical developments will remain unnoticed forever.”
    However, he does not see that happening in this case:
    “Machine Learning continues to progress rapidly, and it is important to remember that even solutions which are very successful in the real world still do have limitations. The machines may sometimes seem to be able to think, but after all they do not possess human intelligence. This is important to keep in mind.”

  • in

    Artificial intelligence helps unlock advances in wireless communications

    A new wave of communication technology is quickly approaching, and researchers at UBC Okanagan are investigating ways to configure next-generation mobile networks.
    Dr. Anas Chaaban works in the UBCO Communication Theory Lab where researchers are busy analyzing a theoretical wireless communication architecture that will be optimized to handle increasing data loads while sending and receiving data faster.
    Next-generation mobile networks are expected to outperform 5G on many fronts such as reliability, coverage and intelligence, explains Dr. Chaaban, an Assistant Professor in UBCO’s School of Engineering.
    And the benefits go far beyond speed. The next generation of technology is expected to be a fully integrated system that allows for instantaneous communications between devices, consumers and the surrounding environment, he says.
    These new networks will call for intelligent architectures that support massive connectivity, ultra-low latency, ultra-high reliability, high-quality experience, energy efficiency and lower deployment costs.
    “One way to meet these stringent requirements is to rethink traditional communication techniques by exploiting recent advances in artificial intelligence,” he says. “Traditionally, functions such as waveform design, channel estimation, interference mitigation and error detection and correction are developed based on theoretical models and assumptions. This traditional approach is not capable of adapting to new challenges introduced by emerging technologies.”
    Using a technology called transformer masked autoencoders, the researchers are developing techniques that enhance efficiency, adaptability and robustness. Dr. Chaaban says that while there are many challenges in this research, it is expected to play an important role in next-generation communication networks.
    “We are working on ways to take content like images or video files and break them down into smaller packets in order to transport them to a recipient,” he says. “The interesting thing is that we can throw away a number of packets and rely on AI to recover them at the recipient, which then links them back together to recreate the image or video.”
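The drop-and-recover idea can be demonstrated end to end on a toy signal. To be clear about assumptions: the researchers use trained masked autoencoders for the recovery step, whereas this sketch substitutes plain interpolation, which only works because the example signal is smooth; the packet sizes and drop rate are made up.

```python
import numpy as np

# Illustrative stand-in for the masked-autoencoder idea: split content into
# packets, deliberately drop some in transit, and reconstruct the gaps at
# the receiver. A trained model would do the inference; interpolation
# shows the principle on a smooth signal.
rng = np.random.default_rng(2)

signal = np.sin(np.linspace(0, 4 * np.pi, 256))  # the "content"
packets = signal.reshape(32, 8)                  # 32 packets of 8 samples

drop = rng.choice(32, size=8, replace=False)     # lose 25% of the packets
received = packets.copy()
received[drop] = np.nan

# Receiver: fill the missing samples from the surviving neighbours.
flat = received.ravel()
idx = np.arange(flat.size)
ok = ~np.isnan(flat)
reconstructed = np.interp(idx, idx[ok], flat[ok])

mse = np.mean((reconstructed - signal) ** 2)
print(f"reconstruction MSE with 25% of packets dropped: {mse:.4f}")
```

The payoff mirrors the quote: the transmitter can afford to lose (or skip sending) a fraction of the packets because the receiver can infer what is missing.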
    The experience, even today, is something users take for granted but next-generation technology — where virtual reality will be a part of everyday communications including cell phone calls — is positioned to improve wireless systems substantially, he adds. The potential is unparalleled.
    “AI provides us with the power to develop complex architectures that propel communications technologies forward to cope with the proliferation of advanced technologies such as virtual reality,” says Chaaban. “By collectively tackling these intricacies, the next generation of wireless technology can usher in a new era of adaptive, efficient and secure communication networks.”

  • in

    Toward efficient spintronic materials

    A research team from Osaka University, The University of Tokyo, and Tokyo Institute of Technology revealed the microscopic origin of the large magnetoelectric effect in interfacial multiferroics composed of the ferromagnetic Co2FeSi Heusler alloy and a piezoelectric material. They observed element-specific changes in the orbital magnetic moments in the interfacial multiferroic material using X-ray Magnetic Circular Dichroism (XMCD) measurements under an applied electric field, and they showed that this change contributes to the large magnetoelectric effect.
    The findings provide guidelines for designing materials with a large magnetoelectric effect and will be useful in developing new information writing technology that consumes less power in spintronic memory devices.
    The research results were published in an article, “Strain-induced specific orbital control in a Heusler alloy-based interfacial multiferroics,” in NPG Asia Materials.
    Controlling the direction of magnetization using low electric field is necessary for developing efficient spintronic devices. In spintronics, properties of an electron’s spin or magnetic moment are used to store information. The electron spins can be manipulated by straining orbital magnetic moments to create a high-performance magnetoelectric effect.
    Japanese researchers, including Jun Okabayashi from the University of Tokyo, revealed a strain-induced orbital control mechanism in interfacial multiferroics. In multiferroic material, the magnetic property can be controlled using an electric field — potentially leading to efficient spintronic devices. The interfacial multiferroics that Okabayashi and his colleagues studied consist of a junction between a ferromagnetic material and a piezoelectric material. The direction of magnetization in the material could be controlled by applying voltage.
    The team showed the microscopic origin of the large magnetoelectric effect in the material. The strain generated from the piezoelectric material could change the orbital magnetic moment of the ferromagnetic material. They revealed element-specific orbital control in the interfacial multiferroic material using reversible strain and provided guidelines for designing materials with a large magnetoelectric effect. The findings will be useful in developing new information writing technology that consumes less power.

  • in

    Integrating dimensions to get more out of Moore’s Law and advance electronics

    Moore’s Law, a fundamental scaling principle for electronic devices, forecasts that the number of transistors on a chip will double every two years, ensuring more computing power — but a limit exists.
    Today’s most advanced chips house nearly 50 billion transistors within a space no larger than your thumbnail. The task of cramming even more transistors into that confined area has become more and more difficult, according to Penn State researchers.
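The doubling claim is easy to sanity-check with arithmetic. The starting point below is an illustrative historical anchor (the Intel 4004 of 1971, with roughly 2,300 transistors), not a figure from the study:

```python
# Back-of-the-envelope check of the "doubling every two years" claim,
# anchored to the Intel 4004 (1971, ~2,300 transistors).
start_year, start_count = 1971, 2300
year = 2023

doublings = (year - start_year) // 2
projected = start_count * 2 ** doublings

print(f"{doublings} doublings by {year}")
print(f"projected: ~{projected / 1e9:.0f} billion transistors")
```

The projection lands within a small factor of the tens of billions of transistors on today's most advanced chips, which is why the law has held up as a useful rule of thumb even as the physics of further scaling gets harder.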
    In a study published today (Jan. 10) in the journal Nature, Saptarshi Das, an associate professor of engineering science and mechanics and co-corresponding author of the study, and his team suggest a remedy: seamlessly implementing 3D integration with 2D materials.
    In the semiconductor world, 3D integration means vertically stacking multiple layers of semiconductor devices. This approach not only facilitates the packing of more silicon-based transistors onto a computer chip, commonly referred to as “More Moore,” but also permits the use of transistors made from 2D materials to incorporate diverse functionalities within various layers of the stack, a concept known as “More than Moore.”
    With the work outlined in the study, Das and the team demonstrate feasible paths beyond current scaling to achieve both More Moore and More than Moore through monolithic 3D integration. Monolithic 3D integration is a fabrication process in which researchers build each layer of devices directly on top of the one below, as opposed to the traditional process of stacking independently fabricated layers.
    “Monolithic 3D integration offers the highest density of vertical connections as it does not rely on bonding of two pre-patterned chips — which would require microbumps where two chips are bonded together — so you have more space to make connections,” said Najam Sakib, graduate research assistant in engineering science and mechanics and co-author of the study.
    Monolithic 3D integration faces significant challenges, though, according to Darsith Jayachandran, graduate research assistant in engineering science and mechanics and co-corresponding author of the study, since conventional silicon components would melt under the processing temperatures.

    “One challenge is the process temperature ceiling of 450 degrees Celsius (C) for back-end integration for silicon-based chips — our monolithic 3D integration approach drops that temperature significantly to less than 200 C,” Jayachandran said, explaining that the process temperature ceiling is the maximum temperature allowed before damaging the prefabricated structures. “Incompatible process temperature budgets make monolithic 3D integration challenging with silicon chips, but 2D materials can withstand temperatures needed for the process.”
    The researchers used existing techniques for their approach, but they are the first to successfully achieve monolithic 3D integration at this scale using 2D transistors made with 2D semiconductors called transition metal dichalcogenides.
    The ability to vertically stack the devices in 3D integration also enabled more energy-efficient computing because it solved a surprising problem for such tiny things as transistors on a computer chip: distance.
    “By stacking devices vertically on top of each other, you’re decreasing the distance between devices, and therefore, you’re decreasing the lag and also the power consumption,” said Rahul Pendurthi, graduate research assistant in engineering science and mechanics and co-corresponding author of the study.
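    The lag and power savings follow from basic interconnect physics: the delay of an unrepeated wire grows roughly quadratically with its length (distributed RC), while switching energy grows linearly with its capacitance, and hence with length. A minimal sketch with illustrative per-micrometer resistance and capacitance values (the numbers are assumptions for illustration, not figures from the study):

    ```python
    def wire_delay_energy(length_um, r_per_um=2.0, c_per_um=2e-16, v=0.7):
        """Illustrative distributed-RC wire model.
        Delay ~ 0.5*R*C grows with length squared;
        switching energy C*V^2 grows linearly with length."""
        r = r_per_um * length_um   # total wire resistance (ohms)
        c = c_per_um * length_um   # total wire capacitance (farads)
        delay = 0.5 * r * c        # Elmore delay of a distributed RC line (s)
        energy = c * v ** 2        # energy to charge the wire once (joules)
        return delay, energy

    d_long, e_long = wire_delay_energy(1000)  # 1 mm planar route
    d_short, e_short = wire_delay_energy(10)  # 10 um vertical hop
    # Shortening the route 100x cuts delay ~10,000x and energy ~100x.
    print(d_long / d_short, e_long / e_short)
    ```

    Real interconnects use repeaters and more detailed models, but the scaling trend is why shrinking device-to-device distance through vertical stacking pays off in both speed and power.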
    By decreasing the distance between devices, the researchers achieved “More Moore.” By incorporating transistors made with 2D materials, they met the “More than Moore” criterion as well. 2D materials are known for their unique electronic and optical properties, including sensitivity to light, which makes them ideal as sensors. This is useful, the researchers said, as the number of connected devices and edge devices — things like smartphones or wireless home weather stations that gather data on the ‘edge’ of a network — continues to increase.
    “‘More Than Moore’ refers to a concept in the tech world where we are not just making computer chips smaller and faster, but also with more functionalities,” said Muhtasim Ul Karim Sadaf, graduate research assistant in engineering science and mechanics and co-author of the study. “It is about adding new and useful features to our electronic devices, like better sensors, improved battery management or other special functions, to make our gadgets smarter and more versatile.”
    Using 2D devices for 3D integration has several other advantages, the researchers said. One is superior carrier mobility, a measure of how readily electrical charge moves through a semiconductor. Another is that 2D materials are ultra-thin, allowing the researchers to fit more transistors on each tier of the 3D stack and deliver more computing power.

    While most academic research involves small-scale prototypes, this study demonstrated 3D integration at a massive scale, characterizing tens of thousands of devices. According to Das, this achievement bridges the gap between academia and industry and could lead to future partnerships where industry leverages Penn State’s 2D materials expertise and facilities. The advance in scaling was enabled by the availability of high-quality, wafer-scale transition metal dichalcogenides developed by researchers at Penn State’s Two-Dimensional Crystal Consortium (2DCC-MIP), a U.S. National Science Foundation (NSF) Materials Innovation Platform and national user facility.
    “This breakthrough demonstrates yet again the essential role of materials research as the foundation of the semiconductor industry and U.S. competitiveness,” said Charles Ying, program director for NSF’s Materials Innovation Platforms. “Years of effort by Penn State’s Two-Dimensional Crystal Consortium to improve the quality and size of 2D materials have made it possible to achieve 3D integration of semiconductors at a size that can be transformative for electronics.”
    According to Das, this technological advancement is only the first step.
    “Our ability to demonstrate, at wafer scale, a huge number of devices shows that we have been able to translate this research to a scale which can be appreciated by the semiconductor industry,” Das said. “We have put 30,000 transistors in each tier, which may be a record number. This puts Penn State in a very unique position to lead some of the work and partner with the U.S. semiconductor industry in advancing this research.”
    Along with Das, Jayachandran, Pendurthi, Sadaf and Sakib, other authors include Andrew Pannone, doctoral student in engineering science and mechanics; Chen Chen, assistant research professor in 2DCC-MIP; Ying Han, postdoctoral researcher in mechanical engineering; Nicholas Trainor, doctoral student in materials science and engineering; Shalini Kumari, postdoctoral scholar; Thomas McKnight, doctoral student in materials science and engineering; Joan Redwing, director of the 2DCC-MIP and distinguished professor of materials science and engineering and of electrical engineering; and Yang Yang, assistant professor of engineering science and mechanics.
    The U.S. National Science Foundation and Army Research Office supported this research.