More stories

  • Innovations in depth from focus/defocus pave the way to more capable computer vision systems

    In several applications of computer vision, such as augmented reality and self-driving cars, estimating the distance between objects and the camera is an essential task. Depth from focus/defocus is one technique for estimating this distance, using the blur in the images as a cue. It usually requires a stack of images of the same scene taken with different focus distances, a set known as a focal stack.
    Over the past decade or so, scientists have proposed many different methods for depth from focus/defocus, most of which can be divided into two categories. The first category includes model-based methods, which use mathematical and optics models to estimate scene depth based on sharpness or blur. The main problem with such methods, however, is that they fail for texture-less surfaces which look virtually the same across the entire focal stack.
    The second category includes learning-based methods, which can be trained to perform depth from focus/defocus efficiently, even for texture-less surfaces. However, these approaches fail if the camera settings used for an input focal stack are different from those used in the training dataset.
    Now, a team of researchers from Japan has come up with an innovative method for depth from focus/defocus that addresses both of these limitations at once. Their study, published in the International Journal of Computer Vision, was led by Yasuhiro Mukaigawa and Yuki Fujimura from Nara Institute of Science and Technology (NAIST), Japan.
    The proposed technique, dubbed deep depth from focal stack (DDFS), combines model-based depth estimation with a learning framework to get the best of both worlds. Inspired by a strategy used in stereo vision, DDFS involves establishing a ‘cost volume’ based on the input focal stack, the camera settings, and a lens defocus model. Simply put, the cost volume represents a set of depth hypotheses — potential depth values for each pixel — and an associated cost value calculated on the basis of consistency between images in the focal stack. “The cost volume imposes a constraint between the defocus images and scene depth, serving as an intermediate representation that enables depth estimation with different camera settings at training and test times,” explains Mukaigawa.
    The DDFS method also employs an encoder-decoder network, a commonly used machine learning architecture. This network estimates the scene depth progressively in a coarse-to-fine fashion, using ‘cost aggregation’ at each stage for learning localized structures in the images adaptively.
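    To make the cost-volume idea concrete, here is a minimal Python sketch of how such a volume might be assembled from a focal stack under a simple Gaussian defocus model. The lens parameters, the thin-lens blur formula, and the use of the sharpest image as an all-in-focus proxy are illustrative assumptions for this sketch, not the authors’ DDFS implementation, which instead feeds the volume into the encoder-decoder network described above.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def defocus_sigma(depth, focus_dist, focal_len=0.05, f_number=2.0, pixel_size=1e-5):
        # Thin-lens circle of confusion converted to a Gaussian blur radius in pixels
        # (a simplified defocus model chosen purely for illustration).
        aperture = focal_len / f_number
        coc = aperture * focal_len * abs(depth - focus_dist) / (depth * (focus_dist - focal_len))
        return max(coc / pixel_size, 1e-3)

    def build_cost_volume(focal_stack, focus_dists, depth_hypotheses):
        # For each depth hypothesis, blur an all-in-focus proxy by the amount the lens
        # model predicts at every focus setting and penalize disagreement with the
        # observed images; low cost means the hypothesis is consistent with the stack.
        proxy = focal_stack[int(np.argmax([img.var() for img in focal_stack]))]
        cost = np.zeros((len(depth_hypotheses),) + proxy.shape)
        for d, depth in enumerate(depth_hypotheses):
            for img, fd in zip(focal_stack, focus_dists):
                predicted = gaussian_filter(proxy, sigma=defocus_sigma(depth, fd))
                cost[d] += (img - predicted) ** 2
        return cost  # shape (num_hypotheses, H, W); a learned network then aggregates this

    # Naive readout for comparison: depth_map = depth_hypotheses[np.argmin(cost, axis=0)]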
    The researchers compared the performance of DDFS with that of other state-of-the-art depth from focus/defocus methods. Notably, the proposed approach outperformed most methods in various metrics on several image datasets. Additional experiments on focal stacks captured with the research team’s own camera further demonstrated the potential of DDFS, which remained useful even with only a few images in the input stack, unlike other techniques.
    Overall, DDFS could serve as a promising approach for applications where depth estimation is required, including robotics, autonomous vehicles, 3D image reconstruction, virtual and augmented reality, and surveillance. “Our method with camera-setting invariance can help extend the applicability of learning-based depth estimation techniques,” concludes Mukaigawa.
    Here’s hoping that this study paves the way to more capable computer vision systems.

  • New AI tool discovers realistic ‘metamaterials’ with unusual properties

    The properties of normal materials, such as stiffness and flexibility, are determined by the molecular composition of the material, but the properties of metamaterials are determined by the geometry of the structure from which they are built. Researchers design these structures digitally and then have them 3D-printed. The resulting metamaterials can exhibit unnatural and extreme properties. Researchers have, for instance, designed metamaterials that, despite being solid, behave like a fluid.
    “Traditionally, designers use the materials available to them to design a new device or a machine. The problem with that is that the range of available material properties is limited. Some properties that we would like to have, just don’t exist in nature. Our approach is: tell us what you want to have as properties and we engineer an appropriate material with those properties. What you will then get, is not really a material but something in-between a structure and a material, a metamaterial,” says professor Amir Zadpoor of the Department of Biomechanical Engineering.
    Inverse design
    Such a material discovery process requires solving a so-called inverse problem: the problem of finding the geometry that gives rise to the properties you desire. Inverse problems are notoriously difficult to solve, which is where AI comes into the picture. TU Delft researchers have developed deep learning models that solve these inverse problems.
    “Even when inverse problems were solved in the past, they have been limited by the simplifying assumption that the small-scale geometry can be made from an infinite number of building blocks. The problem with that assumption is that metamaterials are usually made by 3D-printing and real 3D-printers have a limited resolution, which limits the number of building blocks that fit within a given device,” says first author Dr. Helda Pahlavani.
    The AI models developed by TU Delft researchers break new ground by bypassing any such simplifying assumptions. “So we can now simply ask: how many building blocks does your manufacturing technique allow you to accommodate in your device? The model then finds the geometry that gives you your desired properties for the number of building blocks that you can actually manufacture.”
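    As a rough illustration of what an inverse-design model can look like, here is a minimal PyTorch sketch that maps a handful of desired property values, together with the number of building blocks the printer can resolve, to a distribution over block types on a small unit-cell grid. The architecture, layer sizes, and property names are assumptions made for this sketch; they are not the network published by the TU Delft team.

    import torch
    import torch.nn as nn

    class InverseDesigner(nn.Module):
        # Maps target properties plus an available-building-block budget to a unit-cell
        # layout, i.e. a per-cell probability distribution over block types.
        def __init__(self, n_properties=3, grid=8, n_block_types=4):
            super().__init__()
            self.grid, self.n_block_types = grid, n_block_types
            self.net = nn.Sequential(
                nn.Linear(n_properties + 1, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, grid * grid * n_block_types),
            )

        def forward(self, target_props, n_blocks_available):
            x = torch.cat([target_props, n_blocks_available], dim=-1)
            logits = self.net(x).view(-1, self.grid * self.grid, self.n_block_types)
            return logits.softmax(dim=-1)

    designer = InverseDesigner()
    props = torch.tensor([[1.2, 0.8, 0.3]])   # hypothetical target property values
    budget = torch.tensor([[64.0]])           # blocks the manufacturing process can resolve
    layout = designer(props, budget)          # shape (1, 64 cells, 4 block types)

    In practice, such a network would be trained on pairs of candidate geometries and their simulated properties, and the proposed designs could then be screened further, for example for durability, as discussed below.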
    Unlocking full potential
    A major practical problem neglected in previous research has been the durability of metamaterials. Most existing designs break once they are used a few times, because existing metamaterial design approaches do not take durability into account. “So far, it has been only about what properties can be achieved. Our study considers durability and selects the most durable designs from a large pool of design candidates. This makes our designs really practical and not just theoretical adventures,” says Zadpoor.
    The possibilities of metamaterials seem endless, but the full potential is far from being realised, says assistant professor Mohammad J. Mirzaali, corresponding author of the publication. This is because finding the optimal design of a metamaterial is currently still largely based on intuition, involves trial and error, and is therefore labour-intensive. Using an inverse design process, where the desired properties are the starting point of the design, is still very rare within the metamaterials field. “But we think the step we have taken is revolutionary in the field of metamaterials. It could lead to all kinds of new applications.” There are possible applications in orthopaedic implants, surgical instruments, soft robots, adaptive mirrors, and exo-suits.

  • Researchers show classical computers can keep up with, and surpass, their quantum counterparts

    Quantum computing has been hailed as a technology that can outperform classical computing in both speed and memory usage, potentially opening the way to making predictions of physical phenomena not previously possible.
    Many see quantum computing’s advent as marking a paradigm shift from classical, or conventional, computing. Conventional computers process information in the form of digital bits (0s and 1s), while quantum computers deploy quantum bits (qubits) that store quantum information in superpositions of 0 and 1. Under certain conditions this ability to process and store information in qubits can be used to design quantum algorithms that drastically outperform their classical counterparts. Notably, the ability of qubits to hold superpositions of 0 and 1 makes it difficult for classical computers to perfectly emulate quantum ones.
    However, quantum computers are finicky and have a tendency to lose information. Moreover, even if information loss can be avoided, it is difficult to translate that quantum information into classical information, which is necessary to yield a useful computation.
    Classical computers suffer from neither of those two problems. Moreover, cleverly devised classical algorithms can further exploit the twin challenges of information loss and translation to mimic a quantum computer with far fewer resources than previously thought — as recently reported in a research paper in the journal PRX Quantum.
    The scientists’ results show that classical computing can be reconfigured to perform faster and more accurate calculations than state-of-the-art quantum computers.
    This breakthrough was achieved with an algorithm that keeps only part of the information stored in the quantum state — and just enough to be able to accurately compute the final outcome.
    “This work shows that there are many potential routes to improving computations, encompassing both classical and quantum approaches,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and one of the paper’s authors. “Moreover, our work highlights how difficult it is to achieve quantum advantage with an error-prone quantum computer.”
    In seeking ways to optimize classical computing, Sels and his colleagues at the Simons Foundation focused on a type of tensor network that faithfully represents the interactions between the qubits. Those types of networks have been notoriously hard to deal with, but recent advances in the field now allow these networks to be optimized with tools borrowed from statistical inference.

    The authors compare the work of the algorithm to the compression of an image into a JPEG file, which allows large images to be stored using less space by discarding some information, with barely perceptible loss in image quality.
    “Choosing different structures for the tensor network corresponds to choosing different forms of compression, like different formats for your image,” says the Flatiron Institute’s Joseph Tindall, who led the project. “We are successfully developing tools for working with a wide range of different tensor networks. This work reflects that, and we are confident that we will soon be raising the bar for quantum computing even further.”
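    The basic compression step behind such tensor networks can be illustrated with a truncated singular value decomposition across one cut of a many-qubit state: keep only the largest singular values and discard the rest, much as JPEG discards fine detail. The short Python sketch below is a generic, textbook-style illustration of that analogy; it is not the specific two-dimensional tensor networks or the statistical-inference tools used in the PRX Quantum paper.

    import numpy as np

    def compress_across_cut(psi, n_left_qubits, max_kept):
        # Reshape the state into a (left qubits) x (right qubits) matrix, then keep only
        # the largest singular values across the cut: the elementary step behind
        # matrix-product-state compression.
        matrix = psi.reshape(2**n_left_qubits, -1)
        u, s, vh = np.linalg.svd(matrix, full_matrices=False)
        kept = s[:max_kept]
        discarded_weight = 1.0 - np.sum(kept**2) / np.sum(s**2)  # the "lost detail"
        approx = (u[:, :max_kept] * kept) @ vh[:max_kept, :]
        return approx.reshape(psi.shape), discarded_weight

    rng = np.random.default_rng(0)
    psi = rng.normal(size=2**10) + 1j * rng.normal(size=2**10)  # random 10-qubit state
    psi /= np.linalg.norm(psi)
    _, lost = compress_across_cut(psi, n_left_qubits=5, max_kept=8)
    # A random state is highly entangled and compresses poorly; weakly entangled states,
    # closer to what error-prone hardware can sustain, compress far better.
    print(f"weight discarded keeping 8 of 32 singular values: {lost:.3f}")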
    The work was supported by the Flatiron Institute and a grant from the Air Force Office of Scientific Research (FA9550-21-1-0236).

  • Making AI a partner in neuroscientific discovery

    The past year has seen major advances in Large Language Models (LLMs) such as ChatGPT. The ability of these models to interpret and produce human text (and other sequence data) has implications for many areas of human activity. A new perspective paper in the journal Neuron argues that, like many professionals, neuroscientists can either benefit from partnering with these powerful tools or risk being left behind.
    In their previous studies, the authors showed that important preconditions are met to develop LLMs that can interpret and analyze neuroscientific data like ChatGPT interprets language. These AI models can be built for many different types of data, including neuroimaging, genetics, single-cell genomics, and even hand-written clinical reports.
    In the traditional model of research, a scientist studies previous data on a topic, develops new hypotheses and tests them using experiments. Because of the massive amounts of data available, scientists often focus on a narrow field of research, such as neuroimaging or genetics. LLMs, however, can absorb more neuroscientific research than a single human ever could. In their Neuron paper, the authors argue that one day LLMs specialized in diverse areas of neuroscience could be used to communicate with one another to bridge siloed areas of neuroscience research, uncovering truths that would be impossible to find by humans alone. In the case of drug development, for example, an LLM specialized in genetics could be used along with a neuroimaging LLM to discover promising candidate molecules to stop neurodegeneration. The neuroscientist would direct these LLMs and verify their outputs.
    Lead author Danilo Bzdok notes that, in certain cases, scientists may not be able to fully understand the mechanisms behind the biological processes discovered by these LLMs.
    “We have to be open to the fact that certain things about the brain may be unknowable, or at least take a long time to understand,” he says. “Yet we might still generate insights from state-of-the-art LLMs and make clinical progress, even if we don’t fully grasp the way they reach conclusions.”
    To realize the full potential of LLMs in neuroscience, Bzdok says scientists would need more infrastructure for data processing and storage than is available today at many research organizations. More importantly, it would take a cultural shift to a much more data-driven scientific approach, where studies that rely heavily on artificial intelligence and LLMs are published by leading journals and funded by public agencies. While the traditional model of strongly hypothesis-driven research remains key and is not going away, Bzdok says capitalizing on emerging LLM technologies might be important to spur the next generation of neurological treatments in cases where the old model has been less fruitful.
    “To quote John Naisbitt, neuroscientists today are ‘drowning in information but starving for knowledge,’” he says. “Our ability to generate biomolecular data is eclipsing our ability to glean understanding from these systems. LLMs offer an answer to this problem. They may be able to extract, synergize and synthesize knowledge from and across neuroscience domains, a task that may or may not exceed human comprehension.”

  • Technique could improve the sensitivity of quantum sensing devices

    In quantum sensing, atomic-scale quantum systems are used to measure electromagnetic fields, as well as properties like rotation, acceleration, and distance, far more precisely than classical sensors can. The technology could enable devices that image the brain with unprecedented detail, for example, or air traffic control systems with precise positioning accuracy.
    As real-world quantum sensing devices begin to emerge, one promising direction is the use of microscopic defects inside diamonds to create “qubits” for quantum sensing. Qubits are the building blocks of quantum devices.
    Researchers at MIT and elsewhere have developed a technique that enables them to identify and control a greater number of these microscopic defects. This could help them build a larger system of qubits that can perform quantum sensing with greater sensitivity.
    Their method builds off a central defect inside a diamond, known as a nitrogen-vacancy (NV) center, which scientists can detect and excite using laser light and then control with microwave pulses. This new approach uses a specific protocol of microwave pulses to identify and extend that control to additional defects that can’t be seen with a laser, which are called dark spins.
    The researchers seek to control larger numbers of dark spins by locating them through a network of connected spins. Starting from this central NV spin, the researchers build this chain by coupling the NV spin to a nearby dark spin, and then use this dark spin as a probe to find and control a more distant spin which can’t be sensed by the NV directly. The process can be repeated on these more distant spins to control longer chains.
    “One lesson I learned from this work is that searching in the dark may be quite discouraging when you don’t see results, but we were able to take this risk. It is possible, with some courage, to search in places that people haven’t looked before and find potentially more advantageous qubits,” says Alex Ungar, a PhD student in electrical engineering and computer science and a member of the Quantum Engineering Group at MIT, who is lead author of a paper on this technique, which is published today in PRX Quantum.
    His co-authors include his advisor and corresponding author, Paola Cappellaro, the Ford Professor of Engineering in the Department of Nuclear Science and Engineering and professor of physics; as well as Alexandre Cooper, a senior research scientist at the University of Waterloo’s Institute for Quantum Computing; and Won Kyu Calvin Sun, a former researcher in Cappellaro’s group who is now a postdoc at the University of Illinois at Urbana-Champaign.

    Diamond defects
    To create NV centers, scientists implant nitrogen into a sample of diamond.
    But introducing nitrogen into the diamond creates other types of atomic defects in the surrounding environment. Some of these defects, including the NV center, can host what are known as electronic spins, which originate from the valence electrons around the site of the defect. Valence electrons are those in the outermost shell of an atom. A defect’s interaction with an external magnetic field can be used to form a qubit.
    Researchers can harness these electronic spins from neighboring defects to create more qubits around a single NV center. This larger collection of qubits is known as a quantum register. Having a larger quantum register boosts the performance of a quantum sensor.
    Some of these electronic spin defects are connected to the NV center through magnetic interaction. In past work, researchers used this interaction to identify and control nearby spins. However, this approach is limited because the NV center remains stable for only a short amount of time, known as its coherence time. It can therefore only be used to control the few spins that can be reached within this coherence limit.
    In this new paper, the researchers use an electronic spin defect that is near the NV center as a probe to find and control an additional spin, creating a chain of three qubits.

    They use a technique known as spin echo double resonance (SEDOR), which involves a series of microwave pulses that decouple an NV center from all electronic spins that are interacting with it. Then, they selectively apply another microwave pulse to pair the NV center with one nearby spin.
    Unlike the NV, these neighboring dark spins can’t be excited, or polarized, with laser light. This polarization is a required step to control them with microwaves.
    Once the researchers find and characterize a first-layer spin, they can transfer the NV’s polarization to this first-layer spin through the magnetic interaction by applying microwaves to both spins simultaneously. Then once the first-layer spin is polarized, they repeat the SEDOR process on the first-layer spin, using it as a probe to identify a second-layer spin that is interacting with it.
    Controlling a chain of dark spins
    This repeated SEDOR process allows the researchers to detect and characterize a new, distinct defect located outside the coherence limit of the NV center. To control this more distant spin, they carefully apply a specific series of microwave pulses that enable them to transfer the polarization from the NV center along the chain to this second-layer spin.
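    The relay logic of that repeated protocol can be caricatured in a few lines of Python. This is a toy numerical picture only, not a physical simulation of SEDOR or of real spin dynamics, and the transfer efficiency used here is an invented number.

    import numpy as np

    def relay_polarization(n_spins=3, transfer_efficiency=0.9):
        # Polarization starts on the laser-initialized NV center (index 0) and is handed
        # down the chain one layer at a time, mimicking the repeated probe-and-transfer steps.
        polarization = np.zeros(n_spins)
        polarization[0] = 1.0
        for i in range(n_spins - 1):
            moved = transfer_efficiency * polarization[i]
            polarization[i + 1] += moved
            polarization[i] -= moved
        return polarization

    print(relay_polarization())  # NV -> first layer -> second layer: approx. [0.1, 0.09, 0.81]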
    “This is setting the stage for building larger quantum registers to higher-layer spins or longer spin chains, and also showing that we can find these new defects that weren’t discovered before by scaling up this technique,” Ungar says.
    To control a spin, the microwave pulses must be very close to the resonance frequency of that spin. Tiny drifts in the experimental setup, due to temperature or vibrations, can throw off the microwave pulses.
    The researchers were able to optimize their protocol for sending precise microwave pulses, which enabled them to effectively identify and control second-layer spins, Ungar says.
    “We are searching for something in the unknown, but at the same time, the environment might not be stable, so you don’t know if what you are finding is just noise. Once you start seeing promising things, you can put all your best effort in that one direction. But before you arrive there, it is a leap of faith,” Cappellaro says.
    While they were able to effectively demonstrate a three-spin chain, the researchers estimate they could scale their method to a fifth layer using their current protocol, which could provide access to hundreds of potential qubits. With further optimization, they may be able to scale up to more than 10 layers.
    In the future, they plan to continue enhancing their technique to efficiently characterize and probe other electronic spins in the environment and explore different types of defects that could be used to form qubits.
    This research is supported, in part, by the U.S. National Science Foundation and the Canada First Research Excellence Fund.

  • Combining materials may support unique superconductivity for quantum computing

    A new fusion of materials, each with special electrical properties, has all the components required for a unique type of superconductivity that could provide the basis for more robust quantum computing. The new combination of materials, created by a team led by researchers at Penn State, could also provide a platform to explore physical behaviors similar to those of mysterious, theoretical particles known as chiral Majoranas, which could be another promising component for quantum computing.
    The new study appeared online today (Feb. 8) in the journal Science. The work describes how the researchers combined the two magnetic materials in what they called a critical step toward realizing the emergent interfacial superconductivity, which they are currently working toward.
    Superconductors — materials with no electrical resistance — are widely used in digital circuits, the powerful magnets in magnetic resonance imaging (MRI) and particle accelerators, and other technology where maximizing the flow of electricity is crucial. When superconductors are combined with materials called magnetic topological insulators — thin films only a few atoms thick that have been made magnetic and restrict the movement of electrons to their edges — the novel electrical properties of each component work together to produce “chiral topological superconductors.” The topology, or specialized geometries and symmetries of matter, generates unique electrical phenomena in the superconductor, which could facilitate the construction of topological quantum computers.
    Quantum computers have the potential to perform complex calculations in a fraction of the time it takes traditional computers because, unlike traditional computers which store data as a one or a zero, the quantum bits of quantum computers store data simultaneously in a range of possible states. Topological quantum computers further improve upon quantum computing by taking advantage of how electrical properties are organized to make the computers robust to decoherence, or the loss of information that happens when a quantum system is not perfectly isolated.
    “Creating chiral topological superconductors is an important step toward topological quantum computation that could be scaled up for broad use,” said Cui-Zu Chang, Henry W. Knerr Early Career Professor and associate professor of physics at Penn State and co-corresponding author of the paper. “Chiral topological superconductivity requires three ingredients: superconductivity, ferromagnetism and a property called topological order. In this study, we produced a system with all three of these properties.”
    The researchers used a technique called molecular beam epitaxy to stack together a topological insulator that has been made magnetic and an iron chalcogenide (FeTe), a promising transition-metal compound for harnessing superconductivity. The topological insulator is a ferromagnet — a type of magnet whose electrons spin the same way — while FeTe is an antiferromagnet, whose electrons spin in alternating directions. The researchers used a variety of imaging techniques and other methods to characterize the structure and electrical properties of the resulting combined material and confirmed the presence of all three critical components of chiral topological superconductivity at the interface between the materials.
    Prior work in the field has focused on combining superconductors and nonmagnetic topological insulators. According to the researchers, adding in the ferromagnet has been particularly challenging.

    “Normally, superconductivity and ferromagnetism compete with each other, so it is rare to find robust superconductivity in a ferromagnetic material system,” said Chao-Xing Liu, professor of physics at Penn State and co-corresponding author of the paper. “But the superconductivity in this system is actually very robust against the ferromagnetism. You would need a very strong magnetic field to remove the superconductivity.”
    The research team is still exploring why superconductivity and ferromagnetism coexist in this system.
    “It’s actually quite interesting because we have two magnetic materials that are non-superconducting, but we put them together and the interface between these two compounds produces very robust superconductivity,” Chang said. “Iron chalcogenide is antiferromagnetic, and we anticipate its antiferromagnetic property is weakened around the interface to give rise to the emergent superconductivity, but we need more experiments and theoretical work to verify if this is true and to clarify the superconducting mechanism.”
    The researchers said they believe this system will be useful in the search for material systems that exhibit similar behaviors as Majorana particles — theoretical subatomic particles first hypothesized in 1937. Majorana particles act as their own antiparticle, a unique property that could potentially allow them to be used as quantum bits in quantum computers.
    “Providing experimental evidence for the existence of chiral Majorana will be a critical step in the creation of a topological quantum computer,” Chang said. “Our field has had a rocky past in trying to find these elusive particles, but we think this is a promising platform for exploring Majorana physics.”
    In addition to Chang and Liu, the research team at Penn State at the time of the research included postdoctoral researcher Hemian Yi; graduate students Yi-Fan Zhao, Ruobing Mei, Zi-Jie Yan, Ling-Jie Zhou, Ruoxi Zhang, Zihao Wang, Stephen Paolini and Run Xiao; assistant research professors in the Materials Research Institute Ke Wang and Anthony Richardella; Evan Pugh University Professor Emeritus of Physics Moses Chan; and Verne M. Willaman Professor of Physics and Professor of Materials Science and Engineering Nitin Samarth. The research team also includes Ying-Ting Chan and Weida Wu at Rutgers University; Jiaqi Cai and Xiaodong Xu at the University of Washington; Xianxin Wu at the Chinese Academy of Sciences; John Singleton and Laurel Winter at the National High Magnetic Field Laboratory; Purnima Balakrishnan and Alexander Grutter at the National Institute of Standards and Technology; and Thomas Prokscha, Zaher Salman, and Andreas Suter at the Paul Scherrer Institute of Switzerland.
    This research is supported by the U.S. Department of Energy. Additional support was provided by the U.S. National Science Foundation (NSF), the NSF-funded Materials Research Science and Engineering Center for Nanoscale Science at Penn State, the Army Research Office, the Air Force Office of Scientific Research, the state of Florida and the Gordon and Betty Moore Foundation’s EPiQS Initiative.

  • AI model as diabetes early warning system when driving

    Based solely on driving behavior and head/gaze motion, the newly developed tool recognizes low blood sugar levels.
    Low blood sugar levels (hypoglycemia) are one of the most dangerous complications of diabetes and pose a high risk during cognitively demanding tasks requiring complex motor skills, such as driving a car. The utility of current tools to detect hypoglycemia is limited by diagnostic delay, invasiveness, low availability, and high costs. A recent study published in the journal NEJM AI provides a novel way to detect hypoglycemia during driving. The research was the work of LMU scientists in collaboration with colleagues from the University Hospital of Bern (Inselspital), ETH Zurich, and the University of St. Gallen.
    In their study, the researchers collected data from 30 diabetics as they drove a real car. For each patient, data was recorded once during a state with normal blood sugar levels and once during a hypoglycemic state. To this end, each patient was deliberately put into a hypoglycemic state by medical professionals present in the car. The collected data comprised driving signals such as car speed and head/gaze motion data — for example, the speed of eye movements.
    Subsequently, the scientists developed a novel machine learning (ML) model capable of automatically and reliably detecting hypoglycemic episodes using only routinely collected driving data and head/gaze motion data. “This technology could serve as an early warning system in cars and enable drivers to take necessary precautions before hypoglycemic symptoms impair their ability to drive safely,” says Simon Schallmoser, doctoral candidate at the Institute of AI in Management at LMU and one of the contributing researchers.
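    As a minimal sketch of what such a detection pipeline can look like, the Python snippet below trains a standard classifier on synthetic, per-segment summary features of driving and gaze behavior. The feature names, the synthetic data, and the choice of classifier are assumptions made for illustration; they are not the study’s model, features, or results.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 400  # driving segments
    X = np.column_stack([
        rng.normal(30, 8, n),     # mean speed (km/h)
        rng.normal(0.4, 0.1, n),  # steering-reversal rate (1/s)
        rng.normal(180, 40, n),   # mean gaze/saccade speed (deg/s)
        rng.normal(0.3, 0.1, n),  # fraction of time gaze is off-road
    ])
    y = rng.integers(0, 2, n)     # 1 = hypoglycemic segment, 0 = normal (random labels here)

    clf = GradientBoostingClassifier()
    auroc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"AUROC on synthetic data: {auroc:.2f}")  # about 0.5, since these labels are random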
    The newly developed ML model also performed well when only head/gaze motion data was used, which is crucial for future self-driving cars. Professor Stefan Feuerriegel, head of the Institute of AI in Management and project partner, explains: “This study not only showcases the potential for AI to improve individual health outcomes but also its role in improving safety on public roads.”

  • A new ‘metal swap’ method for creating lateral heterostructures of 2D materials

    Heterostructures of two-dimensional materials have unique properties. Among them, lateral heterostructures, which can be used to make electronic devices, are challenging to synthesize. To address this, researchers used a new transmetallation technique to fabricate heterostructures with in-plane heterojunctions from a Zn3BHT coordination nanosheet. This simple and powerful method enables the fabrication of ultrathin electronic devices for ultralarge-scale integrated circuits, marking a significant step forward for 2D materials research.
    Electronically conducting two-dimensional (2D) materials are currently hot topics of research in both physics and chemistry owing to their unique properties that have the potential to open up new avenues in science and technology. Moreover, the combination of different 2D materials, called heterostructures, expands the diversity of their electrical, photochemical, and magnetic properties. This can lead to innovative electronic devices not achievable with a single material alone.
    Heterostructures can be fabricated in two ways: vertically, with materials stacked on top of each other, or laterally, where materials are stacked side-by-side on the same plane. Lateral arrangements offer a special advantage, confining charge carriers to a single plane and paving the way for exceptional “in-plane” electronic devices. However, the construction of lateral junctions is challenging.
    In this regard, conducting 2D materials made from organic building blocks, called “coordination nanosheets,” are promising. Created by combining metals and ligands, they can span the range of electronic behavior found in conventional 2D materials, from metallic (like graphene) and semiconducting (like transition metal dichalcogenides) to insulating (like boron nitride). These nanosheets also enable a unique method called transmetallation, which allows the synthesis of lateral heterostructures with “heterojunctions” that cannot be achieved through direct reaction. Heterojunctions are interfaces between two materials with distinct electronic properties and can therefore serve as electronic devices. Furthermore, by utilizing heterojunctions of coordination nanosheets, new electronic properties that have been difficult to obtain with conventional 2D materials can be created. Despite these advantages, research on transmetallation as a method to fabricate heterostructures is still limited.
    To address this knowledge gap, a team of researchers from Japan, led by Professor Hiroshi Nishihara from the Research Institute for Science and Technology at Tokyo University of Science (TUS), Japan, used sequential transmetallation to synthesize lateral heterojunctions of Zn3BHT coordination nanosheets. The team included Dr. Choon Meng Tan, Assistant Professor Naoya Fukui, Assistant Professor Kenji Takada, and Assistant Professor Hiroaki Maeda, also from TUS. The study, a joint research effort by TUS, the University of Cambridge, the National Institute for Materials Science (NIMS), Kyoto Institute of Technology, and the Japan Synchrotron Radiation Research Institute (JASRI), was published in the journal Angewandte Chemie International Edition on January 05, 2024.
    The team first fabricated and characterized the Zn3BHT coordination nanosheet. Next, they investigated the transmetallation of Zn3BHT with copper and iron. Prof. Nishihara explains: “Via sequential and spatially limited immersion of the nanosheet into aqueous copper and iron ion solutions under mild conditions, we easily fabricated heterostructures with in-plane heterojunctions of transmetallated iron and copper nanosheets.”
    The entire method, from the fabrication of the coordination nanosheets to the fabrication of the in-plane heterojunctions, is a solution process carried out at room temperature and atmospheric pressure. This is completely different from the high-temperature, vacuum, gas-phase processing used in lithography for silicon semiconductors; it is a simple and inexpensive process that does not require large equipment. The remaining challenge is how to create highly crystalline thin films that are free of impurities. If clean rooms and highly purified reagents are available, commercially viable manufacturing techniques could soon be achieved.
    The resulting seamless heterojunction obtained by the researchers demonstrated rectifying behavior common in electronic circuits. Testing the characteristics of the diode revealed the versatility of the Zn3BHT coordination nanosheet: these characteristics can be changed easily without any special equipment. Moreover, this material also enables the fabrication of an integrated circuit from only a single coordination nanosheet, without any patchwork of different materials. Prof. Nishihara highlights the importance of this technique: “Ultrathin (nanometer-thick) rectifying elements obtained from our method will be quite useful for the fabrication of ultralarge-scale integrated circuits. Simultaneously, the unique physical properties of monoatomic layer films with in-plane heterojunctions can lead to the development of new elements.”
    Furthermore, by using this transmetallation reaction, it is possible to create junctions with various electronic properties, such as p-n, MIM (metal-insulator-metal) and MIS (metal-insulator-semiconductor) junctions. The ability to bond single-layer topological insulators will also enable new electronic devices such as electron splitters and multilevel devices that have only been theoretically predicted.
    Overall, this study presents a simple yet powerful technique for crafting lateral heterostructures, marking a significant step in 2D materials research.