More stories

  • Computer-engineered DNA to study cell identities

    A new computer program allows scientists to design synthetic DNA segments that indicate, in real time, the state of cells. Reported by the Gargiulo lab in “Nature Communications,” it can be used to screen for drugs against cancer or viral infections, or to improve gene- and cell-based immunotherapies.
    All the cells in our body have the same genetic code, and yet they can differ in their identities, functions and disease states. Telling one cell apart from another in a simple manner, in real time, would prove invaluable for scientists trying to understand inflammation, infections or cancers. Now, scientists at the Max Delbrück Center have created an algorithm that designs such tools: segments of DNA called “synthetic locus control regions” (sLCRs) that reveal the identity and state of cells and can be used in a variety of biological systems. The findings, by the lab of Dr Gaetano Gargiulo, head of the Molecular Oncology Lab, are reported in Nature Communications.
    “This algorithm enables us to create precise DNA tools for marking and studying cells, offering new insights into cellular behaviors,” says Gargiulo, senior author of the study. “We hope this research opens doors to a more straightforward and scalable way of understanding and manipulating cells.”
    This effort began when Dr Carlos Company, a former graduate student in the Gargiulo lab and co-first author of the study, set out to automate the design of these DNA tools and make it accessible to other scientists. He coded an algorithm that can generate tools for studying basic cellular processes as well as disease processes such as cancers, inflammation and infections.
    “This tool allows researchers to examine the way cells transform from one type to another. It is particularly innovative because it compiles all the crucial instructions that direct these changes into a simple synthetic DNA sequence. In turn, this simplifies studying complex cellular behaviors in important areas like cancer research and human development,” says Company.
    Algorithm to make a tailored DNA tool
    The computer program is named “logical design of synthetic cis-regulatory DNA” (LSD). The researchers input the known genes and transcription factors associated with the specific cell states they want to study, and the program uses this information to identify the DNA segments (promoters and enhancers) that control gene activity in the cells of interest. This is sufficient to discover functional sequences: scientists do not have to know the precise genetic or molecular reason behind a cell’s behavior; they just have to construct the sLCR.

    The program searches the human or mouse genome for places where transcription factors are highly likely to bind, says Yuliia Dramaretska, a graduate student in the Gargiulo lab and co-first author. It returns a list of relevant 150-base-pair sequences, which likely act as the active promoters and enhancers for the condition being studied.
    “It’s not giving a random list of those regions, obviously,” she says. “The algorithm is actually ranking them and finding the segments that will most efficiently represent the phenotype you want to study.”
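    To make the selection step concrete, here is a minimal, hypothetical Python sketch of this kind of ranking: candidate 150-base-pair windows are scored by how densely they contain binding motifs for the input transcription factors, and the highest-scoring windows are kept. The motifs and the scoring scheme are simplified stand-ins, not the authors’ actual LSD implementation.

        import re

        # Hypothetical consensus motifs for two input transcription factors
        # (N = any base); a real analysis would use curated motif databases.
        TF_MOTIFS = {
            "STAT3": "TTCNNNGAA",
            "NFKB1": "GGGACTTTCC",
        }

        def motif_to_regex(consensus):
            """Translate an N-containing consensus string into a regex."""
            return re.compile(consensus.replace("N", "[ACGT]"))

        def score_window(seq):
            """Count motif hits across all input TFs in one candidate window."""
            return sum(len(motif_to_regex(m).findall(seq)) for m in TF_MOTIFS.values())

        def rank_windows(sequence, size=150, step=50):
            """Slide a 150-bp window over the search space; rank by motif density."""
            windows = [(i, sequence[i:i + size])
                       for i in range(0, len(sequence) - size + 1, step)]
            return sorted(windows, key=lambda w: score_window(w[1]), reverse=True)

        # Toy usage: rank windows of a stand-in sequence, keep the top three.
        top = rank_windows("ACGT" * 200)[:3]
        print([(start, score_window(seq)) for start, seq in top])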
    Like a lamp inside the cells
    Scientists can then make a tool, called a “synthetic locus control region” (sLCR), which includes the generated sequence followed by a DNA segment encoding a fluorescent protein. “The sLCRs are like an automated lamp that you can put inside of the cells. This lamp switches on only under the conditions you want to study,” says Dr Michela Serresi, a researcher at the Gargiulo lab and co-first author. The color of the “lamp” can be varied to match different states of interest, so that scientists can look under a fluorescence microscope and immediately know the state of each cell from its color. “We can follow with our eyes the color in a petri dish when we give a treatment,” Serresi says.
    The scientists have validated the utility of the computer program by using it to screen for drugs in SARS-CoV-2-infected cells, as published last year in “Science Advances.” They also used it to find mechanisms implicated in brain cancers called glioblastomas, for which no single treatment works. “In order to find treatment combinations that work for specific cell states in glioblastomas, you not only need to understand what defines these cell states, but you also need to see them as they arise,” says Dr Matthias Jürgen Schmitt, a researcher in the Gargiulo lab and co-first author, who used the tools in the lab to showcase their value.
    Now, imagine immune cells engineered in the lab as a gene therapy to kill a type of cancer. When infused into the patient, not all these cells will work as intended. Some will be potent, while others may be in a dysfunctional state. Funded by a European Research Council grant, the Gargiulo lab will be using this system to study the behavior of these delicate anti-cancer cell-based therapeutics during manufacturing. “With the right collaborations, this method holds potential for advancing treatments in areas like cancer, viral infections, and immunotherapies,” Gargiulo says.

  • Direct view of tantalum oxidation that impedes qubit coherence

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and DOE’s Pacific Northwest National Laboratory (PNNL) have used a combination of scanning transmission electron microscopy (STEM) and computational modeling to get a closer look and deeper understanding of tantalum oxide. When this amorphous oxide layer forms on the surface of tantalum — a superconductor that shows great promise for making the “qubit” building blocks of a quantum computer — it can impede the material’s ability to retain quantum information. Learning how the oxide forms may offer clues as to why this happens — and potentially point to ways to prevent quantum coherence loss. The research was recently published in the journal ACS Nano.
    The paper builds on earlier research by a team at Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University that was conducted as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center in which Princeton is a key partner.
    “In that work, we used X-ray photoemission spectroscopy at NSLS-II to infer details about the type of oxide that forms on the surface of tantalum when it is exposed to oxygen in the air,” said Mingzhao Liu, a CFN scientist and one of the lead authors on the study. “But we wanted to understand more about the chemistry of this very thin layer of oxide by making direct measurements,” he explained.
    So, in the new study, the team partnered with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department to use advanced STEM techniques that enabled them to study the ultrathin oxide layer directly. They also worked with theorists at PNNL who performed computational modeling that revealed the most likely arrangements and interactions of atoms in the material as they underwent oxidation. Together, these methods helped the team build an atomic-level understanding of the ordered crystalline lattice of tantalum metal, the amorphous oxide that forms on its surface, and intriguing new details about the interface between these layers.
    “The key is to understand the interface between the surface oxide layer and the tantalum film because this interface can profoundly impact qubit performance,” said study co-author Yimei Zhu, a physicist from CMPMS, echoing the wisdom of Nobel laureate Herbert Kroemer, who famously asserted, “The interface is the device.”
    Emphasizing that “quantitatively probing a mere one-to-two-atomic-layer-thick interface poses a formidable challenge,” Zhu noted, “we were able to directly measure the atomic structures and bonding states of the oxide layer and tantalum film as well as identify those of the interface using the advanced electron microscopy techniques developed at Brookhaven.”
    “The measurements reveal that the interface consists of a ‘suboxide’ layer nestled between the periodically ordered tantalum atoms and the fully disordered amorphous tantalum oxide. Within this suboxide layer, only a few oxygen atoms are integrated into the tantalum crystal lattice,” Zhu said.

    The combined structural and chemical measurements offer a crucially detailed perspective on the material. Density functional theory calculations then helped the scientists validate and gain deeper insight into these observations.
    “We simulated the effect of gradual surface oxidation by gradually increasing the number of oxygen species at the surface and in the subsurface region,” said Peter Sushko, one of the PNNL theorists.
    By assessing the thermodynamic stability, structure, and electronic property changes of the tantalum films during oxidation, the scientists concluded that while the fully oxidized amorphous layer acts as an insulator, the suboxide layer retains features of a metal.
    “We always thought if the tantalum is oxidized, it becomes completely amorphous, with no crystalline order at all,” said Liu. “But in the suboxide layer, the tantalum sites are still quite ordered.”
    With the presence of both fully oxidized tantalum and a suboxide layer, the scientists wanted to understand which part is most responsible for the loss of coherence in qubits made of this superconducting material.
    “It’s likely the oxide has multiple roles,” Liu said.

    First, he noted, the fully oxidized amorphous layer contains many lattice defects. That is, the locations of the atoms are not well defined. Some atoms can shift around to different configurations, each with a different energy level. Though these shifts are small, each one consumes a tiny bit of electrical energy, which contributes to loss of energy from the qubit.
    “This so-called two-level system loss in an amorphous material brings parasitic and irreversible loss to the quantum coherence — the ability of the material to hold onto quantum information,” Liu said.
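    In the superconducting-resonator literature, this dissipation channel is commonly quantified with a standard two-level-system loss model (a textbook expression, not a formula from this paper):

        \frac{1}{Q_{\mathrm{TLS}}} \approx F \, \delta^{0}_{\mathrm{TLS}} \, \tanh\!\left(\frac{\hbar\omega}{2 k_B T}\right)

    Here F is the filling factor (the fraction of the electromagnetic energy stored in the lossy oxide), \delta^{0}_{\mathrm{TLS}} is the intrinsic loss tangent of the oxide, \omega is the resonance frequency, and T is the temperature. A thinner or less defective oxide reduces F \, \delta^{0}_{\mathrm{TLS}} and thus preserves coherence longer.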
    But because the suboxide layer is still crystalline, “it may not be as bad as people were thinking,” Liu said. Maybe the more-fixed atomic arrangements in this layer will minimize two-level system loss.
    Then again, he noted, because the suboxide layer has some metallic characteristics, it could cause other problems.
    “When you put a normal metal next to a superconductor, that could contribute to breaking up the pairs of electrons that move through the material with no resistance,” he noted. “If the pair breaks into two electrons again, then you will have loss of superconductivity and coherence. And that is not what you want.”
    Future studies may reveal more details and strategies for preventing loss of superconductivity and quantum coherence in tantalum.
    This research was funded by the DOE Office of Science (BES). In addition to the experimental facilities described above, this research used computational resources at CFN and at the National Energy Research Scientific Computing Center (NERSC) at DOE’s Lawrence Berkeley National Laboratory. CFN, NSLS-II, and NERSC are DOE Office of Science user facilities.

  • Magnesium protects tantalum, a promising material for making qubits

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have discovered that adding a layer of magnesium improves the properties of tantalum, a superconducting material that shows great promise for building qubits, the basis of quantum computers. As described in a paper just published in the journal Advanced Materials, a thin layer of magnesium keeps tantalum from oxidizing, improves its purity, and raises the temperature at which it operates as a superconductor. All three may increase tantalum’s ability to hold onto quantum information in qubits.
    This work builds on earlier studies in which a team from Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University sought to understand the tantalizing characteristics of tantalum, and then worked with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department and theorists at DOE’s Pacific Northwest National Laboratory (PNNL) to reveal details about how the material oxidizes.
    Those studies showed why oxidation is an issue.
    “When oxygen reacts with tantalum, it forms an amorphous insulating layer that saps tiny bits of energy from the current moving through the tantalum lattice. That energy loss disrupts quantum coherence — the material’s ability to hold onto quantum information in a coherent state,” explained CFN scientist Mingzhao Liu, a lead author on the earlier studies and the new work.
    While the oxidation of tantalum is usually self-limiting — a key reason for its relatively long coherence time — the team wanted to explore strategies to further restrain oxidation to see if they could improve the material’s performance.
    “The reason tantalum oxidizes is that you have to handle it in air and the oxygen in air will react with the surface,” Liu explained. “So, as chemists, can we do something to stop that process? One strategy is to find something to cover it up.”
    All this work is being carried out as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center. While ongoing studies explore different kinds of cover materials, the new paper describes a promising first approach: coating the tantalum with a thin layer of magnesium.

    “When you make a tantalum film, it is always in a high-vacuum chamber, so there is not much oxygen to speak of,” said Liu. “The problem always happens when you take it out. So, we thought, without breaking the vacuum, after we put the tantalum layer down, maybe we can put another layer, like magnesium, on top to block the surface from interacting with the air.”
    Studies using transmission electron microscopy to image structural and chemical properties of the material, atomic layer by atomic layer, showed that the strategy to coat tantalum with magnesium was remarkably successful. The magnesium formed a thin layer of magnesium oxide on the tantalum surface that appears to keep oxygen from getting through.
    “Electron microscopy techniques developed at Brookhaven Lab enabled direct visualization not only of the chemical distribution and atomic arrangement within the thin magnesium coating layer and the tantalum film but also of the changes of their oxidation states,” said Yimei Zhu, a study co-author from CMPMS. “This information is extremely valuable in comprehending the material’s electronic behavior,” he noted.
    X-ray photoelectron spectroscopy studies at NSLS-II revealed the impact of the magnesium coating on limiting the formation of tantalum oxide. The measurements indicated that an extremely thin layer of tantalum oxide — less than one nanometer thick — remains confined directly beneath the magnesium/tantalum interface without disrupting the rest of the tantalum lattice.
    “This is in stark contrast to uncoated tantalum, where the tantalum oxide layer can be more than three nanometers thick — and significantly more disruptive to the electronic properties of tantalum,” said study co-author Andrew Walter, a lead beamline scientist in the Soft X-ray Scattering & Spectroscopy program at NSLS-II.
    Collaborators at PNNL then used computational modeling at the atomic scale to identify the most likely arrangements and interactions of the atoms based on their binding energies and other characteristics. These simulations helped the team develop a mechanistic understanding of why magnesium works so well.

    At the simplest level, the calculations revealed that magnesium has a higher affinity for oxygen than tantalum does.
    “While oxygen has a high affinity to tantalum, it is ‘happier’ to stay with the magnesium than with the tantalum,” said Peter Sushko, one of the PNNL theorists. “So, the magnesium reacts with oxygen to form a protective magnesium oxide layer. You don’t even need that much magnesium to do the job. Just two nanometers of thickness of magnesium almost completely blocks the oxidation of tantalum.”
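    The article gives no numbers, but approximate tabulated formation enthalpies illustrate the energetics (standard reference values, not data from this study):

        \Delta H_f^{\circ}(\mathrm{MgO}) \approx -601~\mathrm{kJ/mol} \;\;\Rightarrow\;\; \approx -601~\mathrm{kJ~per~mol~O}
        \Delta H_f^{\circ}(\mathrm{Ta_2O_5}) \approx -2046~\mathrm{kJ/mol} \;\;\Rightarrow\;\; \approx -409~\mathrm{kJ~per~mol~O}

    Per oxygen atom, forming magnesium oxide releases roughly half again as much energy as forming tantalum oxide, one way of seeing why oxygen is “happier” bound to magnesium.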
    The scientists also demonstrated that the protection lasts a long time: “Even after one month, the tantalum is still in pretty good shape. Magnesium is a really good oxygen barrier,” Liu concluded.
    The magnesium had an unexpected beneficial effect: It “sponged out” inadvertent impurities in the tantalum and, as a result, raised the temperature at which the tantalum operates as a superconductor.
    “Even though we are making these materials in a vacuum, there is always some residual gas — oxygen, nitrogen, water vapor, hydrogen. And tantalum is very good at sucking up these impurities,” Liu explained. “No matter how careful you are, you will always have these impurities in your tantalum.”
    But when the scientists added the magnesium coating, they discovered that its strong affinity for the impurities pulled them out. The resulting purer tantalum had a higher superconducting transition temperature.
    That could be very important for applications because most superconductors must be kept very cold to operate. In these ultracold conditions, most of the conducting electrons pair up and move through the material with no resistance.
    “Even a slight elevation in the transition temperature could reduce the number of remaining, unpaired electrons,” Liu said, potentially making the material a better superconductor and increasing its quantum coherence time.
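    Two textbook BCS relations, not stated in the article, make this connection explicit:

        2\Delta(0) \approx 3.5\, k_B T_c, \qquad n_{\mathrm{qp}} \propto \sqrt{T}\; e^{-\Delta / k_B T}

    A higher transition temperature T_c implies a larger superconducting gap \Delta, and the density of thermally excited unpaired quasiparticles n_{\mathrm{qp}} falls exponentially with \Delta / k_B T, so even a modest rise in T_c can sharply reduce the unpaired-electron population at a fixed operating temperature.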
    “There will have to be follow-up studies to see if this material improves qubit performance,” Liu said. “But this work provides valuable insights and new materials design principles that could help pave the way to the realization of large-scale, high-performance quantum computing systems.”

  • A sleeker facial recognition technology tested on Michelangelo’s David

    Many people are familiar with facial recognition systems that unlock smartphones and game systems or allow access to our bank accounts online. But the current technology can require boxy projectors and lenses. Now, researchers report in ACS’ Nano Letters a sleeker 3D surface imaging system with flatter, simplified optics. In proof-of-concept demonstrations, the new system recognized the face of Michelangelo’s David just as well as an existing smartphone system.
    3D surface imaging is a common tool used in smartphone facial recognition, as well as in computer vision and autonomous driving. These systems typically consist of a dot projector that contains multiple components: a laser, lenses, a light guide and a diffractive optical element (DOE). The DOE is a special kind of lens that breaks the laser beam into an array of about 32,000 infrared dots. So, when a person looks at a locked screen, the facial recognition system projects an array of dots onto most of their face, and the device’s camera reads the pattern created to confirm the identity. However, dot projector systems are relatively large for small devices such as smartphones. So, Yu-Heng Hong, Hao-Chung Kuo, Yao-Wei Huang and colleagues set out to develop a more compact facial recognition system that would be nearly flat and require less energy to operate.
    To do this, the researchers replaced a traditional dot projector with a low-power laser and a flat gallium arsenide surface, significantly reducing the imaging device’s size and power consumption. They etched a nanopillar pattern into the top of this thin surface, creating a metasurface that scatters light as it passes through the material. In this prototype, the low-powered laser light scatters into 45,700 infrared dots that are projected onto an object or face positioned in front of the light source. Like the dot projector system, the new system incorporates a camera to read the patterns created by the infrared dots.
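    For a rough sense of how such dot patterns yield 3D information, the sketch below applies the standard structured-light triangulation relation Z = f·b/d (depth from dot disparity). The focal length and baseline are assumed, illustrative values; none of this code comes from the paper.

        import numpy as np

        F_PX = 1400.0       # assumed camera focal length, in pixels
        BASELINE_M = 0.01   # assumed 10 mm projector-to-camera baseline

        def depth_from_disparity(disparity_px):
            """Triangulate the depth (m) of each dot from its pixel shift."""
            return F_PX * BASELINE_M / np.clip(disparity_px, 1e-6, None)

        # Dots landing on nearer surfaces shift more than dots on farther ones.
        disparities = np.array([20.0, 35.0, 50.0])   # pixel shifts of three dots
        print(depth_from_disparity(disparities))     # -> [0.70, 0.40, 0.28] m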
    In tests of the prototype, the system accurately identified a 3D replica of Michelangelo’s David by comparing the infrared dot patterns to online photos of the famous statue. Notably, it accomplished this using five to 10 times less power and on a platform with a surface area about 230 times smaller than a common dot-projector system. The researchers say their prototype demonstrates the usefulness of metasurfaces for effective small-scale low-power imaging solutions for facial recognition, robotics and extended reality.
    The authors acknowledge funding from Hon Hai Precision Industry, the National Science and Technology Council in Taiwan, and the Ministry of Education in Taiwan.

  • A physical qubit with built-in error correction

    Researchers at the universities of Mainz, Olomouc, and Tokyo succeeded in generating a logical qubit from a single light pulse that has the inherent capacity to correct errors.
    There has been significant progress in the field of quantum computing. Big global players, such as Google and IBM, are already offering cloud-based quantum computing services. However, quantum computers cannot yet help with problems that occur when standard computers reach the limits of their capacities, because the availability of qubits, or quantum bits, i.e., the basic units of quantum information, is still insufficient. One of the reasons for this is that bare qubits are not of immediate use for running a quantum algorithm.
    While the binary bits of customary computers store information in the form of fixed values of either 0 or 1, qubits can represent 0 and 1 simultaneously, taking on one value or the other only with a certain probability when they are measured. This is known as quantum superposition. It also makes qubits very susceptible to external influences, which means that the information they store can readily be lost. To ensure that quantum computers supply reliable results, it is necessary to entangle several physical qubits so that they jointly form a logical qubit. Should one of these physical qubits fail, the other qubits will retain the information. However, one of the main difficulties preventing the development of functional quantum computers is the large number of physical qubits required.
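    As a textbook illustration (the encoding used in the paper is more sophisticated), a single qubit state and the simplest logical encoding look like this:

        |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
        \alpha|0\rangle + \beta|1\rangle \;\mapsto\; \alpha|000\rangle + \beta|111\rangle

    The second line is the three-qubit repetition code: if any single physical qubit flips, comparing the three qubits reveals the error, which can then be corrected without disturbing the stored superposition.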
    Advantages of a photon-based approach
    Many different concepts are being employed to make quantum computing viable. Large corporations currently rely on superconducting solid-state systems, for example, but these have the disadvantage that they only function at temperatures close to absolute zero. Photonic concepts, on the other hand, work at room temperature. Single photons usually serve as physical qubits here. These photons, which are, in a sense, tiny particles of light, inherently operate more rapidly than solid-state qubits but, at the same time, are more easily lost. To avoid qubit losses and other errors, it is necessary to couple several single-photon light pulses together to construct a logical qubit — as in the case of the superconductor-based approach.
    A qubit with the inherent capacity for error correction
    Researchers at the University of Tokyo together with colleagues from Johannes Gutenberg University Mainz (JGU) in Germany and Palacký University Olomouc in the Czech Republic have recently demonstrated a new means of constructing a photonic quantum computer. Rather than using a single photon, the team employed a laser-generated light pulse that can consist of several photons. “Our laser pulse was converted to a quantum optical state that gives us an inherent capacity to correct errors,” stated Professor Peter van Loock of Mainz University. “Although the system consists only of a laser pulse and is thus very small, it can — in principle — eradicate errors immediately.” Thus, there is no need to generate individual photons as qubits via numerous light pulses and then have them interact as logical qubits. “We need just a single light pulse to obtain a robust logical qubit,” added van Loock. In other words, a physical qubit is already equivalent to a logical qubit in this system — a remarkable and unique concept. However, the logical qubit experimentally produced at the University of Tokyo was not yet of sufficient quality to provide the necessary level of error tolerance. Nonetheless, the researchers have clearly demonstrated that it is possible to transform non-universally correctable qubits into correctable qubits using the most innovative quantum optical methods.
    The corresponding research results have recently been published in Science. They are based on a collaboration going back some 20 years between the experimental group of Akira Furusawa in Japan and the theoretical team of Peter van Loock in Germany.

  • AI learns through the eyes and ears of a child

    AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input — much more than children receive when learning how to understand and speak a language. The best AI systems train on text with a word count in the trillions, whereas children receive just millions per year.
    Due to this enormous data gap, researchers have been skeptical that recent AI advances can tell us much about human learning and development. An ideal test for demonstrating a connection would involve training an AI model, not on massive data from the web, but on only the input that a single child receives. What would the model be able to learn then?
    A team of New York University researchers ran this exact experiment. They trained a multimodal AI system through the eyes and ears of a single child, using headcam video recordings from when the child was six months old through their second birthday. They examined whether the AI model could learn words and concepts present in a child’s everyday experience.
    Their findings, reported in the latest issue of the journal Science, showed that the model, or neural network, could, in fact, learn a substantial number of words and concepts using limited slices of what the child experienced. That is, the video captured only about 1% of the child’s waking hours, but that was sufficient for genuine language learning.
    “We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts,” says Wai Keen Vong, a research scientist at NYU’s Center for Data Science and the paper’s first author. “Our results demonstrate how recent algorithmic advances paired with one child’s naturalistic experience has the potential to reshape our understanding of early language and concept acquisition.”
    “By using AI models to study the real language-learning problem faced by children, we can address classic debates about what ingredients children need to learn words — whether they need language-specific biases, innate knowledge, or just associative learning to get going,” adds Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and the paper’s senior author. “It seems we can get more with just learning than commonly thought.”
    Vong, Lake, and their NYU colleagues, Wentao Wang and Emin Orhan, analyzed a child’s learning process captured on first-person video — via a light, head-mounted camera — on a weekly basis beginning at six months of age and continuing through 25 months, using more than 60 hours of footage. The footage contained approximately a quarter of a million word instances (i.e., the number of words communicated, many of them repeatedly) that are linked with video frames of what the child saw when those words were spoken, and it included a wide range of different activities across development, including mealtimes, reading books, and the child playing.

    The NYU researchers then trained a multimodal neural network with two separate modules: one that takes in single video frames (the vision encoder) and another that takes in the transcribed child-directed speech (the language encoder). These two encoders were combined and trained using an algorithm called contrastive learning, which aims to learn useful input features and their cross-modal associations. For instance, when a parent says something in view of the child, some of the words used likely refer to something that the child can see, meaning comprehension is instilled by linking visual and linguistic cues.
    “This provides the model a clue as to which words should be associated with which objects,” explains Vong. “Combining these cues is what enables contrastive learning to gradually determine which words belong with which visuals and to capture the learning of a child’s first words.”
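    A minimal sketch of such a contrastive objective appears below. It follows the generic CLIP-style recipe rather than the authors’ exact model, and the encoders and dimensions are toy stand-ins.

        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        B, D_IMG, D_TXT, D_EMB = 8, 512, 300, 128   # assumed toy dimensions

        vision_encoder = torch.nn.Linear(D_IMG, D_EMB)    # stands in for a CNN
        language_encoder = torch.nn.Linear(D_TXT, D_EMB)  # stands in for a text model

        frames = torch.randn(B, D_IMG)       # features of B video frames
        utterances = torch.randn(B, D_TXT)   # features of the B utterances heard then

        v = F.normalize(vision_encoder(frames), dim=-1)
        t = F.normalize(language_encoder(utterances), dim=-1)

        logits = v @ t.T / 0.07              # pairwise similarities / temperature
        labels = torch.arange(B)             # i-th frame matches i-th utterance
        loss = (F.cross_entropy(logits, labels) +       # frame -> utterance
                F.cross_entropy(logits.T, labels)) / 2  # utterance -> frame
        loss.backward()                      # gradients train both encoders
        print(float(loss))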
    After training the model, the researchers tested it using the same kinds of evaluations used to measure word learning in infants — presenting the model with the target word and an array of four different image options and asking it to select the image that matches the target word. Their results showed that the model was able to learn a substantial number of the words and concepts present in the child’s everyday experience. Furthermore, for some of the words the model learned, it could generalize them to very different visual instances than those seen at training, reflecting an aspect of generalization also seen in children when they are tested in the lab.
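    The evaluation itself reduces to a similarity lookup. A self-contained toy version, with random embeddings standing in for the trained encoders’ outputs, might look like this:

        import numpy as np

        rng = np.random.default_rng(0)
        word_emb = rng.normal(size=128)           # embedding of the target word
        image_embs = rng.normal(size=(4, 128))    # four candidate image embeddings
        image_embs[2] += word_emb                 # make option 2 the true match

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        choice = int(np.argmax([cosine(word_emb, img) for img in image_embs]))
        print(choice)  # -> 2: the model "selects" the matching image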
    “These findings suggest that this aspect of word learning is feasible from the kind of naturalistic data that children receive while using relatively generic learning mechanisms such as those found in neural networks,” observes Lake.
    The work was supported by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (N6600119C4030) and the National Science Foundation (1922658). Participation of the child was approved by the parents and the methodology was approved by NYU’s Institutional Review Board.

  • Photonics-based wireless link breaks speed records for data transmission

    From coffee-shop customers who connect their laptop to the local Wi-Fi network to remote weather monitoring stations in the Antarctic, wireless communication is an essential part of modern life. Researchers worldwide are currently working on the next evolution of communication networks, called “beyond 5G” or 6G networks. To enable the near-instantaneous communication needed for applications like augmented reality or the remote control of surgical robots, ultra-high data speeds will be needed on wireless channels. In a study published recently in IEICE Electronics Express, researchers from Osaka University and IMRA AMERICA have found a way to increase these data speeds by reducing the noise in the system through lasers.
    To pack in large amounts of data and keep responses fast, the sub-terahertz band, which extends from 100 GHz to 300 GHz, will be used by 6G transmitters and receivers. A sophisticated approach called “multi-level signal modulation” is used to further increase the data transmission rate of these wireless links. However, when operating at the top end of these extremely high frequencies, multi-level signal modulation becomes highly sensitive to noise. To work well, it relies on precise reference signals, and when these signals begin to shift forward and backward in time (a phenomenon called “phase noise”), the performance of multi-level signal modulation drops.
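    A small simulation makes the effect concrete (parameters are assumed for illustration, not taken from the study): random phase jitter rotates a 16-QAM constellation, and the rotation is more damaging the farther a symbol sits from the origin.

        import numpy as np

        rng = np.random.default_rng(1)
        levels = np.array([-3, -1, 1, 3])
        constellation = np.array([x + 1j * y for x in levels for y in levels])  # 16-QAM

        symbols = rng.choice(constellation, size=10_000)
        jitter = rng.normal(0.0, 0.1, size=symbols.size)  # assumed 0.1 rad rms phase noise
        received = symbols * np.exp(1j * jitter)

        # Nearest-neighbor detection back onto the ideal constellation
        nearest = np.argmin(np.abs(received[:, None] - constellation[None, :]), axis=1)
        detected = constellation[nearest]
        print("symbol error rate:", np.mean(detected != symbols))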
    “This problem has limited 300-GHz communications so far,” says Keisuke Maekawa, lead author of the study. “However, we found that at high frequencies, a signal generator based on a photonic device had much less phase noise than a conventional electrical signal generator.”
    Specifically, the team used a stimulated Brillouin scattering laser, which employs interactions between sound and light waves, to generate a precise signal. They then set up a 300 GHz-band wireless communication system that employs the laser-based signal generator in both the transmitter and receiver. The system also used on-line digital signal processing (DSP) to demodulate the signals in the receiver and increase the data rate.
    “Our team achieved a single-channel transmission rate of 240 gigabits per second,” says Tadao Nagatsuma, PI of the project. “This is the highest transmission rate obtained so far in the world using on-line DSP.”
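    For scale, a link’s bit rate is its symbol rate times the bits carried per symbol; one illustrative combination (not necessarily the exact one used in the study) that yields the reported figure is:

        R_b = R_s \log_2 M, \qquad 40~\mathrm{GBd} \times \log_2 64 = 40 \times 6 = 240~\mathrm{Gbit/s}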
    As 5G spreads across the globe, researchers are working hard to develop the technology that will be needed for 6G, and the results of this study are a significant step toward 300-GHz-band wireless communication. The researchers anticipate that with multiplexing techniques (where more than one channel can be used) and more sensitive receivers, the data rate can be increased to 1 terabit per second, ushering in a new era of near-instantaneous global communication.

  • Hexagonal copper disk lattice unleashes spin wave control

    A collaborative group of researchers has potentially developed a means of controlling spin waves by creating a hexagonal pattern of copper disks on a magnetic insulator. The breakthrough is expected to lead to greater efficiency and miniaturization of communication devices in fields such as artificial intelligence and automation technology.
    Details of the study were published in the journal Physical Review Applied on January 30, 2024.
    In a magnetic material, the spins of electrons are aligned. When these spins undergo coordinated movement, it generates a kind of ripple in the magnetic order, dubbed spin waves. Spin waves generate little heat and offer an abundance of advantages for next-generation devices.
    Implementing spin waves in semiconductor circuits, which conventionally rely on electrical currents, could lessen power consumption and promote high integration. Because they are waves, however, spin waves tend to propagate in random directions unless controlled by structures or other means. As such, elements capable of generating, propagating, superimposing, and measuring spin waves are being competitively developed worldwide.
    “We leveraged the wavelike nature of spin waves to successfully control their propagation directly,” points out Taichi Goto, associate professor at Tohoku University’s Electrical Communication Research Institute, and co-author of the paper. “We did so by first developing an excellent magnetic insulator material called magnetic garnet film, which has low spin wave losses. We then periodically arranged small copper disks with diameters less than 1 mm on this film.”
    By arranging copper disks in a hexagonal pattern resembling snowflakes, Goto and his colleagues could effectively reflect the spin waves. Furthermore, by rotating the magnonic crystal and changing the incident angle of spin waves, the researchers revealed that the frequency at which the magnonic band gap occurs remains largely unchanged in the range from 10 to 30 degrees. This suggests the potential for the two-dimensional magnonic crystal to freely control the propagation direction of spin waves.
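    The underlying mechanism is Bragg reflection from the periodic lattice; in textbook terms (this formula is not from the paper), a magnonic band gap opens where the spin-wave wave vector \mathbf{k} satisfies the diffraction condition of the disk lattice,

        \mathbf{k} \cdot \mathbf{G} = \tfrac{1}{2}\,|\mathbf{G}|^{2},

    where \mathbf{G} is a reciprocal-lattice vector of the hexagonal array; for lattice period a and normal incidence this reduces to k = \pi/a.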
    Goto notes the novelty of their findings: “To date, there have been no experimental confirmations of changes in the spin wave incident angle for a two-dimensional magnonic crystal comprising a magnetic insulator and copper disks, making this the world’s first report.”
    Looking ahead, the team hopes to demonstrate the direction control of spin waves using two-dimensional magnonic crystals and to develop functional components that utilize this technology.