More stories

  • Electrons become fractions of themselves in graphene

    The electron is the basic unit of electricity, as it carries a single negative charge. This is what we’re taught in high school physics, and it is overwhelmingly the case in most materials in nature.
    But in very special states of matter, electrons can splinter into fractions of their whole. This phenomenon, known as “fractional charge,” is exceedingly rare, and if it can be corralled and controlled, the exotic electronic state could help to build resilient, fault-tolerant quantum computers.
    To date, this effect, known to physicists as the “fractional quantum Hall effect,” has been observed a handful of times, and mostly under very high, carefully maintained magnetic fields. Only recently have scientists seen the effect in a material that did not require such powerful magnetic manipulation.
    Now, MIT physicists have observed the elusive fractional charge effect, this time in a simpler material: five layers of graphene — an atom-thin layer of carbon that stems from graphite and common pencil lead. They report their results in Nature.
    They found that when five sheets of graphene are stacked like steps on a staircase, the resulting structure inherently provides just the right conditions for electrons to pass through as fractions of their total charge, with no need for any external magnetic field.
    The results are the first evidence of the “fractional quantum anomalous Hall effect” (the term “anomalous” refers to the absence of a magnetic field) in crystalline graphene, a material that physicists did not expect to exhibit this effect.
    “This five-layer graphene is a material system where many good surprises happen,” says study author Long Ju, assistant professor of physics at MIT. “Fractional charge is just so exotic, and now we can realize this effect with a much simpler system and without a magnetic field. That in itself is important for fundamental physics. And it could enable the possibility for a type of quantum computing that is more robust against perturbation.”
    Ju’s MIT co-authors are lead author Zhengguang Lu, Tonghang Han, Yuxuan Yao, Aidan Reddy, Jixiang Yang, Junseok Seo, and Liang Fu, along with Kenji Watanabe and Takashi Taniguchi at the National Institute for Materials Science in Japan.

    A bizarre state
    The fractional quantum Hall effect is an example of the weird phenomena that can arise when particles shift from behaving as individual units to acting together as a whole. This collective “correlated” behavior emerges in special states, for instance when electrons are slowed from their normally frenetic pace to a crawl that enables the particles to sense each other and interact. These interactions can produce rare electronic states, such as the seemingly unorthodox splitting of an electron’s charge.
    In 1982, scientists discovered the fractional quantum Hall effect in heterostructures of gallium arsenide, where a gas of electrons confined in a two-dimensional plane is placed under high magnetic fields. The discovery later won the group a Nobel Prize in Physics.
    “[The discovery] was a very big deal, because these unit charges interacting in a way to give something like fractional charge was very, very bizarre,” Ju says. “At the time, there were no theory predictions, and the experiments surprised everyone.”
    Those researchers achieved their groundbreaking results using magnetic fields to slow down the material’s electrons enough for them to interact. The fields they worked with were about 10 times stronger than what typically powers an MRI machine.
    In August 2023, scientists at the University of Washington reported the first evidence of fractional charge without a magnetic field. They observed this “anomalous” version of the effect in a twisted semiconductor called molybdenum ditelluride. The group prepared the material in a specific configuration that theorists predicted would give the material an inherent magnetic field, enough to encourage electrons to fractionalize without any external magnetic control.

    The “no magnets” result opened a promising route to topological quantum computing — a more secure form of quantum computing, in which the added ingredient of topology (a property that remains unchanged in the face of weak deformation or disturbance) gives a qubit added protection when carrying out a computation. This computation scheme is based on combining the fractional quantum Hall effect with a superconductor, a pairing that used to be almost impossible to realize: a strong magnetic field is needed to produce fractional charge, yet the same field usually kills the superconductor. Here, the fractional charges would serve as qubits (the basic units of a quantum computer).
    Making steps
    That same month, Ju and his team happened to also observe signs of anomalous fractional charge in graphene — a material for which there had been no predictions for exhibiting such an effect.
    Ju’s group has been exploring electronic behavior in graphene, which by itself has exhibited exceptional properties. Most recently, the group has looked into pentalayer graphene — a structure of five graphene sheets, each stacked slightly off from the other, like steps on a staircase. Such a pentalayer structure occurs naturally in graphite and can be obtained by exfoliation with Scotch tape. When placed in a refrigerator at ultracold temperatures, the structure’s electrons slow to a crawl and interact in ways they normally wouldn’t when whizzing around at higher temperatures.
    In their new work, the researchers did some calculations and found that electrons might interact with each other even more strongly if the pentalayer structure were aligned with hexagonal boron nitride (hBN) — a material that has a similar atomic structure to that of graphene, but with slightly different dimensions. In combination, the two materials should produce a moiré superlattice — an intricate, scaffold-like atomic structure that could slow electrons down in ways that mimic a magnetic field.
    “We did these calculations, then thought, let’s go for it,” says Ju, who happened to install a new dilution refrigerator in his MIT lab last summer, which the team planned to use to cool materials down to ultralow temperatures, to study exotic electronic behavior.
    The researchers fabricated two samples of the hybrid graphene structure by first exfoliating graphene layers from a block of graphite, then using optical tools to identify five-layered flakes in the steplike configuration. They then stamped the graphene flake onto an hBN flake and placed a second hBN flake over the graphene structure. Finally, they attached electrodes to the structure and placed it in the refrigerator, set to near absolute zero.
    As they applied a current to the material and measured the voltage output, they started to see signatures of fractional charge, where the voltage equals the current multiplied by a fractional number and some fundamental physics constants.
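    For reference (standard Hall-effect background rather than a detail from the paper), the signature they describe corresponds to a Hall resistance quantized at a fractional filling factor ν:

    ```latex
    V_{xy} = R_{xy}\, I,
    \qquad
    R_{xy} = \frac{h}{\nu e^{2}},
    \qquad
    \nu = \frac{p}{q} \;\; \text{(a fraction, e.g. } \tfrac{2}{3} \text{)}
    ```

    Here h is Planck’s constant and e the electron charge; an integer ν gives the ordinary quantum Hall effect, while a fractional ν is the fingerprint of fractionalized charge.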
    “The day we saw it, we didn’t recognize it at first,” says first author Lu. “Then we started to shout as we realized, this was really big. It was a completely surprising moment.”
    “These were probably the first serious samples we put in the new fridge,” adds co-first author Han. “Once we calmed down, we looked in detail to make sure that what we were seeing was real.”
    With further analysis, the team confirmed that the graphene structure indeed exhibited the fractional quantum anomalous Hall effect. It is the first time the effect has been seen in graphene.
    “Graphene can also be a superconductor,” Ju says. “So, you could have two totally different effects in the same material, right next to each other. If you use graphene to talk to graphene, it avoids a lot of unwanted effects when bridging graphene with other materials.”
    For now, the group is continuing to explore multilayer graphene for other rare electronic states.
    “We are diving in to explore many fundamental physics ideas and applications,” Ju says. “We know there will be more to come.”
    This research is supported in part by the Sloan Foundation and the National Science Foundation.

  • Engineers use AI to wrangle fusion power for the grid

    In the blink of an eye, the unruly, superheated plasma that drives a fusion reaction can lose its stability and escape the strong magnetic fields confining it within the donut-shaped fusion reactor. These getaways frequently spell the end of the reaction, posing a core challenge to developing fusion as a non-polluting, virtually limitless energy source.
    But a Princeton-led team of engineers, physicists, and data scientists from the University and the Princeton Plasma Physics Laboratory (PPPL) has harnessed the power of artificial intelligence to predict — and then avoid — the formation of a specific plasma problem in real time.
    In experiments at the DIII-D National Fusion Facility in San Diego, the researchers demonstrated that their model, trained only on past experimental data, could forecast potential plasma instabilities known as tearing mode instabilities up to 300 milliseconds in advance. While that is barely enough time for a slow human blink, it was plenty of time for the AI controller to change certain operating parameters and avoid what would have developed into a tear in the plasma’s magnetic field lines, upsetting the plasma’s equilibrium and opening the door to a reaction-ending escape.
    “By learning from past experiments, rather than incorporating information from physics-based models, the AI could develop a final control policy that supported a stable, high-powered plasma regime in real time, at a real reactor,” said research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment, as well as staff research physicist at PPPL.
    The research opens the door for more dynamic control of a fusion reaction than current approaches, and it provides a foundation for using artificial intelligence to solve a broad range of plasma instabilities, which have long been obstacles to achieving a sustained fusion reaction. The team published their findings in Nature on February 21.
    “Previous studies have generally focused on either suppressing or mitigating the effects of these tearing instabilities after they occur in the plasma,” said first author Jaemin Seo, an assistant professor of physics at Chung-Ang University in South Korea who performed much of the work while a postdoctoral researcher in Kolemen’s group. “But our approach allows us to predict and avoid those instabilities before they ever appear.”
    Superheated plasma swirling in a donut-shaped device
    Fusion takes place when two atoms — usually light atoms like hydrogen — come together to form one heavier atom, releasing a large amount of energy in the process. The process powers the Sun, and, by extension, makes life on Earth possible.

    However, getting the two atoms to fuse is tricky, as it takes massive amounts of pressure and energy for the two atoms to overcome their mutual repulsion.
    Fortunately for the Sun, its massive gravitational pull and the extremely high pressures at its core allow fusion reactions to proceed. To replicate a similar process on Earth, scientists instead use extremely hot plasma and extremely strong magnets.
    In donut-shaped devices known as tokamaks — sometimes referred to as “stars in jars” — magnetic fields struggle to contain plasmas that reach above 100 million degrees Celsius, hotter than the center of the Sun.
    While there are many types of plasma instabilities that can terminate the reaction, the Princeton team concentrated on solving tearing mode instabilities, a disturbance in which the magnetic field lines within a plasma actually break and create an opportunity for the plasma’s subsequent escape.
    “Tearing mode instabilities are one of the major causes of plasma disruption, and they will become even more prominent as we try to run fusion reactions at the high powers required to produce enough energy,” said Seo. “They are an important challenge for us to solve.”
    Fusing artificial intelligence and plasma physics
    Since tearing mode instabilities can form and derail a fusion reaction in milliseconds, the researchers turned to artificial intelligence for its ability to quickly process and act in response to new data.

    But developing an effective AI controller was not as simple as trying out a few things on a tokamak, where time is limited and the stakes are high.
    Co-author Azarakhsh Jalalvand, a research scholar in Kolemen’s group, compared teaching an algorithm to run a fusion reaction in a tokamak to teaching someone how to fly a plane.
    “You wouldn’t teach someone by handing them a set of keys and telling them to try their best,” Jalalvand said. “Instead, you’d have them practice on a very intricate flight simulator until they’ve learned enough to try out the real thing.”
    Like developing a flight simulator, the Princeton team used data from past experiments at the DIII-D tokamak to construct a deep neural network capable of predicting the likelihood of a future tearing instability based on real-time plasma characteristics.
    They used that neural network to train a reinforcement learning algorithm. Like a pilot trainee, the reinforcement learning algorithm could try out different strategies for controlling plasma, learning through trial and error which strategies worked and which did not within the safety of a simulated environment.
    “We don’t teach the reinforcement learning model all of the complex physics of a fusion reaction,” Jalalvand said. “We tell it what the goal is — to maintain a high-powered reaction — what to avoid — a tearing mode instability — and the knobs it can turn to achieve those outcomes. Over time, it learns the optimal pathway for achieving the goal of high power while avoiding the punishment of an instability.”
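    A minimal sketch of the kind of goal-plus-penalty setup Jalalvand describes (all names, scales, and thresholds below are illustrative assumptions; the article does not give the actual reward design):

    ```python
    # Sketch of a reward function for the plasma-control RL agent described above.
    # Names, scales, and thresholds here are hypothetical.

    def reward(beta_n: float, tearing_likelihood: float,
               tearing_threshold: float = 0.5) -> float:
        """Encourage a high-powered plasma, penalize predicted tearing onset.

        beta_n             -- normalized plasma pressure, a proxy for "high power"
        tearing_likelihood -- output (0..1) of the neural network trained on past
                              DIII-D shots to predict tearing instabilities
        """
        performance = beta_n                          # goal: keep the reaction high-powered
        penalty = 0.0
        if tearing_likelihood > tearing_threshold:    # what to avoid: a tearing mode
            penalty = 10.0 * (tearing_likelihood - tearing_threshold)
        return performance - penalty
    ```

    The “knobs” the agent turns (such as plasma shape and beam power) would be the actions in the reinforcement-learning loop, tried out against the data-driven predictor rather than on the real tokamak.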
    While the model went through countless simulated fusion experiments, trying to find ways to maintain high power levels while avoiding instabilities, co-author SangKyeun Kim could observe and refine its actions.
    “In the background, we can see the intentions of the model,” said Kim, a staff research scientist at PPPL and former postdoctoral researcher in Kolemen’s group. “Some of the changes that the model wants are too rapid, so we work to smooth and calm the model. As humans, we arbitrate between what the AI wants to do and what the tokamak can accommodate.”
    Once they were confident in the AI controller’s abilities, they tested it during an actual fusion experiment at the DIII-D tokamak, observing as the controller made real-time changes to certain tokamak parameters to avoid the onset of an instability. These parameters included the shape of the plasma and the strength of the beams injecting power into the reaction.
    “Being able to predict instabilities ahead of time can make it easier to run these reactions than current approaches, which are more passive,” said Kim. “We no longer have to wait for the instabilities to occur and then take quick corrective action before the plasma becomes disrupted.”
    Powering into the future
    While the researchers say the work is a promising proof of concept demonstrating how artificial intelligence can effectively control fusion reactions, it is only the first of many steps ongoing in Kolemen’s group to advance the field of fusion research.
    The first step is to get more evidence of the AI controller in action at the DIII-D tokamak, and then expand the controller to function at other tokamaks.
    “We have strong evidence that the controller works quite well at DIII-D, but we need more data to show that it can work in a number of different situations,” said first author Seo. “We want to work toward something more universal.”
    A second line of research involves expanding the algorithm to handle many different control problems at the same time. While the current model uses a limited number of diagnostics to avoid one specific type of instability, the researchers could provide data on other types of instabilities and give access to more knobs for the AI controller to tune.
    “You could imagine one large reward function that turns many different knobs to simultaneously control for several types of instabilities,” said co-author Ricardo Shousha, a postdoc at PPPL and former graduate student in Kolemen’s group who provided support for the experiments at DIII-D.
    And on the route to developing better AI controllers for fusion reactions, researchers might also gain more understanding of the underlying physics. The AI controller’s decisions as it attempts to contain the plasma can be radically different from what traditional approaches would prescribe, so by studying them, scientists may find that artificial intelligence is not only a tool to control fusion reactions but also a teaching resource.
    “Eventually, it may be more than just a one-way interaction of scientists developing and deploying these AI models,” said Kolemen. “By studying them in more detail, they may have certain things that they can teach us too.”
    The work was supported by the U.S. Department of Energy’s Office of Fusion Energy Sciences, as well as the National Research Foundation of Korea (NRF). The authors also acknowledge the use of the DIII-D National Fusion Facility, a Department of Energy Office of Science user facility.

  • Angle-dependent holograms made possible by metasurfaces

    The expression “flawless from every angle” is commonly used to describe a celebrity’s appearance. It implies not merely that they look attractive from one specific viewpoint, but that their appeal holds up across many angles and perspectives. Recently, a research team from Pohang University of Science and Technology (POSTECH) has employed metasurfaces to fabricate angle-dependent holograms with multiple functions, capturing significant interest within the academic community.
    The team, comprising Professor Junsuk Rho from the Department of Mechanical Engineering and the Department of Chemical Engineering and PhD candidate Joohoon Kim from the Department of Mechanical Engineering at POSTECH, created a metasurface display technology that allows holograms to show different images depending on the observer’s viewing angle. The findings were recently published in Nano Letters, an international journal focusing on nanoscale research and applications.
    Objects can appear different depending on the viewer’s position, a concept that can be harnessed in holographic technology to generate cinematic, realistic 3D holograms that present different images based on the viewing angle. However, the current challenge lies in controlling light dispersion according to the angle, making the application of nano-optics in this context a complex endeavor.
    The team addressed this challenge by leveraging metasurfaces, artificial nanostructures capable of precisely manipulating the characteristics of light. These metasurfaces are incredibly thin and lightweight, approximately one-hundredth the thickness of a human hair, making them promising for applications in miniaturized displays such as virtual and augmented reality devices. Through the use of metasurfaces, the team devised a system that controls light to convey only a specific phase of information at a given angle, resulting in diverse images based on the angle of incidence.
    In their experiments, the team’s metasurface generated distinct 3D holographic images at angles of +35 degrees and -35 degrees for left-circularly polarized light. Remarkably, the team produced different images from a single metasurface depending on the polarization of the incident light. Notably, the holographic display demonstrated an extensive viewing angle of 70 degrees (±35 degrees), enabling observers to perceive the three-dimensional hologram from various directions.
    Professor Junsuk Rho, who led the research, explained, “We have successfully achieved an effective display from diverse angles.” He added, “We anticipate this technology will make significant contributions to the commercialization of technology in virtual and augmented reality displays, encrypted imaging, information storage, and other applications.”
    The study was conducted with support from the POSCO-POSTECH-RIST Convergence Research Center program, the STEAM Research Program of the National Research Foundation of Korea funded by the Ministry of Science and ICT, and the Alchemist fellowship of the Ministry of Trade, Industry and Energy.

  • Science fiction meets reality: New technique to overcome obstructed views

    After a recent car crash, John Murray-Bruce wished he could have seen the other car coming. The crash reaffirmed the mission of Murray-Bruce, an assistant professor of computer science and engineering at the University of South Florida, to create a technology that could do just that: see around obstacles and ultimately expand one’s line of vision.
    Using a single photograph, Murray-Bruce and his doctoral student, Robinson Czajkowski, created an algorithm that computes highly accurate, full-color three-dimensional reconstructions of areas behind obstacles — a concept that can not only help prevent car crashes, but help law enforcement experts in hostage situations, search-and-rescue and strategic military efforts.
    “We’re turning ordinary surfaces into mirrors to reveal regions, objects and rooms that are outside our line of vision,” Murray-Bruce said. “We live in a 3D world, so obtaining a more complete 3D picture of a scenario can be critical in a number of situations and applications.”
    As published in Nature Communications, Czajkowski and Murray-Bruce’s research is the first of its kind to successfully reconstruct a hidden scene in 3D using an ordinary digital camera. The algorithm uses information in the photo about faint shadows cast on nearby surfaces to create a high-quality reconstruction of the scene. Though the method is technical, it could have broad applications.
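    As a rough illustration of the general idea (a generic shadow-inversion sketch, not the authors’ published algorithm), the photographed surface can be modeled as a linear measurement of the hidden scene, which is then inverted with regularization:

    ```python
    # Generic sketch: recover a hidden scene from faint shadows on a visible wall.
    # The light-transport matrix A and all sizes below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_scene, n_pixels = 64, 256             # hidden-scene patches, camera pixels
    A = rng.random((n_pixels, n_scene))     # light transport from scene to wall
    x_true = rng.random(n_scene)            # unknown hidden-scene radiance
    b = A @ x_true + 0.01 * rng.standard_normal(n_pixels)   # photo with faint shadows

    # Tikhonov-regularized least squares: argmin ||A x - b||^2 + lam ||x||^2
    lam = 1e-2
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ b)
    print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
    ```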
    “These shadows are all around us,” Czajkowski said. “The fact we can’t see them with our naked eye doesn’t mean they’re not there.”
    The idea of seeing around obstacles has been a topic of science-fiction movies and books for decades. Murray-Bruce says this research takes significant strides in bringing that concept to life.
    Prior to this work, researchers had only used ordinary cameras to create rough 2D reconstructions of small spaces. The most successful demonstrations of 3D imaging of hidden scenes all required specialized, expensive equipment.

    “Our work achieves a similar result using far less,” Czajkowski said. “You don’t need to spend a million dollars on equipment for this anymore.”
    Czajkowski and Murray-Bruce expect it will be 10 to 20 years before the technology is robust enough to be adopted by law enforcement and car manufacturers. For now, they plan to continue improving the technology’s speed and accuracy to expand its future applications, including helping self-driving cars improve their safety and situational awareness.
    “In just over a decade since the idea of seeing around corners emerged, there has been remarkable progress, and there is accelerating interest and research activity in the area,” Murray-Bruce said. “This increased activity, along with access to better, more sensitive cameras and faster computing power, forms the basis for my optimism about how soon this technology will become practical for a wide range of scenarios.”
    While the algorithm is still in the development phase, it is available for other researchers to test and reproduce in their own space.

  • Time watching videos may stunt toddler language development, but it depends on why they’re watching

    A new study from SMU psychologist Sarah Kucker and colleagues reveals that passive video use among toddlers can negatively affect language development, but that caregivers’ motivations for exposing children to digital media can lessen the impact.
    Results show that children between the ages of 17 and 30 months spend an average of nearly two hours per day watching videos — a 100 percent increase from prior estimates gathered before the COVID pandemic. The research reveals a negative association between high levels of digital media watching and children’s vocabulary development.
    Children exposed to videos by caregivers for their calming or “babysitting” benefits tended to use phrases and sentences with fewer words. However, the negative impact on language skills was mitigated when videos were used for educational purposes or to foster social connections — such as through video chats with family members.
    “In those first couple years of life, language is one of the core components of development that we know media can impact,” said Kucker, assistant professor of psychology in SMU’s Dedman College of Humanities & Sciences. “There’s less research focused on toddlers using digital media than older ages, which is why we’re trying to understand better how digital media affects this age group and what type of screen time is beneficial and what is not.”
    Published in the journal Acta Paediatrica, the study involved 302 caregivers of children between 17 and 30 months. Caregivers answered questions about their child’s words, sentences, and how much time they spend on different media activities each day. Those activities included video/TV, video games, video chat, and e-books, with caregivers explaining why they use each activity with their child. Print book reading was also compared.
    Researchers looked at the amount of media use and the reasons caregivers gave for their children’s media use. These factors were then compared to the children’s vocabulary and the average length of their multi-word utterances (the mean length of utterance).
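    For context, the mean length of utterance is a simple metric: the average length of a child’s utterances, approximated here in words (researchers typically count morphemes). A minimal illustration:

    ```python
    # Word-based approximation of mean length of utterance (MLU); developmental
    # researchers usually count morphemes rather than whole words.
    def mean_length_of_utterance(utterances: list[str]) -> float:
        return sum(len(u.split()) for u in utterances) / len(utterances)

    mlu = mean_length_of_utterance(["more milk", "doggy go", "want that ball"])
    print(mlu)   # (2 + 2 + 3) / 3 ≈ 2.33 words per utterance
    ```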
    Kucker suggests that caregivers consider what kind of videos their children are watching (whether for learning or fun) and how they interact with toddlers who are watching videos. She acknowledges that parents often use digital media to occupy children while they complete tasks. Kucker recommends caregivers consider how much digital media they allow young children and whether they can interact with the children while using it.
    The study’s findings underscore the need for parents, caregivers, and educators to be aware of the potential effects of digital media on language development in children 30 months and under. By understanding the types of digital media children are exposed to and the reasons behind its usage, appropriate measures can be taken to ensure more healthy language development.
    Future research by Kucker and her colleagues will continue to explore the types of videos young children watch, how they use screens with others, and if young children watching digital media for two hours is the new normal and, if so, how that impacts language development.
    Research team members included Rachel Barr from Georgetown University and Lynn K. Perry from the University of Miami. Research reported in this press release was supported by the Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health under Award Number R15HD101841. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

  • Engineers achieve breakthrough in quantum sensing

    A collaborative project led by Professor Zhiqin Chu, Professor Can Li, and Professor Ngai Wong at the Department of Electrical and Electronic Engineering of the University of Hong Kong (HKU) has made a breakthrough in enhancing the speed and resolution of widefield quantum sensing, leading to new opportunities in scientific research and practical applications.
    By collaborating with scientists from Mainland China and Germany, the team has successfully developed a groundbreaking quantum sensing technology using a neuromorphic vision sensor, which is designed to mimic the human vision system. This sensor is capable of encoding changes in fluorescence intensity into spikes during optically detected magnetic resonance (ODMR) measurements. The key advantage of this approach is that it results in highly compressed data volumes and reduced latency, making the system more efficient than traditional methods. This breakthrough in quantum sensing holds potential for various applications in fields such as monitoring dynamic processes in biological systems.
    The research paper, titled “Widefield Diamond Quantum Sensing with Neuromorphic Vision Sensors,” has been published in the journal Advanced Science.
    “Researchers worldwide have spent much effort looking into ways to improve the measurement accuracy and spatiotemporal resolution of camera sensors. But a fundamental challenge remains: handling the massive amount of data, in the form of image frames, that needs to be transferred from the camera sensors for further processing. This data transfer can significantly limit the temporal resolution, which is typically no more than 100 fps due to the use of frame-based image sensors. What we did was try to overcome this bottleneck,” said Zhiyuan Du, the first author of the paper and a PhD candidate at the Department of Electrical and Electronic Engineering.
    Du said his professor’s focus on quantum sensing had inspired him and other team members to break new ground in the area. He is also driven by a passion for integrating sensing and computing.
    “The latest development provides new insights for high-precision and low-latency widefield quantum sensing, with possibilities for integration with emerging memory devices to realise more intelligent quantum sensors,” he added.
    The team’s experiment with an off-the-shelf event camera demonstrated a 13× improvement in temporal resolution, with precision in detecting ODMR resonance frequencies comparable to that of the state-of-the-art, highly specialized frame-based approach. The new technology was successfully deployed in monitoring dynamically modulated laser heating of gold nanoparticles coated on a diamond surface. “It would be difficult to perform the same task using existing approaches,” Du said.

    Unlike traditional sensors that record absolute light intensity levels, neuromorphic vision sensors encode changes in light intensity as “spikes,” much like biological vision systems, leading to improved temporal resolution (≈µs) and dynamic range (>120 dB). This approach is particularly effective in scenarios where image changes are infrequent, such as object tracking and autonomous vehicles, as it eliminates redundant static background signals.
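    A minimal sketch of this event-style encoding (an illustrative model of how such sensors behave, not HKU’s device or code): each pixel emits a +1/-1 spike whenever its log intensity drifts past a threshold, and stays silent otherwise:

    ```python
    # Illustrative event-camera encoding: spikes mark per-pixel intensity changes.
    import numpy as np

    def events_from_frames(frames: np.ndarray, threshold: float = 0.2):
        """frames: (T, H, W) intensity stack; yields (t, y, x, polarity) spikes."""
        ref = np.log(frames[0] + 1e-6)               # per-pixel reference log intensity
        for t in range(1, len(frames)):
            cur = np.log(frames[t] + 1e-6)
            diff = cur - ref
            ys, xs = np.nonzero(np.abs(diff) > threshold)
            for y, x in zip(ys, xs):
                yield t, y, x, int(np.sign(diff[y, x]))
                ref[y, x] = cur[y, x]                # reset reference after a spike

    # Static pixels produce no events, which is why the data volume stays small.
    ```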
    “We anticipate that our successful demonstration of the proposed method will revolutionise widefield quantum sensing, significantly improving performance at an affordable cost,” said Professor Zhiqin Chu.
    “This also brings closer the realisation of near-sensor processing with emerging memory-based electronic synapse devices,” said Professor Can Li.
    “The technology’s potential for industrial use should be explored further, such as studying dynamic changes in currents in materials and identifying defects in microchips,” said Professor Ngai Wong.

  • Accelerating the discovery of single-molecule magnets with deep learning

    Synthesizing or studying certain materials in a laboratory setting often poses challenges due to safety concerns, impractical experimental conditions, or cost constraints. In response, scientists are increasingly turning to deep learning, training machine learning models to recognize patterns and relationships in data about material properties, compositions, and behaviors. Using deep learning, scientists can quickly predict material properties from a material’s composition, structure, and other relevant features, identify promising candidates for further investigation, and optimize synthesis conditions.
    Now, in a study published on 1 February 2024 in the International Union of Crystallography Journal (IUCrJ), Professor Takashiro Akitsu, Assistant Professor Daisuke Nakane, and Mr. Yuji Takiguchi from Tokyo University of Science (TUS) have used deep learning to predict single-molecule magnets (SMMs) from a pool of 20,000 metal complexes. This innovative strategy streamlines the material discovery process by minimizing the need for lengthy experiments.
    Single-molecule magnets (SMMs) are metal complexes that demonstrate magnetic relaxation behavior at the individual molecule level, where magnetic moments undergo changes or relaxation over time. These materials have potential applications in the development of high-density memory, quantum molecular spintronic devices, and quantum computing devices. SMMs are characterized by having a high effective energy barrier (Ueff) for the magnetic moment to flip. However, these values are typically in the range of tens to hundreds of Kelvins, making SMMs challenging to synthesize.
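    As standard background (not a result of this study), the effective barrier Ueff sets how slowly the magnetic moment relaxes, typically via an Arrhenius-type law; the higher the barrier, the longer the molecule behaves as a magnet:

    ```latex
    \tau = \tau_0 \exp\!\left(\frac{U_{\mathrm{eff}}}{k_B T}\right)
    ```

    Here τ is the magnetic relaxation time, τ₀ a prefactor, kB the Boltzmann constant, and T the temperature.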
    The researchers used deep learning to identify the relationship between molecular structures and SMM behavior in metal complexes with salen-type ligands. These complexes were chosen because they can be easily synthesized by complexing aldehydes and amines with various 3d and 4f metals. To build the dataset, the researchers screened 800 papers published from 2011 to 2021, collecting information on the crystal structures and determining whether the complexes exhibited SMM behavior. Additionally, they obtained 3D structural details of the molecules from the Cambridge Structural Database.
    The molecular structure of the complexes was represented using voxels or 3D pixels, where each element was assigned a unique RGB value. Subsequently, these voxel representations served as input to a 3D Convolutional Neural Network model based on the ResNet architecture. This model was specifically designed to classify molecules as either SMMs or non-SMMs by analyzing their 3D molecular images.
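    A minimal sketch of this pipeline (hypothetical layer sizes and grid dimensions; the paper’s ResNet-based model is more elaborate):

    ```python
    # Toy 3D CNN classifying voxelized molecules as SMM vs. non-SMM.
    # Channel and layer choices here are illustrative, not the published model.
    import torch
    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1),   # 3 input channels: RGB element encoding
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, 2)                # two classes: SMM / non-SMM

        def forward(self, voxels):                            # voxels: (batch, 3, D, H, W)
            return self.classifier(self.features(voxels).flatten(1))

    model = Tiny3DCNN()
    dummy_molecule = torch.rand(1, 3, 32, 32, 32)             # one voxelized structure
    logits = model(dummy_molecule)                            # unnormalized class scores
    ```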
    When the model was trained on a dataset of crystal structures of metal complexes containing salen-type ligands, it achieved a 70% accuracy rate in distinguishing between the two categories. When the model was then applied to 20,000 crystal structures of metal complexes containing Schiff bases, it successfully identified the metal complexes previously reported as single-molecule magnets. “This is the first report of deep learning on the molecular structures of SMMs,” says Prof. Akitsu.
    Many of the predicted SMM structures involved multinuclear dysprosium complexes, known for their high Ueff values. While this method simplifies the SMM discovery process, it is important to note that the model’s predictions are solely based on training data and do not explicitly link chemical structures with their quantum chemical calculations, a preferred method in AI-assisted molecular design. Further experimental research is required to obtain the data of SMM behavior under uniform conditions.
    However, this simplified approach has its advantages. It reduces the need for complex computational calculations and avoids the challenging task of simulating magnetism. Prof. Akitsu concludes: “Adopting such an approach can guide the design of innovative molecules, bringing about significant savings in time, resources, and costs in the development of functional materials.”

  • Tapping into the 300 GHz band with an innovative CMOS transmitter

    New phased-array transmitter design overcomes common problems of CMOS technology in the 300 GHz band, as reported by scientists from Tokyo Tech. Thanks to its remarkable area efficiency, low power consumption, and high data rate, the proposed transmitter could pave the way to many technological applications in the 300 GHz band, including body and cell monitoring, radar, 6G wireless communications, and terahertz sensors.
    Today, most frequencies above the 250 GHz mark remain unallocated. Accordingly, many researchers are developing 300 GHz transmitters/receivers to capitalize on the low atmospheric absorption at these frequencies, as well as the potential for extremely high data rates that comes with it.
    However, high-frequency electromagnetic waves become weaker at a fast pace when travelling through free space. To combat this problem, transmitters must compensate by achieving a large effective radiated power. While some interesting solutions have been proposed over the past few years, no 300 GHz-band transmitter manufactured via conventional CMOS processes has simultaneously realized high output power and small chip size.
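    As standard link-budget background (not specific to this work), free-space path loss for fixed-gain antennas grows with the square of both distance d and frequency f, which is why 300 GHz links demand large effective radiated power:

    ```latex
    \mathrm{FSPL} = \left(\frac{4\pi d f}{c}\right)^{2}
    \qquad\Longrightarrow\qquad
    \mathrm{FSPL}_{\mathrm{dB}} = 20\log_{10} d + 20\log_{10} f + 20\log_{10}\!\frac{4\pi}{c}
    ```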
    Now, a research team led by Professor Kenichi Okada from Tokyo Institute of Technology (Tokyo Tech) and NTT Corporation (Headquarters: Chiyoda-ku, Tokyo; President & CEO: Akira Shimada; “NTT”) has developed a 300 GHz-band transmitter that solves these issues through several key innovations. Their work will be presented at the 2024 IEEE International Solid-State Circuits Conference (ISSCC).
    The proposed solution is a phased-array transmitter composed of 64 radiating elements, which are arranged in 16 integrated circuits with four antennas each. Since the elements are arranged in three dimensions by stacking printed circuit boards (PCBs), this transmitter supports 2D beam steering. Simply put, the transmitted power can be aimed both vertically and horizontally, allowing for fast beam steering and tracking receivers efficiently. Notably, the antennas used are Vivaldi antennas, which can be implemented directly on-chip and have a suitable shape and emission profile for high frequencies.
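    A minimal sketch of the 2D beam-steering principle behind such an array (idealized half-wavelength grid, not the chip’s actual circuitry): each element is driven with a progressive phase offset so the emitted waves add coherently in the chosen direction:

    ```python
    # Per-element phase offsets for steering a planar phased array at 300 GHz.
    # Geometry (8x8 grid, half-wavelength spacing) is an illustrative assumption.
    import numpy as np

    c = 3e8                          # speed of light, m/s
    f = 300e9                        # carrier frequency: 300 GHz
    lam = c / f                      # wavelength, ~1 mm
    d = lam / 2                      # element spacing
    k = 2 * np.pi / lam              # wavenumber

    def element_phases(theta_deg: float, phi_deg: float, nx: int = 8, ny: int = 8):
        """Phases (radians) steering the main lobe to elevation theta, azimuth phi."""
        theta, phi = np.radians(theta_deg), np.radians(phi_deg)
        ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
        # Linear phase gradient across the aperture tilts the wavefront.
        return -k * d * np.sin(theta) * (ix * np.cos(phi) + iy * np.sin(phi))

    phases = element_phases(20.0, 45.0)   # steer 20 degrees off boresight, 45 azimuth
    ```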
    An important feature of the proposed transmitter is its power amplifier (PA)-last architecture. By placing the amplification stage right before the antennas, the system only needs to amplify signals that have already been conditioned and processed. This leads to higher efficiency and better amplifier performance.
    The researchers also addressed a few common problems that arise with conventional transistor layouts in CMOS processes, namely high gate resistance and large parasitic capacitances. They optimized their layout by adding additional drain paths and vias and by altering the geometry and element placement between metal layers. “Compared to the standard transistor layout, the parasitic resistance and capacitances in the proposed transistor layout are all mitigated,” remarks Prof. Okada. “In turn, the transistor-gain corner frequency, which is the point where the transistor’s amplification starts to decrease at higher frequencies, was increased from 250 to 300 GHz.”
    On top of these innovations, the team designed and implemented a multi-stage 300 GHz power amplifier to be used with each antenna. Thanks to excellent impedance matching between stages, the amplifiers demonstrated outstanding performance, as Prof. Okada highlights: “The proposed power amplifiers achieved a gain higher than 20 dB from 237 to 267 GHz, with a sharp cut-off frequency to suppress out-of-band undesired signals.” The proposed amplifier also achieves a noise figure of 15 dB, evaluated with a noise measurement system in the 300 GHz band.
    The researchers tested their design through both simulations and experiments, obtaining very promising results. Remarkably, the proposed transmitter achieved a data rate of 108 Gb/s in on-PCB probe measurements, which is substantially higher than other state-of-the-art 300 GHz-band transmitters.
    Moreover, the transmitter displayed remarkable area efficiency and low power consumption compared to other CMOS-based designs, highlighting its potential for miniaturized and power-constrained applications. Some notable use cases are sixth-generation (6G) wireless communications, high-resolution terahertz sensors, and human body and cell monitoring.