More stories

  •

    Semiconductor doping and electronic devices: Heating gallium nitride and magnesium forms superlattice

    A study led by Nagoya University in Japan revealed that a simple thermal reaction of gallium nitride (GaN) with metallic magnesium (Mg) results in the formation of a distinctive superlattice structure. This represents the first time researchers have identified the insertion of 2D metal layers into a bulk semiconductor. By carefully observing the materials through various cutting-edge characterization techniques, the researchers uncovered new insights into the process of semiconductor doping and elastic strain engineering. They published their findings in the journal Nature.
    GaN is an important wide bandgap semiconductor material that is poised to replace traditional silicon semiconductors in applications demanding higher power density and faster operating frequencies. These distinctive characteristics of GaN make it valuable in devices such as LEDs, laser diodes, and power electronics — including critical components in electric vehicles and fast chargers. The improved performance of GaN-based devices contributes to the realization of an energy-saving society and a carbon-neutral future.
    In semiconductors, there are two essential and complementary types of electrical conductivity: p-type and n-type. The p-type semiconductor features primarily free carriers carrying positive charges, known as holes, whereas the n-type semiconductor conducts electricity through free electrons.
    A semiconductor acquires p-type or n-type conductivity through a process called doping, which refers to the intentional introduction of specific impurities (known as dopants) into a pure semiconductor material to greatly alter its electrical and optical properties.
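As a rough illustration of the arithmetic behind doping, the sketch below estimates electron and hole concentrations from a given acceptor density, assuming full dopant ionization and the mass-action law (n·p = ni²); the numbers are generic placeholders, not values for Mg-doped GaN.

```python
import math

def carrier_concentrations(ni, n_acceptors=0.0, n_donors=0.0):
    """Return (n, p) electron/hole concentrations per cm^3, assuming
    full ionization and the mass-action law n * p = ni**2."""
    net = n_donors - n_acceptors  # net doping (negative for p-type)
    # Solve n - p = net together with n * p = ni**2
    n = (net + math.sqrt(net**2 + 4 * ni**2)) / 2
    p = ni**2 / n
    return n, p

# Example: acceptors at 1e17 cm^-3 in a material with ni = 1e10 cm^-3.
# Holes dominate (p-type): p is close to the acceptor density.
n, p = carrier_concentrations(ni=1e10, n_acceptors=1e17)
```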
    In the field of GaN semiconductors, Mg is to date the only element known to create p-type conductivity. Despite the 35 years that have passed since Mg was first successfully doped into GaN, the full mechanisms of Mg doping in GaN, especially the solubility limit and segregation behavior of Mg, remain unclear. This uncertainty limits the optimization of p-type GaN for optoelectronics and electronics.
    To improve the conductivity of p-type GaN, Jia Wang, the first author of the study, and his colleagues conducted an experiment in which they deposited patterned metallic Mg thin films on GaN wafers and heated them to a high temperature — a conventional process known as annealing.
    Using state-of-the-art electron microscope imaging, the scientists observed the spontaneous formation of a superlattice featuring alternating layers of GaN and Mg. This is especially unusual since GaN and Mg are two types of materials with significant differences in their physical properties.
    “Although GaN is a wide-bandgap semiconductor with mixed ionic and covalent bonding, and Mg is a metal featuring metallic bonding, these two dissimilar materials have the same crystal structure, and it is a strikingly natural coincidence that the lattice difference between hexagonal GaN and hexagonal Mg is negligibly small,” Wang said. “We think that the perfect lattice match between GaN and Mg greatly reduces the energy needed to create the structure, playing a critical role in the spontaneous formation of such a superlattice.”
    The researchers determined that this unique intercalation behavior, which they named interstitial intercalation, imparts compressive strain to the host material. Specifically, they found that the GaN intercalated with Mg layers sustains a stress of more than 20 GPa, equivalent to 200,000 times atmospheric pressure, the highest compressive strain ever recorded in a thin-film material. This is far greater than the compressive stresses commonly found in silicon films (in the range of 0.1 to 2 GPa). Strain of this magnitude can significantly alter the electronic and magnetic properties of thin films. Indeed, the researchers found that hole transport in GaN was significantly enhanced along the strained direction.
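The pressure comparison above is easy to check: converting 20 GPa with the standard atmosphere (101,325 Pa) reproduces the roughly 200,000-atmosphere figure.

```python
ATM_PA = 101_325  # one standard atmosphere, in pascals

def gpa_to_atm(gpa):
    """Convert a pressure in gigapascals to atmospheres."""
    return gpa * 1e9 / ATM_PA

ratio = gpa_to_atm(20)   # roughly 2e5, i.e. about 200,000 atmospheres
```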
    “Using such a simple and low-cost approach, we were able to enhance the transport of holes in GaN, allowing it to conduct more current,” Wang said. “This interesting finding in interactions between a semiconductor and a metal may provide new insights into semiconductor doping and improve the performance of GaN-based devices.”

  •

    Switching nanomagnets using infrared lasers

    Physicists at TU Graz have calculated how suitable molecules can be stimulated by infrared light pulses to form tiny magnetic fields. If this is also successful in experiments, the principle could be used in quantum computer circuits.
    When molecules are irradiated with infrared light, they begin to vibrate due to the energy supply. For Andreas Hauser from the Institute of Experimental Physics at Graz University of Technology (TU Graz), this well-known phenomenon was the starting point for considering whether these oscillations could also be used to generate magnetic fields. This is because atomic nuclei are positively charged, and when a charged particle moves, a magnetic field is created. Using the example of metal phthalocyanines — ring-shaped, planar dye molecules — Andreas Hauser and his team have now calculated that, due to their high symmetry, these molecules actually generate tiny magnetic fields in the nanometre range when infrared pulses act on them. According to the calculations, it should be possible to measure the rather low but very precisely localised field strength using nuclear magnetic resonance spectroscopy. The researchers have published their results in the Journal of the American Chemical Society.
    Circular dance of the molecules
    For the calculations, the team drew on preliminary work from the early days of laser spectroscopy, some of it decades old, and used modern electronic structure theory on supercomputers at the Vienna Scientific Cluster and TU Graz to calculate how phthalocyanine molecules behave when irradiated with circularly polarised infrared light. The calculations showed that the circularly polarised, i.e. helically twisted, light waves excite two molecular vibrations simultaneously, at right angles to each other. “As every rumba dancing couple knows, the right combination of forwards-backwards and left-right creates a small, closed loop. And this circular movement of each affected atomic nucleus actually creates a magnetic field, but only very locally, with dimensions in the range of a few nanometres,” says Andreas Hauser.
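For intuition about why a circulating nucleus produces only a tiny, highly localised field, one can model it as a microscopic current loop. This is an order-of-magnitude sketch only, not the team's calculation; the vibration frequency, orbit radius, and effective charge below are illustrative assumptions.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
E = 1.602e-19              # elementary charge, C

f = 3e13    # assumed infrared vibration frequency, Hz (~1000 cm^-1)
r = 1e-12   # assumed circular-orbit radius, m (a typical vibrational amplitude)

current = E * f                        # one charge circulating f times per second
m_dipole = current * math.pi * r**2    # magnetic dipole moment, A*m^2

# Dipole field on the loop axis at 1 nm -- the "few nanometres" scale
d = 1e-9
B = MU0 * m_dipole / (2 * math.pi * d**3)   # tesla; falls off as 1/d^3
```

With these assumed numbers the field at a nanometre is in the nanotesla range: small, but sharply localised because of the cubic fall-off.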
    Molecules as circuits in quantum computers
    By selectively manipulating the infrared light, it is even possible to control the strength and direction of the magnetic field, explains Andreas Hauser. This would turn the molecules into high-precision optical switches, which could perhaps also be used to build circuits for a quantum computer.
    Experiments as next step
    Together with colleagues from the Institute of Solid State Physics at TU Graz and a team at the University of Graz, Andreas Hauser now wants to prove experimentally that molecular magnetic fields can be generated in a controlled manner. “For proof, but also for future applications, the phthalocyanine molecule needs to be placed on a surface. However, this changes the physical conditions, which in turn influences the light-induced excitation and the characteristics of the magnetic field,” explains Andreas Hauser. “We therefore want to find a support material that has minimal impact on the desired mechanism.” In a next step, the physicist and his colleagues want to compute the interactions between the deposited phthalocyanines, the support material and the infrared light before putting the most promising variants to the test in experiments.

  •

    ‘Self-taught’ AI tool helps to diagnose and predict severity of common lung cancer

    A computer program based on data from nearly a half-million tissue images and powered by artificial intelligence can accurately diagnose cases of adenocarcinoma, the most common form of lung cancer, a new study shows.
    Researchers at NYU Langone Health’s Perlmutter Cancer Center and the University of Glasgow developed and tested the program. They say that because it incorporates structural features of tumors from 452 adenocarcinoma patients, who are among the more than 11,000 patients in the United States National Cancer Institute’s Cancer Genome Atlas, the program offers an unbiased, detailed, and reliable second opinion for patients and oncologists about the presence of the cancer and the likelihood and timing of its return (prognosis).
    The research team also points out that the program is independent and “self-taught,” meaning that it determined on its own which structural features were statistically most significant to gauging the severity of disease and had the greatest impact on tumor recurrence.
    In the study, published online June 11 in the journal Nature Communications, the program — an algorithm the authors call histomorphological phenotype learning (HPL) — was found to accurately distinguish between two similar lung cancers, adenocarcinoma and squamous cell cancer, 99% of the time. The HPL program was also 72% accurate at predicting the likelihood and timing of cancer’s return after therapy, bettering the 64% accuracy of predictions made by pathologists who directly examined the same patients’ tumor images, researchers say.
    “Our new histomorphological phenotype learning program has the potential to offer cancer specialists and their patients a quick and unbiased diagnostic tool for lung adenocarcinoma that, once further testing is complete, can also be used to help validate and even guide their treatment decisions,” said study lead investigator Nicolas Coudray, PhD, a bioinformatics programmer at NYU Grossman School of Medicine and Perlmutter Cancer Center.
    “Patients, physicians, and researchers know they can rely on this predictive modeling because it is self-taught, provides explainable decisions, and is based only on the knowledge drawn specifically from each patient’s tissue, including such features as its proportion of dying cells, tumor-fighting immune cells, and how densely packed the tumor cells are, among other features,” said Coudray.
    “Lung tissue samples can now be analyzed in minutes by our computer program to provide fairly accurate predictions of whether their cancer will return, predictions that are better than current standards of care for making a prognosis in lung adenocarcinoma,” said study co-senior investigator Aristotelis Tsirigos, PhD. Tsirigos is a professor in the Departments of Pathology and Medicine at NYU Grossman School of Medicine and Perlmutter Cancer Center, where he also serves as co-director of precision medicine and director of its Applied Bioinformatics Laboratories.

    Tsirigos says that thanks to such tools and other advances in lung cancer biology, pathologists will increasingly examine tissue scans on their computer screens rather than under microscopes, then use the AI program to analyze the image and produce its own rendering of the scan. The new image, or “landscape,” he adds, will offer a detailed breakdown of the tissue’s content. It might note, for example, that there is 5% necrosis and 10% tumor infiltration and what that means in terms of survival. That reading may statistically equate to an 80% chance of remaining cancer-free for two years or more, based on information from all the patient data in the program.
    To develop the HPL program, the researchers first analyzed lung adenocarcinoma tissue slides from the Cancer Genome Atlas. Adenocarcinoma was chosen for the test model because the disease is known for characteristic features. As an example, they note that its tumor cells tend to group in so-called acinar, or saclike patterns and spread predictably along the surface lining of lung cells.
    From their analysis of the slides, whose visual images were digitally scanned and broken into 432,231 small quadrants, or tiles, the researchers identified 46 key characteristics, which they term histomorphological phenotype clusters, across both normal and diseased tissue; a subset of these clusters was statistically linked either to cancer’s early return or to long-term survival. The findings were then confirmed by further, separate testing on tissue images from 276 men and women who were treated for adenocarcinoma at NYU Langone from 2006 to 2021.
    Researchers say their goal is to use the HPL algorithm to assign each patient a score between 0 and 1 that reflects their statistical chance of survival and tumor recurrence for up to five years. Because the program is self-learning, they stress, HPL will become increasingly accurate as more data is added over time. To build public trust, the researchers have posted their programming code online and plan to make the new HPL tool freely available upon completion of further testing.
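The tile-clustering step behind a pipeline of this kind can be sketched schematically. The code below is our simplified illustration, not the authors' HPL implementation: synthetic tile embeddings are grouped by a small k-means routine, and each slide is then summarized by its cluster-composition histogram — the kind of per-patient feature vector a survival model could map to a 0-to-1 score.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(features, k, iters=20):
    """Minimal k-means: assign each feature vector to one of k clusters."""
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Nearest center for every feature vector
        labels = np.argmin(
            ((features[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(0)
    return labels

# Synthetic stand-in: 6 slides x 200 tiles, 8-dim tile embeddings
tiles = rng.normal(size=(6, 200, 8))
labels = kmeans(tiles.reshape(-1, 8), k=4).reshape(6, 200)

# Per-slide phenotype composition: fraction of tiles in each cluster.
# Each row sums to 1 and could feed a downstream survival model.
composition = np.stack([np.bincount(s, minlength=4) / 200 for s in labels])
```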
    Characteristics linked to tumors recurring included high tile percentages of dead cancer cells and tumor-fighting immune cells called lymphocytes, and the dense clustering of tumor cells in the outer linings of the lungs. Features tied to increased likelihood for survival were high percentages of unchanged or preserved lung sac tissue, and lack of or mild presence of inflammatory cells.
    Tsirigos says the team next plans to develop HPL-like programs for other cancers with similarly distinctive morphological features, such as breast, ovarian, and colorectal cancer, drawing on additional molecular data as well. The team also plans to expand and improve the accuracy of the current adenocarcinoma HPL program by including other data from hospital electronic health records about other illnesses and diseases, or even income and home ZIP code.
    Funding support for the new study was provided by National Institutes of Health grant P30CA016087, United Kingdom Research Council grants EP/R018634/1 and BB/V016067/1, and European Union Horizon 2020 grant no. 101016851.
    Besides Tsirigos and Coudray, other NYU Langone researchers involved in this study are Anna Yeaton, Bojing Liu, Hortense Le, Luis Chiriboga, Afreen Karimkhan, Navneet Natula, Christopher Park, Harvey Pass, and Andre Moreira. Study co-lead investigator Adalberto Claudio Quiros, study co-investigators Xinyu Yang and John Le Quesne, and study co-senior investigator Ke Yuan are all at the University of Glasgow, UK. Study co-investigator David Moore is at University College London, UK.

  •

    New computer vision method helps speed up screening of electronic materials

    Boosting the performance of solar cells, transistors, LEDs, and batteries will require better electronic materials, made from novel compositions that have yet to be discovered.
    To speed up the search for advanced functional materials, scientists are using AI tools to identify promising materials from hundreds of millions of chemical formulations. In tandem, engineers are building machines that can print hundreds of material samples at a time based on chemical compositions tagged by AI search algorithms.
    But to date, there’s been no similarly speedy way to confirm that these printed materials actually perform as expected. This last step of material characterization has been a major bottleneck in the pipeline of advanced materials screening.
    Now, a new computer vision technique developed by MIT engineers significantly speeds up the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconducting samples and quickly estimates two key electronic properties for each sample: band gap (a measure of electron activation energy) and stability (a measure of longevity).
    The new technique accurately characterizes electronic materials 85 times faster than the standard benchmark approach.
    The researchers intend to use the technique to speed up the search for promising solar cell materials. They also plan to incorporate the technique into a fully automated materials screening system.
    “Ultimately, we envision fitting this technique into an autonomous lab of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to give a computer a materials problem, have it predict potential compounds, and then run 24-7 making and characterizing those predicted materials until it arrives at the desired solution.”
    “The application space for these techniques ranges from improving solar energy to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really spans the full gamut of where semiconductor materials can benefit society.”

    Aissi and Siemenn detail the new technique in a study that will appear in Nature Communications. Their MIT co-authors include graduate student Fang Sheng, postdoc Basita Das, and professor of mechanical engineering Tonio Buonassisi, along with former visiting professor Hamide Kavak of Cukurova University and visiting postdoc Armi Tiihonen of Aalto University.
    Power in optics
    Once a new electronic material is synthesized, the characterization of its properties is typically handled by a “domain expert” who examines one sample at a time using a benchtop tool called a UV-Vis, which scans through different colors of light to determine where the semiconductor begins to absorb more strongly. This manual process is precise but also time-consuming: A domain expert typically characterizes about 20 material samples per hour — a snail’s pace compared to some printing tools that can lay down 10,000 different material combinations per hour.
    “The manual characterization process is very slow,” Buonassisi says. “They give you a high amount of confidence in the measurement, but they’re not matched to the speed at which you can put matter down on a substrate nowadays.”
    To speed up the characterization process and clear one of the largest bottlenecks in materials screening, Buonassisi and his colleagues looked to computer vision — a field that applies computer algorithms to quickly and automatically analyze optical features in an image.
    “There’s power in optical characterization methods,” Buonassisi notes. “You can obtain information very quickly. There is richness in images, over many pixels and wavelengths, that a human just can’t process but a computer machine-learning program can.”
    The team realized that certain electronic properties — namely, band gap and stability — could be estimated based on visual information alone, if that information were captured with enough detail and interpreted correctly.

    With that goal in mind, the researchers developed two new computer vision algorithms to automatically interpret images of electronic materials: one to estimate band gap and the other to determine stability.
    The first algorithm is designed to process visual data from highly detailed, hyperspectral images.
    “Instead of a standard camera image with three channels — red, green, and blue (RGB) — the hyperspectral image has 300 channels,” Siemenn explains. “The algorithm takes that data, transforms it, and computes a band gap. We run that process extremely fast.”
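As a toy stand-in for that computation (the paper's actual algorithm is more involved), one can estimate a band gap from a single absorbance spectrum by locating the absorption onset and converting the onset wavelength to photon energy. The spectrum below is synthetic.

```python
import numpy as np

def band_gap_ev(wavelengths_nm, absorbance):
    """Estimate the band gap as the photon energy where absorbance last
    exceeds half its maximum, scanning from short to long wavelengths."""
    half = absorbance.max() / 2
    # Absorption is strong below the gap wavelength, weak above it
    onset_idx = np.where(absorbance >= half)[0].max()
    onset_nm = wavelengths_nm[onset_idx]
    return 1239.84 / onset_nm   # eV, from E = hc / lambda

# Synthetic 300-channel spectrum with an absorption edge near 800 nm
wl = np.linspace(400, 1100, 300)
absorb = 1 / (1 + np.exp((wl - 800) / 10))   # sigmoid absorption edge
gap = band_gap_ev(wl, absorb)                # about 1.55 eV for this edge
```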
    The second algorithm analyzes standard RGB images and assesses a material’s stability based on visual changes in the material’s color over time.
    “We found that color change can be a good proxy for degradation rate in the material system we are studying,” Aissi says.
    Material compositions
    The team applied the two new algorithms to characterize the band gap and stability of about 70 printed semiconducting samples. They used a robotic printer to deposit the samples on a single slide, like cookies on a baking sheet. Each deposit was made with a slightly different combination of semiconducting materials. In this case, the team printed different ratios of perovskites — a type of material expected to be a promising solar cell candidate, though it is also known to degrade quickly.
    “People are trying to change the composition — add a little bit of this, a little bit of that — to try to make [perovskites] more stable and high-performance,” Buonassisi says.
    Once they printed 70 different compositions of perovskite samples on a single slide, the team scanned the slide with a hyperspectral camera. Then they applied an algorithm that visually “segments” the image, automatically isolating the samples from the background. They ran the new band gap algorithm on the isolated samples and automatically computed the band gap for every sample. The entire band gap extraction process took about six minutes.
    “It would normally take a domain expert several days to manually characterize the same number of samples,” Siemenn says.
    To test for stability, the team placed the same slide in a chamber in which they varied the environmental conditions, such as humidity, temperature, and light exposure. They used a standard RGB camera to take an image of the samples every 30 seconds over two hours. They then applied the second algorithm to the images of each sample over time to estimate the degree to which each droplet changed color, or degraded, under various environmental conditions. In the end, the algorithm produced a “stability index,” or a measure of each sample’s durability.
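A toy version of such a color-change metric (our assumption of the general idea, not the paper's exact formula) integrates how far each sample's mean RGB color drifts from its initial value over the time series, so that samples that keep their color score higher.

```python
import numpy as np

def stability_index(frames):
    """frames: array (T, H, W, 3) of 8-bit RGB images of one sample over time.
    Returns a value in [0, 1]; 1 means no color change (most stable)."""
    colors = frames.reshape(len(frames), -1, 3).mean(axis=1)  # mean RGB per frame
    drift = np.abs(colors - colors[0]).mean()                 # average color drift
    return 1.0 - drift / 255.0                                # normalize 8-bit scale

# Synthetic check: one frame every 30 s for two hours = 240 frames
T, H, W = 240, 8, 8
stable = np.full((T, H, W, 3), 120.0)                         # constant color
fading = stable - np.linspace(0, 60, T)[:, None, None, None]  # darkens over time
s_stable = stability_index(stable)
s_fading = stability_index(fading)
```

As expected, the sample that darkens over time receives the lower index.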
    As a check, the team compared their results with manual measurements of the same droplets, taken by a domain expert. Compared to the expert’s benchmark estimates, the team’s band gap and stability results were 98.5 percent and 96.9 percent as accurate, respectively, and 85 times faster.
    “We were constantly shocked by how these algorithms were able to not just increase the speed of characterization, but also to get accurate results,” Siemenn says. “We do envision this slotting into the current automated materials pipeline we’re developing in the lab, so we can run it in a fully automated fashion, using machine learning to guide where we want to discover these new materials, printing them, and then actually characterizing them, all with very fast processing.”
    This work was supported in part by First Solar.

  •

    Four-legged, dog-like robot ‘sniffs’ hazardous gases in inaccessible environments

    Nightmare material or truly man’s best friend? A team of researchers equipped a dog-like quadruped robot with a mechanized arm that takes air samples from potentially treacherous situations, such as an abandoned building or fire. The robot dog walks samples to a person who screens them for potentially hazardous compounds, says the team that published its study in ACS’ Analytical Chemistry. While the system needs further refinement, demonstrations show its potential value in dangerous conditions.
    Testing the air for dangerous chemicals in risky workplaces or after an accident, such as a fire, is an important but very dangerous task for scientists and technicians. To keep humans out of harm’s way, Bin Hu and colleagues are developing mobile detection systems for hazardous gases and volatile organic compounds (VOCs) by building remote-controlled sampling devices like aerial drones and tiny remotely operated ships. The team’s latest entry into this mechanical menagerie is a dog-like robot with an articulated testing arm mounted on its back. The independently controlled arm is loaded with three needle trap devices (NTDs) that can collect air samples at any point during the robot’s terrestrial mission.
    The researchers test-drove their four-legged “lab” through a variety of inaccessible environments, including a garbage disposal plant, sewer system, gasoline fireground and chemical warehouse, to sample the air for hazardous VOCs. While the robot had trouble navigating effectively in rainy and snowy weather, it collected air samples and returned them to the portable mass spectrometer (MS) for onsite analysis in less time than it would take to transfer the samples to an off-site laboratory — and without putting a technician in a dangerous environment. The researchers say the robot-MS system represents a “smart” and safer approach for detecting potentially harmful compounds.
    The authors acknowledge funding from the Guangzhou Science and Technology Program and the National Natural Science Foundation of China.

  •

    Protocol for creating ‘wired miniature brains’

    Researchers worldwide can now create highly realistic brain cortical organoids — essentially miniature artificial brains with functioning neural networks — thanks to a proprietary protocol released this month by researchers at the University of California San Diego.
    The new technique, published in Nature Protocols, paves the way for scientists to perform more advanced research regarding autism, schizophrenia and other neurological disorders in which the brain’s structure is usually typical, but electrical activity is altered. That’s according to Alysson Muotri, Ph.D., corresponding author and director of the UC San Diego Sanford Stem Cell Institute (SSCI) Integrated Space Stem Cell Orbital Research Center. The SSCI is directed by Dr. Catriona Jamieson, a leading physician-scientist in cancer stem cell biology whose research explores the fundamental question of how space alters cancer progression.
    The newly detailed method allows for the creation of tiny replicas of the human brain so realistic that they rival “the complexity of the fetal brain’s neural network,” according to Muotri, who is also a professor in the UC San Diego School of Medicine’s Departments of Pediatrics and Cellular and Molecular Medicine. His brain replicas have already traveled to the International Space Station (ISS), where their activity was studied under conditions of microgravity.
    Two other protocols for creating brain organoids are publicly accessible, but neither allows researchers to study the brain’s electrical activity. Muotri’s method, by contrast, allows researchers to study neural networks created from the stem cells of patients with various neurodevelopmental conditions.
    “You no longer need to create different regions and assemble them together,” said Muotri, adding that his protocol allows different brain areas — like the cortex and midbrain — “to co-develop, as naturally observed in human development.”
    “I believe we will see many derivations of this protocol in the future for the study of different brain circuits,” he added.
    Such “mini brains” can be used to test potentially therapeutic drugs and even gene therapies before patient use, as well as to screen for efficacy and side effects, according to Muotri.

    A plan to do so is already in the works. Muotri and researchers at the Federal University of Amazonas in Manaus, Amazonas, Brazil, are teaming up to record and investigate Amazonian tribal remedies for Alzheimer’s disease — not on Earth-based mouse models, but on diseased human brain organoids in space.
    A recent Humans in Space grant — awarded by Boryung, a leading health care investment company based in South Korea — will help fuel the research project, which spans multiple continents and habitats, from the depths of the Amazon rainforest to Muotri’s lab on the coast of California — and, eventually, to the International Space Station.
    Other research possibilities for the brain organoids include disease modeling, understanding human consciousness and additional space-based experiments. In March, Muotri — in partnership with NASA — sent to space a number of brain organoids made from the stem cells of patients with Alzheimer’s disease and ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease). The payload returned in May and results, which will eventually be published, are being reviewed.
    Because microgravity mimics an accelerated version of Earth-based aging, Muotri should be able to witness the effects of several years of disease progression while studying the month-long mission’s payload, including potential changes in protein production, signaling pathways, oxidative stress and epigenetics.
    “We’re hoping for novel findings — things researchers haven’t discovered before,” he said. “Nobody has sent such a model into space, until now.”
    Co-authors of the study include Michael Q. Fitzgerald, Tiffany Chu, Francesca Puppo, Rebeca Blanch and Shankar Subramaniam, all of UC San Diego, and Miguel Chillón, of the Universitat Autònoma de Barcelona and the Institució Catalana de Recerca i Estudis Avançats, both in Barcelona, Spain. Blanch is also affiliated with the Universitat Autònoma de Barcelona.
    This work was supported by the National Institutes of Health R01MH100175, R01NS105969, MH123828, R01NS123642, R01MH127077, R01ES033636, R21MH128827, R01AG078959, R01DA056908, R01HD107788, R01HG012351, R21HD109616, R01MH107367, California Institute for Regenerative Medicine (CIRM) DISC2-13515 and a grant from the Department of Defense W81XWH2110306.

  •

    Advanced AI-based techniques scale up solving of complex combinatorial optimization problems

    A framework based on advanced AI techniques can solve complex, computationally intensive problems faster and in a more scalable way than state-of-the-art methods, according to a study led by engineers at the University of California San Diego.
    In the paper, which was published May 30 in Nature Machine Intelligence, researchers present HypOp, a framework that uses unsupervised learning and hypergraph neural networks. The framework is able to solve combinatorial optimization problems significantly faster than existing methods. HypOp is also able to solve certain combinatorial problems that can’t be solved as effectively by prior methods.
    “In this paper, we tackle the difficult task of addressing combinatorial optimization problems that are paramount in many fields of science and engineering,” said Nasimeh Heydaribeni, the paper’s corresponding author and a postdoctoral scholar in the UC San Diego Department of Electrical and Computer Engineering. She is part of the research group of Professor Farinaz Koushanfar, who co-directs the Center for Machine-Intelligence, Computing and Security at the UC San Diego Jacobs School of Engineering. Professor Tina Eliassi-Rad from Northeastern University also collaborated with the UC San Diego team on this project.
    One example of a relatively simple combinatorial problem is figuring out how many and what kind of goods to stock at specific warehouses in order to consume the least amount of gas when delivering these goods.
    HypOp can be applied to a broad spectrum of challenging real-world problems, with applications in drug discovery, chip design, logic verification, logistics and more. These are all combinatorial problems with a wide range of variables and constraints that make them extremely difficult to solve. That is because in these problems, the size of the underlying search space for finding potential solutions increases exponentially rather than in a linear fashion with respect to the problem size.
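To make the blow-up concrete, here is a deliberately tiny, hypothetical version of the warehouse example from above, solved by brute force; with n goods and two warehouses there are 2^n assignments to check, so each additional good doubles the work — exactly what rules this approach out at real-world scale.

```python
from itertools import product

def best_assignment(costs):
    """costs[i] = (delivery cost if good i is stocked at warehouse A,
    cost if at warehouse B). Brute-force all 2^n assignments."""
    n = len(costs)
    return min(product((0, 1), repeat=n),
               key=lambda a: sum(costs[i][a[i]] for i in range(n)))

# Made-up costs for four goods (illustrative numbers only)
costs = [(4, 7), (6, 2), (5, 9), (8, 3)]
assignment = best_assignment(costs)
total = sum(costs[i][assignment[i]] for i in range(len(costs)))
```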
    HypOp can solve these complex problems in a more scalable manner by using a new distributed algorithm that allows multiple computation units on the hypergraph to solve the problem together, in parallel, more efficiently.
    HypOp introduces a new problem embedding that leverages hypergraph neural networks, whose higher-order connections go beyond those of traditional graph neural networks, to better model problem constraints and solve problems more proficiently. HypOp can also transfer what it learns from one problem to help solve other, seemingly different problems more effectively. In addition, HypOp includes a fine-tuning step that leads to more accurate solutions than prior methods.
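A bare-bones sketch of the underlying unsupervised-relaxation idea (our illustration only; HypOp itself uses hypergraph neural networks, transfer learning, and distributed training): relax each binary node assignment to a probability, descend a differentiable loss that penalizes hyperedges left with all their nodes on one side, then round back to a discrete solution.

```python
import numpy as np

def hypergraph_maxcut(edges, n_nodes, steps=800, lr=0.2, seed=0):
    """Soft relaxation of hypergraph max-cut: minimize, over node
    probabilities p, the sum over hyperedges of prod(p) + prod(1 - p),
    i.e. the chance an edge is left uncut."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.4, 0.6, n_nodes)        # soft assignments in (0, 1)
    for _ in range(steps):
        grad = np.zeros(n_nodes)
        for e in edges:
            q = p[list(e)]
            for j, node in enumerate(e):
                others = np.delete(q, j)
                # derivative of prod(p) + prod(1 - p) w.r.t. p[node]
                grad[node] += others.prod() - (1 - others).prod()
        p = np.clip(p - lr * grad, 0.01, 0.99)
    return (p > 0.5).astype(int)              # round to a binary cut

# Tiny instance; keep the best of a few random restarts
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 3, 4)]

def cut_size(a):
    return sum(0 < a[list(e)].sum() < len(e) for e in edges)

best = max((hypergraph_maxcut(edges, 5, seed=s) for s in range(5)), key=cut_size)
```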
    This research was funded in part by the Department of Defense Army Research Office through the MURI AutoCombat project and by the NSF-funded TILOS AI Institute.


    Researchers demonstrate the first chip-based 3D printer

    Imagine a portable 3D printer you could hold in the palm of your hand. The tiny device could enable a user to rapidly create customized, low-cost objects on the go, like a fastener to repair a wobbly bicycle wheel or a component for a critical medical operation.
    Researchers from MIT and the University of Texas at Austin took a major step toward making this idea a reality by demonstrating the first chip-based 3D printer. Their proof-of-concept device consists of a single, millimeter-scale photonic chip that emits reconfigurable beams of light into a well of resin that cures into a solid shape when light strikes it.
    The prototype chip has no moving parts, instead relying on an array of tiny optical antennas to steer a beam of light. The beam projects up into a liquid resin that has been designed to rapidly cure when exposed to the beam’s wavelength of visible light.
    By combining silicon photonics and photochemistry, the interdisciplinary research team was able to demonstrate a chip that can steer light beams to 3D print arbitrary two-dimensional patterns, including the letters M-I-T. Shapes can be fully formed in a matter of seconds.
    In the long run, they envision a system where a photonic chip sits at the bottom of a well of resin and emits a 3D hologram of visible light, rapidly curing an entire object in a single step.
    This type of portable 3D printer could have many applications, such as enabling clinicians to create tailor-made medical device components or allowing engineers to make rapid prototypes at a job site.
    “This system is completely rethinking what a 3D printer is. It is no longer a big box sitting on a bench in a lab creating objects, but something that is handheld and portable. It is exciting to think about the new applications that could come out of this and how the field of 3D printing could change,” says senior author Jelena Notaros, the Robert J. Shillman Career Development Professor in Electrical Engineering and Computer Science (EECS), and a member of the Research Laboratory of Electronics.

    Joining Notaros on the paper are Sabrina Corsetti, lead author and EECS graduate student; Milica Notaros PhD ’23; Tal Sneh, an EECS graduate student; Alex Safford, a recent graduate of the University of Texas at Austin; and Zak Page, an assistant professor in the Department of Chemical Engineering at UT Austin. The research appears today in Light: Science & Applications.
    Printing with a chip
    Experts in silicon photonics, the Notaros group previously developed integrated optical-phased-array systems that steer beams of light using a series of microscale antennas fabricated on a chip using semiconductor manufacturing processes. By speeding up or delaying the optical signal on either side of the antenna array, they can move the beam of emitted light in a certain direction.
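    The steering idea follows the textbook phased-array relation (a generic illustration with made-up parameters, not the chip's actual design): to tilt the beam by an angle θ, antenna n in a row with pitch d receives an extra phase of n · 2πd·sin(θ)/λ.

```python
import math

def steering_phases(n_antennas, pitch_um, wavelength_um, angle_deg):
    """Per-antenna phase delays (radians) that tilt the emitted wavefront
    of a uniform linear antenna array by angle_deg."""
    step = (2 * math.pi * pitch_um
            * math.sin(math.radians(angle_deg)) / wavelength_um)
    return [n * step for n in range(n_antennas)]

# Hypothetical parameters: 8 antennas, 1 um pitch, 532 nm (green) light.
phases = steering_phases(8, 1.0, 0.532, 10.0)

# Adjacent antennas differ by a constant phase step; re-steering the beam
# is just a matter of changing that step, with no moving parts.
```

    Because the phase step is set electronically, the beam direction can be updated far faster than any mechanical scanner could move.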
    Such systems are key for lidar sensors, which map their surroundings by emitting infrared light beams that bounce off nearby objects. Recently, the group has focused on systems that emit and steer visible light for augmented-reality applications.
    They wondered if such a device could be used for a chip-based 3D printer.
    At about the same time they started brainstorming, the Page Group at UT Austin demonstrated, for the first time, specialized resins that can be rapidly cured using wavelengths of visible light. This was the missing piece that made a chip-based 3D printer feasible.

    “With photocurable resins, it is very hard to get them to cure all the way up at infrared wavelengths, which is where integrated optical-phased-array systems were operating in the past for lidar,” Corsetti says. “Here, we are meeting in the middle between standard photochemistry and silicon photonics by using visible-light-curable resins and visible-light-emitting chips to create this chip-based 3D printer. You have this merging of two technologies into a completely new idea.”
    Their prototype consists of a single photonic chip containing an array of 160-nanometer-thick optical antennas. (A sheet of paper is about 100,000 nanometers thick.) The entire chip fits onto a U.S. quarter.
    When powered by an off-chip laser, the antennas emit a steerable beam of visible light into the well of photocurable resin. The chip sits below a clear slide, like those used in microscopes, which contains a shallow indentation that holds the resin. The researchers use electrical signals to nonmechanically steer the light beam, causing the resin to solidify wherever the beam strikes it.
    A collaborative approach
    Effectively modulating visible-wavelength light, which means controlling its amplitude and phase, is especially tricky. One common method requires heating the chip, but this is inefficient and takes up a large amount of physical space.
    Instead, the researchers used liquid crystal to fashion compact modulators that they integrated onto the chip. The material's unique optical properties let the modulators be extremely efficient while measuring only about 20 microns in length.
    A single waveguide on the chip carries the light from the off-chip laser. Running along the waveguide are tiny taps that siphon off a small amount of light to each of the antennas.
    The researchers actively tune the modulators using an electric field, which reorients the liquid crystal molecules in a certain direction. In this way, they can precisely control the amplitude and phase of light being routed to the antennas.
    But forming and steering the beam is only half the battle. Interfacing with a novel photocurable resin was a completely different challenge.
    The Page Group at UT Austin worked closely with the Notaros Group at MIT, carefully adjusting the chemical combinations and concentrations to zero in on a formula that provided a long shelf life and rapid curing.
    In the end, the group used their prototype to 3D print arbitrary two-dimensional shapes within seconds.
    Building off this prototype, they want to move toward developing a system like the one they originally conceptualized — a chip that emits a hologram of visible light in a resin well to enable volumetric 3D printing in only one step.
    “To be able to do that, we need a completely new silicon-photonics chip design. We already laid out a lot of what that final system would look like in this paper. And, now, we are excited to continue working towards this ultimate demonstration,” Jelena Notaros says.
    This work was funded, in part, by the U.S. National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the Robert A. Welch Foundation, the MIT Rolf G. Locher Endowed Fellowship, and the MIT Frederick and Barbara Cronin Fellowship.