More stories

  • New computer vision method helps speed up screening of electronic materials

    Boosting the performance of solar cells, transistors, LEDs, and batteries will require better electronic materials, made from novel compositions that have yet to be discovered.
    To speed up the search for advanced functional materials, scientists are using AI tools to identify promising materials from hundreds of millions of chemical formulations. In tandem, engineers are building machines that can print hundreds of material samples at a time based on chemical compositions tagged by AI search algorithms.
    But to date, there’s been no similarly speedy way to confirm that these printed materials actually perform as expected. This last step of material characterization has been a major bottleneck in the pipeline of advanced materials screening.
    Now, a new computer vision technique developed by MIT engineers significantly speeds up the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconducting samples and quickly estimates two key electronic properties for each sample: band gap (a measure of electron activation energy) and stability (a measure of longevity).
    The new technique accurately characterizes electronic materials 85 times faster than the standard benchmark approach.
    The researchers intend to use the technique to speed up the search for promising solar cell materials. They also plan to incorporate the technique into a fully automated materials screening system.
    “Ultimately, we envision fitting this technique into an autonomous lab of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to give a computer a materials problem, have it predict potential compounds, and then run 24-7 making and characterizing those predicted materials until it arrives at the desired solution.”
    “The application space for these techniques ranges from improving solar energy to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really spans the full gamut of where semiconductor materials can benefit society.”

    Aissi and Siemenn detail the new technique in a study that will appear in Nature Communications. Their MIT co-authors include graduate student Fang Sheng, postdoc Basita Das, and professor of mechanical engineering Tonio Buonassisi, along with former visiting professor Hamide Kavak of Cukurova University and visiting postdoc Armi Tiihonen of Aalto University.
    Power in optics
    Once a new electronic material is synthesized, the characterization of its properties is typically handled by a “domain expert” who examines one sample at a time using a benchtop tool called a UV-Vis, which scans through different colors of light to determine where the semiconductor begins to absorb more strongly. This manual process is precise but also time-consuming: A domain expert typically characterizes about 20 material samples per hour — a snail’s pace compared to some printing tools that can lay down 10,000 different material combinations per hour.
    “The manual characterization process is very slow,” Buonassisi says. “They give you a high amount of confidence in the measurement, but they’re not matched to the speed at which you can put matter down on a substrate nowadays.”
    To speed up the characterization process and clear one of the largest bottlenecks in materials screening, Buonassisi and his colleagues looked to computer vision — a field that applies computer algorithms to quickly and automatically analyze optical features in an image.
    “There’s power in optical characterization methods,” Buonassisi notes. “You can obtain information very quickly. There is richness in images, over many pixels and wavelengths, that a human just can’t process but a computer machine-learning program can.”
    The team realized that certain electronic properties — namely, band gap and stability — could be estimated based on visual information alone, if that information were captured with enough detail and interpreted correctly.

    With that goal in mind, the researchers developed two new computer vision algorithms to automatically interpret images of electronic materials: one to estimate band gap and the other to determine stability.
    The first algorithm is designed to process visual data from highly detailed, hyperspectral images.
    “Instead of a standard camera image with three channels — red, green, and blue (RGB) — the hyperspectral image has 300 channels,” Siemenn explains. “The algorithm takes that data, transforms it, and computes a band gap. We run that process extremely fast.”
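    A generic way to see how a band gap falls out of spectral data is the Tauc method: plot (αE)² against photon energy E and extrapolate the linear rise to zero. The sketch below applies it to a synthetic spectrum with a known 1.6 eV gap. It is a textbook illustration, not the team's published algorithm, and every number in it is invented for the demo.

```python
import numpy as np

def estimate_band_gap(energies_ev, absorbance):
    """Estimate a direct band gap via a Tauc-style linear extrapolation:
    fit the steep rise of (alpha * E)**2 and return its x-intercept (eV)."""
    tauc = (absorbance * energies_ev) ** 2
    # Fit only the steep part of the rise (30%-80% of the maximum).
    mask = (tauc >= 0.3 * tauc.max()) & (tauc <= 0.8 * tauc.max())
    slope, intercept = np.polyfit(energies_ev[mask], tauc[mask], 1)
    return -intercept / slope

# Synthetic direct-gap absorber with Eg = 1.6 eV, sampled over 300
# spectral channels, loosely mimicking one hyperspectral pixel.
E = np.linspace(1.0, 2.5, 300)
alpha = np.sqrt(np.clip(E - 1.6, 0.0, None)) / E
print(round(estimate_band_gap(E, alpha), 2))  # -> 1.6
```

    The real pipeline works on full hyperspectral image cubes rather than a single clean spectrum, but the per-pixel arithmetic is similarly cheap, which is what makes the speedup possible.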
    The second algorithm analyzes standard RGB images and assesses a material’s stability based on visual changes in the material’s color over time.
    “We found that color change can be a good proxy for degradation rate in the material system we are studying,” Aissi says.
    Material compositions
    The team applied the two new algorithms to characterize the band gap and stability for about 70 printed semiconducting samples. They used a robotic printer to deposit samples on a single slide, like cookies on a baking sheet. Each deposit was made with a slightly different combination of semiconducting materials. In this case, the team printed different ratios of perovskites — a type of material that is expected to be a promising solar cell candidate, though it is also known to degrade quickly.
    “People are trying to change the composition — add a little bit of this, a little bit of that — to try to make [perovskites] more stable and high-performance,” Buonassisi says.
    Once they printed 70 different compositions of perovskite samples on a single slide, the team scanned the slide with a hyperspectral camera. Then they applied an algorithm that visually “segments” the image, automatically isolating the samples from the background. They ran the new band gap algorithm on the isolated samples and automatically computed the band gap for every sample. The entire band gap extraction process took about six minutes.
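    The segmentation step can be sketched in miniature: threshold the image so sample pixels stand out from the background, then group touching pixels into connected components, one label per printed sample. This toy flood-fill version is for illustration only; the study's actual segmentation algorithm is not detailed here.

```python
import numpy as np

def segment_samples(image, threshold=0.5):
    """Label each bright, connected blob in a grayscale image: threshold to
    separate samples from background, then flood-fill touching pixels into
    numbered components. Returns (label map, number of samples found)."""
    mask = image > threshold
    labels = np.zeros(image.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                       # pixel already claimed by a blob
        current += 1
        stack = [seed]
        while stack:                       # 4-connected flood fill
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels, current

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0                        # two separate toy "droplets"
img[5:7, 4:7] = 1.0
labels, n_samples = segment_samples(img)
print(n_samples)  # -> 2
```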
    “It would normally take a domain expert several days to manually characterize the same number of samples,” Siemenn says.
    To test for stability, the team placed the same slide in a chamber in which they varied the environmental conditions, such as humidity, temperature, and light exposure. They used a standard RGB camera to take an image of the samples every 30 seconds over two hours. They then applied the second algorithm to the images of each sample over time to estimate the degree to which each droplet changed color, or degraded under various environmental conditions. In the end, the algorithm produced a “stability index,” or a measure of each sample’s durability.
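    A minimal version of such a color-drift metric might look like the sketch below: track a sample's mean RGB color across the image series and score how far it wanders from its starting color. The normalization and the name “stability index” follow the article's description loosely; the published metric may be defined differently.

```python
import numpy as np

def stability_index(frames):
    """Score a sample's durability from its mean RGB color over time
    (frames: T x 3). 1.0 = no color change; lower = faster degradation.
    Drift is the Euclidean distance in RGB space from the first frame,
    averaged over the run and normalized by the largest possible distance."""
    frames = np.asarray(frames, dtype=float)
    drift = np.linalg.norm(frames - frames[0], axis=1)
    return 1.0 - drift.mean() / (255 * np.sqrt(3))

# 240 frames = one image every 30 seconds for two hours, as in the experiment.
stable = [[120, 80, 60]] * 240                      # color never changes
fading = [[120 + t // 3, 80 + t // 3, 60 + t // 3]  # steady color drift
          for t in range(240)]
print(stability_index(stable) > stability_index(fading))  # -> True
```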
    As a check, the team compared their results with manual measurements of the same droplets, taken by a domain expert. Compared to the expert’s benchmark estimates, the team’s band gap and stability results were 98.5 percent and 96.9 percent as accurate, respectively, and 85 times faster.
    “We were constantly shocked by how these algorithms were able to not just increase the speed of characterization, but also to get accurate results,” Siemenn says. “We do envision this slotting into the current automated materials pipeline we’re developing in the lab, so we can run it in a fully automated fashion, using machine learning to guide where we want to discover these new materials, printing them, and then actually characterizing them, all with very fast processing.”
    This work was supported in part by First Solar.

  • Four-legged, dog-like robot ‘sniffs’ hazardous gases in inaccessible environments

    Nightmare material or truly man’s best friend? A team of researchers equipped a dog-like quadruped robot with a mechanized arm that takes air samples from potentially treacherous situations, such as an abandoned building or fire. The robot dog walks samples to a person who screens them for potentially hazardous compounds, says the team that published its study in ACS’ Analytical Chemistry. While the system needs further refinement, demonstrations show its potential value in dangerous conditions.
    Testing the air for dangerous chemicals in risky workplaces or after an accident, such as a fire, is an important but very dangerous task for scientists and technicians. To keep humans out of harm’s way, Bin Hu and colleagues are developing mobile detection systems for hazardous gases and volatile organic compounds (VOCs) by building remote-controlled sampling devices like aerial drones and tiny remotely operated ships. The team’s latest entry into this mechanical menagerie is a dog-like robot with an articulated testing arm mounted on its back. The independently controlled arm is loaded with three needle trap devices (NTDs) that can collect air samples at any point during the robot’s terrestrial mission.
    The researchers test-drove their four-legged “lab” through a variety of inaccessible environments, including a garbage disposal plant, sewer system, gasoline fireground and chemical warehouse, to sample the air for hazardous VOCs. While the robot had trouble navigating effectively in rainy and snowy weather, it collected air samples and returned them to the portable mass spectrometer (MS) for onsite analysis in less time than it would take to transfer the samples to an off-site laboratory — and without putting a technician in a dangerous environment. The researchers say the robot-MS system represents a “smart” and safer approach for detecting potentially harmful compounds.
    The authors acknowledge funding from the Guangzhou Science and Technology Program and the National Natural Science Foundation of China.

  • Protocol for creating ‘wired miniature brains’

    Researchers worldwide can now create highly realistic brain cortical organoids — essentially miniature artificial brains with functioning neural networks — thanks to a proprietary protocol released this month by researchers at the University of California San Diego.
    The new technique, published in Nature Protocols, paves the way for scientists to perform more advanced research regarding autism, schizophrenia and other neurological disorders in which the brain’s structure is usually typical, but electrical activity is altered. That’s according to Alysson Muotri, Ph.D., corresponding author and director of the UC San Diego Sanford Stem Cell Institute (SSCI) Integrated Space Stem Cell Orbital Research Center. The SSCI is directed by Dr. Catriona Jamieson, a leading physician-scientist in cancer stem cell biology whose research explores the fundamental question of how space alters cancer progression.
    The newly detailed method allows for the creation of tiny replicas of the human brain so realistic that they rival “the complexity of the fetal brain’s neural network,” according to Muotri, who is also a professor in the UC San Diego School of Medicine’s Departments of Pediatrics and Cellular and Molecular Medicine. His brain replicas have already traveled to the International Space Station (ISS), where their activity was studied under conditions of microgravity.
    Two other protocols for creating brain organoids are publicly accessible, but neither allows researchers to study the brain’s electrical activity. Muotri’s method, however, allows researchers to study neural networks created from the stem cells of patients with various neurodevelopmental conditions.
    “You no longer need to create different regions and assemble them together,” said Muotri, adding that his protocol allows different brain areas — like the cortex and midbrain — “to co-develop, as naturally observed in human development.”
    “I believe we will see many derivations of this protocol in the future for the study of different brain circuits,” he added.
    Such “mini brains” can be used to test potentially therapeutic drugs and even gene therapies before patient use, as well as to screen for efficacy and side effects, according to Muotri.

    A plan to do so is already in the works. Muotri and researchers at the Federal University of Amazonas in Manaus, Amazonas, Brazil, are teaming up to record and investigate Amazonian tribal remedies for Alzheimer’s disease — not on Earth-based mouse models, but on diseased human brain organoids in space.
    A recent Humans in Space grant — awarded by Boryung, a leading health care investment company based in South Korea — will help fuel the research project, which spans multiple continents and habitats, from the depths of the Amazon rainforest to Muotri’s lab on the coast of California — and, eventually, to the International Space Station.
    Other research possibilities for the brain organoids include disease modeling, understanding human consciousness and additional space-based experiments. In March, Muotri — in partnership with NASA — sent to space a number of brain organoids made from the stem cells of patients with Alzheimer’s disease and ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease). The payload returned in May and results, which will eventually be published, are being reviewed.
    Because microgravity mimics an accelerated version of Earth-based aging, Muotri should be able to witness the effects of several years of disease progression while studying the month-long mission’s payload, including potential changes in protein production, signaling pathways, oxidative stress and epigenetics.
    “We’re hoping for novel findings — things researchers haven’t discovered before,” he said. “Nobody has sent such a model into space, until now.”
    Co-authors of the study include Michael Q. Fitzgerald, Tiffany Chu, Francesca Puppo, Rebeca Blanch and Shankar Subramaniam, all of UC San Diego, and Miguel Chillón, of the Universitat Autònoma de Barcelona and the Institució Catalana de Recerca i Estudis Avançats, both in Barcelona, Spain. Blanch is also affiliated with the Universitat Autònoma de Barcelona.
    This work was supported by National Institutes of Health grants R01MH100175, R01NS105969, MH123828, R01NS123642, R01MH127077, R01ES033636, R21MH128827, R01AG078959, R01DA056908, R01HD107788, R01HG012351, R21HD109616 and R01MH107367, California Institute for Regenerative Medicine (CIRM) grant DISC2-13515 and Department of Defense grant W81XWH2110306.

  • Advanced AI-based techniques scale up solving of complex combinatorial optimization problems

    A framework based on advanced AI techniques can solve complex, computationally intensive problems faster and in a more scalable way than state-of-the-art methods, according to a study led by engineers at the University of California San Diego.
    In the paper, which was published May 30 in Nature Machine Intelligence, researchers present HypOp, a framework that uses unsupervised learning and hypergraph neural networks. The framework is able to solve combinatorial optimization problems significantly faster than existing methods. HypOp is also able to solve certain combinatorial problems that can’t be solved as effectively by prior methods.
    “In this paper, we tackle the difficult task of addressing combinatorial optimization problems that are paramount in many fields of science and engineering,” said Nasimeh Heydaribeni, the paper’s corresponding author and a postdoctoral scholar in the UC San Diego Department of Electrical and Computer Engineering. She is part of the research group of Professor Farinaz Koushanfar, who co-directs the Center for Machine-Intelligence, Computing and Security at the UC San Diego Jacobs School of Engineering. Professor Tina Eliassi-Rad from Northeastern University also collaborated with the UC San Diego team on this project.
    One example of a relatively simple combinatorial problem is figuring out how many and what kind of goods to stock at specific warehouses in order to consume the least amount of gas when delivering these goods.
    HypOp can be applied to a broad spectrum of challenging real-world problems, with applications in drug discovery, chip design, logic verification, logistics and more. These are all combinatorial problems with a wide range of variables and constraints that make them extremely difficult to solve. That is because the underlying search space of potential solutions grows exponentially, rather than linearly, with the problem size.
    HypOp can solve these complex problems in a more scalable manner by using a new distributed algorithm that allows multiple computation units on the hypergraph to solve the problem together, in parallel, more efficiently.
    HypOp introduces a new problem embedding that leverages hypergraph neural networks, whose higher-order connections let them model problem constraints more faithfully than traditional graph neural networks. HypOp can also transfer learning from one problem to help solve other, seemingly different problems more effectively. Finally, HypOp includes an additional fine-tuning step, which leads to more accurate solutions than prior methods.
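    The core idea of learning-based combinatorial optimization can be illustrated without any of HypOp's machinery: relax each binary decision to a probability, minimize a differentiable loss that penalizes violated constraints by gradient descent, then round and repair. The toy below solves a six-node hypergraph vertex-cover instance this way. It is a hand-rolled sketch of the general approach, not HypOp's code, and the instance, penalty weight, and learning rate are all invented.

```python
import numpy as np

# Toy hypergraph vertex cover: pick few nodes so every hyperedge contains at
# least one picked node. Hyperedges may join more than two nodes at once,
# which is what distinguishes hypergraphs from ordinary graphs.
edges = [(0, 1, 2), (2, 3), (3, 4, 5), (0, 5)]
n = 6
rng = np.random.default_rng(0)
logits = rng.normal(size=n)                 # unconstrained parameters

def loss_and_grad(logits, lam=5.0):
    """Differentiable loss: cover size + penalty for uncovered hyperedges."""
    p = 1 / (1 + np.exp(-logits))           # probability node is in the cover
    grad_p = np.ones(n)                     # d(size term)/dp_i = 1
    loss = p.sum()
    for e in edges:
        miss = np.prod([1 - p[i] for i in e])     # P(edge left uncovered)
        loss += lam * miss
        for i in e:
            grad_p[i] -= lam * miss / (1 - p[i])  # product rule
    return loss, grad_p * p * (1 - p)       # chain rule through the sigmoid

for _ in range(500):                        # plain gradient descent
    _, g = loss_and_grad(logits)
    logits -= 0.5 * g

p = 1 / (1 + np.exp(-logits))
cover = {i for i in range(n) if p[i] > 0.5}      # round probabilities
for e in edges:                                  # cheap repair pass, standing
    if not cover & set(e):                       # in for a fine-tuning stage
        cover.add(max(e, key=lambda i: p[i]))
print(sorted(cover), all(cover & set(e) for e in edges))
```

    HypOp itself trains a hypergraph neural network to produce the probabilities and distributes the computation across workers, but the relax-optimize-round pattern above is the shared skeleton.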
    This research was funded in part by the Department of Defense Army Research Office-funded MURI AutoCombat project and the NSF-funded TILOS AI Institute.

  • Researchers demonstrate the first chip-based 3D printer

    Imagine a portable 3D printer you could hold in the palm of your hand. The tiny device could enable a user to rapidly create customized, low-cost objects on the go, like a fastener to repair a wobbly bicycle wheel or a component for a critical medical operation.
    Researchers from MIT and the University of Texas at Austin took a major step toward making this idea a reality by demonstrating the first chip-based 3D printer. Their proof-of-concept device consists of a single, millimeter-scale photonic chip that emits reconfigurable beams of light into a well of resin that cures into a solid shape when light strikes it.
    The prototype chip has no moving parts, instead relying on an array of tiny optical antennas to steer a beam of light. The beam projects up into a liquid resin that has been designed to rapidly cure when exposed to the beam’s wavelength of visible light.
    By combining silicon photonics and photochemistry, the interdisciplinary research team was able to demonstrate a chip that can steer light beams to 3D print arbitrary two-dimensional patterns, including the letters M-I-T. Shapes can be fully formed in a matter of seconds.
    In the long run, they envision a system where a photonic chip sits at the bottom of a well of resin and emits a 3D hologram of visible light, rapidly curing an entire object in a single step.
    This type of portable 3D printer could have many applications, such as enabling clinicians to create tailor-made medical device components or allowing engineers to make rapid prototypes at a job site.
    “This system is completely rethinking what a 3D printer is. It is no longer a big box sitting on a bench in a lab creating objects, but something that is handheld and portable. It is exciting to think about the new applications that could come out of this and how the field of 3D printing could change,” says senior author Jelena Notaros, the Robert J. Shillman Career Development Professor in Electrical Engineering and Computer Science (EECS), and a member of the Research Laboratory of Electronics.

    Joining Notaros on the paper are Sabrina Corsetti, lead author and EECS graduate student; Milica Notaros PhD ’23; Tal Sneh, an EECS graduate student; Alex Safford, a recent graduate of the University of Texas at Austin; and Zak Page, an assistant professor in the Department of Chemical Engineering at UT Austin. The research appears today in Light: Science & Applications.
    Printing with a chip
    Experts in silicon photonics, the Notaros group previously developed integrated optical-phased-array systems that steer beams of light using a series of microscale antennas fabricated on a chip using semiconductor manufacturing processes. By speeding up or delaying the optical signal on either side of the antenna array, they can move the beam of emitted light in a certain direction.
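    The steering principle can be captured in a few lines: each antenna contributes a complex exponential combining its geometric path delay with its applied phase, and a uniform phase step across the array moves the far-field peak off-axis. The pitch, wavelength, and antenna count below are placeholder numbers, not the Notaros group's actual chip parameters.

```python
import numpy as np

def far_field_peak(n_antennas, pitch_um, wavelength_um, phase_step_rad):
    """Return the angle (degrees, within +/-10) at which a 1-D phased array's
    far-field intensity peaks, given a uniform phase increment applied
    between neighboring antennas."""
    k = 2 * np.pi / wavelength_um
    angles = np.radians(np.linspace(-10, 10, 2001))
    # Superpose every antenna's field: path-length phase + applied phase.
    field = sum(np.exp(1j * i * (k * pitch_um * np.sin(angles) + phase_step_rad))
                for i in range(n_antennas))
    return np.degrees(angles[np.argmax(np.abs(field) ** 2)])

# No phase step: the beam points straight up. A -pi/4 step per antenna steers
# it to sin(theta) = (pi/4) / (k * pitch), about 2.5 degrees here.
print(round(far_field_peak(32, 1.5, 0.532, 0.0), 2),
      round(far_field_peak(32, 1.5, 0.532, -np.pi / 4), 2))  # -> 0.0 2.54
```

    Sweeping the phase step electronically sweeps the beam, which is how the chip scans a spot through the resin with no moving parts.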
    Such systems are key for lidar sensors, which map their surroundings by emitting infrared light beams that bounce off nearby objects. Recently, the group has focused on systems that emit and steer visible light for augmented-reality applications.
    They wondered if such a device could be used for a chip-based 3D printer.
    At about the same time they started brainstorming, the Page Group at UT Austin demonstrated specialized resins that can be rapidly cured using wavelengths of visible light for the first time. This was the missing piece that pushed the chip-based 3D printer into reality.

    “With photocurable resins, it is very hard to get them to cure all the way up at infrared wavelengths, which is where integrated optical-phased-array systems were operating in the past for lidar,” Corsetti says. “Here, we are meeting in the middle between standard photochemistry and silicon photonics by using visible-light-curable resins and visible-light-emitting chips to create this chip-based 3D printer. You have this merging of two technologies into a completely new idea.”
    Their prototype consists of a single photonic chip containing an array of 160-nanometer-thick optical antennas. (A sheet of paper is about 100,000 nanometers thick.) The entire chip fits onto a U.S. quarter.
    When powered by an off-chip laser, the antennas emit a steerable beam of visible light into the well of photocurable resin. The chip sits below a clear slide, like those used in microscopes, which contains a shallow indentation that holds the resin. The researchers use electrical signals to nonmechanically steer the light beam, causing the resin to solidify wherever the beam strikes it.
    A collaborative approach
    But effectively modulating visible-wavelength light, which involves modifying its amplitude and phase, is especially tricky. One common method requires heating the chip, but this is inefficient and takes a large amount of physical space.
    Instead, the researchers used liquid crystal to fashion compact modulators they integrate onto the chip. The material’s unique optical properties enable the modulators to be extremely efficient and only about 20 microns in length.
    A single waveguide on the chip carries the light from the off-chip laser. Running along the waveguide are tiny taps that divert a small fraction of the light to each of the antennas.
    The researchers actively tune the modulators using an electric field, which reorients the liquid crystal molecules in a certain direction. In this way, they can precisely control the amplitude and phase of light being routed to the antennas.
    But forming and steering the beam is only half the battle. Interfacing with a novel photocurable resin was a completely different challenge.
    The Page Group at UT Austin worked closely with the Notaros Group at MIT, carefully adjusting the chemical combinations and concentrations to zero in on a formula that provided a long shelf life and rapid curing.
    In the end, the group used their prototype to 3D print arbitrary two-dimensional shapes within seconds.
    Building off this prototype, they want to move toward developing a system like the one they originally conceptualized — a chip that emits a hologram of visible light in a resin well to enable volumetric 3D printing in only one step.
    “To be able to do that, we need a completely new silicon-photonics chip design. We already laid out a lot of what that final system would look like in this paper. And, now, we are excited to continue working towards this ultimate demonstration,” Jelena Notaros says.
    This work was funded, in part, by the U.S. National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the Robert A. Welch Foundation, the MIT Rolf G. Locher Endowed Fellowship, and the MIT Frederick and Barbara Cronin Fellowship.

  • Researchers create skin-inspired sensory robots to provide medical treatment

    University of North Carolina at Chapel Hill scientists have created innovative soft robots equipped with electronic skins and artificial muscles, allowing them to sense their surroundings and adapt their movements in real-time, according to the paper, “Skin-Inspired, Sensory Robots for Electronic Implants,” in Nature Communications.
    In their research, funded by the National Science Foundation and the National Institutes of Health, the robots are designed to mimic the way muscles and skin work together in animals, making them more effective and safer to use inside the body. The e-skin integrates various sensing materials, such as silver nanowires and conductive polymers within a flexible base, closely resembling the complex sensory functions of real skin.
    “These soft robots can perform a variety of well-controlled movements, including bending, expanding and twisting inside biological environments,” said Lin Zhang, first author of the paper and a postdoctoral fellow in Carolina’s Department of Applied Physical Sciences. “They are designed to attach to tissues gently, reducing stress and potential damage. Inspired by natural shapes like starfish and seedpods, they can transform their structures to perform different tasks efficiently.”
    These features make soft sensory robots highly adaptable and useful for enhancing medical diagnostics and treatments. They can change shape to fit organs for better sensing and treatment; are capable of continuous monitoring of internal conditions, like bladder volume and blood pressure; provide treatments, such as electrical stimulation, based on real-time data; and can be swallowed to monitor and treat conditions in the stomach.
    An ingestible robot capable of residing in the stomach, called a thera-gripper, can monitor pH levels and deliver drugs over an extended period, improving treatment outcomes for gastrointestinal conditions. The thera-gripper can also gently attach to a beating heart, continuously monitoring electrophysiological activity, measuring cardiac contraction and providing electrical stimulation to regulate heart rhythm.
    A robotic gripper designed to wrap around a person’s bladder can measure its volume and provide electrical stimulation to treat an overactive bladder, enhancing patient care and treatment efficacy. A robotic cuff that twists around a blood vessel can accurately measure blood pressure in real time, offering a non-invasive and precise monitoring solution.
    “Tests on mice have demonstrated the thera-gripper’s capability to perform these functions effectively, showcasing its potential as a next-generation cardiac implant,” said Zhang.
    The Bai Lab collaborated on the study with UNC-Chapel Hill researchers in the Department of Biology; Department of Biomedical Engineering; Department of Chemistry; Joint Department of Biomedical Engineering and McAllister Heart Institute; North Carolina State University; and Weldon School of Biomedical Engineering at Purdue University.
    The researchers’ success in live animal models suggests a promising future for these robots in real-world medical applications, potentially revolutionizing the treatment of chronic diseases and improving patient outcomes.
    “This innovative approach to robot design not only broadens the scope of medical devices but also highlights the potential for future advancements in the synergistic interaction between soft implantable robots and biological tissues,” said Wubin Bai, principal investigator of the research and Carolina assistant professor. “We’re aiming for long-term biocompatibility and stability in dynamic physiological environments.”

  • Peers crucial in shaping boys’ confidence in math skills

    Boys are good at math, girls not so much? A study from the University of Zurich has analyzed the social mechanisms that contribute to the gender gap in math confidence. While peer comparisons seem to play a crucial role for boys, girls’ subjective evaluations are more likely to be based on objective performance.
    Research has shown that in Western societies, the average secondary school girl has less confidence in her mathematical abilities than the average boy of the same age. At the same time, no significant difference has been found between girls’ and boys’ performance in mathematics. This phenomenon is often framed as girls not being confident enough in their abilities, or as boys in fact being overconfident.
    This math confidence gap has far-reaching consequences: self-perceived competence influences educational and occupational choices and young people choose university subjects and careers that they believe they are talented in. As a result, women are underrepresented in STEM (science, technology, engineering, math) subjects at university level and in high-paying STEM careers.
    Peer processes provide nuanced insights into varying self-perceptions
    A study from the University of Zurich (UZH) focuses on a previously neglected aspect of the math confidence gap: the role of peer relationships. “Especially in adolescence, peers are the primary social reference for individual development. Peer processes that operate through friendship networks determine a wide range of individual outcomes,” said the study’s lead author Isabel Raabe from the Department of Sociology at UZH. The study analyzed data from 8,812 individuals in 358 classrooms in a longitudinal social network analysis.
    As expected, the main predictor of math confidence is individual math grades. While girls translated their grades — more or less directly — into self-assessment, boys with below-average grades nevertheless believed they were good at math.
    Boys tend to be overconfident and sensitive to social processes
    “In general, boys seem to be more sensitive to social processes in their self-perception — they compare themselves more with others for validation and then adjust their confidence accordingly,” Raabe explains. “When they were confronted with girls’ self-assessments in cross-gender friendships, their math confidence tended to be lower.” Peers’ self-assessment was less relevant to girls’ math confidence. Their subjective evaluation seemed to be driven more by objective performance.
    Gender stereotypes did not appear to have negative social consequences for either boys or girls. “We found that confidence in mathematics is often associated with better social integration, both in same-sex and cross-sex friendships,” said Raabe. Thus, there was no evidence of harmful peer norms pressuring girls to underestimate their math skills.
    The results of the study suggest that math skills are more important to boys, who adjust their self-assessment in peer processes, while math confidence does not seem to be socially relevant for girls.

  • Miniaturizing a laser on a photonic chip

    Lasers have revolutionized the world since the 1960s and are now indispensable in modern applications, from cutting-edge surgery and precise manufacturing to data transmission across optical fibers.
    But as the need for laser-based applications grows, so do challenges. For example, there is a growing market for fiber lasers, which are currently used in industrial cutting, welding, and marking applications.
    Fiber lasers use an optical fiber doped with rare-earth elements (erbium, ytterbium, neodymium, etc.) as their optical gain source (the part that produces the laser’s light). They emit high-quality beams, offer high power output, and are efficient, low-maintenance and durable, and they are typically smaller than gas lasers. Fiber lasers are also the ‘gold standard’ for low phase noise, meaning that their beams remain stable over time.
    But despite all that, there is a growing demand for miniaturizing fiber lasers on a chip-scale level. Erbium-based fiber lasers are especially interesting, as they meet all the requirements for maintaining a laser’s high coherence and stability. Efforts to miniaturize them, however, have been hampered by the difficulty of maintaining their performance at small scales.
    Now, scientists led by Dr. Yang Liu and Professor Tobias Kippenberg at EPFL have built the first ever chip-integrated erbium-doped waveguide laser that approaches the performance of fiber-based lasers, combining wide wavelength tunability with the practicality of chip-scale photonic integration. The breakthrough is published in Nature Photonics.
    Building a chip-scale laser
    The researchers developed their chip-scale erbium laser using a state-of-the-art fabrication process. They began by constructing a meter-long, on-chip optical cavity (a set of mirrors that provide optical feedback) based on an ultralow-loss silicon nitride photonic integrated circuit.

    “We were able to design the laser cavity to be meter-scale in length despite the compact chip size, thanks to the integration of these microring resonators that effectively extend the optical path without physically enlarging the device,” says Dr. Liu.
    The team then implanted the circuit with high-concentration erbium ions to selectively create the active gain medium necessary for lasing. Finally, they integrated the circuit with a III-V semiconductor pump laser that excites the erbium ions so they emit light and produce the laser beam.
    To refine the laser’s performance and achieve precise wavelength control, the researchers engineered an innovative intra-cavity design featuring microring-based Vernier filters, a type of optical filter that can select specific frequencies of light.
    The filters allow for dynamic tuning of the laser’s wavelength over a broad range, making it versatile and usable in various applications. This design supports stable, single-mode lasing with an impressively narrow intrinsic linewidth of just 50 Hz.
    It also allows for significant side mode suppression — the laser’s ability to emit light at a single, consistent frequency while minimizing the intensity of other frequencies (‘side modes’). This ensures “clean” and stable output across the light spectrum for high-precision applications.
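    The Vernier tuning principle described above can be illustrated with a short back-of-the-envelope calculation: two microrings with slightly different circumferences have slightly different free spectral ranges (FSRs), and only frequencies where both ring resonances align are fed back, spacing the usable tuning grid by a much larger effective "Vernier FSR." The ring dimensions and group index below are illustrative assumptions, not values from the paper:

```python
# Sketch of the Vernier effect behind microring-based wavelength tuning.
# All numbers are illustrative, not taken from the Nature Photonics paper.

C = 299_792_458.0  # speed of light, m/s

def ring_fsr_hz(circumference_m: float, group_index: float) -> float:
    """FSR of a single ring resonator: FSR = c / (n_g * L)."""
    return C / (group_index * circumference_m)

def vernier_fsr_hz(fsr1: float, fsr2: float) -> float:
    """Effective FSR of two cascaded rings: FSR1 * FSR2 / |FSR1 - FSR2|."""
    return fsr1 * fsr2 / abs(fsr1 - fsr2)

n_g = 2.0                            # assumed group index of the waveguide
fsr_a = ring_fsr_hz(1.00e-3, n_g)    # ~150 GHz for a 1.00 mm ring
fsr_b = ring_fsr_hz(1.05e-3, n_g)    # slightly detuned second ring
print(f"ring FSRs: {fsr_a/1e9:.1f} GHz, {fsr_b/1e9:.1f} GHz")
print(f"Vernier FSR: {vernier_fsr_hz(fsr_a, fsr_b)/1e12:.2f} THz")
```

A small (~5%) mismatch between the rings thus stretches a ~150 GHz ring spacing into a ~3 THz effective tuning grid, which is how a compact filter pair can select frequencies across a very broad band.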
    Power, precision, stability, and low noise
    The chip-scale erbium-based laser delivers output power exceeding 10 mW and a side mode suppression ratio greater than 70 dB, outperforming many conventional systems.
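    To put those decibel figures in linear terms, the standard conversions are dBm = 10·log10(P/1 mW) for power and a factor of 10^(dB/10) for a power ratio. A minimal sketch applying them to the quoted numbers:

```python
import math

def dbm_from_mw(power_mw: float) -> float:
    """Convert optical power in milliwatts to dBm (0 dBm = 1 mW)."""
    return 10.0 * math.log10(power_mw)

def smsr_linear(smsr_db: float) -> float:
    """Power ratio of the main mode to the strongest side mode, from dB."""
    return 10.0 ** (smsr_db / 10.0)

print(dbm_from_mw(10.0))   # 10 mW output -> 10 dBm
print(smsr_linear(70.0))   # 70 dB SMSR -> main mode 1e7 times stronger
```

In other words, a 70 dB suppression ratio means the strongest unwanted side mode carries ten million times less power than the lasing mode.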

    It also has a very narrow linewidth, meaning the light it emits is exceptionally pure and steady, a property essential for coherent applications such as sensing, gyroscopes, LiDAR, and optical frequency metrology.
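    One way to see why a narrow linewidth matters for coherent applications is through coherence length, which for a Lorentzian line scales as L_c = c / (π·Δν). The sketch below applies this textbook relation to the article's 50 Hz figure and, for contrast, an assumed ~1 MHz linewidth typical of simple diode lasers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coherence_length_m(linewidth_hz: float) -> float:
    """Coherence length of a Lorentzian line: L_c = c / (pi * delta_nu)."""
    return C / (math.pi * linewidth_hz)

print(f"{coherence_length_m(50.0)/1e3:.0f} km")  # 50 Hz -> ~1,900 km
print(f"{coherence_length_m(1e6):.0f} m")        # 1 MHz -> ~95 m
```

A 50 Hz line stays phase-coherent over thousands of kilometers, which is what makes such sources attractive for interferometric sensing and frequency metrology.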
    The microring-based Vernier filter gives the laser broad wavelength tunability across 40 nm within the C- and L-bands (ranges of wavelengths used in telecommunications), surpassing legacy fiber lasers in both tuning range and spectral purity (“spurs” are unwanted spurious frequencies), while remaining compatible with current semiconductor manufacturing processes.
    Next-generation lasers
    Miniaturizing and integrating erbium fiber lasers into chip-scale devices can reduce their overall costs, making them accessible for portable and highly integrated systems across telecommunications, medical diagnostics, and consumer electronics.
    It can also scale down optical technologies in various other applications, such as LiDAR, microwave photonics, optical frequency synthesis, and free-space communications.
    “The application areas of such a new class of erbium-doped integrated lasers are virtually unlimited,” says Liu.
    The lab spin-off, EDWATEC SA, is an Integrated Device Manufacturer that can now offer Rare-Earth-Ion-Doped Photonic Integrated Circuit-based devices, including high-performance amplifiers and lasers.