More stories

  •

    New deep-learning model outperforms Google AI system in predicting peptide structures

    Researchers at the University of Toronto have developed a deep-learning model, called PepFlow, that can predict all possible shapes of peptides — chains of amino acids that are shorter than proteins, but perform similar biological functions.
    PepFlow combines machine learning and physics to model the range of folding patterns that a peptide can assume based on its energy landscape. Peptides, unlike proteins, are very dynamic molecules that can take on a range of conformations.
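The energy-landscape idea can be illustrated with simple Boltzmann weighting, in which lower-energy conformations are exponentially more probable. The sketch below is a conceptual illustration only, not PepFlow's actual architecture, and the conformer energies are hypothetical values.

```python
import math

def boltzmann_weights(energies_kcal, temp_k=298.0):
    """Convert conformer energies to Boltzmann probabilities:
    p_i proportional to exp(-E_i / (R*T)), with R in kcal/(mol*K)."""
    r = 0.0019872  # gas constant, kcal/(mol*K)
    # Subtract the minimum energy for numerical stability
    e_min = min(energies_kcal)
    unnorm = [math.exp(-(e - e_min) / (r * temp_k)) for e in energies_kcal]
    total = sum(unnorm)
    return [w / total for w in unnorm]

# Hypothetical relative energies (kcal/mol) of three peptide conformers
weights = boltzmann_weights([0.0, 0.5, 2.0])
```

Lower-energy conformers receive larger weights, which is why a dynamic peptide populates a range of states rather than a single structure.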
    “We haven’t been able to model the full range of conformations for peptides until now,” said Osama Abdin, first author on the study and recent PhD graduate of molecular genetics at U of T’s Donnelly Centre for Cellular and Biomolecular Research. “PepFlow leverages deep-learning to capture the precise and accurate conformations of a peptide within minutes. There’s potential with this model to inform drug development through the design of peptides that act as binders.”
    The study was published today in the journal Nature Machine Intelligence.
    A peptide’s role in the human body is directly linked to how it folds, as its 3D structure determines the way it binds and interacts with other molecules. Peptides are known to be highly flexible, taking on a wide range of folding patterns, and are thus involved in many biological processes of interest to researchers in the development of therapeutics.
    “Peptides were the focus of the PepFlow model because they are very important biological molecules and they are naturally very dynamic, so we need to model their different conformations to understand their function,” said Philip M. Kim, principal investigator on the study and a professor at the Donnelly Centre. “They’re also important as therapeutics, as can be seen by the GLP1 analogues, like Ozempic, used to treat diabetes and obesity.”
    Peptides are also cheaper to produce than their larger protein counterparts, said Kim, who is also a professor of computer science at U of T’s Faculty of Arts & Science.

    The new model expands on the capabilities of the leading Google DeepMind AI system for predicting protein structure, AlphaFold. PepFlow can outperform AlphaFold2 by generating a range of conformations for a given peptide, which AlphaFold2 was not designed to do.
    What sets PepFlow apart is the technological innovations that power it. For instance, it is a generalized model that takes inspiration from Boltzmann generators, which are highly advanced physics-based machine learning models.
    PepFlow can also model peptide structures that take on unusual formations, such as the ring-like structure that results from a process called macrocyclization. Peptide macrocycles are currently a highly promising avenue for drug development.
    While PepFlow improves upon AlphaFold2, it has limitations of its own as a first-version model. The study authors noted a number of ways in which PepFlow could be improved, including training it with explicit data for solvent atoms (the surrounding molecules that dissolve peptides into solution) and for constraints on the distance between atoms in ring-like structures.
    PepFlow was built to be easily expanded to account for additional considerations and new information and potential uses. Even as a first version, PepFlow is a comprehensive and efficient model with potential for furthering the development of treatments that depend on peptide binding to activate or inhibit biological processes.
    “Modelling with PepFlow offers insight into the real energy landscape of peptides,” said Abdin. “It took two-and-a-half years to develop PepFlow and one month to train it, but it was worthwhile to move to the next frontier, beyond models that only predict one structure of a peptide.”

  •

    Understanding quantum states: New research shows importance of precise topography in solid neon qubits

    Quantum computers have the potential to be revolutionary tools for their ability to perform calculations that would take classical computers many years to resolve.
    But to make an effective quantum computer, you need a reliable quantum bit, or qubit, that can exist in a superposition of 0 and 1 states for a sufficiently long period, known as its coherence time.
    One promising approach is trapping a single electron on a solid neon surface, called an electron-on-solid-neon qubit. A study led by FAMU-FSU College of Engineering Professor Wei Guo that was published in Physical Review Letters shows new insight into the quantum state that describes the condition of electrons on such a qubit, information that can help engineers build this innovative technology.
    Guo’s team found that small bumps on the surface of solid neon in the qubit can naturally bind electrons, which creates ring-shaped quantum states of these electrons. The quantum state refers to the various properties of an electron, such as position, momentum and other characteristics, before they are measured. When the bumps are a certain size, the electron’s transition energy — the amount of energy required for an electron to move from one quantum ring state to another — aligns with the energy of microwave photons, another type of elementary particle.
    This alignment allows for controlled manipulation of the electron, which is needed for quantum computing.
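The alignment described above is just the Planck relation, E = h·f: a photon's energy is proportional to its frequency. The sketch below checks whether a microwave photon matches a given transition energy; the 6.4 GHz frequency is an illustrative value, not one reported in the study.

```python
PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV*s

def photon_energy_ev(freq_hz):
    """Energy of a photon at the given frequency (E = h*f), in eV."""
    return PLANCK_EV_S * freq_hz

def is_resonant(transition_ev, freq_hz, tolerance=0.01):
    """Check whether a microwave photon's energy matches an electron's
    transition energy to within a fractional tolerance."""
    e_photon = photon_energy_ev(freq_hz)
    return abs(e_photon - transition_ev) <= tolerance * transition_ev

# Illustrative example: a microwave photon at 6.4 GHz
e = photon_energy_ev(6.4e9)  # on the order of tens of micro-eV
```

When the bump geometry tunes the ring-state transition to this photon energy, microwave pulses can drive the electron between states, which is the control mechanism the paragraph above describes.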
    “This work significantly advances our understanding of the electron-trapping mechanism on a promising quantum computing platform,” Guo said. “It not only clarifies puzzling experimental observations but also delivers crucial insights for the design, optimization and control of electron-on-solid-neon qubits.”
    Previous work by Guo and collaborators demonstrated the viability of a solid-state single-electron qubit platform using electrons trapped on solid neon. Recent research showed coherence times as great as 0.1 millisecond, or 100 times longer than typical coherence times of 1 microsecond for conventional semiconductor-based and superconductor-based charge qubits.

    Coherence time determines how long a quantum system can maintain a superposition state — the ability of the system to be in multiple states at the same time until it is measured, which is one characteristic that gives quantum computers their unique abilities.
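As a rough illustration of why coherence time matters, a simple exponential decay model, exp(-t/T2), shows how much coherence survives a fixed operation time under the two coherence times quoted above. The exponential model is a textbook simplification, not the study's own analysis.

```python
import math

def coherence_remaining(t_s, t2_s):
    """Fraction of coherence remaining after time t, assuming a
    simple exponential decay exp(-t/T2)."""
    return math.exp(-t_s / t2_s)

# Compare a conventional 1-microsecond qubit to the reported
# ~0.1-millisecond electron-on-solid-neon qubit
t = 1e-6  # one microsecond of computation
conventional = coherence_remaining(t, 1e-6)  # T2 = 1 us
neon_qubit = coherence_remaining(t, 1e-4)    # T2 = 100 us
```

After one microsecond, the conventional qubit retains only about 37% of its coherence, while the longer-lived qubit retains about 99%, leaving far more time for useful operations.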
    The extended coherence time of the electron-on-solid-neon qubit can be attributed to the inertness and purity of solid neon. This qubit system also addresses the issue of liquid surface vibrations, a problem inherent in the more extensively studied electron-on-liquid-helium qubit. The current research offers crucial insights into optimizing the electron-on-solid-neon qubit further.
    A crucial part of that optimization is creating qubits that are smooth through most of the solid neon surface but have bumps of the right size where they are needed. Designers want minimal naturally occurring bumps on the surface that attract disruptive background electrical charge. At the same time, intentionally fabricating bumps of the correct size within the microwave resonator on the qubit improves the ability to trap electrons.
    “This research underscores the critical need for further study of how different conditions affect neon qubit manufacturing,” Guo said. “Neon injection temperatures and pressure influence the final qubit product. The more control we have over this process, the more precise we can build, and the closer we move to quantum computing that can solve currently unmanageable calculations.”
    Co-authors on this paper were Toshiaki Kanai, a former graduate research student in the FSU Department of Physics, and Dafei Jin, an associate professor at the University of Notre Dame.
    The research was supported by the National Science Foundation, the Gordon and Betty Moore Foundation, and the Air Force Office of Scientific Research.

  •

    Public perception of scientists’ credibility slips

    New analyses from the Annenberg Public Policy Center find that public perceptions of scientists’ credibility — measured as their competence, trustworthiness, and the extent to which they are perceived to share an individual’s values — remain high, but their perceived competence and trustworthiness eroded somewhat between 2023 and 2024. The research also found that public perceptions of scientists working in artificial intelligence (AI) differ from those of scientists as a whole.
    From 2018-2022, the Annenberg Public Policy Center (APPC) of the University of Pennsylvania relied on national cross-sectional surveys to monitor perceptions of science and scientists. In 2023-24, APPC moved to a nationally representative empaneled sample to make it possible to observe changes in individual perceptions.
    The February 2024 findings, released today to coincide with the address by National Academy of Sciences President Marcia McNutt on “The State of the Science,” come from an analysis of responses from an empaneled national probability sample of U.S. adults surveyed in February 2023 (n=1,638 respondents), November 2023 (n=1,538), and February 2024 (n=1,555).
    Drawing on the 2022 cross-sectional data, in an article titled “Factors Assessing Science’s Self-Presentation model and their effect on conservatives’ and liberals’ support for funding science,” published in Proceedings of the National Academy of Sciences (September 2023), Annenberg-affiliated researchers Yotam Ophir (State University of New York at Buffalo and an APPC distinguished research fellow), Dror Walter (Georgia State University and an APPC distinguished research fellow), and Patrick E. Jamieson and Kathleen Hall Jamieson of the Annenberg Public Policy Center isolated factors that underlie perceptions of scientists (Factors Assessing Science’s Self-Presentation, or FASS). These factors predict public support for increased funding of science and support for federal funding of basic research.
    The five factors in FASS are whether science and scientists are perceived to be credible and prudent, whether they are perceived to overcome bias and to correct error (self-correcting), and whether their work is seen to benefit people like the respondent and the country as a whole (beneficial). In a 2024 publication titled “The Politicization of Climate Science: Media Consumption, Perceptions of Science and Scientists, and Support for Policy” (May 26, 2024) in the Journal of Health Communication, the same team showed that these five factors mediate the relationship between exposure to media sources such as Fox News and outcomes such as belief in anthropogenic climate change, perception of the threat it poses, and support for climate-friendly policies such as a carbon tax.
    Speaking about the FASS model, Jamieson, director of the Annenberg Public Policy Center and director of the survey, said that “because our 13 core questions reliably reduce to five factors with significant predictive power, the ASK survey’s core questions make it possible to isolate both stability and changes in public perception of science and scientists across time.”
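The reduction of survey items to factor scores can be sketched as a simple average of Likert-scale responses per factor. The item keys below are hypothetical stand-ins: the real FASS battery uses 13 specific questions, and published factor analyses use weighted loadings rather than plain averages.

```python
def factor_scores(responses, factor_items):
    """Average Likert-scale items (1-5) into per-factor scores.
    `factor_items` maps each factor name to its item keys."""
    return {
        factor: sum(responses[item] for item in items) / len(items)
        for factor, items in factor_items.items()
    }

# Hypothetical item keys; the actual battery has 13 items loading onto
# five factors (credible, prudent, unbiased, self-correcting, beneficial)
items = {
    "credible":   ["competent", "trustworthy", "share_values"],
    "beneficial": ["benefits_me", "benefits_country"],
}
answers = {"competent": 4, "trustworthy": 3, "share_values": 3,
           "benefits_me": 4, "benefits_country": 5}
scores = factor_scores(answers, items)
```

Tracking these aggregated scores in an empaneled sample, rather than individual items in fresh cross-sections, is what lets the researchers observe change within the same respondents over time.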
    The new research finds that while scientists are held in high regard, two of the three dimensions that make up credibility — perceptions of competence and trustworthiness — showed a small but statistically significant drop from 2023 to 2024, as did both measures of beneficial. The 2024 survey data also indicate that the public considers AI scientists less credible than scientists in general, with notably fewer people saying that AI scientists are competent and trustworthy and “share my values” than scientists generally.

    “Although confidence in science remains high overall, the survey reveals concerns about AI science,” Jamieson said. “The finding is unsurprising. Generative AI is an emerging area of science filled with both great promise and great potential peril.”
    The data are based on Annenberg Science Knowledge (ASK) waves of the Annenberg Science and Public Health (ASAPH) surveys conducted in 2023 and 2024. The findings labeled 2023 are based on a February 2023 survey, and the findings labeled 2024 are based on combined ASAPH survey half-samples surveyed in November 2023 and February 2024.
    For further details, download the toplines and a series of figures that accompany this summary.
    Perceptions of scientists overall
    In the FASS model, perceptions of scientists’ credibility are assessed through perceptions of whether scientists are competent, trustworthy, and “share my values.” The first two of those values slipped in the most recent survey. In 2024, 70% of those surveyed strongly or somewhat agree that scientists are competent (down from 77% in 2023) and 59% strongly or somewhat agree that scientists are trustworthy (down from 67% in 2023).
    The survey also found that in 2024, fewer people felt that scientists’ findings benefit “the country as a whole” and “benefit people like me.” In 2024, 66% strongly or somewhat agreed that findings benefit the country as a whole (down from 75% in 2023). Belief that scientists’ findings “benefit people like me” also declined, to 60% from 68%. Taken together, those two questions make up the beneficial factor of FASS.

    The findings follow sustained attacks on climate and Covid-19-related science and, more recently, public concerns about the rapid development and deployment of artificial intelligence.
    Comparing perceptions of scientists in general with climate and AI scientists
    Credibility: When asked about the three dimensions underlying scientists’ credibility, AI scientists scored lower on all three. Competent: 70% strongly/somewhat agree that scientists are competent, compared with 62% for climate scientists and 49% for AI scientists. Trustworthy: 59% agree that scientists are trustworthy, 54% for climate scientists, and 28% for AI scientists. Share my values: more respondents agree that climate scientists “share my values” (38%) than say the same of scientists in general (36%) or AI scientists (15%), and more disagree with the statement for AI scientists (35%) than for the others.
    Prudence: Asked whether they agree or disagree that science by various groups of scientists “creates unintended consequences and replaces older problems with new ones,” over half of those surveyed (59%) agree that AI scientists create unintended consequences and just 9% disagree.
    Overcoming bias: Just 42% of those surveyed agree that scientists “are able to overcome human and political biases,” but only 21% feel that way about AI scientists. In fact, 41% disagree that AI scientists are able to overcome human and political biases. In another area, just 23% agree that AI scientists provide unbiased conclusions in their area of inquiry, and 38% disagree.
    Self-correction: Self-correction, or “organized skepticism expressed in expectations sustaining a culture of critique,” as the FASS paper puts it, is considered by some a “hallmark of science.” AI scientists are seen as less likely than scientists in general or climate scientists to take action to prevent fraud, take responsibility for mistakes, or have their mistakes caught by peer review.
    Benefits: Asked about the benefits from scientists’ findings, 60% agree that scientists’ “findings benefit people like me,” though just 44% agree for climate scientists and 35% for AI scientists. Asked about whether findings benefit the country as a whole, 66% agree for scientists, 50% for climate scientists and 41% for AI scientists.
    Your best interest: The survey also asked respondents how much trust they have in scientists to act in the best interest of “people like you.” (This specific trust measure is not a part of the FASS battery.) Respondents have less trust in AI scientists than in others: 41% have a great deal/a lot of trust in medical scientists; 39% in climate scientists; 36% in scientists in general; and 12% in AI scientists.
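The "strongly/somewhat agree" percentages reported above for the credibility dimensions can be collected into a small table to make the gaps explicit. The figures are taken from the article; the function and key names are our own.

```python
# "Strongly/somewhat agree" percentages reported in the survey
agree = {
    "competent":    {"scientists": 70, "climate": 62, "ai": 49},
    "trustworthy":  {"scientists": 59, "climate": 54, "ai": 28},
    "share_values": {"scientists": 36, "climate": 38, "ai": 15},
}

def credibility_gap(measure, group="ai"):
    """Percentage-point gap between scientists in general and a group;
    negative values mean the group outscores scientists in general."""
    row = agree[measure]
    return row["scientists"] - row[group]

gaps = {measure: credibility_gap(measure) for measure in agree}
```

Laid out this way, the pattern in the article is easy to see: AI scientists trail scientists in general by 21 to 31 points on every dimension, while climate scientists actually edge ahead on "share my values."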

  •

    A chip-scale titanium-sapphire laser

    As lasers go, those made of titanium-sapphire (Ti:sapphire) are considered to have “unmatched” performance. They are indispensable in many fields, including cutting-edge quantum optics, spectroscopy, and neuroscience. But that performance comes at a steep price. Ti:sapphire lasers are big, on the order of cubic feet in volume. They are expensive, costing hundreds of thousands of dollars each. And they require other high-powered lasers, themselves costing $30,000 each, to supply them with enough energy to function.
    As a result, Ti:sapphire lasers have never achieved the broad, real-world adoption they deserve — until now. In a dramatic leap forward in scale, efficiency, and cost, researchers at Stanford University have built a Ti:sapphire laser on a chip. The prototype is four orders of magnitude smaller (10,000x) and three orders of magnitude less expensive (1,000x) than any Ti:sapphire laser ever produced.
    “This is a complete departure from the old model,” said Jelena Vučković, the Jensen Huang Professor in Global Leadership, a professor of electrical engineering, and senior author of the paper introducing the chip-scale Ti:sapphire laser published in the journal Nature. “Instead of one large and expensive laser, any lab might soon have hundreds of these valuable lasers on a single chip. And you can fuel it all with a green laser pointer.”
    Profound benefits
    “When you leap from tabletop size and make something producible on a chip at such a low cost, it puts these powerful lasers in reach for a lot of different important applications,” said Joshua Yang, a doctoral candidate in Vučković’s lab and co-first author of the study along with Vučković’s Nanoscale and Quantum Photonics Lab colleagues, research engineer Kasper Van Gasse and postdoctoral scholar Daniil M. Lukin.
    In technical terms, Ti:sapphire lasers are so valuable because they have the largest “gain bandwidth” of any laser crystal, explained Yang. In simple terms, a larger gain bandwidth means the laser can produce a broader range of colors than other lasers. It’s also ultrafast, Yang said. Pulses of light issue forth every quadrillionth of a second.
    But Ti:sapphire lasers are also hard to come by. Even Vučković’s lab, which does cutting-edge quantum optics experiments, only has a few of these prized lasers to share. The new Ti:sapphire laser fits on a chip that is measured in square millimeters. If the researchers can mass-produce them on wafers, thousands, perhaps tens of thousands, of Ti:sapphire lasers could be squeezed onto a disc that fits in the palm of a human hand.

    “A chip is light. It is portable. It is inexpensive and it is efficient. There are no moving parts. And it can be mass-produced,” Yang said. “What’s not to like? This democratizes Ti:sapphire lasers.”
    How it’s done
    To fashion the new laser, the researchers began with a bulk layer of titanium-sapphire on a platform of silicon dioxide (SiO2), all riding atop true sapphire crystal. They then ground, etched, and polished the Ti:sapphire to an extremely thin layer, just a few hundred nanometers thick. Into that thin layer, they patterned a swirling vortex of tiny ridges. These ridges act like fiber-optic cables, guiding the light around and around, building in intensity. In fact, the pattern is known as a waveguide.
    “Mathematically speaking, intensity is power divided by area. So, if you maintain the same power as the large-scale laser, but reduce the area in which it is concentrated, the intensity goes through the roof,” Yang says. “The small scale of our laser actually helps us make it more efficient.”
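Yang's point can be made concrete with the definition I = P/A: holding power fixed while shrinking the mode area multiplies the intensity by the same factor. The areas below are illustrative orders of magnitude, not measured values from the paper.

```python
def intensity(power_w, area_m2):
    """Optical intensity: power divided by area (W/m^2)."""
    return power_w / area_m2

# Illustrative numbers only: the same 1 W of power confined to a
# waveguide cross-section vastly smaller than a bulk laser mode
bulk = intensity(1.0, 1e-6)   # 1 W over ~1 mm^2 (bulk mode)
chip = intensity(1.0, 1e-14)  # 1 W over ~100 nm x 100 nm (waveguide)
```

Shrinking the area by eight orders of magnitude raises the intensity by the same eight orders, which is why the small scale of the chip laser improves its efficiency rather than hurting it.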
    The remaining piece of the puzzle is a microscale heater that warms the light traveling through the waveguides, allowing the Vučković team to change the wavelength of the emitted light to tune the color of the light anywhere between 700 and 1,000 nanometers — in the red to infrared.
    Spotlight on applications
    Vučković, Yang, and colleagues are most excited about the range of fields that such a laser might impact. In quantum physics, the new laser provides an inexpensive and practical solution that could dramatically scale down state-of-the-art quantum computers. In neuroscience, the researchers can foresee immediate application in optogenetics, a field that allows scientists to control neurons with light guided inside the brain by relatively bulky optical fiber. Small-scale lasers, they say, might be integrated into more compact probes opening up new experimental avenues. In ophthalmology, it might find new use with Nobel Prize-winning chirped pulse amplification in laser surgery or offer less expensive, more compact optical coherence tomography technologies used to assess retinal health.
    Next up, the team is working on perfecting their chip-scale Ti:sapphire laser and on ways to mass-produce them, thousands at a time, on wafers. Yang will earn his doctorate this summer based on this research and is working to bring the technology to market.
    “We could put thousands of lasers on a single 4-inch wafer,” Yang says. “That’s when the cost per laser starts to become almost zero. That’s pretty exciting.”

  •

    Microrobot-packed pill shows promise for treating inflammatory bowel disease in mice

    Engineers at the University of California San Diego have developed a pill that releases microscopic robots, or microrobots, into the colon to treat inflammatory bowel disease (IBD). The experimental treatment, given orally, has shown success in mice. It significantly reduced IBD symptoms and promoted the healing of damaged colon tissue without causing toxic side effects.
    The study was published June 26 in Science Robotics.
    IBD, an autoimmune disorder characterized by chronic inflammation of the gut, affects millions of people worldwide, causing severe abdominal pain, rectal bleeding, diarrhea and weight loss. It occurs when immune cells known as macrophages become overly activated, producing excessive levels of inflammation-causing proteins called pro-inflammatory cytokines. These cytokines, in turn, bind to receptors on macrophages, triggering them to produce more cytokines, and thereby perpetuating a cycle of inflammation that leads to the debilitating symptoms of IBD.
    Now, researchers have developed a treatment that successfully keeps these cytokine levels in check. A team led by Liangfang Zhang and Joseph Wang, both professors in the Aiiso Yufeng Li Family Department of Chemical and Nano Engineering at UC San Diego, engineered microrobots composed of inflammation-fighting nanoparticles chemically attached to green algae cells. The nanoparticles absorb and neutralize pro-inflammatory cytokines in the gut. Meanwhile, the green algae use their natural swimming abilities to efficiently distribute the nanoparticles throughout the colon, accelerating cytokine removal to help heal inflamed tissue.
    What makes these nanoparticles so effective is their biomimetic design. They are made of biodegradable polymer nanoparticles coated with macrophage cell membranes, allowing them to act as macrophage decoys. These decoys naturally bind pro-inflammatory cytokines without being triggered to produce more, thus breaking the inflammatory cycle.
    “The beauty of this approach is that it’s drug-free — we just leverage the natural cell membrane to absorb and neutralize pro-inflammatory cytokines,” said Zhang.
    The researchers have ensured that their biohybrid microrobots meet rigorous safety standards. The nanoparticles are made of biocompatible materials, and the green algae cells used in this study are recognized as safe for consumption by the U.S. Food and Drug Administration.

    The microrobots are packed inside a liquid capsule with a pH-responsive coating. This coating remains intact in the acidic environment of the stomach, but dissolves upon reaching the neutral pH of the colon. This ensures that the microrobots are selectively released where they are needed most. “We can direct the microrobots to the diseased location without affecting other organs,” said Wang. “In this way, we can minimize toxicity.” The capsule keeps the functionalized algae in the liquid phase until their release.
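The pH-gated release described above amounts to a simple threshold rule: intact in the acidic stomach, dissolved at the colon's near-neutral pH. The threshold and pH values below are typical textbook figures, not parameters reported in the study.

```python
def coating_dissolves(ph, threshold=6.0):
    """pH-responsive release: the coating stays intact below the
    threshold and dissolves at or above it. Threshold is illustrative."""
    return ph >= threshold

STOMACH_PH = 2.0  # approximate fasted-stomach acidity (assumed)
COLON_PH = 7.0    # approximate near-neutral colon pH (assumed)

released_in_stomach = coating_dissolves(STOMACH_PH)
released_in_colon = coating_dissolves(COLON_PH)
```

Because the stomach sits well below the dissolution threshold and the colon sits above it, the capsule passes through the stomach intact and releases its cargo only at the target site.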
    The capsule was administered orally to mice afflicted with IBD. The treatment reduced fecal bleeding, improved stool consistency, reversed IBD-induced weight loss and reduced inflammation in the colon, all without apparent side effects.
    The research team is now focusing on translating their microrobot treatment into clinical studies.
    This work is supported by the Defense Threat Reduction Agency Joint Science and Technology Office for Chemical and Biological Defense (HDTRA1-21-1-0010).

  •

    AI-generated exam answers go undetected in real-world blind test

    Experienced exam markers may struggle to spot answers generated by Artificial Intelligence (AI), researchers have found. 
    The study was conducted at the University of Reading, UK, where university leaders are working to identify potential risks and opportunities of AI for research, teaching, learning, and assessment, with updated advice already issued to staff and students as a result of their findings. 
    The researchers are calling for the global education sector to follow the example of Reading and of others forming new policies and guidance, and to do more to address this emerging issue.
    In a rigorous blind test of a real-life university examinations system, published today (26 June) in PLOS ONE, ChatGPT-generated exam answers, submitted for several undergraduate psychology modules, went undetected in 94% of cases and, on average, attained higher grades than real student submissions.
    This was the largest and most robust blind study of its kind, to date, to challenge human educators to detect AI-generated content.  
    Associate Professor Peter Scarfe and Professor Etienne Roesch, who led the study at Reading’s School of Psychology and Clinical Language Sciences, said their findings should provide a “wake-up call” for educators across the world. A recent UNESCO survey of 450 schools and universities found that less than 10% had policies or guidance on the use of generative AI.
    Dr Scarfe said: “Many institutions have moved away from traditional exams to make assessment more inclusive. Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. 
    “We won’t necessarily go back fully to hand-written exams, but the global education sector will need to evolve in the face of AI.

    “It is testament to the candid academic rigour and commitment to research integrity at Reading that we have turned the microscope on ourselves to lead in this.” 
    Professor Roesch said: “As a sector, we need to agree how we expect students to use and acknowledge the role of AI in their work. The same is true of the wider use of AI in other areas of life to prevent a crisis of trust across society. 
    “Our study highlights the responsibility we have as producers and consumers of information. We need to double down on our commitment to academic and research integrity.” 
    Professor Elizabeth McCrum, Pro-Vice-Chancellor for Education and Student Experience at the University of Reading, said: “It is clear that AI will have a transformative effect in many aspects of our lives, including how we teach students and assess their learning.  
    “At Reading, we have undertaken a huge programme of work to consider all aspects of our teaching, including making greater use of technology to enhance student experience and boost graduate employability skills.  
    “Solutions include moving away from outmoded ideas of assessment and towards those that are more aligned with the skills that students will need in the workplace, including making use of AI. Sharing alternative approaches that enable students to demonstrate their knowledge and skills, with colleagues across disciplines, is vitally important.”

  •

    Mechanical computer relies on kirigami cubes, not electronics

    North Carolina State University researchers have developed a kirigami-inspired mechanical computer that uses a complex structure of rigid, interconnected polymer cubes to store, retrieve and erase data without relying on electronic components. The system also includes a reversible feature that allows users to control when data editing is permitted and when data should be locked in place.
    Mechanical computers are computers that operate using mechanical components rather than electronic ones. Historically, these mechanical components have been things like levers or gears. However, mechanical computers can also be made using structures that are multistable, meaning they have more than one stable state — think of anything that can be folded into more than one stable position.
    “We were interested in doing a couple things here,” says Jie Yin, co-corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. “First, we were interested in developing a stable, mechanical system for storing data.
    “Second, this proof-of-concept work focused on binary computing functions with a cube being either pushed up or pushed down — it’s either a 1 or a 0. But we think there is potential here for more complex computing, with data being conveyed by how high a given cube has been pushed up. We’ve shown within this proof-of-concept system that cubes can have five or more different states. Theoretically, that means a given cube can convey not only a 1 or a 0, but also a 2, 3 or 4.”
    The fundamental units of the new mechanical computer are 1-centimeter plastic cubes, grouped into functional units consisting of 64 interconnected cubes. The design of these units was inspired by kirigami, which is the art of cutting and folding paper. Yin and his collaborators have applied the principles of kirigami to three-dimensional materials that are cut into connected cubes.
    When any of the cubes are pushed up or down, this changes the geometry — or architecture — of all of the connected cubes. This can be done by physically pushing up or down on one of the cubes, or by attaching a magnetic plate to the top of the functional unit and applying a magnetic field to remotely push it up or down. These 64-cube functional units can be grouped together into increasingly complex metastructures that allow for storing more data or for conducting more complex computations.
    The cubes are connected by thin strips of elastic tape. To edit data, you have to change the configuration of functional units. That requires users to pull on the edges of the metastructure, which stretches the elastic tape and allows you to push cubes up or down. When you release the metastructure, the tape contracts, locking the cubes — and the data — in place.

    “One potential application for this is that it allows for users to create three-dimensional, mechanical encryption or decryption,” says Yanbin Li, first author of the paper and a postdoctoral researcher at NC State. “For example, a specific configuration of functional units could serve as a 3D password.
    “And the information density is quite good,” Li says. “Using a binary framework — where cubes are either up or down — a simple metastructure of 9 functional units has more than 362,000 possible configurations.”
    “But we’re not necessarily limited to a binary context,” says Yin. “Each functional unit of 64 cubes can be configured into a wide variety of architectures, with cubes stacked up to five cubes high. This allows for the development of computing that goes well beyond binary code. Our proof-of-concept work here demonstrates the potential range of these architectures, but we have not developed code that capitalizes on those architectures. We’d be interested in collaborating with other researchers to explore the coding potential of these metastructures.”
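The idea of moving beyond binary by using cube height as a digit can be sketched as base-5 encoding, with each cube contributing one of five levels. This is our illustration of the information-capacity argument, not code from the paper, and it ignores the mechanical coupling constraints that limit which configurations are physically reachable.

```python
def encode_heights(heights, base=5):
    """Pack a sequence of cube heights (0..base-1) into one integer,
    treating each cube as a digit in the given base."""
    value = 0
    for h in heights:
        if not 0 <= h < base:
            raise ValueError("height out of range for this base")
        value = value * base + h
    return value

def decode_heights(value, n_cubes, base=5):
    """Recover the cube heights from the packed integer."""
    heights = []
    for _ in range(n_cubes):
        value, h = divmod(value, base)
        heights.append(h)
    return heights[::-1]

state = encode_heights([0, 3, 1, 4])  # four cubes, five levels each
```

In the binary framework a cube carries one bit, but with five distinguishable heights each cube carries log2(5), about 2.3 bits, which is the density gain Yin describes.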
    “We’re also interested in exploring the potential utility of these metastructures to create haptic systems that display information in a three-dimensional context, rather than as pixels on a screen,” says Li.
    The work was done with support from the National Science Foundation under grants 2005374, 2126072 and 2231419.


    A new study highlights potential of ultrafast laser processing for next-gen devices

    A new joint study uncovers the remarkable potential of ultrafast lasers to provide innovative solutions for processing 2D materials in applications such as high-speed photodetectors, flexible electronics, biohybrids, and next-generation solar cells.
    The manipulation of 2D materials, such as graphene and transition metal dichalcogenides (TMDs), is crucial for the advancement of next-generation electronic, photonic, quantum, and sensor technologies. These materials exhibit unique properties, including high electrical conductivity, mechanical flexibility, and tunable optical characteristics. Traditional processing methods, however, often lack the necessary precision and can introduce thermal damage. This is where ultrafast laser processing comes into play, offering unprecedented control over the material properties at the nanoscale.
    Ultrafast lasers for modifying materials
    Recent advancements in the field of light-matter interactions have paved the way for the transformative use of ultrafast laser processing in 2D materials. A new study by Aleksei Emelianov, Mika Pettersson from the University of Jyväskylä (Finland), and Ivan Bobrinetskiy from Biosense Institute (Serbia) explores the remarkable potential of ultrafast laser techniques in manipulating 2D layered materials and van der Waals (vdW) heterostructures toward novel applications.
    “Ultrafast laser processing has emerged as a versatile technique for modifying materials and introducing novel functionalities. Unlike continuous-wave and long-pulsed optical methods, ultrafast lasers offer a solution for thermal heating issues. The nonlinear interactions between ultrafast laser pulses and the atomic lattice of 2D materials substantially influence their chemical and physical properties,” says Postdoctoral Researcher Aleksei Emelianov from the University of Jyväskylä.
    A new tool for manipulating the properties of 2D materials
    The researchers describe progress made over the past decade, focusing on the transformative role of ultrafast laser pulses in maskless green technology: subtractive and additive processes that open routes to advanced devices. By exploiting the synergy between the energy states within the atomic layers and ultrafast laser irradiation, resolution down to a few nanometers is feasible.

    “Ultrafast light-matter interactions are being actively probed to study the unique optical properties of low-dimensional materials,” says Aleksei Emelianov. “In our research, we discovered that ultrafast laser processing has the potential to become a new technological tool for manipulating the properties of 2D materials,” he continues.
    Reliable tools for advanced materials processing
    Key advancements are discussed in functionalization, doping, atomic reconstruction, phase transformation, and 2D and 3D micro- and nanopatterning. The ability to manipulate 2D materials at such a fine scale opens up numerous possibilities for the development of novel photonic, electronic, and sensor applications. Potential applications include high-speed photodetectors, flexible electronics, biohybrids, and next-generation solar cells. The precision of ultrafast laser processing enables the creation of intricate micro- and nanoscale structures with potential utilization in telecommunications, medical diagnostics, and environmental monitoring.
    “It is surprising how versatile ultrafast lasers are in tuning and modifying 2D materials. It is highly likely that lasers could provide innovative solutions in 2D materials processing for many technology developers,” adds Mika Pettersson.
    This review represents a significant step forward in realizing the full potential of 2D and vdW materials, promising to drive new advancements in technology and industry.
    “Still, there is a need for research on the physical basics of ultrafast interactions between lasers and 2D materials,” says Ivan Bobrinetskiy. “These should include not only interactions between the 2D material lattice and light but also the environment and substrates, which makes the physics of these processes more complicated but exciting at the same time,” he continues.