More stories

  •

    Researchers develop fastest possible flow algorithm

    In a breakthrough that brings to mind Lucky Luke — the man who shoots faster than his shadow — Rasmus Kyng and his team have developed a superfast algorithm that looks set to transform an entire field of research. The groundbreaking work by Kyng’s team involves what is known as a network flow algorithm, which tackles the question of how to achieve the maximum flow in a network while simultaneously minimising transport costs.
    Imagine you are using the European transportation network and looking for the fastest and cheapest route to move as many goods as possible from Copenhagen to Milan. Kyng’s algorithm can be applied in such cases to calculate the optimal, lowest-cost traffic flow for any kind of network — be it rail, road, water or the internet. His algorithm performs these computations so fast that it can deliver the solution at the very moment a computer reads the data that describes the network.
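    The problem Kyng's algorithm solves can be made concrete with a classical baseline: the textbook successive-shortest-path method for minimum-cost maximum flow, sketched below in Python. This is emphatically not Kyng's almost-linear-time method, just an illustration of the problem itself; the node labels, capacities, and costs are invented for the example.

```python
# Classical successive-shortest-path algorithm for minimum-cost maximum flow.
# This is a slow textbook baseline, not Kyng's almost-linear-time method; it
# only illustrates the problem his algorithm solves far faster at scale.

def min_cost_max_flow(n, edges, s, t):
    """edges: list of (u, v, capacity, cost). Returns (max_flow, total_cost)."""
    graph = [[] for _ in range(n)]           # graph[u] -> list of edge indices
    to, cap, cost = [], [], []
    def add(u, v, c, w):
        # Forward edge and its paired residual (reverse) edge.
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    for u, v, c, w in edges:
        add(u, v, c, w)

    flow = total_cost = 0
    while True:
        # Bellman-Ford: cheapest augmenting path in the residual network.
        dist = [float('inf')] * n
        dist[s] = 0
        parent = [-1] * n                    # edge index used to reach node
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        parent[to[e]] = e
                        updated = True
            if not updated:
                break
        if dist[t] == float('inf'):
            break                            # no augmenting path left
        # Bottleneck capacity along the path, then push flow along it.
        push, v = float('inf'), t
        while v != s:
            e = parent[v]
            push = min(push, cap[e])
            v = to[e ^ 1]                    # e^1 is the paired reverse edge
        v = t
        while v != s:
            e = parent[v]
            cap[e] -= push
            cap[e ^ 1] += push
            v = to[e ^ 1]
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost

# Hypothetical freight network: 0=Copenhagen, 1=Hamburg, 2=Frankfurt, 3=Milan.
edges = [(0, 1, 4, 1), (0, 2, 2, 2), (1, 3, 3, 2), (2, 3, 3, 1), (1, 2, 1, 1)]
flow, cost = min_cost_max_flow(4, edges, 0, 3)
print(flow, cost)    # 6 units of goods moved at total cost 18
```

    Each augmentation requires a full pass over the network, which is exactly the repeated work that the almost-linear-time approach avoids.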
    Computations as fast as a network is big
    Before Kyng, no one had ever managed to do that — even though researchers have been working on this problem for some 90 years. Previously, it took significantly longer to compute the optimal flow than to process the network data. And as the network became larger and more complex, the required computing time increased much faster, comparatively speaking, than the actual size of the computing problem. This is why we also see flow problems in networks that are too large for a computer to even calculate.
    Kyng’s approach eliminates this problem: using his algorithm, computing time and network size increase at the same rate — a bit like going on a hike and constantly keeping up the same pace however steep the path gets. A glance at the raw figures shows just how far we have come: until the turn of the millennium, no algorithm managed to compute faster than m^1.5, where m stands for the number of connections in a network that the computer has to calculate, and just reading the network data once takes m time. In 2004, the computing time required to solve the problem was successfully reduced to m^1.33. Using Kyng’s algorithm, the “additional” computing time required to reach the solution after reading the network data is now negligible.
    Like a Porsche racing a horse-drawn carriage
    The ETH Zurich researchers have thus developed what is, in theory, the fastest possible network flow algorithm. Two years ago, Kyng and his team presented mathematical proof of their concept in a groundbreaking paper. Scientists refer to these novel, almost optimally fast algorithms as “almost-linear-time algorithms,” and the community of theoretical computer scientists responded to Kyng’s breakthrough with a mixture of amazement and enthusiasm.

    Kyng’s doctoral supervisor, Daniel A. Spielman, Professor of Applied Mathematics and Computer Science at Yale and himself a pioneer and doyen in this field, compared the “absurdly fast” algorithm to a Porsche overtaking horse-drawn carriages. As well as winning the 2022 Best Paper Award at the IEEE Annual Symposium on Foundations of Computer Science (FOCS), their paper was also highlighted in the computing journal Communications of the ACM, and the editors of popular science magazine Quanta named Kyng’s algorithm one of the ten biggest discoveries in computer science in 2022.
    The ETH Zurich researchers have since refined their approach and developed further almost-linear-time algorithms. For example, the first algorithm was still focused on fixed, static networks whose connections are directed, meaning they function like one-way streets in urban road networks. The algorithms published this year are now also able to compute optimal flows for networks that incrementally change over time. Lightning-fast computation is an important step in tackling highly complex and data-rich networks that change dynamically and very quickly, such as molecules, the brain, or networks of human friendships.
    Lightning-fast algorithms for changing networks
    On Thursday, Simon Meierhans — a member of Kyng’s team — presented a new almost-linear-time algorithm at the Annual ACM Symposium on Theory of Computing (STOC) in Vancouver. This algorithm solves the minimum-cost maximum-flow problem for networks that incrementally change as new connections are added. Furthermore, in a second paper accepted by the IEEE Symposium on Foundations of Computer Science (FOCS) in October, the ETH researchers have developed another algorithm that also handles connections being removed.
    Specifically, these algorithms identify the shortest routes in networks where connections are added or deleted. In real-world traffic networks, examples of such changes in Switzerland include the complete closure and then partial reopening of the Gotthard Base Tunnel in the months since summer 2023, or the recent landslide that destroyed part of the A13 motorway, which is the main alternative route to the Gotthard Road Tunnel.
    Confronted with such changes, how does a computer, an online map service or a route planner calculate the lowest-cost and fastest connection between Milan and Copenhagen? Kyng’s new algorithms also compute the optimal route for these incrementally or decrementally changing networks in almost-linear time — so quickly that the computing time for each new connection, whether added through rerouting or the creation of new routes, is again negligible.
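    A naive baseline makes the contrast concrete: rerunning Dijkstra's shortest-path algorithm from scratch after every change, which is precisely the per-update work the new incremental and decremental algorithms avoid. The road network and travel times below are invented for illustration.

```python
# Naive baseline: recompute the fastest route from scratch after each change.
# The incremental/decremental algorithms described above avoid exactly this
# full recomputation; node names and travel times here are made up.
import heapq

def dijkstra(adj, src, dst):
    """adj: {node: [(neighbor, travel_time), ...]}. Returns total time or None."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

roads = {
    'Copenhagen': [('Hamburg', 3)],
    'Hamburg':    [('Basel', 5)],
    'Basel':      [('Gotthard', 1), ('A13', 2)],
    'Gotthard':   [('Milan', 2)],
    'A13':        [('Milan', 3)],
}
print(dijkstra(roads, 'Copenhagen', 'Milan'))   # fastest route via Gotthard: 11

# Closure: drop the Gotthard link and recompute the whole route from scratch.
roads['Basel'] = [(v, w) for v, w in roads['Basel'] if v != 'Gotthard']
print(dijkstra(roads, 'Copenhagen', 'Milan'))   # detour via the A13: 13
```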

    But what exactly is it that makes Kyng’s approach to computations so much faster than any other network flow algorithm? In principle, all computational methods face the challenge of having to analyse the network in multiple iterations in order to find the optimal flow and the minimum-cost route. In doing so, they work through the different configurations of connections that are open, closed or congested because they have reached their capacity limit.
    Compute the whole? Or its parts?
    Prior to Kyng, computer scientists tended to choose between two key strategies for solving this problem. One of these was modelled on the railway network and involved computing a whole section of the network with a modified flow of traffic in each iteration. The second strategy — inspired by power flows in the electricity grid — computed the entire network in each iteration but used statistical mean values for the modified flow of each section of the network in order to make the computation faster.
    Kyng’s team has now tied together the respective advantages of these two strategies in order to create a radical new combined approach. “Our approach is based on many small, efficient and low-cost computational steps, which — taken together — are much faster than a few large ones,” says Maximilian Probst Gutenberg, a lecturer and member of Kyng’s group, who played a key role in developing the almost-linear-time algorithms.
    A brief look at the history of this discipline adds an additional dimension to the significance of Kyng’s breakthrough: flow problems in networks were among the first to be solved systematically with the help of algorithms in the 1950s, and flow algorithms played an important role in establishing theoretical computer science as a field of research in its own right. The well-known algorithm developed by mathematicians Lester R. Ford Jr. and Delbert R. Fulkerson also stems from this period. Their algorithm efficiently solves the maximum-flow problem, which seeks to determine how to transport as many goods through a network as possible without exceeding the capacity of the individual routes.
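    The Ford-Fulkerson method fits in a few lines; the sketch below uses breadth-first search to find augmenting paths (the later Edmonds-Karp refinement), with made-up node names and capacities.

```python
# Ford-Fulkerson method with BFS path search (the Edmonds-Karp variant):
# repeatedly find an augmenting path and push its bottleneck capacity.
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict {(u, v): cap}. Returns the maximum s-t flow."""
    residual = dict(capacity)
    # Make sure every edge has a reverse edge in the residual network.
    for (u, v) in list(residual):
        residual.setdefault((v, u), 0)
    adj = {}
    for (u, v) in residual:
        adj.setdefault(u, []).append(v)

    flow = 0
    while True:
        # BFS for any path with spare capacity.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                    # no augmenting path: flow is maximal
        # Bottleneck along the path, then update the residual capacities.
        push, v = float('inf'), t
        while parent[v] is not None:
            push = min(push, residual[(parent[v], v)])
            v = parent[v]
        v = t
        while parent[v] is not None:
            residual[(parent[v], v)] -= push
            residual[(v, parent[v])] += push
            v = parent[v]
        flow += push

# Hypothetical network with capacities per route.
cap = {('s', 'a'): 10, ('s', 'b'): 5, ('a', 'b'): 15, ('a', 't'): 10, ('b', 't'): 10}
print(max_flow(cap, 's', 't'))   # 15
```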
    Fast and wide-ranging
    These advances showed researchers that the maximum-flow problem, the minimum-cost problem (transshipment or transportation problem) and many other important network-flow problems can all be viewed as special cases of the general minimum-cost flow problem. Prior to Kyng’s research, most algorithms were only able to solve one of these problems efficiently, though they could not do even this particularly quickly, nor could they be extended to the broader minimum-cost flow problem. The same applies to the pioneering flow algorithms of the 1970s, for which the theoretical computer scientists John Edward Hopcroft, Richard Manning Karp and Robert Endre Tarjan each received a Turing Award, regarded as the “Nobel Prize” of computer science. Karp received his in 1985; Hopcroft and Tarjan won theirs in 1986.
    Shift in perspective from railways to electricity
    It wasn’t until 2004 that mathematicians and computer scientists Daniel Spielman and Shang-Hua Teng — and later Samuel Daitch — succeeded in writing algorithms that also provided a fast and efficient solution to the minimum-cost flow problem. It was this group that shifted the focus to power flows in the electricity grid. Their switch in perspective from railways to electricity led to a key mathematical distinction: if a train is rerouted on the railway network because a line is out of service, the next best route according to the timetable may already be occupied by a different train. In the electricity grid, it is possible for the electrons that make up a power flow to be partially diverted to a network connection through which other current is already flowing. Thus, unlike trains, the electrical current can, in mathematical terms, be “partially” moved to a new connection.
    This partial rerouting enabled Spielman and his colleagues to compute such route changes much faster and, at the same time, to recalculate the entire network after each change. “We rejected Spielman’s approach of creating the most powerful algorithms we could for the entire network,” says Kyng. “Instead, we applied his idea of partial route computation to the earlier approaches of Hopcroft and Karp.” This computation of partial routes in each iteration played a major role in speeding up the overall flow computation.
    A turning point in theoretical principles
    Much of the ETH Zurich researchers’ progress comes down to the decision to extend their work beyond the development of new algorithms. The team also uses and designs new mathematical tools that speed up their algorithms even more. In particular, they have developed a new data structure for organising network data; this makes it possible to identify any change to a network connection extremely quickly, which in turn helps make the algorithmic solution so amazingly fast. With so many applications lined up for the almost-linear-time algorithms and for tools such as the new data structure, the overall innovation spiral could soon be turning much faster than before.
    Yet laying the foundations for solving very large problems that couldn’t previously be computed efficiently is only one benefit of these significantly faster flow algorithms — because they also change the way in which computers calculate complex tasks in the first place. “Over the past decade, there has been a revolution in the theoretical foundations for obtaining provably fast algorithms for foundational problems in theoretical computer science,” writes an international group of researchers from the University of California, Berkeley, which includes among its members Rasmus Kyng and Deeksha Adil, a researcher at the Institute for Theoretical Studies at ETH Zurich.

  •

    Visual explanations of machine learning models to estimate charge states in quantum dots

    A group of researchers has successfully demonstrated automatic charge state recognition in quantum dot devices using machine learning techniques, representing a significant step towards automating the preparation and tuning of quantum bits (qubits) for quantum information processing.
    Semiconductor qubits use semiconductor materials to create quantum bits. These materials are common in traditional electronics, making them integrable with conventional semiconductor technology. This compatibility is why scientists consider them strong candidates for future qubits in the quest to realize quantum computers.
    In semiconductor spin qubits, the spin state of an electron confined in a quantum dot serves as the fundamental unit of data, or the qubit. Forming these qubit states requires tuning numerous parameters, such as gate voltage, a task currently performed by human experts.
    However, as the number of qubits grows, tuning becomes more complex due to the excessive number of parameters. When it comes to realizing large-scale computers, this becomes problematic.
    “To overcome this, we developed a means of automating the estimation of charge states in double quantum dots, crucial for creating spin qubits where each quantum dot houses one electron,” points out Tomohiro Otsuka, an associate professor at Tohoku University’s Advanced Institute for Materials Research (WPI-AIMR).
    Using a charge sensor, Otsuka and his team obtained charge stability diagrams to identify optimal gate voltage combinations ensuring the presence of precisely one electron per dot. Automating this tuning process required developing an estimator capable of classifying charge states based on variations in charge transition lines within the stability diagram.
    To construct this estimator, the researchers employed a convolutional neural network (CNN) trained on data prepared using a lightweight simulation model: the Constant Interaction model (CI model). Pre-processing techniques enhanced data simplicity and noise robustness, optimizing the CNN’s ability to accurately classify charge states.
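    A minimal sketch of how such training data can be simulated, under a heavily simplified constant-interaction energy (the charging energies, coupling, and gate lever arms below are illustrative placeholders, not fitted device parameters): at each gate-voltage point, the ground state is the charge configuration (n1, n2) that minimises the electrostatic energy.

```python
# Toy constant-interaction (CI) model of a double quantum dot. For each pair
# of gate voltages, the ground state is the electron configuration (n1, n2)
# minimising a simplified electrostatic energy. All parameter values are
# illustrative, not measured device numbers.

def ground_state(vg1, vg2, ec1=1.0, ec2=1.0, em=0.3, max_n=3):
    """Return the charge configuration (n1, n2) with minimal CI energy."""
    best, best_e = None, float('inf')
    for n1 in range(max_n + 1):
        for n2 in range(max_n + 1):
            d1, d2 = n1 - vg1, n2 - vg2       # offset from gate-induced charge
            # ec1, ec2: charging energies; em: interdot coupling term.
            e = ec1 * d1**2 + ec2 * d2**2 + em * d1 * d2
            if e < best_e:
                best, best_e = (n1, n2), e
    return best

# A coarse charge stability diagram: sweep both gates and label each point
# with its ground-state configuration. A CNN estimator would be trained on
# many finer-grained simulated diagrams like this one.
diagram = [[ground_state(vg1, vg2) for vg1 in (0.2, 0.6, 1.0)]
           for vg2 in (0.2, 0.6, 1.0)]
print(diagram[0][0], diagram[2][2])   # (0, 0) at low gates, (1, 1) at high gates
```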
    Upon testing the estimator with experimental data, initial results showed effective estimation of most charge states, though some states exhibited higher error rates. To address this, the researchers used Grad-CAM visualization to uncover decision-making patterns within the estimator. They identified that errors were often caused by coincidentally connected noise being misinterpreted as charge transition lines. By adjusting the training data and refining the estimator’s structure, the researchers significantly improved accuracy for previously error-prone charge states while maintaining high performance for the others.
    “Utilizing this estimator means that parameters for semiconductor spin qubits can be automatically tuned, something necessary if we are to scale up quantum computers,” adds Otsuka. “Additionally, by visualizing the previously black-boxed decision basis, we have demonstrated that it can serve as a guideline for improving the estimator’s performance.”
    Details of the research were published in the journal APL Machine Learning on April 15, 2024.

  •

    Synthetic fuels and chemicals from CO2: Ten experiments in parallel

    Why do just one experiment at a time when you can do ten? Empa researchers have developed an automated system that allows them to research catalysts, electrodes, and reaction conditions for CO₂ electrolysis up to ten times faster. The system is complemented by open-source software for data analysis.
    If you mix fossil fuel with a little oxygen and add a spark, three things are produced: water, climate-warming carbon dioxide, and lots of energy. This fundamental chemical reaction takes place in every combustion engine, whether it runs on petrol, diesel, or kerosene. In theory, this reaction can be reversed: with the addition of (renewable) energy, previously released CO2 can be converted back into a (synthetic) fuel.
    This was the key idea behind the ETH Board funded Joint Initiative SynFuels. Researchers at Empa and the Paul Scherrer Institute (PSI) spent three years working on ways to produce synthetic fuels — known as synfuels — economically and efficiently from CO2. This reaction, however, comes with challenges: for one, CO2 electrolysis does not just yield the fuel that was previously burned. Rather, more than 20 different products can be simultaneously formed, and they are difficult to separate from each other.
    The composition of these products can be controlled in various ways, for example via the reaction conditions, the catalyst used, and the microstructure of the electrodes. The number of possible combinations is enormous and examining each one individually would take too long. How are scientists supposed to find the best one? Empa researchers have now accelerated this process by a factor of 10.
    Accelerating research
    As part of the SynFuels project, researchers led by Corsin Battaglia and Alessandro Senocrate from Empa’s Materials for Energy Conversion laboratory have developed a system that can be used to investigate up to ten different reaction conditions as well as catalyst and electrode materials simultaneously. The researchers have recently published the blueprint for the system and the accompanying software in the journal Nature Catalysis.
    The system consists of ten “reactors”: small chambers with catalysts and electrodes in which the reaction takes place. Each reactor is connected to multiple gas and liquid in- and outlets and various instruments via hundreds of meters of tubing. Numerous parameters are recorded fully automatically, such as the pressure, the temperature, gas flows, and the liquid and gaseous reaction products — all with high temporal resolution.

    “As far as we know, this is the first system of its kind for CO2 electrolysis,” says Empa postdoctoral researcher Alessandro Senocrate. “It yields a large number of high-quality datasets, which will help us make accelerated discoveries.” When the system was being developed, some of the necessary instruments were not even available on the market. In collaboration with the company Agilent Technologies, Empa researchers co-developed the world’s first online liquid chromatography device, which identifies and quantifies the liquid reaction products in real time during CO2 electrolysis.
    Sharing research data
    Conducting experiments ten times faster also generates ten times as much data. In order to analyze this data, the researchers have developed a software solution that they are making available to scientists at other institutions on an open-source basis. They also want to share the data itself with other researchers. “Today, research data often disappears in a drawer as soon as the results are published,” explains Corsin Battaglia, Head of Empa’s Materials for Energy Conversion laboratory. A joint research project between Empa, PSI and ETH Zurich, which bears the name PREMISE, aims to prevent this: “We want to create standardized methods for storing and sharing data,” says Battaglia. “Then other researchers can gain new insights from our data — and vice versa.”
    Open access to research data is also a priority in other research activities of the Materials for Energy Conversion laboratory. This includes the National Center of Competence in Research NCCR Catalysis, which focuses on sustainable chemistry. The new parallel CO2 electrolysis system is set to play an important role in the second phase of this large-scale national project, with both the data generated and the know-how made available to other Swiss research institutions. To this end, the Empa researchers will continue to refine both the hardware and the software in the future.

  •

    Light-controlled artificial maple seeds could monitor the environment even in hard-to-reach locations

    Researchers from Tampere University, Finland, and the University of Pittsburgh, USA, have developed a tiny robot replicating the aerial dance of falling maple seeds. In the future, this robot could be used for real-time environmental monitoring or delivery of small samples even in inaccessible terrain such as deserts, mountains or cliffs, or the open sea. This technology could be a game changer for fields such as search-and-rescue, endangered species studies, or infrastructure monitoring.
    At Tampere University, Professor Hao Zeng and Doctoral Researcher Jianfeng Yang work at the interface between physics, soft mechanics, and material engineering in their Light Robots research group. They have drawn inspiration from nature to design polymeric gliding structures that can be controlled using light.
    Now, Zeng and Yang, with Professor M. Ravi Shankar from the University of Pittsburgh Swanson School of Engineering, utilized a light-activated smart material to control the gliding mode of an artificial maple seed. In nature, maple seeds disperse to new growth sites with the help of the flying wings of their samara, or dry fruit. The wings help the seed to rotate as it falls, allowing it to glide in a gentle breeze. The configuration of these wings defines their glide path.
    According to the researchers, the artificial maple seed can be actively controlled using light, so that its dispersal in the wind can be tuned to achieve a range of gliding trajectories. In the future, it could also be equipped with various microsensors for environmental monitoring or be used to deliver, for example, small samples of soil.
    Hi-tech robot beats natural seed in adaptability
    The researchers were inspired by the variety of gliding seeds of Finnish trees, each exhibiting a unique and mesmerizing flight pattern. Their fundamental question was whether the structure of these seeds could be recreated using artificial materials to achieve a similar airborne elegance controlled by light.
    “The tiny light-controlled robots are designed to be released into the atmosphere, utilizing passive flight to disperse widely through interactions with surrounding airflows. Equipped with GPS and various sensors, they can provide real-time monitoring of local environmental indicators like pH levels and heavy metal concentrations,” explains Yang.

    Inspired by natural maple samara, the team created an azobenzene-based, light-deformable liquid crystal elastomer that achieves reversible photochemical deformation to finely tune its aerodynamic properties.
    “The artificial maple seeds outperform their natural counterparts in adjustable terminal velocity, rotation rate, and hovering positions, enhancing wind-assisted long-distance travel through self-rotation,” says Zeng.
    At the beginning of 2023, Zeng and Yang released their first, dandelion-seed-like mini robot within the project Flying Aero-robots based on Light Responsive Materials Assembly — FAIRY. The project, funded by the Research Council of Finland, started in September 2021 and will continue until August 2026.
    “Whether it is seeds or bacteria or insects, nature provides them with clever templates to move, feed and reproduce. Often this comes via a simple, but remarkably functional, mechanical design,” Shankar explains.
    “Thanks to advances in materials that are photosensitive, we are able to dictate mechanical behavior at almost the molecular level. We now have the potential to create micro robots, drones, and probes that can not only reach inaccessible areas but also relay critical information to the user. This could be a game changer for fields such as search-and-rescue, endangered or invasive species studies, or infrastructure monitoring,” he adds.

  •

    New deep-learning model outperforms Google AI system in predicting peptide structures

    Researchers at the University of Toronto have developed a deep-learning model, called PepFlow, that can predict all possible shapes of peptides — chains of amino acids that are shorter than proteins, but perform similar biological functions.
    PepFlow combines machine learning and physics to model the range of folding patterns that a peptide can assume based on its energy landscape. Peptides, unlike proteins, are very dynamic molecules that can take on a range of conformations.
    “We haven’t been able to model the full range of conformations for peptides until now,” said Osama Abdin, first author on the study and recent PhD graduate of molecular genetics at U of T’s Donnelly Centre for Cellular and Biomolecular Research. “PepFlow leverages deep-learning to capture the precise and accurate conformations of a peptide within minutes. There’s potential with this model to inform drug development through the design of peptides that act as binders.”
    The study was published today in the journal Nature Machine Intelligence.
    A peptide’s role in the human body is directly linked to how it folds, as its 3D structure determines the way it binds and interacts with other molecules. Peptides are known to be highly flexible, taking on a wide range of folding patterns, and are thus involved in many biological processes of interest to researchers in the development of therapeutics.
    “Peptides were the focus of the PepFlow model because they are very important biological molecules and they are naturally very dynamic, so we need to model their different conformations to understand their function,” said Philip M. Kim, principal investigator on the study and a professor at the Donnelly Centre. “They’re also important as therapeutics, as can be seen by the GLP1 analogues, like Ozempic, used to treat diabetes and obesity.”
    Peptides are also cheaper to produce than their larger protein counterparts, said Kim, who is also a professor of computer science at U of T’s Faculty of Arts & Science.

    The new model expands on the capabilities of the leading Google DeepMind AI system for predicting protein structure, AlphaFold. PepFlow can outperform AlphaFold2 by generating a range of conformations for a given peptide, which AlphaFold2 was not designed to do.
    What sets PepFlow apart is the technological innovations that power it. For instance, it is a generalized model that takes inspiration from Boltzmann generators, which are highly advanced physics-based machine learning models.
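    For intuition, a Boltzmann generator is a neural model trained to draw samples from the Boltzmann distribution p(x) ∝ exp(−E(x)/kT). The classical Metropolis sampler below targets the same kind of distribution on a toy double-well energy standing in for two peptide conformations; the energy function, temperature, and step size are invented for illustration and have nothing to do with PepFlow's actual architecture.

```python
# Minimal Metropolis sampler over a toy "conformation" coordinate x.
# A Boltzmann generator learns to sample p(x) ~ exp(-E(x)/kT) directly with a
# neural network; this classical sketch only illustrates that target
# distribution, using a made-up energy function.
import math, random

def energy(x):
    return (x**2 - 1.0)**2        # toy double well: minima at x = -1 and x = +1

def metropolis(n_steps, kT=0.3, step=0.5, seed=0):
    """Markov-chain sampler whose stationary distribution is exp(-E/kT)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis criterion: always accept downhill moves; accept uphill
        # moves with probability exp(-dE/kT).
        if rng.random() < math.exp(-(energy(x_new) - energy(x)) / kT):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis(20000)
# At this low temperature, samples cluster around the two energy minima,
# i.e. the two dominant "conformations".
frac_near_minima = sum(1 for x in samples if 0.5 < abs(x) < 1.5) / len(samples)
print(round(frac_near_minima, 2))
```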
    PepFlow can also model peptide structures that take on unusual formations, such as the ring-like structure that results from a process called macrocyclization. Peptide macrocycles are currently a highly promising avenue for drug development.
    While PepFlow improves upon AlphaFold2, it has limitations of its own as a first version of the model. The study authors noted a number of ways in which PepFlow could be improved, including training the model with explicit data for the solvent atoms that surround peptides in solution and for constraints on the distance between atoms in ring-like structures.
    PepFlow was built to be easily expanded to account for additional considerations and new information and potential uses. Even as a first version, PepFlow is a comprehensive and efficient model with potential for furthering the development of treatments that depend on peptide binding to activate or inhibit biological processes.
    “Modelling with PepFlow offers insight into the real energy landscape of peptides,” said Abdin. “It took two-and-a-half years to develop PepFlow and one month to train it, but it was worthwhile to move to the next frontier, beyond models that only predict one structure of a peptide.”

  •

    Understanding quantum states: New research shows importance of precise topography in solid neon qubits

    Quantum computers have the potential to be revolutionary tools for their ability to perform calculations that would take classical computers many years to resolve.
    But to make an effective quantum computer, you need a reliable quantum bit, or qubit, that can exist in a superposition of 0 and 1 states for a sufficiently long period, known as its coherence time.
    One promising approach is trapping a single electron on a solid neon surface, called an electron-on-solid-neon qubit. A study led by FAMU-FSU College of Engineering Professor Wei Guo that was published in Physical Review Letters shows new insight into the quantum state that describes the condition of electrons on such a qubit, information that can help engineers build this innovative technology.
    Guo’s team found that small bumps on the surface of solid neon in the qubit can naturally bind electrons, which creates ring-shaped quantum states of these electrons. The quantum state refers to the various properties of an electron, such as position, momentum and other characteristics, before they are measured. When the bumps are a certain size, the electron’s transition energy — the amount of energy required for an electron to move from one quantum ring state to another — aligns with the energy of microwave photons, another elementary particle.
    This alignment allows for controlled manipulation of the electron, which is needed for quantum computing.
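    The alignment amounts to the resonance condition E = h·f between the transition energy and the microwave photon energy. A small numerical sketch, using an illustrative (not measured) resonator frequency and linewidth:

```python
# Resonance condition for driving the electron: the ring-state transition
# energy must match the microwave photon energy E = h * f. The frequency and
# linewidth below are illustrative values, not measured device numbers.
h = 6.62607015e-34          # Planck constant in J*s (exact, SI definition)

def photon_energy(freq_hz):
    return h * freq_hz

def resonant(transition_energy_j, freq_hz, linewidth_j):
    """True if the photon energy lies within the transition linewidth."""
    return abs(photon_energy(freq_hz) - transition_energy_j) < linewidth_j

f = 6.4e9                                  # a typical microwave frequency scale
e_photon = photon_energy(f)                # ~4.2e-24 J (about 26 micro-eV)
print(resonant(e_photon, f, 1e-27))        # exactly on resonance -> True
print(resonant(1.1 * e_photon, f, 1e-27))  # detuned by 10% -> False
```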
    “This work significantly advances our understanding of the electron-trapping mechanism on a promising quantum computing platform,” Guo said. “It not only clarifies puzzling experimental observations but also delivers crucial insights for the design, optimization and control of electron-on-solid-neon qubits.”
    Previous work by Guo and collaborators demonstrated the viability of a solid-state single-electron qubit platform using electrons trapped on solid neon. Recent research showed coherence times as great as 0.1 millisecond, or 100 times longer than typical coherence times of 1 microsecond for conventional semiconductor-based and superconductor-based charge qubits.

    Coherence time determines how long a quantum system can maintain a superposition state — the ability of the system to be in multiple states at the same time until it is measured, which is one characteristic that gives quantum computers their unique abilities.
    The extended coherence time of the electron-on-solid-neon qubit can be attributed to the inertness and purity of solid neon. This qubit system also addresses the issue of liquid surface vibrations, a problem inherent in the more extensively studied electron-on-liquid-helium qubit. The current research offers crucial insights into optimizing the electron-on-solid-neon qubit further.
    A crucial part of that optimization is creating qubits that are smooth through most of the solid neon surface but have bumps of the right size where they are needed. Designers want minimal naturally occurring bumps on the surface that attract disruptive background electrical charge. At the same time, intentionally fabricating bumps of the correct size within the microwave resonator on the qubit improves the ability to trap electrons.
    “This research underscores the critical need for further study of how different conditions affect neon qubit manufacturing,” Guo said. “Neon injection temperatures and pressure influence the final qubit product. The more control we have over this process, the more precise we can build, and the closer we move to quantum computing that can solve currently unmanageable calculations.”
    Co-authors on this paper were Toshiaki Kanai, a former graduate research student in the FSU Department of Physics, and Dafei Jin, an associate professor at the University of Notre Dame.
    The research was supported by the National Science Foundation, the Gordon and Betty Moore Foundation, and the Air Force Office of Scientific Research.

  •

    Public perception of scientists’ credibility slips

    New analyses from the Annenberg Public Policy Center find that public perceptions of scientists’ credibility — measured as their competence, trustworthiness, and the extent to which they are perceived to share an individual’s values — remain high, but their perceived competence and trustworthiness eroded somewhat between 2023 and 2024. The research also found that public perceptions of scientists working in artificial intelligence (AI) differ from those of scientists as a whole.
    From 2018-2022, the Annenberg Public Policy Center (APPC) of the University of Pennsylvania relied on national cross-sectional surveys to monitor perceptions of science and scientists. In 2023-24, APPC moved to a nationally representative empaneled sample to make it possible to observe changes in individual perceptions.
    The February 2024 findings, released today to coincide with the address by National Academy of Sciences President Marcia McNutt on “The State of the Science,” come from an analysis of responses from an empaneled national probability sample of U.S. adults surveyed in February 2023 (n=1,638 respondents), November 2023 (n=1,538), and February 2024 (n=1,555).
    Drawing on the 2022 cross-sectional data, in an article titled “Factors Assessing Science’s Self-Presentation model and their effect on conservatives’ and liberals’ support for funding science,” published in Proceedings of the National Academy of Sciences (September 2023), Annenberg-affiliated researchers Yotam Ophir (State University of New York at Buffalo and an APPC distinguished research fellow), Dror Walter (Georgia State University and an APPC distinguished research fellow), and Patrick E. Jamieson and Kathleen Hall Jamieson of the Annenberg Public Policy Center isolated factors that underlie perceptions of scientists (Factors Assessing Science’s Self-Presentation, or FASS). These factors predict public support for increased funding of science and support for federal funding of basic research.
    The five factors in FASS are whether science and scientists are perceived to be credible and prudent, whether they are perceived to overcome bias and to correct error (self-correcting), and whether their work is seen to benefit people like the respondent and the country as a whole (beneficial). In a 2024 publication titled “The Politicization of Climate Science: Media Consumption, Perceptions of Science and Scientists, and Support for Policy” (May 26, 2024) in the Journal of Health Communication, the same team showed that these five factors mediate the relationship between exposure to media sources such as Fox News and outcomes such as belief in anthropogenic climate change, perception of the threat it poses, and support for climate-friendly policies such as a carbon tax.
    Speaking about the FASS model, Jamieson, director of the Annenberg Public Policy Center and director of the survey, said that “because our 13 core questions reliably reduce to five factors with significant predictive power, the ASK survey’s core questions make it possible to isolate both stability and changes in public perception of science and scientists across time.”
    The new research finds that while scientists are held in high regard, two of the three dimensions that make up credibility — perceptions of competence and trustworthiness — showed a small but statistically significant drop from 2023 to 2024, as did both measures of beneficial. The 2024 survey data also indicate that the public considers AI scientists less credible than scientists in general, with notably fewer people saying that AI scientists are competent and trustworthy and “share my values” than scientists generally.

    “Although confidence in science remains high overall, the survey reveals concerns about AI science,” Jamieson said. “The finding is unsurprising. Generative AI is an emerging area of science filled with both great promise and great potential peril.”
    The data are based on Annenberg Science Knowledge (ASK) waves of the Annenberg Science and Public Health (ASAPH) surveys conducted in 2023 and 2024. The findings labeled 2023 are based on a February 2023 survey, and the findings labeled 2024 are based on combined ASAPH survey half-samples surveyed in November 2023 and February 2024.
    For further details, download the toplines and a series of figures that accompany this summary.
    Perceptions of scientists overall
    In the FASS model, perceptions of scientists’ credibility are assessed through perceptions of whether scientists are competent, trustworthy, and “share my values.” The first two of those values slipped in the most recent survey. In 2024, 70% of those surveyed strongly or somewhat agree that scientists are competent (down from 77% in 2023) and 59% strongly or somewhat agree that scientists are trustworthy (down from 67% in 2023).
    The survey also found that in 2024, fewer people felt that scientists’ findings benefit “the country as a whole” and “benefit people like me.” In 2024, 66% strongly or somewhat agreed that findings benefit the country as a whole (down from 75% in 2023). Belief that scientists’ findings “benefit people like me” also declined, to 60% from 68%. Taken together, those two questions make up the beneficial factor of FASS.

    The findings follow sustained attacks on climate and Covid-19-related science and, more recently, public concerns about the rapid development and deployment of artificial intelligence.
    Comparing perceptions of scientists in general with climate and AI scientists
    Credibility: When asked about the three dimensions underlying scientists’ credibility, AI scientists score lower on all three. Competent: 70% strongly/somewhat agree that scientists in general are competent, compared with 62% for climate scientists and 49% for AI scientists. Trustworthy: 59% agree that scientists are trustworthy, 54% for climate scientists, and 28% for AI scientists. Share my values: More respondents (38%) agree that climate scientists share their values than agree for scientists in general (36%) or AI scientists (15%), and more people disagree with this statement for AI scientists (35%) than for the others.
    Prudence: Asked whether they agree or disagree that science by various groups of scientists “creates unintended consequences and replaces older problems with new ones,” over half of those surveyed (59%) agree that AI scientists create unintended consequences, and just 9% disagree.
    Overcoming bias: Just 42% of those surveyed agree that scientists “are able to overcome human and political biases,” but only 21% feel that way about AI scientists. In fact, 41% disagree that AI scientists are able to overcome human and political biases. In a related question, just 23% agree that AI scientists provide unbiased conclusions in their area of inquiry, while 38% disagree.
    Self-correction: Self-correction, or “organized skepticism expressed in expectations sustaining a culture of critique,” as the FASS paper puts it, is considered by some a “hallmark of science.” AI scientists are seen as less likely than scientists in general or climate scientists to take action to prevent fraud, to take responsibility for mistakes, or to have their mistakes caught by peer review.
    Benefits: Asked about the benefits from scientists’ findings, 60% agree that scientists’ “findings benefit people like me,” though just 44% agree for climate scientists and 35% for AI scientists. Asked about whether findings benefit the country as a whole, 66% agree for scientists, 50% for climate scientists and 41% for AI scientists.
    Your best interest: The survey also asked respondents how much trust they have in scientists to act in the best interest of people like them. (This specific trust measure is not part of the FASS battery.) Respondents have less trust in AI scientists than in others: 41% have a great deal/a lot of trust in medical scientists; 39% in climate scientists; 36% in scientists; and 12% in AI scientists.

  • in

    A chip-scale titanium-sapphire laser

    As lasers go, those made of titanium-sapphire (Ti:sapphire) are considered to have “unmatched” performance. They are indispensable in many fields, including cutting-edge quantum optics, spectroscopy, and neuroscience. But that performance comes at a steep price. Ti:sapphire lasers are big, on the order of cubic feet in volume. They are expensive, costing hundreds of thousands of dollars each. And they require other high-powered lasers, themselves costing $30,000 each, to supply them with enough energy to function.
    As a result, Ti:sapphire lasers have never achieved the broad, real-world adoption they deserve — until now. In a dramatic leap forward in scale, efficiency, and cost, researchers at Stanford University have built a Ti:sapphire laser on a chip. The prototype is four orders of magnitude smaller (10,000x) and three orders of magnitude less expensive (1,000x) than any Ti:sapphire laser ever produced.
    “This is a complete departure from the old model,” said Jelena Vučković, the Jensen Huang Professor in Global Leadership, a professor of electrical engineering, and senior author of the paper introducing the chip-scale Ti:sapphire laser published in the journal Nature. “Instead of one large and expensive laser, any lab might soon have hundreds of these valuable lasers on a single chip. And you can fuel it all with a green laser pointer.”
    Profound benefits
    “When you leap from tabletop size and make something producible on a chip at such a low cost, it puts these powerful lasers in reach for a lot of different important applications,” said Joshua Yang, a doctoral candidate in Vučković’s lab and co-first author of the study along with Vučković’s Nanoscale and Quantum Photonics Lab colleagues, research engineer Kasper Van Gasse and postdoctoral scholar Daniil M. Lukin.
    In technical terms, Ti:sapphire lasers are so valuable because they have the largest “gain bandwidth” of any laser crystal, explained Yang. In simple terms, a larger gain bandwidth means the laser can produce a broader range of colors than other lasers can. It’s also ultrafast, Yang said, emitting pulses of light every quadrillionth of a second.
    But Ti:sapphire lasers are also hard to come by. Even Vu?kovi?’s lab, which does cutting-edge quantum optics experiments, only has a few of these prized lasers to share. The new Ti:sapphire laser fits on a chip that is measured in square millimeters. If the researchers can mass-produce them on wafers, potentially thousands, perhaps tens-of-thousands of Ti:sapphire lasers could be squeezed on a disc that fits in the palm of a human hand.

    “A chip is light. It is portable. It is inexpensive and it is efficient. There are no moving parts. And it can be mass-produced,” Yang said. “What’s not to like? This democratizes Ti:sapphire lasers.”
    How it’s done
    To fashion the new laser, the researchers began with a bulk layer of titanium-sapphire on a platform of silicon dioxide (SiO2), all riding atop true sapphire crystal. They then ground, etched, and polished the Ti:sapphire to an extremely thin layer, just a few hundred nanometers thick. Into that thin layer, they patterned a swirling vortex of tiny ridges. These ridges act like fiber-optic cables, guiding the light around and around as it builds in intensity. The pattern is known as a waveguide.
    “Mathematically speaking, intensity is power divided by area. So, if you maintain the same power as the large-scale laser, but reduce the area in which it is concentrated, the intensity goes through the roof,” Yang says. “The small scale of our laser actually helps us make it more efficient.”
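    The scaling Yang describes can be sketched numerically. Here is a minimal Python illustration of the relation intensity = power / area; the power and area values below are purely illustrative and are not measurements from the Stanford device:

    ```python
    # Illustrative sketch of the relation Yang cites: intensity = power / area.
    # None of the numbers below come from the actual chip-scale laser.

    def intensity(power_watts: float, area_m2: float) -> float:
        """Optical intensity in W/m^2 for a given power spread over an area."""
        return power_watts / area_m2

    power = 1.0            # watts (illustrative)
    tabletop_area = 1e-6   # m^2, roughly a 1 mm^2 beam (illustrative)
    waveguide_area = 1e-12 # m^2, roughly a 1 um^2 guided mode (illustrative)

    # Holding power fixed while shrinking the area a million-fold
    # raises the intensity by the same factor of a million.
    ratio = intensity(power, waveguide_area) / intensity(power, tabletop_area)
    print(ratio)  # 1e6
    ```

    The point of the sketch is simply that confining the same optical power to a far smaller waveguide cross-section multiplies the intensity proportionally, which is why the chip-scale geometry helps efficiency.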
    The remaining piece of the puzzle is a microscale heater that warms the light traveling through the waveguides, allowing the Vučković team to change the wavelength of the emitted light to tune the color of the light anywhere between 700 and 1,000 nanometers — in the red to infrared.
    Spotlight on applications
    Vučković, Yang, and colleagues are most excited about the range of fields that such a laser might impact. In quantum physics, the new laser provides an inexpensive and practical solution that could dramatically scale down state-of-the-art quantum computers. In neuroscience, the researchers can foresee immediate application in optogenetics, a field that allows scientists to control neurons with light guided inside the brain by relatively bulky optical fiber. Small-scale lasers, they say, might be integrated into more compact probes, opening up new experimental avenues. In ophthalmology, it might find new use with Nobel Prize-winning chirped pulse amplification in laser surgery or offer less expensive, more compact optical coherence tomography technologies used to assess retinal health.
    Next up, the team is working on perfecting their chip-scale Ti:sapphire laser and on ways to mass-produce them, thousands at a time, on wafers. Yang will earn his doctorate this summer based on this research and is working to bring the technology to market.
    “We could put thousands of lasers on a single 4-inch wafer,” Yang says. “That’s when the cost per laser starts to become almost zero. That’s pretty exciting.”