More stories

  • Researchers gain deeper understanding of mechanism behind superconductors

    Physicists at Leipzig University have once again gained a deeper understanding of the mechanism behind superconductors. This brings the research group led by Professor Jürgen Haase one step closer to their goal of developing the foundations for a theory of superconductors that would allow current to flow without resistance and without energy loss. The researchers found that in superconducting copper-oxide compounds, called cuprates, there must be a very specific charge distribution between the copper and the oxygen, even under pressure.
    This confirmed their own findings from 2016, when Haase and his team developed an experimental method based on magnetic resonance that can measure changes that are relevant to superconductivity in the structure of materials. They were the first team in the world to identify a measurable material parameter that predicts the maximum possible transition temperature — a condition required to achieve superconductivity at room temperature. Now they have discovered that cuprates, which under pressure enhance superconductivity, follow the charge distribution predicted in 2016. The researchers have published their new findings in the journal PNAS.
    “The fact that the transition temperature of cuprates can be enhanced under pressure has puzzled researchers for 30 years. But until now we didn’t know which mechanism was responsible for this,” Haase said. He and his colleagues at the Felix Bloch Institute for Solid State Physics have now come a great deal closer to understanding the actual mechanism in these materials. “At Leipzig University — with support from the Graduate School Building with Molecules and Nano-objects (BuildMoNa) — we have established the basic conditions needed to research cuprates using nuclear resonance, and Michael Jurkutat was the first doctoral researcher to join us. Together, we established the Leipzig Relation, which says that you have to take electrons away from the oxygen in these materials and give them to the copper in order to increase the transition temperature. You can do this with chemistry, but also with pressure. But hardly anyone would have thought that we could measure all of this with nuclear resonance,” Haase said.
    Their current research finding could be exactly what is needed to produce a superconductor at room temperature, which has been the dream of many physicists for decades and is now expected to take only a few more years, according to Haase. To date, this has only been possible at very low temperatures around minus 150 degrees Celsius and below, which are not easy to find anywhere on Earth. About a year ago, a Canadian research group verified the findings of Professor Haase’s team from 2016 using newly developed, computer-aided calculations and thus substantiated the findings theoretically.
    Superconductivity is already used today in a variety of ways, for example, in magnets for MRI machines and in nuclear fusion. But it would be much easier and less expensive if superconductors operated at room temperature. The phenomenon of superconductivity was discovered in metals as early as 1911, but even Albert Einstein was unable to explain it at the time. Nearly half a century passed before BCS theory provided an understanding of superconductivity in metals in 1957. In 1986, the discovery of superconductivity in ceramic materials (cuprate superconductors) at much higher temperatures by physicists Georg Bednorz and Karl Alexander Müller raised new questions, but also raised hopes that superconductivity could be achieved at room temperature. More

  • Researchers use AI to triage patients with chest pain

    Artificial intelligence (AI) may help improve care for patients who show up at the hospital with acute chest pain, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).
    “To the best of our knowledge, our deep learning AI model is the first to utilize chest X-rays to identify individuals among acute chest pain patients who need immediate medical attention,” said the study’s lead author, Márton Kolossváry, M.D., Ph.D., radiology research fellow at Massachusetts General Hospital (MGH) in Boston.
    Acute chest pain syndrome may consist of tightness, burning or other discomfort in the chest, or a severe pain that spreads to the back, neck, shoulders, arms or jaw. It may be accompanied by shortness of breath.
    Acute chest pain syndrome accounts for over 7 million emergency department visits annually in the United States, making it one of the most common complaints.
    Fewer than 8% of these patients are diagnosed with one of the three major cardiovascular causes of acute chest pain syndrome: acute coronary syndrome, pulmonary embolism or aortic dissection. However, the life-threatening nature of these conditions and the low specificity of clinical tests, such as electrocardiograms and blood tests, lead to substantial use of cardiovascular and pulmonary diagnostic imaging, often yielding negative results. As emergency departments struggle with high patient numbers and a shortage of hospital beds, effectively triaging patients at very low risk of these serious conditions is important.
    Deep learning is an advanced type of artificial intelligence (AI) that can be trained to search X-ray images to find patterns associated with disease.

    For the study, Dr. Kolossváry and colleagues developed an open-source deep learning model to identify patients with acute chest pain syndrome who were at risk for 30-day acute coronary syndrome, pulmonary embolism, aortic dissection or all-cause mortality, based on a chest X-ray.
    The study used electronic health records of patients presenting with acute chest pain syndrome who had a chest X-ray and additional cardiovascular or pulmonary imaging and/or stress tests at MGH or Brigham and Women’s Hospital in Boston between January 2005 and December 2015. For the study, 5,750 patients (mean age 59, including 3,329 men) were evaluated.
    The deep-learning model was trained on 23,005 patients from MGH to predict a 30-day composite endpoint of acute coronary syndrome, pulmonary embolism, aortic dissection, or all-cause mortality based on chest X-ray images.
    The deep-learning tool significantly improved prediction of these adverse outcomes beyond age, sex and conventional clinical markers, such as d-dimer blood tests. The model maintained its diagnostic accuracy across age, sex, ethnicity and race. Using a 99% sensitivity threshold, the model was able to defer additional testing in 14% of patients as compared to 2% when using a model only incorporating age, sex, and biomarker data.
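    A 99% sensitivity threshold of this kind can be derived directly from a model's predicted probabilities. The following minimal sketch, using synthetic scores rather than any study data, picks the highest threshold that still catches 99% of adverse-outcome cases and reports what fraction of patients fall below it and could be deferred from additional testing.

      import numpy as np

      def deferral_rate_at_sensitivity(y_true, y_score, target_sensitivity=0.99):
          """Highest threshold keeping sensitivity >= target, and the share of
          patients scoring below it (candidates for deferring further testing)."""
          thresholds = np.sort(np.unique(y_score))[::-1]   # scan from high to low
          positives = y_true.sum()
          for t in thresholds:
              flagged = y_score >= t
              sensitivity = (y_true & flagged).sum() / positives
              if sensitivity >= target_sensitivity:
                  return t, (~flagged).mean()
          return thresholds[-1], 0.0

      # Toy cohort: ~8% adverse outcomes, noisy risk scores (synthetic, for illustration only).
      rng = np.random.default_rng(0)
      y_true = rng.random(5000) < 0.08
      y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.15, 5000), 0, 1)

      threshold, deferred = deferral_rate_at_sensitivity(y_true, y_score)
      print(f"threshold={threshold:.3f}, deferrable fraction={deferred:.1%}")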
    “Analyzing the initial chest X-ray of these patients using our automated deep learning model, we were able to provide more accurate predictions regarding patient outcomes as compared to a model that uses age, sex, troponin or d-dimer information,” Dr. Kolossváry said. “Our results show that chest X-rays could be used to help triage chest pain patients in the emergency department.”
    According to Dr. Kolossváry, such an automated model could one day analyze chest X-rays in the background, helping to flag the patients who would benefit most from immediate medical attention and to identify those who can be safely discharged from the emergency department.
    The study is titled “Deep Learning Analysis of Chest Radiographs to Triage Patients with Acute Chest Pain Syndrome.” Collaborating with Dr. Kolossváry were Vineet K. Raghu, Ph.D., John T. Nagurney, M.D., Udo Hoffmann, M.D., M.P.H., and Michael T. Lu, M.D., M.P.H. More

  • Novel framework provides 'measuring stick' for assessing patient matching tools

    Accurate linking of an individual’s medical records from disparate sources within and between health systems, known as patient matching, plays a critical role in patient safety and quality of care, but has proven difficult to accomplish in the United States, the last developed country without a unique patient identifier. In the U.S., linking patient data is dependent on algorithms designed by researchers, vendors and others. Research scientists led by Regenstrief Institute Vice President for Data and Analytics Shaun Grannis, M.D., M.S., have developed an eight-point framework for evaluating the validity and performance of algorithms to match medical records to the correct patient.
    “The value of data standardization is well recognized. There are national healthcare provider IDs. There are facility IDs and object identifiers. There are billing codes. There are standard vocabularies for healthcare lab test results and medical observations — such as LOINC® here at Regenstrief. Patient identity is the last gaping hole in our health infrastructure,” said Dr. Grannis. “We are providing a framework to evaluate patient matching algorithms for accuracy.
    “We recognize that the need for patient matching is not going away and that we need standardized methods to uniquely identify patients,” said Dr. Grannis. “Current patient matching algorithms come in many different flavors, shapes and sizes. To be able to compare how one performs against the other, or even to understand how they might interact together, we have to have a standard way of assessment. We have produced a novel, robust framework for consistent and reproducible evaluation. Simply put, the framework we’ve developed at Regenstrief provides a ‘measuring stick’ for the effectiveness of patient matching tools.”
    Individuals increasingly receive care from multiple sources. While patient matching is complex, it is crucial to health information exchange. Is the William Jones seen at one healthcare system the same person as the William, Will or Willy Jones or perhaps Bill or Billy Jones receiving care at other facilities? Does Elizabeth Smith’s name appear at different medical offices or perhaps at a physical therapy or a dialysis facility as Liz or Beth? To which Juan J. Gomez do various lab test results belong? Typos, missing information and other data errors as well as typical variations add to the complexity.
    The framework’s eight-point approach to the creation of gold standard matching data sets necessary for record linkage encompasses technical areas including data preprocessing, blocking, record adjudication, linkage evaluation and reviewer characteristics. The authors note that the framework “can help record linkage method developers provide necessary transparency when creating and validating gold standard reference matching data sets. In turn, this transparency will support both the internal and external validity of record linkage studies and improve the robustness of new record linkage strategies.”
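    The framework itself is a set of reporting criteria rather than software, but the kind of pipeline it asks developers to document can be sketched in a few lines: block candidate pairs on a rough key, apply a comparison rule, and score the predicted links against a gold-standard reference set. The toy records, field names and matching rules below are purely hypothetical.

      from itertools import combinations

      # Hypothetical patient records from two sources.
      records = [
          {"id": 1, "first": "William", "last": "Jones", "dob": "1980-05-02"},
          {"id": 2, "first": "Will",    "last": "Jones", "dob": "1980-05-02"},
          {"id": 3, "first": "Liz",     "last": "Smith", "dob": "1975-11-30"},
          {"id": 4, "first": "Beth",    "last": "Smith", "dob": "1975-11-30"},
          {"id": 5, "first": "Juan",    "last": "Gomez", "dob": "1990-01-15"},
      ]

      def block_key(r):
          # Blocking: only records sharing last name and birth date are compared.
          return (r["last"].lower(), r["dob"])

      def is_match(a, b):
          # Crude comparison rule: same block and matching first initial.
          return block_key(a) == block_key(b) and a["first"][0].lower() == b["first"][0].lower()

      predicted = {frozenset((a["id"], b["id"]))
                   for a, b in combinations(records, 2) if is_match(a, b)}

      # Gold-standard true links, as a reference matching data set would define them.
      gold = {frozenset((1, 2)), frozenset((3, 4))}

      tp = len(predicted & gold)
      precision = tp / len(predicted) if predicted else 0.0
      recall = tp / len(gold)
      print(f"precision={precision:.2f}, recall={recall:.2f}")   # the crude rule misses Liz/Beth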
    Measures and standards are ubiquitous. “When you go to a gas station pump, the measure of how much gas goes through is standardized so that we know exactly how much is flowing. Similarly, we need to have a common way of measuring and understanding how algorithms for patient matching work,” said Dr. Grannis. “Our eight-pronged approach helps to cover the waterfront of what needs to be evaluated. Laying out the framework and specifying the tasks and activities that need to be completed goes a long way toward standardizing patient matching.”
    In addition to playing a critical role in patient safety and quality of care, improved patient matching accuracy supports more cost-effective healthcare delivery in a variety of ways including reduction in the number of duplicate medical tests. More

  • New small laser device can help detect signs of life on other planets

    As space missions delve deeper into the outer solar system, the need for more compact, resource-conserving and accurate analytical tools has become increasingly critical — especially as the hunt for extraterrestrial life and habitable planets or moons continues.
    A University of Maryland-led team developed a new instrument specifically tailored to the needs of NASA space missions. Their mini laser-sourced analyzer is significantly smaller and more resource-efficient than its predecessors — all without compromising its ability to analyze planetary material samples and potential biological activity onsite. The team’s paper on this new device was published in the journal Nature Astronomy on January 16, 2023.
    Weighing only about 17 pounds, the instrument is a physically scaled-down combination of two important tools for detecting signs of life and identifying compositions of materials: a pulsed ultraviolet laser that removes small amounts of material from a planetary sample and an Orbitrap™ analyzer that delivers high-resolution data about the chemistry of the examined materials.
    “The Orbitrap was originally built for commercial use,” explained Ricardo Arevalo, lead author of the paper and an associate professor of geology at UMD. “You can find them in the labs of pharmaceutical, medical and proteomic industries. The one in my own lab is just under 400 pounds, so they’re quite large, and it took us eight years to make a prototype that could be used efficiently in space — significantly smaller and less resource-intensive, but still capable of cutting-edge science.”
    The team’s new gadget shrinks down the original Orbitrap while pairing it with laser desorption mass spectrometry (LDMS) — techniques that have yet to be applied in an extraterrestrial planetary environment. The new device boasts the same benefits as its larger predecessors but is streamlined for space exploration and onsite planetary material analysis, according to Arevalo.
    Thanks to its diminutive mass and minimal power requirements, the mini Orbitrap LDMS instrument can be easily stowed away and maintained on space mission payloads. The instrument’s analyses of a planetary surface or substance are also far less intrusive and thus much less likely to contaminate or damage a sample than many current methods that attempt to identify unknown compounds.

    “The good thing about a laser source is that anything that can be ionized can be analyzed. If we shoot our laser beam at an ice sample, we should be able to characterize the composition of the ice and see biosignatures in it,” Arevalo said. “This tool has such a high mass resolution and accuracy that any molecular or chemical structures in a sample become much more identifiable.”
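    To make “high mass resolution” concrete, the short calculation below works through a textbook example that is not from the paper: carbon monoxide and molecular nitrogen both have a nominal mass of 28 Da, and telling their peaks apart requires a resolving power m/Δm of only a few thousand, far below what Orbitrap-class analyzers achieve.

      # Monoisotopic masses in daltons (standard reference values, not instrument data).
      MASS = {"C": 12.0000, "O": 15.9949, "N": 14.0031}

      m_co = MASS["C"] + MASS["O"]       # carbon monoxide, CO
      m_n2 = 2 * MASS["N"]               # molecular nitrogen, N2

      delta_m = abs(m_n2 - m_co)
      resolving_power = m_co / delta_m   # m / delta-m needed to separate the two peaks

      print(f"CO = {m_co:.4f} Da, N2 = {m_n2:.4f} Da")
      print(f"required resolving power ≈ {resolving_power:,.0f}")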
    The laser component of the mini LDMS Orbitrap also allows researchers access to larger, more complex compounds that are more likely to be associated with biology. Smaller organic compounds like amino acids, for example, are more ambiguous signatures of life forms.
    “Amino acids can be produced abiotically, meaning that they’re not necessarily proof of life. Meteorites, many of which are chock full of amino acids, can crash onto a planet’s surface and deliver abiotic organics to the surface,” Arevalo said. “We know now that larger and more complex molecules, like proteins, are more likely to have been created by or associated with living systems. The laser lets us study larger and more complex organics that can reflect higher fidelity biosignatures than smaller, simpler compounds.”
    For Arevalo and his team, the mini LDMS Orbitrap will offer much-needed insight and flexibility for future ventures into the outer solar system, such as missions focused on life detection objectives (e.g., Enceladus Orbilander) and exploration of the lunar surface (e.g., the NASA Artemis Program). They hope to send their device into space and deploy it on a planetary target of interest within the next few years.
    “I view this prototype as a pathfinder for other future LDMS and Orbitrap-based instruments,” Arevalo said. “Our mini Orbitrap LDMS instrument has the potential to significantly enhance the way we currently study the geochemistry or astrobiology of a planetary surface.”
    Other UMD-affiliated researchers on the team include geology graduate students Lori Willhite and Ziqin “Grace” Ni, geology postdoctoral associates Anais Bardyn and Soumya Ray, and astronomy visiting associate research engineer Adrian Southard.
    This study was supported by NASA (Award Nos. 80NSSC19K0610, 80NSSC19K0768, 80GSFC21M0002), NASA Goddard Space Flight Center Internal Research Development (IRAD), and the University of Maryland Faculty Incentive Program. More

  • Blocking radio waves and electromagnetic interference with the flip of a switch

    Researchers in Drexel University’s College of Engineering have developed a thin film device, fabricated by spray coating, that can block electromagnetic radiation with the flip of a switch. The breakthrough, enabled by versatile two-dimensional materials called MXenes, could adjust the performance of electronic devices, strengthen wireless connections and secure mobile communications against intrusion.
    The team, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, previously demonstrated that the two-dimensional layered MXene materials, discovered just over a decade ago, when combined with an electrolyte solution, can be turned into a potent active shield against electromagnetic waves. This latest MXene discovery, reported in Nature Nanotechnology, shows how this shielding can be tuned when a small voltage — less than that produced by an alkaline battery — is applied.
    “Dynamic control of electromagnetic wave jamming has been a significant technological challenge for protecting electronic devices working at gigahertz frequencies and a variety of other communications technologies,” Gogotsi said. “As the number of wireless devices being used in industrial and private sectors has increased by orders of magnitude over the past decade, the urgency of this challenge has grown accordingly. This is why our discovery — which would dynamically mitigate the effect of electromagnetic interference on these devices — could have a broad impact.”
    MXene is a unique material in that it is highly conductive — making it perfectly suited for reflecting microwave radiation that could cause static or feedback, or diminish the performance of communications devices — but its internal chemical structure can also be temporarily altered to allow these electromagnetic waves to pass through.
    This means that a thin coating on a device or its electrical components prevents them both from emitting electromagnetic waves and from being penetrated by those emitted by other electronics. Eliminating the possibility of interference from both internal and external sources can ensure the performance of the device, but some waves must be allowed to exit and enter when it is being used for communication.
    “Without being able to control the ebb and flow of electromagnetic waves within and around a device, it’s a bit like a leaky faucet — you’re not really turning off the water and that constant dripping is no good,” Gogotsi said. “Our shielding ensures the plumbing is tight — so-to-speak — no electromagnetic radiation is leaking out or getting in until we want to use the device.”
    The key to eliciting bidirectional tunability of MXene’s shielding property is using the flow and expulsion of ions to alternately expand and compress the space between the material’s layers, like an accordion, as well as to change the surface chemistry of MXenes.

    With a small voltage applied to the film, ions enter — or intercalate — between the MXene layers, altering the charge of their surfaces and inducing electrostatic attraction, which in turn changes the layer spacing, the conductivity and the shielding efficiency of the material. When the ions are deintercalated as the current is switched off, the MXene layers return to their original state.
    The team tested 10 different MXene-electrolyte combinations, applying each via paint sprayer in a layer about 30 to 100 times thinner than a human hair. The materials consistently demonstrated the dynamic tunability of shielding efficiency in blocking microwave radiation, which is impossible for traditional metals like copper and steel. And the device sustained the performance through more than 500 charge-discharge cycles.
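    Shielding effectiveness is conventionally quoted in decibels, and the switching behavior is easiest to picture as the fraction of incident power that gets through in each state. The SE values in the sketch below are round numbers chosen only to illustrate the conversion; they are not measurements from the study.

      def transmitted_fraction(se_db: float) -> float:
          """Fraction of incident power passing a shield, from
          SE(dB) = 10 * log10(P_incident / P_transmitted)."""
          return 10 ** (-se_db / 10)

      # Hypothetical "blocking" and "transparent" states of a voltage-switched shield.
      for label, se_db in [("shielding on ", 30.0), ("shielding off", 5.0)]:
          frac = transmitted_fraction(se_db)
          print(f"{label}: SE = {se_db:4.1f} dB -> {frac:.2%} of power transmitted")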
    “These results indicate that the MXene films can convert from electromagnetic interference shielding to quasi-electromagnetic wave transmission by electrochemical oxidation of MXenes,” Gogotsi and his co-authors wrote. “The MXene film can potentially serve as a dynamic EMI shielding switch.”
    For security applications, Gogotsi suggests that the MXene shielding could hide devices from detection by radar or other tracing systems. The team also tested the potential of a one-way shielding switch. This would allow a device to remain undetectable and protected from unauthorized access until it is deployed for use.
    “A one-way switch could open the protection and allow a signal to be sent or communication to be opened in an emergency or at the required moment,” Gogotsi said. “This means it could protect communications equipment from being influenced or tampered with until it is in use. For example, it could encase the device during transportation or storage and then activate only when it is ready to be used.”
    The next step for Gogotsi’s team is to explore additional MXene-electrolyte combinations and mechanisms to fine-tune the shielding to achieve a stronger modulation of electromagnetic wave transmission and dynamic adjustment to block radiation at a variety of bandwidths. More

  • COVID calculations spur solution to old problem in computer science

    During the coronavirus epidemic, many of us became amateur mathematicians. How quickly would the number of hospitalized patients rise, and when would herd immunity be achieved? Professional mathematicians were challenged as well, and a researcher at the University of Copenhagen was inspired to solve a 30-year-old problem in computer science. The breakthrough has just been published in the Journal of the ACM (Association for Computing Machinery).
    “Like many others, I was out to calculate how the epidemic would develop. I wanted to investigate certain ideas from theoretical computer science in this context. However, I realized that the lack of a solution to the old problem was a showstopper,” says Joachim Kock, Associate Professor at the Department of Mathematics, University of Copenhagen.
    His solution to the problem can be of use in epidemiology and computer science, and potentially in other fields as well. A common feature for these fields is the presence of systems where the various components exhibit mutual influence. For instance, when a healthy person meets a person infected with COVID, the result can be two people infected.
    Smart method invented by German teenager
    To understand the breakthrough, one needs to know that such complex systems can be described mathematically through so-called Petri nets. The method was invented in 1939 by the German Carl Adam Petri, then just 13 years old, for chemistry applications. Just as a healthy person meeting a person infected with COVID can trigger a change, the same may happen when two chemical substances mix and react.
    In a Petri net the various components are drawn as circles while events such as a chemical reaction or an infection are drawn as squares. Next, circles and squares are connected by arrows which show the interdependencies in the system.

    A simple version of a Petri net for COVID infection. The starting point is a non-infected person. “S” denotes “susceptible.” Contact with an infected person (“I”) is an event which leads to two persons being infected. Later another event will happen, removing a person from the group of infected. Here, “R” denotes “recovered” which in this context could be either cured or dead. Either outcome would remove the person from the infected group.
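    The net in the caption above is small enough to write out directly: places S, I and R hold tokens (individuals), the infection transition consumes one S-token and one I-token and produces two I-tokens, and the recovery transition moves a token from I to R. The step-by-step firing below is a minimal, individual-oriented illustration, not the formalism developed in the paper.

      # Places hold counts of individuals (tokens); transitions consume and produce them.
      marking = {"S": 4, "I": 1, "R": 0}

      transitions = {
          "infection": {"in": {"S": 1, "I": 1}, "out": {"I": 2}},
          "recovery":  {"in": {"I": 1},         "out": {"R": 1}},
      }

      def enabled(name, m):
          return all(m[p] >= n for p, n in transitions[name]["in"].items())

      def fire(name, m):
          assert enabled(name, m), f"{name} is not enabled"
          for p, n in transitions[name]["in"].items():
              m[p] -= n
          for p, n in transitions[name]["out"].items():
              m[p] += n

      fire("infection", marking)   # one contact event:  S=3, I=2, R=0
      fire("recovery", marking)    # one recovery event: S=3, I=1, R=1
      print(marking)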
    Computer scientists regarded the problem as unsolvable
    In chemistry, Petri nets are applied to calculate how the concentrations of various chemical substances in a mixture will evolve. This manner of thinking has influenced the use of Petri nets in other fields such as epidemiology: we start out with a high “concentration” of uninfected people, after which the “concentration” of infected people begins to rise. In computer science, the use of Petri nets is somewhat different: the focus is on individuals rather than concentrations, and the development happens in steps rather than continuously.
    What Joachim Kock had in mind was to apply the more individual-oriented Petri nets from computer science for COVID calculations. This was when he encountered the old problem:
    “Basically, the processes in a Petri net can be described through two separate approaches. The first approach regards a process as a series of events, while the second approach sees the net as a graphical expression of the interdependencies between components and events,” says Joachim Kock, adding:
    “The serial approach is well suited for performing calculations. However, it has a downside since it describes causalities less accurately than the graphical approach. Further, the serial approach tends to fall short when dealing with events that take place simultaneously.”

    “The problem was that nobody had been able to unify the two approaches. The computer scientists had more or less given up, regarding the problem as unsolvable. This was because no one had realized that you need to go all the way back and revise the very definition of a Petri net,” says Joachim Kock.
    Small modification with large impact
    The Danish mathematician realized that a minor modification to the definition of a Petri net would enable a solution to the problem:
    “Allowing parallel arrows, rather than just counting them and writing down a number, makes additional information available. With that in place, things work out and the two approaches can be unified.”
    The exact mathematical reason why this additional information matters is complex, but can be illustrated by an analogy:
    “Assigning numbers to objects has helped humanity greatly. For instance, it is highly practical that I can arrange the right number of chairs in advance for a dinner party instead of having to experiment with different combinations of chairs and guests after they have arrived. However, the number of chairs and guests does not reveal who will be sitting where. Some information is lost when we consider numbers instead of the real objects.”
    Similarly, information is lost when the individual arrows of the Petri net are replaced by a number.
    “It takes a bit more effort to treat the parallel arrows individually, but one is amply rewarded as it becomes possible to combine the two approaches so that the advantages of both can be obtained simultaneously.”
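    The distinction is easiest to see in how a net's arcs are stored. A conventional Petri net only remembers how many arrows run between a place and a transition; keeping each parallel arrow as an individual object preserves exactly the extra bookkeeping that lets a process record which arrow carried which token. The sketch below is only an illustration of that idea, not the paper's categorical construction.

      from collections import Counter

      # Conventional encoding: arcs as multiplicities (a count per place/transition pair).
      arcs_counted = {("S", "infection"): 1, ("I", "infection"): 1, ("infection", "I"): 2}

      # Modified encoding: every parallel arrow is its own named object.
      arcs_individual = [
          {"id": "a1", "src": "S", "dst": "infection"},
          {"id": "a2", "src": "I", "dst": "infection"},
          {"id": "a3", "src": "infection", "dst": "I"},
          {"id": "a4", "src": "infection", "dst": "I"},
      ]

      # The multiplicities are recoverable from the individual arrows...
      assert Counter((a["src"], a["dst"]) for a in arcs_individual) == Counter(arcs_counted)

      # ...but only the individual encoding can record which output arrow delivered
      # which token when an infection event fires, e.g.:
      delivery = {"token_7": "a3", "token_8": "a4"}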
    The circle back to COVID has been closed
    The solution helps our mathematical understanding of how to describe complex systems with many interdependencies, but will not have much practical effect on the daily work of computer scientists using Petri nets, according to Joachim Kock:
    “This is because the necessary modifications are mostly backward-compatible and can be applied without revising the entire Petri net theory.”
    “Somewhat surprisingly, some epidemiologists have started using the revised Petri nets. So, one might say the circle has been closed!”
    Joachim Kock does see a further point to the story:
    “I wasn’t out to find a solution to the old problem in computer science at all. I just wanted to do COVID calculations. This was a bit like looking for your pen but realizing that you must find your glasses first. So, I would like to take the opportunity to advocate the importance of research which does not have a predefined goal. Sometimes research driven by curiosity will lead to breakthroughs.” More

  • Clinical trial results indicate low rate of adverse events associated with implanted brain computer interface

    For people with paralysis caused by neurologic injury or disease — such as ALS (also known as Lou Gehrig’s disease), stroke, or spinal cord injury — brain-computer interfaces (BCIs) have the potential to restore communication, mobility, and independence by transmitting information directly from the brain to a computer or other assistive technology.
    Although implanted brain sensors, the core component of many brain-computer interfaces, have been used in neuroscientific studies with animals for decades and have been approved for short-term use… More

  • AI discovers new nanostructures

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have successfully demonstrated that autonomous methods can discover new materials. The artificial intelligence (AI)-driven technique led to the discovery of three new nanostructures, including a first-of-its-kind nanoscale “ladder.” The research was published today in Science Advances.
    The newly discovered structures were formed by a process called self-assembly, in which a material’s molecules organize themselves into unique patterns. Scientists at Brookhaven’s Center for Functional Nanomaterials (CFN) are experts at directing the self-assembly process, creating templates for materials to form desirable arrangements for applications in microelectronics, catalysis, and more. Their discovery of the nanoscale ladder and other new structures further widens the scope of self-assembly’s applications.
    “Self-assembly can be used as a technique for nanopatterning, which is a driver for advances in microelectronics and computer hardware,” said CFN scientist and co-author Gregory Doerk. “These technologies are always pushing for higher resolution using smaller nanopatterns. You can get really small and tightly controlled features from self-assembling materials, but they do not necessarily obey the kind of rules that we lay out for circuits, for example. By directing self-assembly using a template, we can form patterns that are more useful.”
    Staff scientists at CFN, which is a DOE Office of Science User Facility, aim to build a library of self-assembled nanopattern types to broaden their applications. In previous studies, they demonstrated that new types of patterns are made possible by blending two self-assembling materials together.
    “The fact that we can now create a ladder structure, which no one has ever dreamed of before, is amazing,” said CFN group leader and co-author Kevin Yager. “Traditional self-assembly can only form relatively simple structures like cylinders, sheets, and spheres. But by blending two materials together and using just the right chemical grating, we’ve found that entirely new structures are possible.”
    Blending self-assembling materials together has enabled CFN scientists to uncover unique structures, but it has also created new challenges. With many more parameters to control in the self-assembly process, finding the right combination of parameters to create new and useful structures is a battle against time. To accelerate their research, CFN scientists leveraged a new AI capability: autonomous experimentation.

    In collaboration with the Center for Advanced Mathematics for Energy Research Applications (CAMERA) at DOE’s Lawrence Berkeley National Laboratory, Brookhaven scientists at CFN and the National Synchrotron Light Source II (NSLS-II), another DOE Office of Science User Facility at Brookhaven Lab, have been developing an AI framework that can autonomously define and perform all the steps of an experiment. CAMERA’s gpCAM algorithm drives the framework’s autonomous decision-making. The latest research is the team’s first successful demonstration of the algorithm’s ability to discover new materials.
    “gpCAM is a flexible algorithm and software for autonomous experimentation,” said Berkeley Lab scientist and co-author Marcus Noack. “It was used particularly ingeniously in this study to autonomously explore different features of the model.”
    “With help from our colleagues at Berkeley Lab, we had this software and methodology ready to go, and now we’ve successfully used it to discover new materials,” Yager said. “We’ve now learned enough about autonomous science that we can take a materials problem and convert it into an autonomous problem pretty easily.”
    To accelerate materials discovery using their new algorithm, the team first developed a complex sample with a spectrum of properties for analysis. Researchers fabricated the sample using the CFN nanofabrication facility and carried out the self-assembly in the CFN material synthesis facility.
    “An old school way of doing material science is to synthesize a sample, measure it, learn from it, and then go back and make a different sample and keep iterating that process,” Yager said. “Instead, we made a sample that has a gradient of every parameter we’re interested in. That single sample is thus a vast collection of many distinct material structures.”
    Then, the team brought the sample to NSLS-II, which generates ultrabright x-rays for studying the structure of materials. CFN operates three experimental stations in partnership with NSLS-II, one of which, the Soft Matter Interfaces (SMI) beamline, was used in this study.

    “One of the SMI beamline’s strengths is its ability to focus the x-ray beam on the sample down to microns,” said NSLS-II scientist and co-author Masa Fukuto. “By analyzing how these microbeam x-rays get scattered by the material, we learn about the material’s local structure at the illuminated spot. Measurements at many different spots can then reveal how the local structure varies across the gradient sample. In this work, we let the AI algorithm pick, on the fly, which spot to measure next to maximize the value of each measurement.”
    As the sample was measured at the SMI beamline, the algorithm, without human intervention, created a model of the material’s numerous and diverse structures. The model updated itself with each subsequent x-ray measurement, making every measurement more insightful and accurate.
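    The decision loop described here (measure a spot, update a surrogate model, then choose the most informative spot to measure next) can be sketched with a generic Gaussian-process loop. This is not the gpCAM interface, and the synthetic measurement function below stands in for the beamline; gpCAM also offers more sophisticated acquisition rules than the pure uncertainty-seeking one used here.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def measure(x):
          # Stand-in for a real scattering measurement on the gradient sample.
          return np.sin(3 * x) * np.exp(-x) + 0.01 * np.random.randn()

      candidates = np.linspace(0, 5, 200).reshape(-1, 1)   # positions we could probe
      X, y = [[0.0]], [measure(0.0)]                       # seed measurement

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)

      for _ in range(20):                                  # autonomous loop
          gp.fit(np.array(X), np.array(y))
          _, std = gp.predict(candidates, return_std=True)
          next_x = candidates[np.argmax(std)]              # most uncertain position next
          X.append(list(next_x))
          y.append(measure(next_x[0]))

      print(f"autonomously measured {len(X)} positions")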
    In a matter of hours, the algorithm had identified three key areas in the complex sample for the CFN researchers to study more closely. They used the CFN electron microscopy facility to image those key areas in exquisite detail, uncovering the rails and rungs of a nanoscale ladder, among other novel features.
    From start to finish, the experiment ran about six hours. The researchers estimate they would have needed about a month to make this discovery using traditional methods.
    “Autonomous methods can tremendously accelerate discovery,” Yager said. “It’s essentially ‘tightening’ the usual discovery loop of science, so that we cycle between hypotheses and measurements more quickly. Beyond just speed, however, autonomous methods increase the scope of what we can study, meaning we can tackle more challenging science problems.”
    “Moving forward, we want to investigate the complex interplay among multiple parameters. We conducted simulations using the CFN computer cluster that verified our experimental results, but they also suggested that other parameters, such as film thickness, play an important role,” Doerk said.
    The team is actively applying their autonomous research method to even more challenging material discovery problems in self-assembly, as well as other classes of materials. Autonomous discovery methods are adaptable and can be applied to nearly any research problem.
    “We are now deploying these methods to the broad community of users who come to CFN and NSLS-II to conduct experiments,” Yager said. “Anyone can work with us to accelerate the exploration of their materials research. We foresee this empowering a host of new discoveries in the coming years, including in national priority areas like clean energy and microelectronics.”
    This research was supported by the DOE Office of Science. More