More stories

  •

    Patterning method could pave the way for new fiber-based devices, smart textiles

    Multimaterial fibers that integrate metal, glass and semiconductors could be useful for applications such as biomedicine, smart textiles and robotics. But because the fibers are composed of the same materials along their lengths, it is difficult to position functional elements, such as electrodes or sensors, at specific locations. Now, researchers reporting in ACS Central Science have developed a method to pattern hundreds-of-meters-long multimaterial fibers with embedded functional elements.


    Youngbin Lee, Polina Anikeeva and colleagues developed a thiol-epoxy/thiol-ene polymer that could be combined with other materials, heated and drawn from a macroscale model into fibers that were coated with the polymer. When exposed to ultraviolet light, the polymer, which is photosensitive, crosslinked into a network that was insoluble to common solvents, such as acetone. By placing “masks” at specific locations along the fiber in a process known as photolithography, the researchers could protect the underlying areas from UV light. Then, they removed the masks and treated the fiber with acetone. The polymer in the areas that had been covered dissolved to expose the underlying materials.
    As a proof of concept, the researchers made patterns along fibers that exposed an electrically conducting filament underneath the thiol-epoxy/thiol-ene coating. The remaining polymer acted as an insulator along the length of the fiber. In this way, electrodes or other microdevices could be placed in customizable patterns along multimaterial fibers, the researchers say.


    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

    Journal Reference:
    Youngbin Lee, Andres Canales, Gabriel Loke, Mehmet Kanik, Yoel Fink, Polina Anikeeva. Selectively Micro-Patternable Fibers via In-Fiber Photolithography. ACS Central Science, 2020; DOI: 10.1021/acscentsci.0c01188


  •

    When consumers trust AI recommendations, or resist them

    Researchers from Boston University and University of Virginia published a new paper in the Journal of Marketing that examines how consumers respond to AI recommenders when focused on the functional and practical aspects of a product (its utilitarian value) versus the experiential and sensory aspects of a product (its hedonic value).
    The study, forthcoming in the Journal of Marketing, is titled “Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The ‘Word-of-Machine’ Effect” and is authored by Chiara Longoni and Luca Cian.
    More and more companies are leveraging technological advances in AI, machine learning, and natural language processing to provide recommendations to consumers. As these companies evaluate AI-based assistance, one critical question must be asked: When do consumers trust the “word of machine,” and when do they resist it?
    The study explores the reasons behind consumers’ preference for one recommendation source (AI vs. human) over the other. The key factor in deciding how to incorporate AI recommenders is whether consumers are focused on the functional and practical aspects of a product (its utilitarian value) or on the experiential and sensory aspects of a product (its hedonic value).
    Relying on data from over 3,000 study participants, the research team provides evidence supporting a word-of-machine effect, defined as the phenomenon by which the trade-offs between utilitarian and hedonic aspects of a product determine the preference for, or resistance to, AI recommenders. The word-of-machine effect stems from a widespread belief that AI systems are more competent than humans at dispensing advice when functional and practical qualities (utilitarian) are desired, and less competent when the desired qualities are experiential and sensory-based (hedonic). Consequently, the importance or salience of utilitarian attributes determines preference for AI recommenders over human ones, while the importance or salience of hedonic attributes determines resistance to AI recommenders over human ones.
    The researchers tested the word-of-machine effect using experiments designed to assess people’s tendency to choose products based on consumption experiences and recommendation source. Longoni explains that “We found that when presented with instructions to choose products based solely on utilitarian/functional attributes, more participants chose AI-recommended products. When asked to only consider hedonic/experiential attributes, a higher percentage of participants chose human recommenders.”
    The word-of-machine effect was more pronounced when utilitarian features mattered most. In one study, participants were asked to imagine buying a winter coat and to rate how important utilitarian/functional attributes (e.g., breathability) and hedonic/experiential attributes (e.g., fabric type) were to their decision making. The more highly participants rated utilitarian/functional features, the greater their preference for AI over human assistance; the more highly they rated hedonic/experiential features, the greater their preference for human over AI assistance.
    Another study indicated that when consumers wanted recommendations matched to their unique preferences, they resisted AI recommenders and preferred human recommenders regardless of hedonic or utilitarian preferences. These results suggest that companies whose customers are known to be satisfied with “one size fits all” recommendations (i.e., not in need of a high level of customization) may rely on AI-systems. However, companies whose customers are known to desire personalized recommendations should rely on humans.
    Although there is a clear correlation between utilitarian attributes and consumer trust in AI recommenders, companies selling products that promise more sensorial experiences (e.g., fragrances, food, wine) may still use AI to engage customers. In fact, people embrace AI’s recommendations as long as AI works in partnership with humans. When AI plays an assistive role, “augmenting” human intelligence rather than replacing it, the AI-human hybrid recommender performs as well as a human-only assistant.
    Overall, the word-of-machine effect has important implications as the development and adoption of AI, machine learning, and natural language processing challenges managers and policy-makers to harness these transformative technologies. As Cian says, “The digital marketplace is crowded and consumer attention span is short. Understanding the conditions under which consumers trust, and do not trust, AI advice will give companies a competitive advantage in this space.”

    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  •

    Quantum magic squares

    Magic squares have been part of the human imagination for a long time. The oldest known magic square comes from China and is over 2,000 years old. One of the most famous magic squares can be found in Albrecht Dürer’s copper engraving Melencolia I. Another is on the facade of the Sagrada Família in Barcelona. A magic square is a square of numbers such that every column and every row sums to the same number. For example, in the magic square of the Sagrada Família every row and column sums to 33.
    If a square is allowed to contain arbitrary nonnegative real numbers, and every row and column sums to 1, it is called a doubly stochastic matrix. One particular example is a matrix that has 0’s everywhere except for a single 1 in every column and every row. This is called a permutation matrix. A famous result, the Birkhoff-von Neumann theorem, says that every doubly stochastic matrix can be obtained as a convex combination of permutation matrices. In words, this means that permutation matrices “contain all the secrets” of doubly stochastic matrices; more precisely, the latter can be fully characterized in terms of the former.
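    A small worked example makes the theorem concrete. The 2×2 doubly stochastic matrix with every entry 1/2 is an equal mixture of the two 2×2 permutation matrices:

    ```latex
    \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}
    = \tfrac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
    + \tfrac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
    ```

    Every row and column on the left sums to 1, and the weights 1/2 and 1/2 are nonnegative and sum to 1, which is exactly what “convex combination” means.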
    In a new paper in the Journal of Mathematical Physics, Tim Netzer and Tom Drescher from the Department of Mathematics and Gemma De las Cuevas from the Department of Theoretical Physics have introduced the notion of the quantum magic square: a magic square whose entries are matrices rather than numbers. This is a non-commutative, and thus quantum, generalization of a magic square. The authors show that quantum magic squares cannot be as easily characterized as their “classical” cousins. More precisely, quantum magic squares are not convex combinations of quantum permutation matrices. “They are richer and more complicated to understand,” explains Tom Drescher. “This is the general theme when generalizations to the non-commutative case are studied.”
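    For readers who want the definition in symbols, here is an informal version (a sketch of the notion; the paper states it precisely in terms of positive semidefinite matrices): a quantum magic square is a square array of matrices A_ij, each positive semidefinite, such that every row and every column sums to the identity matrix,

    ```latex
    A_{ij} \succeq 0, \qquad
    \sum_{j} A_{ij} = \mathbb{1} \ \text{for every row } i, \qquad
    \sum_{i} A_{ij} = \mathbb{1} \ \text{for every column } j.
    ```

    If every entry A_ij is a 1×1 matrix, i.e., a nonnegative number, each row and column sums to 1 and one recovers an ordinary doubly stochastic matrix, so the notion genuinely generalizes the classical one.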

    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  •

    Lung-on-chip provides new insight on body's response to early tuberculosis infection

    Scientists have developed a lung-on-chip model to study how the body responds to early tuberculosis (TB) infection, according to findings published today in eLife.
    TB is a disease caused by the bacterium Mycobacterium tuberculosis (M. tuberculosis) and most often affects the lungs. The model reveals that respiratory system cells, called alveolar epithelial cells, play an essential role in controlling early TB infection. They do this by producing a substance called surfactant — a mixture of molecules (lipids and proteins) that reduce the surface tension where air and liquid meet in the lung.
    These findings add to our understanding of what happens during early TB infection, and may explain in part why those who smoke or have compromised surfactant functionality have a higher risk of contracting primary or recurrent infection.
    TB is one of the world’s top infectious killers and affects people of all ages. While it mostly affects adults, there are currently no effective vaccines available to this group. This is partly due to challenges with studying the early stages of infection, which take place when just one or two M. tuberculosis bacteria are deposited deep inside the lung.
    “We created the lung-on-chip model as a way of studying some of these early events,” explains lead author Vivek Thacker, a postdoctoral researcher at the McKinney Lab, École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland. “Previous studies have shown that components of surfactant produced by alveolar epithelial cells can impair bacterial growth, but that the alveolar epithelial cells themselves can allow intracellular bacterial growth. The roles of these cells in early infection are therefore not completely understood.
    “We used our model to observe where the sites of first contact are, how M. tuberculosis grows in alveolar epithelial cells compared to bacteria-killing cells called macrophages, and how the production of surfactant affects growth, all while maintaining these cells at the air-liquid interface found in the lung.”
    The team used their lung-on-chip model to recreate a deficiency in surfactant produced by alveolar epithelial cells and then see how the lung cells respond to early TB infection. The technology is optically transparent, meaning they could use an imaging technique called time-lapse microscopy to follow the growth of single M. tuberculosis bacteria in either macrophages or alveolar epithelial cells over multiple days.
    Their studies revealed that a lack of surfactant results in uncontrolled and rapid bacterial growth in both macrophages and alveolar epithelial cells. On the other hand, the presence of surfactant significantly reduces this growth in both cells and, in some cases, prevents it altogether.
    “Our work shines a light on the early events that take place during TB infection and provides a model for scientists to build on for future research into other respiratory infections,” says senior author John McKinney, Head of the Laboratory of Microbiology and Microtechnology at EPFL. “It also paves the way for experiments that increase the complexity of our model to help understand why some TB lesions progress while others heal, which can occur at the same time in the same patient. This knowledge could one day be harnessed to develop effective new interventions against TB and other diseases.”
    The authors add that they are currently using a human lung-on-chip model to study how our lungs may respond to a low-dose infection and inoculation of SARS-CoV-2, the virus that causes COVID-19.

    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

  •

    AI detects COVID-19 on chest X-rays with accuracy and speed

    Northwestern University researchers have developed a new artificial intelligence (A.I.) platform that detects COVID-19 by analyzing X-ray images of the lungs.
    Called DeepCOVID-XR, the machine-learning algorithm outperformed a team of specialized thoracic radiologists — spotting COVID-19 in X-rays about 10 times faster and 1-6% more accurately.
    The researchers believe physicians could use the A.I. system to rapidly screen patients who are admitted into hospitals for reasons other than COVID-19. Faster, earlier detection of the highly contagious virus could potentially protect health care workers and other patients by triggering the positive patient to isolate sooner.
    The study’s authors also believe the algorithm could potentially flag patients for isolation and testing who are not otherwise under investigation for COVID-19.
    The study will be published on Nov. 24 in the journal Radiology.
    “We are not aiming to replace actual testing,” said Northwestern’s Aggelos Katsaggelos, an A.I. expert and senior author of the study. “X-rays are routine, safe and inexpensive. It would take seconds for our system to screen a patient and determine if that patient needs to be isolated.”
    “It could take hours or days to receive results from a COVID-19 test,” said Dr. Ramsey Wehbe, a cardiologist and postdoctoral fellow in A.I. at the Northwestern Medicine Bluhm Cardiovascular Institute. “A.I. doesn’t confirm whether or not someone has the virus. But if we can flag a patient with this algorithm, we could speed up triage before the test results come back.”


    Katsaggelos is the Joseph Cummings Professor of Electrical and Computer Engineering in Northwestern’s McCormick School of Engineering. He also has courtesy appointments in computer science and radiology. Wehbe is a postdoctoral fellow at Bluhm Cardiovascular Institute at Northwestern Memorial Hospital.
    A trained eye
    For many patients with COVID-19, chest X-rays display similar patterns. Instead of clear, healthy lungs, their lungs appear patchy and hazy.
    “Many patients with COVID-19 have characteristic findings on their chest images,” Wehbe said. “These include ‘bilateral consolidations.’ The lungs are filled with fluid and inflamed, particularly along the lower lobes and periphery.”
    The problem is that pneumonia, heart failure and other illnesses in the lungs can look similar on X-rays. It takes a trained eye to tell the difference between COVID-19 and something less contagious.


    Katsaggelos’ laboratory specializes in using A.I. for medical imaging. He and Wehbe had already been working together on cardiology imaging projects and wondered if they could develop a new system to help fight the pandemic.
    “When the pandemic started to ramp up in Chicago, we asked each other if there was anything we could do,” Wehbe said. “We were working on medical imaging projects using cardiac echo and nuclear imaging. We felt like we could pivot and apply our joint expertise to help in the fight against COVID-19.”
    A.I. vs. human
    To develop, train and test the new algorithm, the researchers used 17,002 chest X-ray images — the largest published clinical dataset of chest X-rays from the COVID-19 era used to train an A.I. system. Of those images, 5,445 came from COVID-19-positive patients from sites across the Northwestern Memorial Healthcare System.
    The team then tested DeepCOVID-XR against five experienced cardiothoracic fellowship-trained radiologists on 300 random test images from Lake Forest Hospital. Each radiologist took approximately two-and-a-half to three-and-a-half hours to examine this set of images, whereas the A.I. system took about 18 minutes.
    The radiologists’ accuracy ranged from 76-81%. DeepCOVID-XR performed slightly better at 82% accuracy.
    “These are experts who are sub-specialty trained in reading chest imaging,” Wehbe said. “Whereas the majority of chest X-rays are read by general radiologists or initially interpreted by non-radiologists, such as the treating clinician. A lot of times decisions are made based off that initial interpretation.”
    “Radiologists are expensive and not always available,” Katsaggelos said. “X-rays are inexpensive and already a common element of routine care. This could potentially save money and time — especially because timing is so critical when working with COVID-19.”
    Limits to diagnosis
    Of course, some COVID-19 patients show no sign of illness, including on their chest X-rays. Especially early in the virus’ progression, patients likely will not yet have manifestations in their lungs.
    “In those cases, the A.I. system will not flag the patient as positive,” Wehbe said. “But neither would a radiologist. Clearly there is a limit to radiologic diagnosis of COVID-19, which is why we wouldn’t use this to replace testing.”
    The Northwestern researchers have made the algorithm publicly available with hopes that others can continue to train it with new data. Right now, DeepCOVID-XR is still in the research phase, but could potentially be used in the clinical setting in the future.
    Study coauthors include Jiayue Sheng, Shinjan Dutta, Siyuan Chai, Amil Dravid, Semih Barutcu and Yunan Wu — all members of Katsaggelos’ lab — and Drs. Donald Cantrell, Nicholas Xiao, Bradly Allen, Gregory MacNealy, Hatice Savas, Rishi Agrawal and Nishant Parekh — all radiologists at Northwestern Medicine.

  •

    More skin-like, electronic skin that can feel

    What if we didn’t have skin? We would have no sense of touch and no way to detect cold or pain, leaving us unable to respond to our surroundings. Skin is not just a protective shell for our organs; it is a signaling system for survival that reports external stimuli and temperature, like a meteorological observatory reporting the weather. Tactile receptors, tightly packed throughout the skin, sense temperature or mechanical stimuli, such as touching or pinching, and convert them into electrical signals sent to the brain.
    The challenge for electronic skin, which is being developed for use in artificial skin and humanlike robots such as humanoids, is to sense temperature and movement as closely as possible to the way human skin does. So far, electronic skins can detect movement or temperature separately, but none has been able to recognize both simultaneously, as human skin does.
    A joint research team consisting of POSTECH Professor Unyong Jeong and Dr. Insang You of the Department of Materials Science and Engineering, together with Professor Zhenan Bao of Stanford University, has developed a multimodal ion-electronic skin that can measure temperature and mechanical stimulation at the same time. The findings, published in the November 20th edition of Science, stand out for achieving a very simple structure by exploiting special properties of ion conductors.
    The human skin contains various tactile receptors that detect hot or cold temperatures as well as tactile sensations such as pinching, twisting or pushing. Through these receptors, humans can distinguish between mechanical stimuli and temperature. Conventional electronic skins fabricated so far, by contrast, have suffered large errors in temperature measurement whenever mechanical stimuli were applied.
    Human skin is freely stretchable yet unbreakable because it is full of electrolytes, so the joint research team built its sensor from them as well. The team also took advantage of the fact that an ion-conductor material containing electrolytes exhibits different measurable properties depending on the measurement frequency. On the basis of this finding, they created a multifunctional artificial receptor that can measure tactile sensation and temperature at the same time.
    In addition, the research team identified variables in ion conductors that respond only to temperature and variables that respond only to mechanical stimuli: the charge relaxation time and the normalized capacitance. Both can be read out by measuring at just two frequencies. The charge relaxation time, the time it takes for the polarization of the ions to disappear, tracks temperature without responding to movement, while the normalized capacitance tracks movement without responding to temperature.
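    One way to see why the two readouts separate so cleanly (a textbook argument, assuming the ion conductor behaves as a simple resistor-capacitor element; the paper’s exact treatment may differ in detail): for a film of area A, thickness d, permittivity ε and ionic conductivity σ, the charge relaxation time is

    ```latex
    \tau \;=\; R\,C \;=\; \frac{d}{\sigma A}\cdot\frac{\varepsilon A}{d} \;=\; \frac{\varepsilon}{\sigma}
    ```

    The geometry factors A and d cancel, so τ is unaffected when stretching deforms the film, yet it shifts with temperature, which strongly changes σ. The capacitance C = εA/d, by contrast, retains the geometry and therefore tracks deformation; the normalized capacitance then serves as the strain readout, as the authors report.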
    This artificial receptor, with its simple electrode-electrolyte-electrode structure, has great commercialization potential: it accurately measures the temperature of an object in contact with it, as well as the direction and strain profile of external stimuli such as squeezing, pinching, spreading and twisting.
    The multimodal ion-electronic skin, which can be freely stretched or deformed while still detecting temperature, is anticipated to find applications in wearable temperature sensors and in the skins of humanlike robots such as humanoids.
    “When an index finger touches an electronic skin, the electronic skin detects contact as a temperature change, and when a finger pushes the skin, the back part of the contact area stretches and recognizes it as movement,” explained Dr. Insang You of POSTECH who is the first author of the paper. “I suspect that this mechanism is one of the ways that the actual human skin recognizes different stimuli like temperature and movement.”
    “This study is the first step in opening the door for multimodal electronic skin research using electrolytes,” remarked Professor Unyong Jeong of POSTECH and the corresponding author. “The ultimate goal of this research is to create artificial ion-electronic skin that simulates human tactile receptors and neurotransmitters, which will help restore the sense of touch in patients who have lost their tactile sensation due to illness or accidents.”
    The research was conducted with the support from the Global Frontier Project and the Mid-career Researcher Program of the Ministry of Science and ICT, and the Industrial Strategic Technology Development Program of the Ministry of Trade, Industry and Energy of Korea.

  •

    AI helps scientists understand brain activity behind thoughts

    A team led by researchers at Baylor College of Medicine and Rice University has developed artificial intelligence (AI) models that help them better understand the brain computations that underlie thoughts. This is new, because until now there has been no method to measure thoughts. The researchers first developed a new model that can estimate thoughts by evaluating behavior, and then tested their model on a trained artificial brain where they found neural activity associated with those estimates of thoughts. The theoretical study appears in the Proceedings of the National Academy of Sciences.
    “For centuries, neuroscientists have studied how the brain works by relating brain activity to inputs and outputs. For instance, when studying the neuroscience of movement, scientists measure muscle movements as well as neuronal activity, and then relate those two measurements,” said corresponding author Dr. Xaq Pitkow, assistant professor of neuroscience at Baylor and of electrical and computer engineering at Rice. “To study cognition in the brain, however, we don’t have anything to compare the measured neural activity to.”
    To understand how the brain gives rise to thought, researchers first need to measure a thought. They developed a method called “Inverse Rational Control” that looks at a behavior and infers the beliefs or thoughts that best explain that behavior.
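    As an illustration of that inverse logic (a minimal sketch, not the authors’ actual Inverse Rational Control algorithm; the toy task, the softmax policy and all names below are assumptions made for the example), one can recover a hidden belief by asking which belief makes the observed actions most likely under a rational policy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rational_policy(believed_p_left, beta=5.0):
        """Softmax-rational agent: the more it believes reward is on the
        left, the more often it chooses 'left' (action 0)."""
        logits = beta * np.array([believed_p_left, 1.0 - believed_p_left])
        probs = np.exp(logits - logits.max())
        return probs / probs.sum()

    # Simulate 500 choices from an agent holding a "wrong" belief of 0.7.
    true_belief = 0.7
    actions = rng.choice(2, size=500, p=rational_policy(true_belief))

    # Inverse step: maximum-likelihood search over candidate beliefs.
    candidates = np.linspace(0.01, 0.99, 99)
    log_liks = [np.log(rational_policy(b)[actions]).sum() for b in candidates]
    inferred = candidates[int(np.argmax(log_liks))]
    print(f"inferred belief: {inferred:.2f}  (true belief: {true_belief})")
    ```

    The published method scales this idea to dynamic tasks: it searches for the internal model (beliefs about the environment and subjective costs) under which the animal’s observed behavior would be the rational long-term strategy.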
    Traditionally, researchers in this field have worked with the idea that animals solve tasks optimally, behaving in a way that maximizes their net benefits. But when scientists study animal behavior, they find that this is not always the case.
    “Sometimes animals have ‘wrong’ beliefs or assumptions about what’s going on in their environment, but still they try to find the best long-term outcomes for their task, given what they believe is going on around them. This could account for why animals seem to behave suboptimally,” said Pitkow, who also is a McNair Scholar at Baylor, co-director of Baylor’s Center for Neuroscience and Artificial Intelligence and member of the Rice Neuroengineering Initiative.
    For example, consider an animal that is hunting and hears many noises it associates with prey. If one potential prey is making all the noises, the optimal behavior for the hunter is to consistently target its movements to a single noise. If the hunter mistakenly believes the noises are coming from many different animals, it may choose a suboptimal behavior, like constantly scanning its surroundings to try and pinpoint one of them. By acting according to its belief or assumption that there are many potential prey nearby, the hunter is behaving in a way that is simultaneously ‘rational’ and ‘suboptimal.’
    In the second part of the work, Pitkow and his colleagues developed a model to relate the thoughts that were identified using the Inverse Rational Control method to brain activity.
    “We can look at the dynamics of the modeled thoughts and at the dynamics of the brain’s representations of those thoughts. If those dynamics run parallel to each other, then we have confidence that we are capturing the aspects of the brain computations involved in those thoughts,” Pitkow said. “By providing methods to estimate thoughts and interpret neural activity associated with them, this study can help scientists understand how the brain produces complex behavior and provide new perspectives on neurological conditions.”
    Other contributors to this work include Zhengwei Wu, Minhae Kwon, Saurabh Daptardar and Paul Schrater. The authors are affiliated with one or more of the following institutions: Baylor College of Medicine, Rice University, Soongsil University, Google Maps, and the University of Minnesota.
    This work was supported in part by BRAIN Initiative grant NIH 5U01NS094368, an award from the McNair Foundation, the Simons Collaboration on the Global Brain award 324143, the National Science Foundation award 1450923 BRAIN 43092 and NSF CAREER Award IOS-1552868.

    Story Source:
    Materials provided by Baylor College of Medicine. Note: Content may be edited for style and length.

  •

    AI system discovers useful new material

    When the words “artificial intelligence” (AI) come to mind, your first thoughts may be of super-smart computers, or robots that perform tasks without needing any help from humans. Now, a multi-institutional team including researchers from the National Institute of Standards and Technology (NIST) has accomplished something not too far off: They developed an AI algorithm called CAMEO that discovered a potentially useful new material without requiring additional training from scientists. The AI system could help reduce the amount of trial-and-error time scientists spend in the lab, while maximizing productivity and efficiency in their research.
    The research team published their work on CAMEO in Nature Communications.
    In the field of materials science, scientists seek to discover new materials that can be used in specific applications, such as a “metal that’s light but also strong for building a car, or one that can withstand high stresses and temperatures for a jet engine,” said NIST researcher Aaron Gilad Kusne.
    But finding such new materials usually takes a large number of coordinated experiments and time-consuming theoretical searches. If a researcher is interested in how a material’s properties vary with different temperatures, then the researcher may need to run 10 experiments at 10 different temperatures. But temperature is just one parameter. If there are five parameters, each with 10 values, then that researcher must run the experiment 10 x 10 x 10 x 10 x 10 times, a total of 100,000 experiments. It’s nearly impossible for a researcher to run that many experiments due to the years or decades it may take, Kusne said.
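    The arithmetic of that combinatorial explosion is easy to check (a two-line sketch using the illustrative numbers from the example above):

    ```python
    # A full factorial sweep must measure every combination of parameter values.
    n_params, values_per_param = 5, 10
    print(values_per_param ** n_params)  # 100000, i.e. 10 x 10 x 10 x 10 x 10
    ```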
    That’s where CAMEO comes in. Short for Closed-Loop Autonomous System for Materials Exploration and Optimization, CAMEO can ensure that each experiment maximizes the scientist’s knowledge and understanding, skipping over experiments that would give redundant information. Helping scientists reach their goals faster with fewer experiments also enables labs to use their limited resources more efficiently. But how is CAMEO able to do this?
    The Method Behind the Machine
    Machine learning is a process in which computer programs can access data and process it themselves, automatically improving on their own instead of relying on repeated training. This is the basis for CAMEO, a self-learning AI that uses prediction and uncertainty to determine which experiment to try next.


    As implied by its name, CAMEO looks for a useful new material by operating in a closed loop: It determines which experiment to run on a material, does the experiment, and collects the data. It can also ask for more information, such as the crystal structure of the desired material, from the scientist before running the next experiment, which is informed by all past experiments performed in the loop.
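    In spirit, the loop described here is an active-learning loop: fit a model to every measurement collected so far, then choose the next experiment where the model expects the most information or the best property. The sketch below shows that generic pattern (it is not NIST’s released CAMEO code; the Gaussian-process model, the upper-confidence-bound rule and the stand-in measure function are all assumptions for illustration):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def measure(x):
        """Stand-in for a real measurement of the property being optimized."""
        return float(-np.sin(3 * x) - x**2 + 0.7 * x)

    candidates = np.linspace(-1.0, 2.0, 177).reshape(-1, 1)  # 177 candidate compositions
    tried = [0, 176]                                         # seed with the two endpoints
    X = candidates[tried]
    y = np.array([measure(x[0]) for x in X])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
    for _ in range(17):                          # small, adaptively chosen budget
        gp.fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        acq = mean + 2.0 * std                   # upper-confidence-bound acquisition
        acq[tried] = -np.inf                     # never repeat an experiment
        nxt = int(np.argmax(acq))
        tried.append(nxt)
        X = np.vstack([X, candidates[nxt]])
        y = np.append(y, measure(candidates[nxt, 0]))

    best = candidates[int(np.argmax(y)), 0]
    print(f"best composition parameter: {best:.3f} after {len(y)} measurements")
    ```

    With a small budget of adaptively chosen measurements, such a loop typically homes in on the best candidate without sweeping all 177 compositions, which is the efficiency CAMEO exploits.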
    “The key to our experiment was that we were able to unleash CAMEO on a combinatorial library where we had made a large array of materials with all different compositions,” said Ichiro Takeuchi, a materials science and engineering researcher and professor at the University of Maryland. In a usual combinatorial study, every material in the array would be measured sequentially to look for the compound with the best properties. Even with a fast measurement setup, that takes a long time. With CAMEO, it took only a small fraction of the usual number of measurements to home in on the best material.
    The AI is also designed to contain knowledge of key principles, including knowledge of past simulations and lab experiments, how the equipment works, and physical concepts. For example, the researchers armed CAMEO with the knowledge of phase mapping, which describes how the arrangement of atoms in a material changes with chemical composition and temperature.
    Understanding how atoms are arranged in a material is important in determining its properties such as how hard or how electrically insulating it is, and how well it is suited for a specific application.
    “The AI is unsupervised. Many types of AI need to be trained or supervised. Instead of asking it to learn physical laws, we encode them into the AI. You don’t need a human to train the AI,” said Kusne.


    One of the best ways to figure out the structure of a material is by bombarding it with X-rays, in a technique called X-ray diffraction. By identifying the angles at which the X-rays bounce off, scientists can determine how atoms are arranged in a material, enabling them to figure out its crystal structure. However, a single in-house X-ray diffraction experiment can take an hour or more. At a synchrotron facility where a large machine the size of a football field accelerates electrically charged particles at close to the speed of light, this process can take 10 seconds because the fast-moving particles emit large numbers of X-rays. This is the method used in the experiments, which were performed at the Stanford Synchrotron Radiation Lightsource (SSRL).
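    The angles encode the structure through Bragg’s law: scattered X-rays interfere constructively only when the extra path length between adjacent atomic planes is a whole number of wavelengths,

    ```latex
    n\lambda = 2d\sin\theta
    ```

    where λ is the X-ray wavelength, d is the spacing between atomic planes, θ is the scattering angle and n is an integer. Reading off the angles at which intensity peaks therefore reveals the set of plane spacings, and from those the crystal structure.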
    The algorithm is installed on a computer that connects to the X-ray diffraction equipment over a data network. CAMEO decides which material composition to study next by choosing which material the X-rays focus on to investigate its atomic structure. With each new iteration, CAMEO learns from past measurements and identifies the next material to study. This allows the AI to explore how a material’s composition affects its structure and identify the best material for the task.
    “Think of this process as trying to make the perfect cake,” Kusne said. “You’re mixing different types of ingredients, flour, eggs, or butter, using a variety of recipes to make the best cake.” The AI, likewise, searches through the “recipes,” or experiments, to determine the best composition for the material.
    That approach is how CAMEO discovered the material Ge₄Sb₆Te₇, which the group shortened to GST467. CAMEO was given 177 potential materials to investigate, covering a large range of compositional recipes. To arrive at this material, CAMEO performed 19 different experimental cycles, which took 10 hours, compared with the estimated 90 hours it would have taken a scientist with the full set of 177 materials.
    The New Material
    The material is composed of three different elements (germanium, antimony and tellurium, Ge-Sb-Te) and is a phase-change memory material, that is, it changes its atomic structure from crystalline (solid material with atoms in designated, regular positions) to amorphous (solid material with atoms in random positions) when quickly melted by applying heat. This type of material is used in electronic memory applications such as data storage. Although there are infinite composition variations possible in the Ge-Sb-Te alloy system, the new material GST467 discovered by CAMEO is optimal for phase-change applications.
    Researchers wanted CAMEO to find the best Ge-Sb-Te alloy, one that had the largest difference in “optical contrast” between the crystalline and amorphous states. On a DVD or Blu-ray disc, for example, optical contrast allows a scanning laser to read the disc by distinguishing between regions that have high or low reflectivity. They found that GST467 has twice the optical contrast of Ge₂Sb₂Te₅, a well-known material that’s commonly used for DVDs. The larger contrast enables the new material to outperform the old material by a significant margin.
    GST467 also has applications for photonic switching devices, which control the direction of light in a circuit. They can also be applied in neuromorphic computing, a field of study focused on developing devices that emulate the structure and function of neurons in the brain, opening possibilities for new kinds of computers as well as other applications such as extracting useful data from complex images.
    CAMEO’s Wider Applications
    The researchers believe CAMEO can be used for many other materials applications. The code for CAMEO is open source and will be freely available for use by scientists and researchers. And unlike similar machine-learning approaches, CAMEO discovered a useful new compound by focusing on the composition-structure-property relationship of crystalline materials. In this way, the algorithm navigated the course of discovery by tracking the structural origins of a material’s functions.
    One benefit of CAMEO is minimizing costs, since proposing, planning and running experiments at synchrotron facilities requires time and money. Researchers estimate a tenfold reduction in experiment time using CAMEO, since the number of experiments performed can be cut to one tenth. Because the AI runs the measurements, collects the data and performs the analysis, it also reduces the amount of knowledge a researcher needs to run the experiment. All the researcher must focus on is running the AI.
    Another benefit is providing the ability for scientists to work remotely. “This opens up a wave of scientists to still work and be productive without actually being in the lab,” said Apurva Mehta, a researcher at the SLAC National Accelerator Laboratory. This could mean that if scientists wanted to work on research involving contagious diseases or viruses, such as COVID-19, they could do so safely and remotely while relying on the AI to conduct the experiments in the lab.
    For now, researchers will continue to improve the AI and try to make the algorithms capable of solving ever more complex problems. “CAMEO has the intelligence of a robot scientist, and it’s built to design, run and learn from experiments in a very efficient way,” said Kusne.
    The SSRL where the experiments took place is part of the SLAC National Accelerator Laboratory, operated by Stanford University for the U.S. Department of Energy Office of Science. SLAC researchers helped oversee the experiments run by CAMEO.
    Researchers at the University of Maryland provided the materials used in the experiments, and researchers at the University of Washington demonstrated the new material in a phase-change memory device.