More stories

  • A window into the nanoworld: Scientists develop new technique to image fluctuations in materials

    A team of scientists, led by researchers from the Max Born Institute in Berlin and Helmholtz-Zentrum Berlin in Germany and from Brookhaven National Laboratory and the Massachusetts Institute of Technology in the United States, has developed a revolutionary new method for capturing high-resolution images of fluctuations in materials at the nanoscale using powerful X-ray sources. The technique, which they call Coherent Correlation Imaging (CCI), allows for the creation of sharp, detailed movies without damaging the sample through excessive radiation. By using an algorithm to detect patterns in underexposed images, CCI opens paths to previously inaccessible information. The team demonstrated CCI on samples made of thin magnetic layers, and their results have been published in Nature.
    The microscopic realm of the world is constantly in motion and marked by unceasing alteration. Even in seemingly unchanging solid materials, these fluctuations can give rise to unusual properties; one example being the lossless transmission of electrical current in high-temperature superconductors. Fluctuations are particularly pronounced during phase transitions, where a material changes its state, such as from solid to liquid during melting. Scientists also investigate very different phase transitions, such as from non-conductive to conductive, non-magnetic to magnetic, and changes in crystal structure. Many of these processes are utilized in technology, and also play a crucial role in the functioning of living organisms.
    The problem: Too much illumination might damage the sample
    Studying these processes in detail, however, is a difficult task, and capturing a movie of these fluctuation patterns is even more challenging. This is because the fluctuations happen quickly and take place at the nanometer scale — a millionth of a millimeter. Even the most advanced high-resolution X-ray and electron microscopes are unable to capture this rapid, random motion. The problem is fundamental, as a basic principle of photography illustrates: in order to capture a clear image of an object, a certain level of illumination is required. To magnify the object, that is to “zoom in,” more illumination is needed. Even more light is necessary when attempting to capture a fast motion with a short exposure time. Ultimately, increasing the resolution and decreasing the exposure time leads to a point where the object would be damaged or even destroyed by the illumination required. This is exactly the point science has reached in recent years: snapshots taken with free-electron lasers, the most intense X-ray sources available today, inevitably led to the destruction of the sample under study. As a result, capturing a multi-image movie of such random processes has been deemed impossible.
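    In back-of-the-envelope terms (a standard shot-noise estimate, not a figure from the Nature paper), a detector pixel that collects $N$ photons yields a signal-to-noise ratio of roughly $\sqrt{N}$, so holding image quality fixed while shrinking the pixel area $A$ (higher resolution) and the exposure time $t$ (faster frames) drives the required photon flux on the sample up as

    $$ \Phi \;\sim\; \frac{N}{A\,t} \;\propto\; \frac{\mathrm{SNR}^2}{A\,t}, $$

    which is why resolution and speed eventually collide with the sample's damage threshold.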
    New approach: using an algorithm to detect patterns in dimly lit pictures
    An international team of scientists has now found a solution to this problem. The key to their solution was the realization that the fluctuation patterns in materials are often not entirely random. By focusing on a small portion of the sample, the researchers observed that certain spatial patterns repeatedly emerged, but the exact timing and frequency of these patterns were unpredictable.
    The scientists have developed a novel non-destructive imaging method called Coherent Correlation Imaging (CCI). To create a movie, they take multiple snapshots of the sample in quick succession while reducing the illumination enough to keep the sample intact. However, this results in individual images where the fluctuation pattern in the sample becomes indistinct. Nevertheless, the images still contain sufficient information to separate them into groups. To accomplish this, the team first had to create a new algorithm that analyzes the correlations between the images, hence the method’s name. The snapshots within each group are very similar and thus likely to originate from the same specific fluctuation pattern. It is only when all shots in a group are viewed together that a clear image of the sample emerges. The scientists are now able to rewind the film and associate each snapshot with a clear image of the sample’s state at that moment in time.
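    The grouping step can be illustrated with a minimal sketch (an illustration of the correlate-then-cluster idea only, not the authors' published algorithm; the Pearson correlation, average linkage, and fixed threshold below are placeholder choices):

```python
# Minimal sketch of the correlate-then-cluster idea behind CCI.
# Illustrative only: the metric (Pearson correlation), the linkage and the
# threshold are placeholder choices, not the algorithm published in Nature.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_and_average(frames, threshold=0.3):
    """frames: array of shape (n_frames, ny, nx) holding underexposed snapshots."""
    n = frames.shape[0]
    flat = frames.reshape(n, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)              # remove per-frame offset
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)   # normalize each frame
    corr = flat @ flat.T                                   # pairwise correlations
    dist = np.clip(1.0 - corr, 0.0, None)                  # similarity -> distance
    Z = linkage(dist[np.triu_indices(n, k=1)], method="average")
    labels = fcluster(Z, t=threshold, criterion="distance")
    # Averaging all snapshots in a group recovers one sharp image per group.
    averages = {g: frames[labels == g].mean(axis=0) for g in np.unique(labels)}
    return labels, averages
```

    Each group's averaged image is bright enough to be sharp, while the group label of every individual snapshot records which fluctuation pattern the sample showed at that instant.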
    An example: Filming the “dance of domains” in magnetic layers
    The scientists created this new method to tackle a specific problem in the field of magnetism: microscopic patterns that occur in thin ferromagnetic layers. These layers are divided into regions known as domains, in which the magnetization points either upward or downward. Similar magnetic films are used in modern hard drives where the two different types of domains encode bits with “0” or “1.” Until now, it was believed that these patterns were extremely stable. But is this really true?
    To answer this question, the team investigated a sample consisting of such a magnetic layer at the National Synchrotron Light Source II on Long Island near New York City, using the newly developed CCI method. Indeed, the patterns remained unchanged at room temperature. But at a slightly elevated temperature of 37°C (98°F), the domains began to move back and forth erratically, displacing each other. The scientists observed this “dance of the domains” for several hours. Subsequently, they created a map showing the preferred location of the boundaries between the domains. This map and the movie of the movements led to a better understanding of the magnetic interactions in the materials, promoting future applications in advanced computer architectures.
    New opportunities for materials research at X-ray sources
    The scientists’ next objective is to employ the novel imaging method at free-electron lasers, such as the European XFEL in Hamburg, to gain deeper insights into even faster processes at the smallest length scales. They are confident that this method will improve our understanding of the role of fluctuations and stochastic processes in the properties of modern materials and, as a result, open up new ways of utilizing them in a more targeted manner.

  • Can you trust your quantum simulator?

    At the scale of individual atoms, physics gets weird. Researchers are working to reveal, harness, and control these strange quantum effects using quantum analog simulators — laboratory experiments that involve super-cooling tens to hundreds of atoms and probing them with finely tuned lasers and magnets.
    Scientists hope that any new understanding gained from quantum simulators will provide blueprints for designing new exotic materials, smarter and more efficient electronics, and practical quantum computers. But in order to reap the insights from quantum simulators, scientists first have to trust them.
    That is, they have to be sure that their quantum device has “high fidelity” and accurately reflects quantum behavior. For instance, if a system of atoms is easily influenced by external noise, researchers could assume a quantum effect where there is none. But there has been no reliable way to characterize the fidelity of quantum analog simulators, until now.
    In a study appearing in Nature, physicists from MIT and Caltech report a new quantum phenomenon: They found that there is a certain randomness in the quantum fluctuations of atoms and that this random behavior exhibits a universal, predictable pattern. Behavior that is both random and predictable may sound like a contradiction. But the team confirmed that certain random fluctuations can indeed follow a predictable, statistical pattern.
    What’s more, the researchers have used this quantum randomness as a tool to characterize the fidelity of a quantum analog simulator. They showed through theory and experiments that they could determine the accuracy of a quantum simulator by analyzing its random fluctuations.
    The team developed a new benchmarking protocol that can be applied to existing quantum analog simulators to gauge their fidelity based on their pattern of quantum fluctuations. The protocol could help to speed the development of new exotic materials and quantum computing systems.

    “This work would allow characterizing many existing quantum devices with very high precision,” says study co-author Soonwon Choi, assistant professor of physics at MIT. “It also suggests there are deeper theoretical structures behind the randomness in chaotic quantum systems than we have previously thought about.”
    The study’s authors include MIT graduate student Daniel Mark and collaborators at Caltech, the University of Illinois at Urbana-Champaign, Harvard University, and the University of California at Berkeley.
    Random evolution
    The new study was motivated by an advance in 2019 by Google, where researchers had built a digital quantum computer, dubbed “Sycamore,” that could carry out a specific computation more quickly than a classical computer.
    Whereas the computing units in a classical computer are “bits” that exist as either a 0 or a 1, the units in a quantum computer, known as “qubits,” can exist in a superposition of multiple states. When multiple qubits interact, they can in theory run special algorithms that solve difficult problems in far less time than any classical computer.

    The Google researchers engineered a system of superconducting loops to behave as 53 qubits, and showed that the “computer” could carry out a specific calculation that would normally be too thorny for even the fastest supercomputer in the world to solve.
    Google also happened to show that it could quantify the system’s fidelity. By randomly changing the state of individual qubits and comparing the resulting states of all 53 qubits with what the principles of quantum mechanics predict, they were able to measure the system’s accuracy.
    Choi and his colleagues wondered whether they could use a similar, randomized approach to gauge the fidelity of quantum analog simulators. But there was one hurdle they would have to clear: Unlike Google’s digital quantum system, individual atoms and other qubits in analog simulators are incredibly difficult to manipulate and therefore randomly control.
    But through some theoretical modeling, Choi realized that the collective effect of individually manipulating qubits in Google’s system could be reproduced in an analog quantum simulator by simply letting the qubits naturally evolve.
    “We figured out that we don’t have to engineer this random behavior,” Choi says. “With no fine-tuning, we can just let the natural dynamics of quantum simulators evolve, and the outcome would lead to a similar pattern of randomness due to chaos.”
    Building trust
    As an extremely simplified example, imagine a system of five qubits. Each qubit can exist simultaneously as a 0 or a 1, until a measurement is made, whereupon the qubits settle into one or the other state. With any one measurement, the qubits can take on one of 32 different combinations: 0-0-0-0-0, 0-0-0-0-1, and so on.
    “These 32 configurations will occur with a certain probability distribution, which people believe should be similar to predictions of statistical physics,” Choi explains. “We show they agree on average, but there are deviations and fluctuations that exhibit a universal randomness that we did not know. And that randomness looks the same as if you ran those random operations that Google did.”
    The researchers hypothesized that if they could develop a numerical simulation that precisely represents the dynamics and universal random fluctuations of a quantum simulator, they could compare the predicted outcomes with the simulator’s actual outcomes. The closer the two are, the more accurate the quantum simulator must be.
    To test this idea, Choi teamed up with experimentalists at Caltech, who engineered a quantum analog simulator comprising 25 atoms. The physicists shone a laser on the experiment to collectively excite the atoms, then let the qubits naturally interact and evolve over time. They measured the state of each qubit over multiple runs, gathering 10,000 measurements in all.
    Choi and colleagues also developed a numerical model to represent the experiment’s quantum dynamics, and incorporated an equation that they derived to predict the universal, random fluctuations that should arise. The researchers then compared their experimental measurements with the model’s predicted outcomes and observed a very close match — strong evidence that this particular simulator can be trusted as reflecting pure, quantum mechanical behavior.
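    As a toy illustration of what such a comparison can look like (this uses a linear cross-entropy-style score as a stand-in; the estimator the team actually derived for analog simulators is given in their paper), one can check how strongly measured bitstrings concentrate on the outcomes the numerical model says are likely:

```python
# Toy illustration of benchmarking by comparing measurements to simulation.
# The ideal probabilities and the noisy "experiment" below are stand-ins;
# the real protocol uses the team's numerical model of the atom dynamics.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_shots = 5, 10_000
dim = 2 ** n_qubits

# Stand-in for the numerical model: ideal probabilities of the 32 bitstrings
# after chaotic evolution (here just a random normalized state, for illustration).
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

# Stand-in for the experiment: bitstrings sampled from a noisy mixture of the
# ideal distribution and uniform noise, with "true" fidelity-like weight f_true.
f_true = 0.8
p_exp = f_true * p_ideal + (1 - f_true) / dim
shots = rng.choice(dim, size=n_shots, p=p_exp)

# Cross-entropy-style estimator: ~1 if measurements match the ideal model,
# ~0 if they look like uniform noise.
f_est = (dim * p_ideal[shots].mean() - 1) / (dim * (p_ideal ** 2).sum() - 1)
print(f"estimated fidelity ~ {f_est:.2f}")   # approaches f_true as n_shots grows
```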
    More broadly, the results demonstrate a new way to characterize almost any existing quantum analog simulator.
    “The ability to characterize quantum devices forms a very basic technical tool to build increasingly larger, more precise and complex quantum systems,” Choi says. “With our tool, people can know whether they are working with a trustable system.”
    This research was funded, in part, by the U.S. National Science Foundation, the Defense Advanced Research Projects Agency, the Army Research Office, and the Department of Energy.

  • Engineers grow 'perfect' atom-thin materials on industrial silicon wafers

    True to Moore’s Law, the number of transistors on a microchip has doubled roughly every two years since the 1960s. But this trajectory is predicted to soon plateau because silicon — the backbone of modern transistors — loses its electrical properties once devices made from this material dip below a certain size.
    Enter 2D materials — delicate, two-dimensional sheets of perfect crystals that are as thin as a single atom. At the scale of nanometers, 2D materials can conduct electrons far more efficiently than silicon. The search for next-generation transistor materials therefore has focused on 2D materials as potential successors to silicon.
    But before the electronics industry can transition to 2D materials, scientists have to first find a way to engineer the materials on industry-standard silicon wafers while preserving their perfect crystalline form. And MIT engineers may now have a solution.
    The team has developed a method that could enable chip manufacturers to fabricate ever-smaller transistors from 2D materials by growing them on existing wafers of silicon and other materials. The new method is a form of “nonepitaxial, single-crystalline growth,” which the team used for the first time to grow pure, defect-free 2D materials onto industrial silicon wafers.
    With their method, the team fabricated a simple functional transistor from a family of 2D materials called transition-metal dichalcogenides, or TMDs, which are known to conduct electricity better than silicon at nanometer scales.
    “We expect our technology could enable the development of 2D semiconductor-based, high-performance, next-generation electronic devices,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “We’ve unlocked a way to catch up to Moore’s Law using 2D materials.”
    Kim and his colleagues detail their method in a paper appearing in Nature. The study’s MIT co-authors include Ki Seok Kim, Doyoon Lee, Celesta Chang, Seunghwan Seo, Hyunseok Kim, Jiho Shin, Sangho Lee, Jun Min Suh, and Bo-In Park, along with collaborators at the University of Texas at Dallas, the University of California at Riverside, Washington University in Saint Louis, and institutions across South Korea.

    A crystal patchwork
    To produce a 2D material, researchers have typically employed a manual process by which an atom-thin flake is carefully exfoliated from a bulk material, like peeling away the layers of an onion.
    But most bulk materials are polycrystalline, containing multiple crystals that grow in random orientations. Where one crystal meets another, the “grain boundary” acts as an electric barrier. Any electrons flowing through one crystal suddenly stop when met with a crystal of a different orientation, damping a material’s conductivity. Even after exfoliating a 2D flake, researchers must then search the flake for “single-crystalline” regions — a tedious and time-intensive process that is difficult to apply at industrial scales.
    Recently, researchers have found other ways to fabricate 2D materials, by growing them on wafers of sapphire — a material with a hexagonal pattern of atoms which encourages 2D materials to assemble in the same, single-crystalline orientation.
    “But nobody uses sapphire in the memory or logic industry,” Kim says. “All the infrastructure is based on silicon. For semiconductor processing, you need to use silicon wafers.”
    However, wafers of silicon lack sapphire’s hexagonal supporting scaffold. When researchers attempt to grow 2D materials on silicon, the result is a random patchwork of crystals that merge haphazardly, forming numerous grain boundaries that stymie conductivity.

    “It’s considered almost impossible to grow single-crystalline 2D materials on silicon,” Kim says. “Now we show you can. And our trick is to prevent the formation of grain boundaries.”
    Seed pockets
    The team’s new “nonepitaxial, single-crystalline growth” does not require peeling and searching flakes of 2D material. Instead, the researchers use conventional vapor deposition methods to pump atoms across a silicon wafer. The atoms eventually settle on the wafer and nucleate, growing into two-dimensional crystal orientations. If left alone, each “nucleus,” or seed of a crystal, would grow in random orientations across the silicon wafer. But Kim and his colleagues found a way to align each growing crystal to create single-crystalline regions across the entire wafer.
    To do so, they first covered a silicon wafer in a “mask” — a coating of silicon dioxide that they patterned into tiny pockets, each designed to trap a crystal seed. Across the masked wafer, they then flowed a gas of atoms that settled into each pocket to form a 2D material — in this case, a TMD. The mask’s pockets corralled the atoms and encouraged them to assemble on the silicon wafer in the same, single-crystalline orientation.
    “That is a very shocking result,” Kim says. “You have single-crystalline growth everywhere, even if there is no epitaxial relation between the 2D material and the silicon wafer.”
    With their masking method, the team fabricated a simple TMD transistor and showed that its electrical performance was just as good as a pure flake of the same material.
    They also applied the method to engineer a multilayered device. After covering a silicon wafer with a patterned mask, they grew one type of 2D material to fill half of each square, then grew a second type of 2D material over the first layer to fill the rest of the squares. The result was an ultrathin, single-crystalline bilayer structure within each square. Kim says that going forward, multiple 2D materials could be grown and stacked together in this way to make ultrathin, flexible, and multifunctional films.
    “Until now, there has been no way of making 2D materials in single-crystalline form on silicon wafers, thus the whole community has almost given up on pursuing 2D materials for next-generation processors,” Kim says. “Now we have completely solved this problem, with a way to make devices smaller than a few nanometers. This will change the paradigm of Moore’s Law.”
    This research was supported in part by DARPA, Intel, the IARPA MicroE4AI program, MicroLink Devices, Inc., ROHM Co., and Samsung.

  • Light-based tech could inspire Moon navigation and next-gen farming

    Super-thin chips made from lithium niobate are set to overtake silicon chips in light-based technologies, according to world-leading scientists in the field, with potential applications ranging from remote ripening-fruit detection on Earth to navigation on the Moon.
    They say the artificial crystal offers the platform of choice for these technologies due to its superior performance and recent advances in manufacturing capabilities.
    RMIT University’s Distinguished Professor Arnan Mitchell and University of Adelaide’s Dr Andy Boes led this team of global experts to review lithium niobate’s capabilities and potential applications in the journal Science.
    The international team, including scientists from Peking University in China and Harvard University in the United States, is working with industry to make navigation systems that are planned to help rovers drive on the Moon later this decade.
    As it is impossible to use global positioning system (GPS) technology on the Moon, navigation systems in lunar rovers will need to use an alternative system, which is where the team’s innovation comes in.
    By detecting tiny changes in laser light, the lithium-niobate chip can be used to measure movement without needing external signals, according to Mitchell.

    “This is not science fiction — this artificial crystal is being used to develop a range of exciting applications. And competition to harness the potential of this versatile technology is heating up,” said Mitchell, Director of the Integrated Photonics and Applications Centre.
    He said while the lunar navigation device was in the early stages of development, the lithium niobate chip technology was “mature enough to be used in space applications.”
    “Our lithium niobate chip technology is also flexible enough to be rapidly adapted to almost any application that uses light,” Mitchell said.
    “We are focused on navigation now, but the same technology could also be used for linking internet on the Moon to the internet on Earth.”
    What is lithium niobate and how can it be used?
    Lithium niobate is an artificial crystal that was first discovered in 1949 but is “back in vogue,” according to Boes.

    “Lithium niobate has new uses in the field of photonics — the science and technology of light — because unlike other materials it can generate and manipulate electro-magnetic waves across the full spectrum of light, from microwave to UV frequencies,” he said.
    “Silicon was the material of choice for electronic circuits, but its limitations have become increasingly apparent in photonics.
    “Lithium niobate has come back into vogue because of its superior capabilities, and advances in manufacturing mean that it is now readily available as thin films on semiconductor wafers.”
    A layer of lithium niobate about 1,000 times thinner than a human hair is placed on a semiconductor wafer, Boes said.
    “Photonic circuits are printed into the lithium niobate layer, which are tailored according to the chip’s intended use. A fingernail-sized chip may contain hundreds of different circuits,” he said.
    How does the lunar navigation tech work?
    The team is working with the Australian company Advanced Navigation to create optical gyroscopes, where laser light is launched in both clockwise and anticlockwise directions in a coil of fibre, Mitchell said.
    “As the coil is moved, the fibre is slightly shorter in one direction than the other, according to Albert Einstein’s theory of relativity,” he said.
    “Our photonic chips are sensitive enough to measure this tiny difference and use it to determine how the coil is moving. If you can keep track of your movements, then you know where you are relative to where you started. This is called inertial navigation.”
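    The underlying physics is the Sagnac effect; in its standard textbook form (not spelled out in the article), the phase difference accumulated between the counter-propagating beams in a fibre coil enclosing total area $A$ and rotating at angular rate $\Omega$ is

    $$ \Delta\phi \;=\; \frac{8\pi A\,\Omega}{\lambda c}, $$

    where $\lambda$ is the light's wavelength and $c$ the speed of light, so a chip sensitive enough to resolve $\Delta\phi$ reads out the rotation rate with no external signal.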
    Potential applications closer to home
    This technology can also be used to remotely detect the ripeness of fruit.
    “Gas emitted by ripe fruit is absorbed by light in the mid-infrared part of the spectrum,” Mitchell said.
    “A drone hovering in an orchard would transmit light to another drone, which would sense the degree to which the light is absorbed and when fruit is ready for harvesting.
    “Our microchip technology is much smaller, cheaper and more accurate than current technology and can be used with very small drones that won’t damage fruit trees.”
    Next steps
    Australia could become a global hub for manufacturing integrated photonic chips from lithium niobate that would have a major impact on applications in technology that use every part of the spectrum of light, Mitchell said.
    “We have the technology to manufacture these chips in Australia and we have the industries that will use them,” he said.
    “Photonic chips can now transform industries well beyond optical fibre communications.”

  • Blast chiller for the quantum world

    The quantum nature of objects visible to the naked eye is currently a much-discussed research question. A team led by Innsbruck physicist Gerhard Kirchmair has now demonstrated a new method in the laboratory that could make the quantum properties of macroscopic objects more accessible than before. With the method, the researchers were able to increase the efficiency of an established cooling method by an order of magnitude.
    With optomechanical experiments, scientists are trying to explore the limits of the quantum world and to create a foundation for the development of highly sensitive quantum sensors. In these experiments, objects visible to the naked eye are coupled to superconducting circuits via electromagnetic fields. To get functioning superconductors, such experiments take place in cryostats at a temperature of about 100 millikelvin. But this is still far from sufficient to really dive into the quantum world. In order to observe quantum effects on macroscopic objects, they must be cooled to nearly absolute zero using sophisticated cooling methods. Physicists led by Gerhard Kirchmair from the Department of Experimental Physics at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) have now demonstrated a nonlinear cooling mechanism with which even massive objects can be cooled well.
    Cooling capacity higher than usual
    In the experiment, the Innsbruck researchers couple the mechanical object — in their case a vibrating beam — to the superconducting circuit via a magnetic field. To do this, they attached a magnet to the beam, which is about 100 micrometers long. When the magnet moves, it changes the magnetic flux through the circuit, the heart of which is a so-called SQUID, a superconducting quantum interference device. Its resonant frequency changes depending on the magnetic flux, which is measured using microwave signals. In this way, the micromechanical oscillator can be cooled to near the quantum mechanical ground state. Furthermore, David Zöpfl from Gerhard Kirchmair’s team explains, “The change in the resonant frequency of the SQUID circuit as a function of microwave power is not linear. As a consequence, we can cool the massive object by an order of magnitude more for the same power.” This new, simple method is particularly interesting for cooling more massive mechanical objects. Zöpfl and Kirchmair are confident that this could be the foundation for the search of quantum properties in larger macroscopic objects.
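    The flux-to-frequency transduction described here can be written in the generic textbook form for a SQUID-terminated microwave resonator (a standard relation, not the paper's specific circuit model): the SQUID contributes a flux-dependent Josephson inductance

    $$ L_J(\Phi) \;=\; \frac{\Phi_0}{2\pi I_c\,\lvert\cos(\pi\Phi/\Phi_0)\rvert}, $$

    so the circuit's resonance frequency $\omega_r(\Phi) = 1/\sqrt{(L + L_J(\Phi))\,C}$ shifts as the magnet on the vibrating beam changes the flux $\Phi$, and that shift is what the microwave signals read out.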
    The work was carried out in collaboration with scientists in Canada and Germany and has now been published in Physical Review Letters. The research was financially supported by the Austrian Science Fund FWF and the European Union, among others. Co-authors Christian Schneider and Lukas Deeg are or were members of the FWF Doctoral Program Atoms, Light and Molecules (DK-ALM).

  • Scientists rewrite an equation in FDA guidance to improve the accuracy of drug interaction prediction

    Drugs absorbed into the body are metabolized, and thus removed, by enzymes in several organs such as the liver. How fast a drug is cleared from the system can be increased by other drugs that boost the body's production of these enzymes. This dramatically decreases the drug's concentration, reducing its efficacy and often leaving it with no effect at all. Therefore, accurately predicting the clearance rate in the presence of drug-drug interactions* is critical both when prescribing drugs and when developing new ones.
    *Drug-drug interaction: In terms of metabolism, a drug-drug interaction is a phenomenon in which one drug changes the metabolism of another drug, promoting or inhibiting its excretion from the body, when two or more drugs are taken together. As a result, it can increase the toxicity of medicines or cause a loss of efficacy.
    Since it is practically impossible to evaluate all interactions between new drug candidates and all marketed drugs during the development process, the FDA recommends evaluating drug interactions indirectly with a formula given in its guidance, first published in 1997 and revised in January 2020, in order to minimize the side effects of taking more than one drug at once.
    The formula relies on the 110-year-old Michaelis-Menten (MM) model, which has a fundamental limitation: it makes a very broad and often unjustified assumption about the concentration of the enzyme that metabolizes the drug. While the MM equation is one of the most widely known equations in biochemistry, used in more than 220,000 published papers, it is accurate only when the concentration of the metabolizing enzyme is almost negligible, which makes the accuracy of the FDA formula highly unsatisfactory: only 38 percent of its predictions had less than two-fold errors.
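    For reference, the textbook MM rate law (not reproduced in the press release) gives the metabolism rate $v$ of a drug at concentration $[S]$ as

    $$ v \;=\; \frac{V_{\max}\,[S]}{K_M + [S]}, $$

    a form derived under the assumption that the metabolizing enzyme is present at a concentration far below $K_M + [S]$; when enzyme levels are appreciable, as they are for many drug-metabolizing enzymes in the gut and liver, the approximation, and any prediction built on it, degrades.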
    “To make up for the gap, researchers resorted to plugging scientifically unjustified constants into the equation,” Professor Jung-woo Chae of Chungnam National University College of Pharmacy said. “This is comparable to introducing epicyclic orbits to explain the motion of the planets so that the now-defunct Ptolemaic model could be preserved, simply because it was the accepted theory at the time.”
    A joint research team composed of mathematicians from the Biomedical Mathematics Group within the Institute for Basic Science (IBS) and the Korea Advanced Institute of Science and Technology (KAIST) and pharmacological scientists from the Chungnam National University reported that they identified the major causes of the FDA-recommended equation’s inaccuracies and presented a solution.
    When estimating the gut bioavailability (Fg), which is a key parameter of the equation, the fraction absorbed from the gut lumen (Fa) is usually assumed to be 1. However, many experiments have shown that Fa is less than 1, as not all of an orally taken drug can be expected to be completely absorbed by the intestines. To solve this problem, the research team used an estimated Fa value based on factors such as the drug’s intestinal transit time, intestine radius, and permeability, and used it to re-calculate Fg.
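    One common way to estimate the absorbed fraction from such quantities is the classic single-compartment absorption relation; the sketch below uses it purely as an illustration (the team's exact estimator and the parameter values are in their paper, and the numbers here are placeholders):

```python
# Illustrative estimate of Fa (fraction absorbed from the gut lumen) using the
# classic single-compartment relation  Fa ~ 1 - exp(-2 * Peff * Tsi / R),
# with Peff = effective permeability, Tsi = small-intestine transit time,
# R = intestinal radius. Placeholder numbers, not values from the study.
import math

def estimate_fa(peff_cm_per_s, transit_time_h=3.3, radius_cm=1.75):
    transit_time_s = transit_time_h * 3600.0
    return 1.0 - math.exp(-2.0 * peff_cm_per_s * transit_time_s / radius_cm)

fa = estimate_fa(peff_cm_per_s=0.5e-4)   # hypothetical effective permeability
print(f"estimated Fa ~ {fa:.2f}")        # < 1, unlike the usual Fa = 1 assumption

# Since oral bioavailability factorizes as F = Fa * Fg * Fh, an Fg that was
# back-calculated assuming Fa = 1 changes once this estimated Fa is used instead.
```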
    Also, taking a different approach from the MM equation, the team used an alternative model they had derived in a previous study in 2020, which predicts the drug metabolism rate more accurately regardless of the enzyme concentration. Combining these changes, the modified equation with the re-calculated Fg produced dramatically more accurate estimates: the existing FDA formula predicted drug interactions within a two-fold margin of error 38% of the time, whereas the accuracy rate of the revised formula reached 80%.
    “Such a drastic improvement in drug-drug interaction prediction accuracy is expected to make a great contribution to increasing the success rate of new drug development and drug efficacy in clinical trials. As the results of this study were published in one of the top clinical pharmacology journals, it is expected that the FDA guidance will be revised accordingly,” said Professor Sang Kyum Kim from Chungnam National University College of Pharmacy.
    Furthermore, this study highlights the importance of collaborative research between research groups in vastly different disciplines, in a field that is as dynamic as drug interactions.
    “Thanks to the collaborative research between mathematics and pharmacy, we were able to rectify a formula that had long been accepted as the right answer, taking a step toward healthier lives for humankind,” said Professor Jae Kyung Kim. He continued, “I hope to see a ‘K-formula’ entered into the US FDA guidance one day.”

  • Want a ‘Shrinky Dinks’ approach to nano-sized devices? Try hydrogels

    High-tech shrink art may be the key to making tiny electronics, 3-D nanostructures or even holograms for hiding secret messages.

    A new approach to making tiny structures relies on shrinking them down after building them, rather than making them small to begin with, researchers report in the Dec. 23 Science.

    The key is spongelike hydrogel materials that expand or contract in response to surrounding chemicals (SN: 1/20/10). By inscribing patterns in hydrogels with a laser and then shrinking the gels down to about one-thirteenth their original size, the researchers created patterns with details as small as 25 billionths of a meter across.
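    The arithmetic behind those numbers (an inference from the figures quoted, not a statement from the paper) is that features written at a laser-accessible scale of roughly $25\ \mathrm{nm} \times 13 \approx 325\ \mathrm{nm}$ end up at 25-nanometer resolution once the gel contracts to about one-thirteenth of its original linear size.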


    At that level of precision, the researchers could create letters small enough to easily write this entire article along the circumference of a typical human hair.

    Biological scientist Yongxin Zhao and colleagues deposited a variety of materials in the patterns to create nanoscopic images of Chinese zodiac animals. By shrinking the hydrogels after laser etching, several of the images ended up roughly the size of a red blood cell. They included a monkey made of silver, a gold-silver alloy pig, a titanium dioxide snake, an iron oxide dog and a rabbit made of luminescent nanoparticles.

    These two dragons, each roughly 40 micrometers long, were made by depositing cadmium selenide quantum dots onto a laser-etched hydrogel. The red stripes on the left dragon are each just 200 nanometers thick. (Image: The Chinese University of Hong Kong, Carnegie Mellon University)

    Because the hydrogels can be repeatedly shrunk and expanded with chemical baths, the researchers were also able to create holograms in layers inside a chunk of hydrogel to encode secret information. Shrinking a hydrogel hologram makes it unreadable. “If you want to read it, you have to expand the sample,” says Zhao, of Carnegie Mellon University in Pittsburgh. “But you need to expand it to exactly the same extent” as the original. In effect, knowing how much to expand the hydrogel serves as a key to unlock the information hidden inside.  

    But the most exciting aspect of the research, Zhao says, is the wide range of materials that researchers can use on such minute scales. “We will be able to combine different types of materials together and make truly functional nanodevices.”

  • Deepfake challenges 'will only grow'

    Although most public attention surrounding deepfakes has focused on large propaganda campaigns, the problematic new technology is much more insidious, according to a new report by artificial intelligence (AI) and foreign policy experts at Northwestern University and the Brookings Institution.
    In the new report, the authors discuss deepfake videos, images and audio as well as their related security challenges. The researchers predict the technology is on the brink of being used much more widely, including in targeted military and intelligence operations.
    Ultimately, the experts make recommendations to security officials and policymakers for how to handle the unsettling new technology. Among their recommendations, the authors emphasize a need for the United States and its allies to develop a code of conduct for governments’ use of deepfakes.
    The research report, “Deepfakes and international conflict,” was published this month by Brookings.
    “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” the authors write. “Security officials and policymakers will need to prepare accordingly.”
    Northwestern co-authors include AI and security expert V.S. Subrahmanian, the Walter P. Murphy Professor of Computer Science at Northwestern’s McCormick School of Engineering and Buffett Faculty Fellow at the Buffett Institute of Global Affairs, and Chongyang Gao, a Ph.D. student in Subrahmanian’s lab. Brookings Institution co-authors include Daniel L. Byman and Chris Meserole.

    Deepfakes require ‘little difficulty’
    Subrahmanian, who leads the Northwestern Security and AI Lab, and his student Gao previously developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes), a new algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within the context of security.
    Using TREAD, Subrahmanian and his team created sample deepfake videos of deceased Islamic State terrorist Abu Mohammed al-Adnani. While the resulting video looks and sounds like al-Adnani — with highly realistic facial expressions and audio — he is actually speaking words by Syrian President Bashar al-Assad.
    The researchers created the lifelike video within hours. The process was so straightforward that Subrahmanian and his coauthors said militaries and security agencies should just assume that rivals are capable of generating deepfake videos of any official or leader within minutes.
    “Anyone with a reasonable background in machine learning can — with some systematic work and the right hardware — generate deepfake videos at scale by building models similar to TREAD,” the authors write. “The intelligence agencies of virtually any country, which certainly includes U.S. adversaries, can do so with little difficulty.”
    Avoiding ‘cat-and-mouse games’

    The authors believe that state and non-state actors will leverage deepfakes to strengthen ongoing disinformation efforts. Deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more. In the short-term, security and intelligence experts can counteract deepfakes by designing and training algorithms to identify potentially fake videos, images and audio. This approach, however, is unlikely to remain effective in the long term.
    “The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make ‘tweaks’ to evade the detector,” the authors said. “The detect-evade-detect-evade cycle plays out over time…Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale.”
    For long-term strategies, the report’s authors make several recommendations:

    • Educate the general public to increase digital literacy and critical reasoning.
    • Develop systems capable of tracking the movement of digital assets by documenting each person or organization that handles the asset.
    • Encourage journalists and intelligence analysts to slow down and verify information before including it in published articles. “Similarly, journalists might emulate intelligence products that discuss ‘confidence levels’ with regard to judgments.”
    • Use information from separate sources, such as verification codes, to confirm the legitimacy of digital assets.

    Above all, the authors argue that the government should enact policies that offer robust oversight and accountability mechanisms for governing the generation and distribution of deepfake content. If the United States or its allies want to “fight fire with fire” by creating their own deepfakes, then policies first need to be agreed upon and put in place. The authors say this could include establishing a “Deepfakes Equities Process,” modeled after similar processes for cybersecurity.
    “The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs,” the authors write. “The use of deepfakes, particularly designed to attack high-value targets in conflict settings, will affect a wide range of government offices and agencies. Each stakeholder should have the opportunity to offer input, as needed and as appropriate. Establishing such a broad-based, deliberative process is the best route to ensuring that democratic governments use deepfakes responsibly.”
    Further information: https://www.brookings.edu/research/deepfakes-and-international-conflict/