More stories

  • Blast chiller for the quantum world

    The quantum nature of objects visible to the naked eye is currently a much-discussed research question. A team led by Innsbruck physicist Gerhard Kirchmair has now demonstrated a new method in the laboratory that could make the quantum properties of macroscopic objects more accessible than before. With the method, the researchers were able to increase the efficiency of an established cooling technique by an order of magnitude.
    With optomechanical experiments, scientists are trying to explore the limits of the quantum world and to create a foundation for the development of highly sensitive quantum sensors. In these experiments, objects visible to the naked eye are coupled to superconducting circuits via electromagnetic fields. To keep the circuits superconducting, such experiments take place in cryostats at a temperature of about 100 millikelvin. But this is still far from sufficient to really dive into the quantum world. In order to observe quantum effects on macroscopic objects, they must be cooled to nearly absolute zero using sophisticated cooling methods. Physicists led by Gerhard Kirchmair from the Department of Experimental Physics at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) have now demonstrated a nonlinear cooling mechanism with which even massive objects can be cooled effectively.
    Cooling capacity higher than with conventional methods
    In the experiment, the Innsbruck researchers couple the mechanical object — in their case a vibrating beam — to the superconducting circuit via a magnetic field. To do this, they attached a magnet to the beam, which is about 100 micrometers long. When the magnet moves, it changes the magnetic flux through the circuit, the heart of which is a so-called SQUID, a superconducting quantum interference device. Its resonant frequency changes depending on the magnetic flux, which is measured using microwave signals. In this way, the micromechanical oscillator can be cooled to near the quantum mechanical ground state. As David Zöpfl from Gerhard Kirchmair’s team explains, “The change in the resonant frequency of the SQUID circuit as a function of microwave power is not linear. As a consequence, we can cool the massive object by an order of magnitude more for the same power.” This new, simple method is particularly interesting for cooling more massive mechanical objects. Zöpfl and Kirchmair are confident that this could be the foundation for the search for quantum properties in larger macroscopic objects.
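    The mechanism can be pictured with a textbook flux-tunable superconducting resonator, whose frequency varies roughly as the square root of |cos(πΦ/Φ0)|. The short sketch below (illustrative parameter values, not the Innsbruck circuit) shows how both the slope and the curvature of the frequency-versus-flux curve grow as the SQUID is biased away from its sweet spot, the kind of nonlinearity the team exploits to cool more strongly at a given microwave power.

    ```python
    import numpy as np

    # Illustrative sketch of a flux-tunable superconducting resonator
    # (transmon-like flux dependence); parameters are assumed, not the
    # actual values of the Innsbruck SQUID circuit.
    f_max = 7.5e9   # resonant frequency at zero flux, in Hz (assumed)

    def resonant_frequency(phi):
        """Resonant frequency versus external flux phi, in units of the flux quantum."""
        return f_max * np.sqrt(np.abs(np.cos(np.pi * phi)))

    phi = np.linspace(0.0, 0.45, 1000)
    f = resonant_frequency(phi)
    df_dphi = np.gradient(f, phi)          # linear flux sensitivity
    d2f_dphi2 = np.gradient(df_dphi, phi)  # curvature: the nonlinear contribution

    # Near phi = 0 the response is almost flat; towards phi = 0.45 both the slope
    # and the curvature grow steeply, so a mechanically induced flux change shifts
    # the resonance much more strongly.
    for p in (0.05, 0.25, 0.45):
        i = np.argmin(np.abs(phi - p))
        print(f"phi = {p:.2f}  df/dphi = {df_dphi[i]:.2e} Hz  d2f/dphi2 = {d2f_dphi2[i]:.2e} Hz")
    ```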
    The work was carried out in collaboration with scientists in Canada and Germany and has now been published in Physical Review Letters. The research was financially supported by the Austrian Science Fund FWF and the European Union, among others. Co-authors Christian Schneider and Lukas Deeg are or were members of the FWF Doctoral Program Atoms, Light and Molecules (DK-ALM).

  • Scientists rewrite an equation in FDA guidance to improve the accuracy of drug interaction prediction

    Drugs absorbed into the body are metabolized, and thus removed, by enzymes in organs such as the liver. How fast a drug is cleared from the system can be increased by other drugs that ramp up the body’s production of these metabolizing enzymes. This dramatically decreases the concentration of the drug, reducing its efficacy and often leaving it with no effect at all. Therefore, accurately predicting the clearance rate in the presence of drug-drug interactions* is critical when prescribing drugs and developing new ones.
    *Drug-drug interaction: In terms of metabolism, a drug-drug interaction is a phenomenon in which one drug changes the metabolism of another, promoting or inhibiting its removal from the body, when two or more drugs are taken together. As a result, it can increase the toxicity of a medicine or cause a loss of efficacy.
    Since it is practically impossible to evaluate all interactions between a new drug candidate and every marketed drug during development, the FDA recommends indirectly evaluating drug interactions using a formula given in its guidance, first published in 1997 and revised in January 2020, in order to minimize the side effects of taking more than one drug at a time.
    The formula relies on the 110-year-old Michaelis-Menten (MM) model, which has a fundamental limitation: it makes a very broad, unjustified assumption about the concentration of the enzymes that metabolize the drug. Although the MM equation is one of the most widely used equations in biochemistry, appearing in more than 220,000 published papers, it is accurate only when the concentration of the metabolizing enzyme is nearly zero. This makes the accuracy of the FDA formula highly unsatisfactory — only 38 percent of its predictions had less than two-fold errors.
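    For reference, the classical MM rate law and its validity condition, together with one widely used enzyme-concentration-robust alternative (the total quasi-steady-state approximation, shown purely to illustrate the kind of correction involved and not necessarily the exact equation the team derived), can be written as:

    ```latex
    % Classical Michaelis-Menten rate: accurate only when the total enzyme
    % concentration E_T is small compared with K_M + S_T.
    v_{\mathrm{MM}} = \frac{k_{\mathrm{cat}}\, E_T\, S}{K_M + S},
    \qquad \text{valid for } E_T \ll K_M + S_T

    % Total quasi-steady-state approximation (tQSSA): remains accurate even at
    % the high enzyme concentrations found in metabolizing tissues.
    v_{\mathrm{tQSSA}} = \frac{k_{\mathrm{cat}}}{2}\left[E_T + S_T + K_M
      - \sqrt{\left(E_T + S_T + K_M\right)^{2} - 4\,E_T S_T}\right]
    ```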
    “To make up for the gap, researchers resorted to plugging scientifically unjustified constants into the equation,” said Professor Jung-woo Chae of Chungnam National University College of Pharmacy. “This is comparable to introducing epicycles to explain the motion of the planets in order to rescue the now-defunct Ptolemaic model, simply because it was THE theory back then.”
    A joint research team composed of mathematicians from the Biomedical Mathematics Group within the Institute for Basic Science (IBS) and the Korea Advanced Institute of Science and Technology (KAIST) and pharmacological scientists from Chungnam National University reported that they identified the major causes of the FDA-recommended equation’s inaccuracies and presented a solution.
    When estimating the gut bioavailability (Fg), which is the key parameter of the equation, the fraction absorbed from the gut lumen (Fa) is usually assumed to be 1. However, many experiments have shown that Fa is less than 1, since an orally administered drug cannot be expected to be completely absorbed by the intestines. To solve this problem, the research team used an “estimated Fa” value based on factors such as the drug’s transit time, intestine radius and permeability, and used it to re-calculate Fg.
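    As an illustration of how an estimated Fa can be derived from such physiological quantities, the sketch below uses a common single-compartment intestinal absorption approximation, Fa ≈ 1 − exp(−2·Peff·Tsi/R); both the formula and the parameter values are illustrative assumptions, not necessarily those used in the study.

    ```python
    import math

    def estimated_fa(peff_cm_per_s, transit_time_h, radius_cm):
        """Estimate the fraction absorbed (Fa) from effective permeability,
        small-intestinal transit time and intestinal radius, using the common
        single-compartment approximation Fa = 1 - exp(-2 * Peff * Tsi / R).
        Illustrative textbook relation, not the study's exact model."""
        transit_time_s = transit_time_h * 3600.0
        return 1.0 - math.exp(-2.0 * peff_cm_per_s * transit_time_s / radius_cm)

    # Assumed values for a moderately permeable drug in a typical human gut:
    peff = 0.5e-4   # effective permeability in cm/s (assumed)
    tsi = 3.5       # small-intestinal transit time in hours (assumed)
    radius = 1.75   # intestinal radius in cm (assumed)

    print(f"Estimated Fa = {estimated_fa(peff, tsi, radius):.2f}")
    # Prints roughly 0.51, noticeably below the Fa = 1 usually assumed.
    ```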
    Also, taking a different approach from the MM equation, the team used an alternative model they had derived in a previous study in 2020, which more accurately predicts the drug metabolism rate regardless of the enzyme concentration. Combining these changes, the modified equation with the re-calculated Fg produced a dramatically more accurate estimate. The existing FDA formula predicted drug interactions within a 2-fold margin of error at a rate of 38%, whereas the accuracy rate of the revised formula reached 80%.
    “Such a drastic improvement in drug-drug interaction prediction accuracy is expected to make a great contribution to increasing the success rate of new drug development and drug efficacy in clinical trials. As the results of this study were published in one of the top clinical pharmacology journals, we expect the FDA guidance to be revised in line with them,” said Professor Sang Kyum Kim from Chungnam National University College of Pharmacy.
    Furthermore, this study highlights the importance of collaborative research between research groups in vastly different disciplines, in a field that is as dynamic as drug interactions.
    “Thanks to the collaborative research between mathematics and pharmacy, we were able to rectify a formula that had long been accepted as the right answer, and to take a real step toward healthier lives for humankind,” said Professor Jae Kyung Kim. He continued, “I hope to see a ‘K-formula’ enter the US FDA guidance one day.”

  • Deepfake challenges 'will only grow'

    Although most public attention surrounding deepfakes has focused on large propaganda campaigns, the problematic new technology is much more insidious, according to a new report by artificial intelligence (AI) and foreign policy experts at Northwestern University and the Brookings Institution.
    In the new report, the authors discuss deepfake videos, images and audio as well as their related security challenges. The researchers predict the technology is on the brink of being used much more widely, including in targeted military and intelligence operations.
    Ultimately, the experts make recommendations to security officials and policymakers for how to handle the unsettling new technology. Among their recommendations, the authors emphasize a need for the United States and its allies to develop a code of conduct for governments’ use of deepfakes.
    The research report, “Deepfakes and international conflict,” was published this month by Brookings.
    “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” the authors write. “Security officials and policymakers will need to prepare accordingly.”
    Northwestern co-authors include AI and security expert V.S. Subrahmanian, the Walter P. Murphy Professor of Computer Science at Northwestern’s McCormick School of Engineering and Buffett Faculty Fellow at the Buffett Institute of Global Affairs, and Chongyang Gao, a Ph.D. student in Subrahmanian’s lab. Brookings Institution co-authors include Daniel L. Byman and Chris Meserole.

    Deepfakes require ‘little difficulty’
    Subrahmanian, leader of the Northwestern Security and AI Lab, and his student Gao previously developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes), an algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within the context of security.
    Using TREAD, Subrahmanian and his team created sample deepfake videos of deceased Islamic State terrorist Abu Mohammed al-Adnani. While the resulting video looks and sounds like al-Adnani — with highly realistic facial expressions and audio — he is actually speaking words by Syrian President Bashar al-Assad.
    The researchers created the lifelike video within hours. The process was so straightforward that Subrahmanian and his coauthors said militaries and security agencies should simply assume that rivals are capable of generating deepfake videos of any official or leader within minutes.
    “Anyone with a reasonable background in machine learning can — with some systematic work and the right hardware — generate deepfake videos at scale by building models similar to TREAD,” the authors write. “The intelligence agencies of virtually any country, which certainly includes U.S. adversaries, can do so with little difficulty.”
    Avoiding ‘cat-and-mouse games’

    The authors believe that state and non-state actors will leverage deepfakes to strengthen ongoing disinformation efforts. Deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more. In the short-term, security and intelligence experts can counteract deepfakes by designing and training algorithms to identify potentially fake videos, images and audio. This approach, however, is unlikely to remain effective in the long term.
    “The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make ‘tweaks’ to evade the detector,” the authors said. “The detect-evade-detect-evade cycle plays out over time…Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale.”
    For long-term strategies, the report’s authors make several recommendations:
    • Educate the general public to increase digital literacy and critical reasoning.
    • Develop systems capable of tracking the movement of digital assets by documenting each person or organization that handles the asset.
    • Encourage journalists and intelligence analysts to slow down and verify information before including it in published articles. “Similarly, journalists might emulate intelligence products that discuss ‘confidence levels’ with regard to judgments.”
    • Use information from separate sources, such as verification codes, to confirm legitimacy of digital assets.
    Above all, the authors argue that the government should enact policies that offer robust oversight and accountability mechanisms for governing the generation and distribution of deepfake content. If the United States or its allies want to “fight fire with fire” by creating their own deepfakes, then policies first need to be agreed upon and put in place. The authors say this could include establishing a “Deepfakes Equities Process,” modeled after similar processes for cybersecurity.
    “The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs,” the authors write. “The use of deepfakes, particularly designed to attack high-value targets in conflict settings, will affect a wide range of government offices and agencies. Each stakeholder should have the opportunity to offer input, as needed and as appropriate. Establishing such a broad-based, deliberative process is the best route to ensuring that democratic governments use deepfakes responsibly.”
    Further information: https://www.brookings.edu/research/deepfakes-and-international-conflict/

  • Researchers gain deeper understanding of mechanism behind superconductors

    Physicists at Leipzig University have once again gained a deeper understanding of the mechanism behind superconductors. This brings the research group led by Professor Jürgen Haase one step closer to its goal of developing the foundations for a theory of superconductors, materials that carry current without resistance and thus without energy loss. The researchers found that in superconducting copper-oxygen compounds, known as cuprates, there must be a very specific charge distribution between the copper and the oxygen, even under pressure.
    This confirmed their own findings from 2016, when Haase and his team developed an experimental method based on magnetic resonance that can measure changes that are relevant to superconductivity in the structure of materials. They were the first team in the world to identify a measurable material parameter that predicts the maximum possible transition temperature — a condition required to achieve superconductivity at room temperature. Now they have discovered that cuprates, which under pressure enhance superconductivity, follow the charge distribution predicted in 2016. The researchers have published their new findings in the journal PNAS.
    “The fact that the transition temperature of cuprates can be enhanced under pressure has puzzled researchers for 30 years. But until now we didn’t know which mechanism was responsible for this,” Haase said. He and his colleagues at the Felix Bloch Institute for Solid State Physics have now come a great deal closer to understanding the actual mechanism in these materials. “At Leipzig University — with support from the Graduate School Building with Molecules and Nano-objects (BuildMoNa) — we have established the basic conditions needed to research cuprates using nuclear resonance, and Michael Jurkutat was the first doctoral researcher to join us. Together, we established the Leipzig Relation, which says that you have to take electrons away from the oxygen in these materials and give them to the copper in order to increase the transition temperature. You can do this with chemistry, but also with pressure. But hardly anyone would have thought that we could measure all of this with nuclear resonance,” Haase said.
    Their current research finding could be exactly what is needed to produce a superconductor at room temperature, which has been the dream of many physicists for decades and is now expected to take only a few more years, according to Haase. To date, this has only been possible at very low temperatures around minus 150 degrees Celsius and below, which are not easy to find anywhere on Earth. About a year ago, a Canadian research group verified the findings of Professor Haase’s team from 2016 using newly developed, computer-aided calculations and thus substantiated the findings theoretically.
    Superconductivity is already used today in a variety of ways, for example, in magnets for MRI machines and in nuclear fusion. But it would be much easier and less expensive if superconductors operated at room temperature. The phenomenon of superconductivity was discovered in metals as early as 1911, but even Albert Einstein did not attempt to come up with an explanation back then. Nearly half a century passed before BCS theory provided an understanding of superconductivity in metals in 1957. In 1986, the discovery of superconductivity in ceramic materials (cuprate superconductors) at much higher temperatures by physicists Georg Bednorz and Karl Alexander Müller raised new questions, but also raised hopes that superconductivity could be achieved at room temperature.

  • Researchers use AI to triage patients with chest pain

    Artificial intelligence (AI) may help improve care for patients who show up at the hospital with acute chest pain, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).
    “To the best of our knowledge, our deep learning AI model is the first to utilize chest X-rays to identify individuals among acute chest pain patients who need immediate medical attention,” said the study’s lead author, Márton Kolossváry, M.D., Ph.D., radiology research fellow at Massachusetts General Hospital (MGH) in Boston.
    Acute chest pain syndrome may consist of tightness, burning or other discomfort in the chest or a severe pain that spreads to your back, neck, shoulders, arms, or jaw. It may be accompanied by shortness of breath.
    Acute chest pain syndrome accounts for over 7 million emergency department visits annually in the United States, making it one of the most common complaints.
    Fewer than 8% of these patients are diagnosed with the three major cardiovascular causes of acute chest pain syndrome: acute coronary syndrome, pulmonary embolism or aortic dissection. However, the life-threatening nature of these conditions and the low specificity of clinical tests, such as electrocardiograms and blood tests, lead to substantial use of cardiovascular and pulmonary diagnostic imaging, often yielding negative results. As emergency departments struggle with high patient numbers and a shortage of hospital beds, effectively triaging patients at very low risk of these serious conditions is important.
    Deep learning is an advanced type of artificial intelligence (AI) that can be trained to search X-ray images to find patterns associated with disease.

    For the study, Dr. Kolossváry and colleagues developed an open-source deep learning model to identify patients with acute chest pain syndrome who were at risk for 30-day acute coronary syndrome, pulmonary embolism, aortic dissection or all-cause mortality, based on a chest X-ray.
    The study used electronic health records of patients presenting with acute chest pain syndrome who had a chest X-ray and additional cardiovascular or pulmonary imaging and/or stress tests at MGH or Brigham and Women’s Hospital in Boston between January 2005 and December 2015. For the study, 5,750 patients (mean age 59, including 3,329 men) were evaluated.
    The deep-learning model was trained on 23,005 patients from MGH to predict a 30-day composite endpoint of acute coronary syndrome, pulmonary embolism or aortic dissection and all-cause mortality based on chest X-ray images.
    The deep-learning tool significantly improved prediction of these adverse outcomes beyond age, sex and conventional clinical markers, such as d-dimer blood tests. The model maintained its diagnostic accuracy across age, sex, ethnicity and race. Using a 99% sensitivity threshold, the model was able to defer additional testing in 14% of patients as compared to 2% when using a model only incorporating age, sex, and biomarker data.
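    The deferral figures reflect a general triage recipe: choose the score threshold that keeps sensitivity for the composite endpoint at 99% on held-out data, then consider deferring further testing for patients who score below it. Below is a minimal sketch of that logic with synthetic scores and labels; it is not the study's model, data or code.

    ```python
    import numpy as np

    def threshold_at_sensitivity(scores, labels, target_sensitivity=0.99):
        """Return the largest score threshold whose sensitivity (recall for the
        positive class) is at least target_sensitivity."""
        positives = np.sort(scores[labels == 1])
        # Allow at most (1 - target) of positives to fall below the threshold.
        k = int(np.floor((1.0 - target_sensitivity) * len(positives)))
        return positives[k]  # k-th smallest positive score

    rng = np.random.default_rng(0)
    # Synthetic risk scores: positives (adverse 30-day outcome) score higher on average.
    neg_scores = rng.beta(2, 5, size=9000)
    pos_scores = rng.beta(5, 2, size=1000)
    scores = np.concatenate([neg_scores, pos_scores])
    labels = np.concatenate([np.zeros(9000, dtype=int), np.ones(1000, dtype=int)])

    thr = threshold_at_sensitivity(scores, labels, 0.99)
    deferred = scores < thr          # patients who could be spared further testing
    sens = (scores[labels == 1] >= thr).mean()
    print(f"threshold = {thr:.3f}, sensitivity = {sens:.3f}, "
          f"deferral rate = {deferred.mean():.1%}")
    ```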
    “Analyzing the initial chest X-ray of these patients using our automated deep learning model, we were able to provide more accurate predictions regarding patient outcomes as compared to a model that uses age, sex, troponin or d-dimer information,” Dr. Kolossváry said. “Our results show that chest X-rays could be used to help triage chest pain patients in the emergency department.”
    According to Dr. Kolossváry, in the future such an automated model could analyze chest X-rays in the background, helping to select patients who would benefit most from immediate medical attention and to identify those who can be safely discharged from the emergency department.
    The study is titled “Deep Learning Analysis of Chest Radiographs to Triage Patients with Acute Chest Pain Syndrome.” Collaborating with Dr. Kolossváry were Vineet K. Raghu, Ph.D., John T. Nagurney, M.D., Udo Hoffmann, M.D., M.P.H., and Michael T. Lu, M.D., M.P.H.

  • Novel framework provides 'measuring stick' for assessing patient matching tools

    Accurate linking of an individual’s medical records from disparate sources within and between health systems, known as patient matching, plays a critical role in patient safety and quality of care, but has proven difficult to accomplish in the United States, the last developed country without a unique patient identifier. In the U.S., linking patient data is dependent on algorithms designed by researchers, vendors and others. Research scientists led by Regenstrief Institute Vice President for Data and Analytics Shaun Grannis, M.D., M.S., have developed an eight-point framework for evaluating the validity and performance of algorithms to match medical records to the correct patient.
    “The value of data standardization is well recognized. There are national healthcare provider IDs. There are facility IDs and object identifiers. There are billing codes. There are standard vocabularies for healthcare lab test results and medical observations — such as LOINC® here at Regenstrief. Patient identity is the last gaping hole in our health infrastructure,” said Dr. Grannis. “We are providing a framework to evaluate patient matching algorithms for accuracy.
    “We recognize that the need for patient matching is not going away and that we need standardized methods to uniquely identify patients,” said Dr. Grannis. “Current patient matching algorithms come in many different flavors, shapes and sizes. To be able to compare how one performs against the other, or even to understand how they might interact together, we have to have a standard way of assessment. We have produced a novel, robust framework for consistent and reproducible evaluation. Simply put, the framework we’ve developed at Regenstrief provides a ‘measuring stick’ for the effectiveness of patient matching tools.”
    Individuals increasingly receive care from multiple sources. While patient matching is complex, it is crucial to health information exchange. Is the William Jones seen at one healthcare system the same person as the William, Will or Willy Jones or perhaps Bill or Billy Jones receiving care at other facilities? Does Elizabeth Smith’s name appear at different medical offices or perhaps at a physical therapy or a dialysis facility as Liz or Beth? To which Juan J. Gomez do various lab test results belong? Typos, missing information and other data errors as well as typical variations add to the complexity.
    The framework’s eight-point approach to the creation of gold standard matching data sets necessary for record linkage encompasses technical areas including data preprocessing, blocking, record adjudication, linkage evaluation and reviewer characteristics. The authors note that the framework “can help record linkage method developers provide necessary transparency when creating and validating gold standard reference matching data sets. In turn, this transparency will support both the internal and external validity of recording linkage studies and improve the robustness of new record linkage strategies.”
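    To make two of those technical areas, blocking and linkage evaluation, concrete, here is a toy sketch that blocks candidate record pairs on shared fields and scores the resulting links against a hand-labeled gold standard; the field names, records and matching rule are hypothetical and are not part of the Regenstrief framework itself.

    ```python
    from itertools import combinations

    # Toy patient records from two sources; fields and values are hypothetical.
    records = [
        {"id": "A1", "first": "william", "last": "jones", "dob": "1980-03-14"},
        {"id": "B7", "first": "bill",    "last": "jones", "dob": "1980-03-14"},
        {"id": "A2", "first": "liz",     "last": "smith", "dob": "1975-11-02"},
        {"id": "B3", "first": "beth",    "last": "smith", "dob": "1975-11-02"},
        {"id": "B9", "first": "juan",    "last": "gomez", "dob": "1990-07-21"},
    ]

    # Gold-standard matches, as a record-linkage study would adjudicate them.
    gold = {("A1", "B7"), ("A2", "B3")}

    # Blocking: only compare records that share last name and date of birth,
    # which keeps the number of candidate pairs manageable.
    def block_key(r):
        return (r["last"], r["dob"])

    candidates = [(a["id"], b["id"])
                  for a, b in combinations(records, 2)
                  if block_key(a) == block_key(b)]

    # Naive matching rule inside each block: treat every blocked pair as a match.
    # (A real algorithm would also compare names, addresses, identifiers, etc.)
    predicted = set(candidates)

    # Linkage evaluation against the gold standard.
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    print(f"candidate pairs: {predicted}")
    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
    ```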
    Measures and standards are ubiquitous. “When you go to a gas station pump, the measure of how much gas goes through is standardized so that we know exactly how much is flowing. Similarly, we need to have a common way of measuring and understanding how algorithms for patient matching work,” said Dr. Grannis. “Our eight-pronged approach helps to cover the waterfront of what needs to be evaluated. Laying out the framework and specifying the tasks and activities that need to be completed goes a long way toward standardizing patient matching.”
    In addition to playing a critical role in patient safety and quality of care, improved patient matching accuracy supports more cost-effective healthcare delivery in a variety of ways including reduction in the number of duplicate medical tests.

  • New small laser device can help detect signs of life on other planets

    As space missions delve deeper into the outer solar system, the need for more compact, resource-conserving and accurate analytical tools has become increasingly critical — especially as the hunt for extraterrestrial life and habitable planets or moons continues.
    A University of Maryland-led team developed a new instrument specifically tailored to the needs of NASA space missions. Their mini laser-sourced analyzer is significantly smaller and more resource-efficient than its predecessors — all without compromising its ability to analyze planetary material samples and potential biological activity onsite. The team’s paper on the new device was published in the journal Nature Astronomy on January 16, 2023.
    Weighing only about 17 pounds, the instrument is a physically scaled-down combination of two important tools for detecting signs of life and identifying compositions of materials: a pulsed ultraviolet laser that removes small amounts of material from a planetary sample and an Orbitrap™ analyzer that delivers high-resolution data about the chemistry of the examined materials.
    “The Orbitrap was originally built for commercial use,” explained Ricardo Arevalo, lead author of the paper and an associate professor of geology at UMD. “You can find them in the labs of pharmaceutical, medical and proteomic industries. The one in my own lab is just under 400 pounds, so they’re quite large, and it took us eight years to make a prototype that could be used efficiently in space — significantly smaller and less resource-intensive, but still capable of cutting-edge science.”
    The team’s new gadget shrinks down the original Orbitrap while pairing it with laser desorption mass spectrometry (LDMS) — techniques that have yet to be applied in an extraterrestrial planetary environment. The new device boasts the same benefits as its larger predecessors but is streamlined for space exploration and onsite planetary material analysis, according to Arevalo.
    Thanks to its diminutive mass and minimal power requirements, the mini Orbitrap LDMS instrument can be easily stowed away and maintained on space mission payloads. The instrument’s analyses of a planetary surface or substance are also far less intrusive and thus much less likely to contaminate or damage a sample than many current methods that attempt to identify unknown compounds.

    “The good thing about a laser source is that anything that can be ionized can be analyzed. If we shoot our laser beam at an ice sample, we should be able to characterize the composition of the ice and see biosignatures in it,” Arevalo said. “This tool has such a high mass resolution and accuracy that any molecular or chemical structures in a sample become much more identifiable.”
    The laser component of the mini LDMS Orbitrap also allows researchers access to larger, more complex compounds that are more likely to be associated with biology. Smaller organic compounds like amino acids, for example, are more ambiguous signatures of life forms.
    “Amino acids can be produced abiotically, meaning that they’re not necessarily proof of life. Meteorites, many of which are chock full of amino acids, can crash onto a planet’s surface and deliver abiotic organics to the surface,” Arevalo said. “We know now that larger and more complex molecules, like proteins, are more likely to have been created by or associated with living systems. The laser lets us study larger and more complex organics that can reflect higher fidelity biosignatures than smaller, simpler compounds.”
    For Arevalo and his team, the mini LDMS Orbitrap will offer much-needed insight and flexibility for future ventures into the outer solar system, such as missions focused on life detection objectives (e.g., Enceladus Orbilander) and exploration of the lunar surface (e.g., the NASA Artemis Program). They hope to send their device into space and deploy it on a planetary target of interest within the next few years.
    “I view this prototype as a pathfinder for other future LDMS and Orbitrap-based instruments,” Arevalo said. “Our mini Orbitrap LDMS instrument has the potential to significantly enhance the way we currently study the geochemistry or astrobiology of a planetary surface.”
    Other UMD-affiliated researchers on the team include geology graduate students Lori Willhite and Ziqin “Grace” Ni, geology postdoctoral associates Anais Bardyn and Soumya Ray, and astronomy visiting associate research engineer Adrian Southard.
    This study was supported by NASA (Award Nos. 80NSSC19K0610, 80NSSC19K0768, 80GSFC21M0002), NASA Goddard Space Flight Center Internal Research Development (IRAD), and the University of Maryland Faculty Incentive Program.

  • Blocking radio waves and electromagnetic interference with the flip of a switch

    Researchers in Drexel University’s College of Engineering have developed a thin film device, fabricated by spray coating, that can block electromagnetic radiation with the flip of a switch. The breakthrough, enabled by versatile two-dimensional materials called MXenes, could adjust the performance of electronic devices, strengthen wireless connections and secure mobile communications against intrusion.
    The team, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, previously demonstrated that the two-dimensional layered MXene materials, discovered just over a decade ago, when combined with an electrolyte solution, can be turned into a potent active shield against electromagnetic waves. This latest MXene discovery, reported in Nature Nanotechnology, shows how this shielding can be tuned when a small voltage — less than that produced by an alkaline battery — is applied.
    “Dynamic control of electromagnetic wave jamming has been a significant technological challenge for protecting electronic devices working at gigahertz frequencies and a variety of other communications technologies,” Gogotsi said. “As the number of wireless devices being used in industrial and private sectors has increased by orders of magnitude over the past decade, the urgency of this challenge has grown accordingly. This is why our discovery — which would dynamically mitigate the effect of electromagnetic interference on these devices — could have a broad impact.”
    MXene is a unique material in that it is highly conductive — making it perfectly suited for reflecting microwave radiation that could cause static, feedback or diminish the performance of communications devices — but its internal chemical structure can also be temporarily altered to allow these electromagnetic waves to pass through.
    This means that a thin coating on a device or electrical component prevents it both from emitting electromagnetic waves and from being penetrated by those emitted by other electronics. Eliminating the possibility of interference from both internal and external sources can ensure the performance of the device, but some waves must be allowed to exit and enter when the device is being used for communication.
    “Without being able to control the ebb and flow of electromagnetic waves within and around a device, it’s a bit like a leaky faucet — you’re not really turning off the water and that constant dripping is no good,” Gogotsi said. “Our shielding ensures the plumbing is tight — so-to-speak — no electromagnetic radiation is leaking out or getting in until we want to use the device.”
    The key to eliciting bidirectional tunability of MXene’s shielding property is using the flow and expulsion of ions to alternately expand and compress the space between the material’s layers, like an accordion, as well as to change the surface chemistry of MXenes.

    With a small voltage applied to the film, ions enter — or intercalate — between the MXene layers, altering the charge of their surfaces and inducing electrostatic attraction, which serves to change the layer spacing, the conductivity and the shielding efficiency of the material. When the ions are deintercalated, as the current is switched off, the MXene layers return to their original state.
    The team tested 10 different MXene-electrolyte combinations, applying each via paint sprayer in a layer about 30 to 100 times thinner than a human hair. The materials consistently demonstrated the dynamic tunability of shielding efficiency in blocking microwave radiation, which is impossible for traditional metals like copper and steel. And the device sustained the performance through more than 500 charge-discharge cycles.
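    To give a rough sense of why a voltage-driven change in conductivity translates into a large change in shielding, the sketch below applies a textbook approximation for an electrically thin conductive film, SE ≈ 20·log10(1 + Z0·σ·t/2); the conductivity values and thickness are assumptions for illustration, and this is not the model reported in the paper.

    ```python
    import math

    Z0 = 376.73  # impedance of free space in ohms

    def shielding_effectiveness_db(sigma_S_per_m, thickness_m):
        """Shielding effectiveness of an electrically thin conductive film,
        using the common approximation SE = 20*log10(1 + Z0*sigma*t/2).
        Illustrative only; not the model used in the Drexel study."""
        return 20.0 * math.log10(1.0 + Z0 * sigma_S_per_m * thickness_m / 2.0)

    t = 2e-6  # assumed film thickness of 2 micrometers (tens of times thinner than a hair)
    # Assumed conductivities for the film in its two electrochemical states.
    for label, sigma in [("high-conductivity (shielding on)", 5.0e5),
                         ("reduced-conductivity (shielding off)", 5.0e3)]:
        print(f"{label}: SE of about {shielding_effectiveness_db(sigma, t):.1f} dB")
    ```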
    “These results indicate that the MXene films can convert from electromagnetic interference shielding to quasi-electromagnetic wave transmission by electrochemical oxidation of MXenes,” Gogotsi and his co-authors wrote. “The MXene film can potentially serve as a dynamic EMI shielding switch.”
    For security applications, Gogotsi suggests that the MXene shielding could hide devices from detection by radar or other tracing systems. The team also tested the potential of a one-way shielding switch. This would allow a device to remain undetectable and protected from unauthorized access until it is deployed for use.
    “A one-way switch could open the protection and allow a signal to be sent or communication to be opened in an emergency or at the required moment,” Gogotsi said. “This means it could protect communications equipment from being influenced or tampered with until it is in use. For example, it could encase the device during transportation or storage and then activate only when it is ready to be used.”
    The next step for Gogotsi’s team is to explore additional MXene-electrolyte combinations and mechanisms to fine-tune the shielding to achieve a stronger modulation of electromagnetic wave transmission and dynamic adjustment to block radiation at a variety of bandwidths.