More stories

  • Blast chiller for the quantum world

    The quantum nature of objects visible to the naked eye is currently a much-discussed research question. A team led by Innsbruck physicist Gerhard Kirchmair has now demonstrated a new method in the laboratory that could make the quantum properties of macroscopic objects more accessible than before. With this method, the researchers were able to increase the efficiency of an established cooling technique by an order of magnitude.
    With optomechanical experiments, scientists are trying to explore the limits of the quantum world and to create a foundation for the development of highly sensitive quantum sensors. In these experiments, objects visible to the naked eye are coupled to superconducting circuits via electromagnetic fields. To keep the superconductors functioning, such experiments take place in cryostats at a temperature of about 100 millikelvin. But this is still far from sufficient to really dive into the quantum world. In order to observe quantum effects on macroscopic objects, they must be cooled to nearly absolute zero using sophisticated cooling methods. Physicists led by Gerhard Kirchmair from the Department of Experimental Physics at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) have now demonstrated a nonlinear cooling mechanism with which even massive objects can be cooled efficiently.
    Cooling capacity higher than usual
    In the experiment, the Innsbruck researchers couple the mechanical object — in their case a vibrating beam — to the superconducting circuit via a magnetic field. To do this, they attached a magnet to the beam, which is about 100 micrometers long. When the magnet moves, it changes the magnetic flux through the circuit, the heart of which is a so-called SQUID, a superconducting quantum interference device. Its resonant frequency changes depending on the magnetic flux, which is measured using microwave signals. In this way, the micromechanical oscillator can be cooled to near the quantum mechanical ground state. As David Zöpfl from Gerhard Kirchmair’s team explains, “The change in the resonant frequency of the SQUID circuit as a function of microwave power is not linear. As a consequence, we can cool the massive object by an order of magnitude more for the same power.” This new, simple method is particularly interesting for cooling more massive mechanical objects. Zöpfl and Kirchmair are confident that this could be the foundation for the search for quantum properties in larger macroscopic objects.
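    For readers who want the textbook picture behind this flux-to-frequency transduction, the relations below describe a generic symmetric DC SQUID terminating a microwave resonator; they are standard circuit results, not the specific parameters of the Innsbruck device. The SQUID’s critical current, and hence its Josephson inductance, depends periodically on the flux Φ through its loop, which in turn shifts the circuit’s resonance:

    $$ I_c(\Phi) = 2 I_0 \left|\cos\!\left(\frac{\pi\Phi}{\Phi_0}\right)\right|, \qquad L_J(\Phi) = \frac{\Phi_0}{2\pi I_c(\Phi)}, \qquad \omega_r(\Phi) \approx \frac{1}{\sqrt{\bigl(L + L_J(\Phi)\bigr)\,C}} $$

    Here Φ0 is the magnetic flux quantum and L and C are the resonator’s linear inductance and capacitance. Because ω_r depends nonlinearly on Φ, the moving magnet modulates the microwave resonance, and it is this transduction that the cooling scheme reads out and exploits.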
    The work was carried out in collaboration with scientists in Canada and Germany and has now been published in Physical Review Letters. The research was financially supported by the Austrian Science Fund FWF and the European Union, among others. Co-authors Christian Schneider and Lukas Deeg are or were members of the FWF Doctoral Program Atoms, Light and Molecules (DK-ALM).

  • Scientists rewrite an equation in FDA guidance to improve the accuracy of drug interaction predictions

    Drugs absorbed into the body are metabolized, and thus removed, by enzymes in organs such as the liver. How fast a drug is cleared from the system can be increased by other drugs that increase the production of these enzymes in the body. This dramatically decreases the concentration of the drug, reducing its efficacy and often leaving it with no effect at all. Therefore, accurately predicting the clearance rate in the presence of drug-drug interactions* is critical both when prescribing drugs and when developing new ones.
    *Drug-drug interaction: in the context of metabolism, a phenomenon in which one drug changes the metabolism of another, promoting or inhibiting its removal from the body, when two or more drugs are taken together. As a result, a drug’s toxicity can increase or its efficacy can be lost.
    Since it is practically impossible to evaluate all interactions between new drug candidates and all marketed drugs during the development process, the FDA recommends evaluating drug interactions indirectly using a formula given in its guidance, first published in 1997 and revised in January 2020, in order to minimize the side effects of taking more than one drug at once.
    The formula relies on the 110-year-old Michaelis-Menten (MM) model, which rests on a very broad and often unjustified assumption about the enzymes that metabolize the drug. While the MM equation is one of the most widely known equations in biochemistry, used in more than 220,000 published papers, it is accurate only when the concentration of the metabolizing enzyme is nearly negligible. This makes the accuracy of the FDA formula highly unsatisfactory: only 38 percent of its predictions had less than two-fold errors.
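    For reference, the Michaelis-Menten rate law at issue (standard biochemistry, not a formula specific to the FDA guidance) gives the metabolism rate v as a function of the drug concentration [S]:

    $$ v = \frac{V_{\max}\,[S]}{K_M + [S]}, \qquad V_{\max} = k_{\mathrm{cat}}\,[E]_T $$

    Its derivation assumes that binding barely depletes the free enzyme, which holds only when the total enzyme concentration [E]_T is much smaller than [S] + K_M. When metabolizing-enzyme levels are appreciable, as they can be in the liver and gut, that assumption fails, which is the limitation described above.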
    “To make up for the gap, researchers resorted to plugging scientifically unjustified constants into the equation,” said Professor Jung-woo Chae of Chungnam National University College of Pharmacy. “This is comparable to introducing epicyclic orbits to explain the motion of the planets back when the now-defunct Ptolemaic model was THE accepted theory.”
    A joint research team composed of mathematicians from the Biomedical Mathematics Group within the Institute for Basic Science (IBS) and the Korea Advanced Institute of Science and Technology (KAIST) and pharmacological scientists from Chungnam National University reported that they identified the major causes of the FDA-recommended equation’s inaccuracies and presented a solution.
    When estimating the gut bioavailability (Fg), a key parameter of the equation, the fraction absorbed from the gut lumen (Fa) is usually assumed to be 1. However, many experiments have shown that Fa is less than 1, since not all of an orally administered drug can be expected to be completely absorbed in the intestine. To solve this problem, the research team used an “estimated Fa” value based on factors such as the drug’s intestinal transit time, the intestinal radius, and permeability values, and used it to re-calculate Fg.
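    A commonly used approximation for the fraction absorbed, built from exactly these quantities, is the single-compartment intestinal absorption model; it is shown here only to illustrate the kind of estimate involved, and the paper’s exact formulation may differ:

    $$ F_a \approx 1 - \exp\!\left(-\frac{2\,P_{\mathrm{eff}}\,T_{\mathrm{si}}}{R}\right) $$

    where P_eff is the effective intestinal permeability, T_si the small-intestinal transit time and R the intestinal radius. For poorly permeable drugs this gives F_a values well below 1, which is why assuming F_a = 1 distorts the estimate of the gut bioavailability Fg.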
    Also, taking a different approach from the MM equation, the team used an alternative model they derived in a previous study back in 2020, which can accurately predict the drug metabolism rate regardless of the enzyme concentration. Combining these changes, the modified equation with the re-calculated Fg dramatically increased the accuracy of the resulting estimates. The existing FDA formula predicted drug interactions within a two-fold margin of error only 38% of the time, whereas the accuracy rate of the revised formula reached 80%.
    “Such a drastic improvement in drug-drug interaction prediction accuracy is expected to make a great contribution to increasing the success rate of new drug development and drug efficacy in clinical trials. As the results of this study were published in one of the top clinical pharmacology journals, it is expected that the FDA guidance will be revised accordingly,” said Professor Sang Kyum Kim from Chungnam National University College of Pharmacy.
    Furthermore, this study highlights the importance of collaborative research between research groups in vastly different disciplines, in a field that is as dynamic as drug interactions.
    “Thanks to the collaborative research between mathematics and pharmacy, we were able to rectify a formula that had been accepted as the right answer for so long, and take a step toward healthier lives for mankind,” said Professor Jae Kyung Kim. He continued, “I hope to see a ‘K-formula’ entered into the US FDA guidance one day.”

  • Want a ‘Shrinky Dinks’ approach to nano-sized devices? Try hydrogels

    High-tech shrink art may be the key to making tiny electronics, 3-D nanostructures or even holograms for hiding secret messages.

    A new approach to making tiny structures relies on shrinking them down after building them, rather than making them small to begin with, researchers report in the Dec. 23 Science.

    The key is spongelike hydrogel materials that expand or contract in response to surrounding chemicals (SN: 1/20/10). By inscribing patterns in hydrogels with a laser and then shrinking the gels down to about one-thirteenth their original size, the researchers created patterns with details as small as 25 billionths of a meter across.
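    The arithmetic behind that figure is simple; the numbers below are an illustrative back-of-the-envelope estimate rather than values quoted from the paper. A laser-written feature a few hundred nanometers wide, shrunk linearly by a factor of about 13, ends up at roughly

    $$ \frac{\sim 325\ \text{nm}}{13} \approx 25\ \text{nm}, $$

    and because the shrinkage happens in all three dimensions, the gel’s volume contracts by roughly 13^3, on the order of 2,000-fold.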

    At that level of precision, the researchers could create letters small enough to easily write this entire article along the circumference of a typical human hair.

    Biological scientist Yongxin Zhao and colleagues deposited a variety of materials in the patterns to create nanoscopic images of Chinese zodiac animals. By shrinking the hydrogels after laser etching, several of the images ended up roughly the size of a red blood cell. They included a monkey made of silver, a gold-silver alloy pig, a titanium dioxide snake, an iron oxide dog and a rabbit made of luminescent nanoparticles.

    These two dragons, each roughly 40 micrometers long, were made by depositing cadmium selenide quantum dots onto a laser-etched hydrogel. The red stripes on the left dragon are each just 200 nanometers thick. (Image: The Chinese University of Hong Kong, Carnegie Mellon University)

    Because the hydrogels can be repeatedly shrunk and expanded with chemical baths, the researchers were also able to create holograms in layers inside a chunk of hydrogel to encode secret information. Shrinking a hydrogel hologram makes it unreadable. “If you want to read it, you have to expand the sample,” says Zhao, of Carnegie Mellon University in Pittsburgh. “But you need to expand it to exactly the same extent” as the original. In effect, knowing how much to expand the hydrogel serves as a key to unlock the information hidden inside.  

    But the most exciting aspect of the research, Zhao says, is the wide range of materials that researchers can use on such minute scales. “We will be able to combine different types of materials together and make truly functional nanodevices.”

  • Deepfake challenges 'will only grow'

    Although most public attention surrounding deepfakes has focused on large propaganda campaigns, the problematic new technology is much more insidious, according to a new report by artificial intelligence (AI) and foreign policy experts at Northwestern University and the Brookings Institution.
    In the new report, the authors discuss deepfake videos, images and audio as well as their related security challenges. The researchers predict the technology is on the brink of being used much more widely, including in targeted military and intelligence operations.
    Ultimately, the experts make recommendations to security officials and policymakers for how to handle the unsettling new technology. Among their recommendations, the authors emphasize a need for the United States and its allies to develop a code of conduct for governments’ use of deepfakes.
    The research report, “Deepfakes and international conflict,” was published this month by Brookings.
    “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” the authors write. “Security officials and policymakers will need to prepare accordingly.”
    Northwestern co-authors include AI and security expert V.S. Subrahmanian, the Walter P. Murphy Professor of Computer Science at Northwestern’s McCormick School of Engineering and Buffett Faculty Fellow at the Buffett Institute of Global Affairs, and Chongyang Gao, a Ph.D. student in Subrahmanian’s lab. Brookings Institution co-authors include Daniel L. Byman and Chris Meserole.

    Deepfakes require ‘little difficulty’
    Subrahmanian, who leads the Northwestern Security and AI Lab, and his student Gao previously developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes), an algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within the context of security.
    Using TREAD, Subrahmanian and his team created sample deepfake videos of deceased Islamic State terrorist Abu Mohammed al-Adnani. While the resulting video looks and sounds like al-Adnani — with highly realistic facial expressions and audio — he is actually speaking words by Syrian President Bashar al-Assad.
    The researchers created the lifelike video within hours. The process was so straightforward that Subrahmanian and his coauthors said militaries and security agencies should simply assume that rivals are capable of generating deepfake videos of any official or leader within minutes.
    “Anyone with a reasonable background in machine learning can — with some systematic work and the right hardware — generate deepfake videos at scale by building models similar to TREAD,” the authors write. “The intelligence agencies of virtually any country, which certainly includes U.S. adversaries, can do so with little difficulty.”
    Avoiding ‘cat-and-mouse games’

    The authors believe that state and non-state actors will leverage deepfakes to strengthen ongoing disinformation efforts. Deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more. In the short-term, security and intelligence experts can counteract deepfakes by designing and training algorithms to identify potentially fake videos, images and audio. This approach, however, is unlikely to remain effective in the long term.
    “The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make ‘tweaks’ to evade the detector,” the authors said. “The detect-evade-detect-evade cycle plays out over time…Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale.”
    For long-term strategies, the report’s authors make several recommendations:
      • Educate the general public to increase digital literacy and critical reasoning.
      • Develop systems capable of tracking the movement of digital assets by documenting each person or organization that handles the asset.
      • Encourage journalists and intelligence analysts to slow down and verify information before including it in published articles. “Similarly, journalists might emulate intelligence products that discuss ‘confidence levels’ with regard to judgments.”
      • Use information from separate sources, such as verification codes, to confirm the legitimacy of digital assets.
    Above all, the authors argue that the government should enact policies that offer robust oversight and accountability mechanisms for governing the generation and distribution of deepfake content. If the United States or its allies want to “fight fire with fire” by creating their own deepfakes, then policies first need to be agreed upon and put in place. The authors say this could include establishing a “Deepfakes Equities Process,” modeled after similar processes for cybersecurity.
    “The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs,” the authors write. “The use of deepfakes, particularly designed to attack high-value targets in conflict settings, will affect a wide range of government offices and agencies. Each stakeholder should have the opportunity to offer input, as needed and as appropriate. Establishing such a broad-based, deliberative process is the best route to ensuring that democratic governments use deepfakes responsibly.”
    Further information: https://www.brookings.edu/research/deepfakes-and-international-conflict/

  • Researchers gain deeper understanding of mechanism behind superconductors

    Physicists at Leipzig University have once again gained a deeper understanding of the mechanism behind superconductors. This brings the research group led by Professor Jürgen Haase one step closer to their goal of developing the foundations for a theory of superconductors, which allow current to flow without resistance and without energy loss. The researchers found that in superconducting copper-oxygen compounds, known as cuprates, there must be a very specific charge distribution between the copper and the oxygen, even under pressure.
    This confirmed their own findings from 2016, when Haase and his team developed an experimental method based on magnetic resonance that can measure changes that are relevant to superconductivity in the structure of materials. They were the first team in the world to identify a measurable material parameter that predicts the maximum possible transition temperature — a condition required to achieve superconductivity at room temperature. Now they have discovered that cuprates, which under pressure enhance superconductivity, follow the charge distribution predicted in 2016. The researchers have published their new findings in the journal PNAS.
    “The fact that the transition temperature of cuprates can be enhanced under pressure has puzzled researchers for 30 years. But until now we didn’t know which mechanism was responsible for this,” Haase said. He and his colleagues at the Felix Bloch Institute for Solid State Physics have now come a great deal closer to understanding the actual mechanism in these materials. “At Leipzig University — with support from the Graduate School Building with Molecules and Nano-objects (BuildMoNa) — we have established the basic conditions needed to research cuprates using nuclear resonance, and Michael Jurkutat was the first doctoral researcher to join us. Together, we established the Leipzig Relation, which says that you have to take electrons away from the oxygen in these materials and give them to the copper in order to increase the transition temperature. You can do this with chemistry, but also with pressure. But hardly anyone would have thought that we could measure all of this with nuclear resonance,” Haase said.
    Their current research finding could be exactly what is needed to produce a superconductor at room temperature, which has been the dream of many physicists for decades and is now expected to take only a few more years, according to Haase. To date, this has only been possible at very low temperatures around minus 150 degrees Celsius and below, which are not easy to find anywhere on Earth. About a year ago, a Canadian research group verified the findings of Professor Haase’s team from 2016 using newly developed, computer-aided calculations and thus substantiated the findings theoretically.
    Superconductivity is already used today in a variety of ways, for example, in magnets for MRI machines and in nuclear fusion. But it would be much easier and less expensive if superconductors operated at room temperature. The phenomenon of superconductivity was discovered in metals as early as 1911, but even Albert Einstein did not attempt to come up with an explanation back then. Nearly half a century passed before BCS theory provided an understanding of superconductivity in metals in 1957. In 1986, the discovery of superconductivity in ceramic materials (cuprate superconductors) at much higher temperatures by physicists Georg Bednorz and Karl Alexander Müller raised new questions, but also raised hopes that superconductivity could be achieved at room temperature.

  • Researchers use AI to triage patients with chest pain

    Artificial intelligence (AI) may help improve care for patients who show up at the hospital with acute chest pain, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).
    “To the best of our knowledge, our deep learning AI model is the first to utilize chest X-rays to identify individuals among acute chest pain patients who need immediate medical attention,” said the study’s lead author, Márton Kolossváry, M.D., Ph.D., radiology research fellow at Massachusetts General Hospital (MGH) in Boston.
    Acute chest pain syndrome may consist of tightness, burning or other discomfort in the chest or a severe pain that spreads to your back, neck, shoulders, arms, or jaw. It may be accompanied by shortness of breath.
    Acute chest pain syndrome accounts for over 7 million emergency department visits annually in the United States, making it one of the most common complaints.
    Fewer than 8% of these patients are diagnosed with the three major cardiovascular causes of acute chest pain syndrome, which are acute coronary syndrome, pulmonary embolism or aortic dissection. However, the life-threatening nature of these conditions and low specificity of clinical tests, such as electrocardiograms and blood tests, lead to substantial use of cardiovascular and pulmonary diagnostic imaging, often yielding negative results. As emergency departments struggle with high patient numbers and shortage of hospital beds, effectively triaging patients at very low risk of these serious conditions is important.
    Deep learning is an advanced type of artificial intelligence (AI) that can be trained to search X-ray images to find patterns associated with disease.

    For the study, Dr. Kolossváry and colleagues developed an open-source deep learning model to identify patients with acute chest pain syndrome who were at risk for 30-day acute coronary syndrome, pulmonary embolism, aortic dissection or all-cause mortality, based on a chest X-ray.
    The study used electronic health records of patients presenting with acute chest pain syndrome who had a chest X-ray and additional cardiovascular or pulmonary imaging and/or stress tests at MGH or Brigham and Women’s Hospital in Boston between January 2005 and December 2015. For the study, 5,750 patients (mean age 59, including 3,329 men) were evaluated.
    The deep-learning model was trained on 23,005 patients from MGH to predict a 30-day composite endpoint of acute coronary syndrome, pulmonary embolism or aortic dissection and all-cause mortality based on chest X-ray images.
    The deep-learning tool significantly improved prediction of these adverse outcomes beyond age, sex and conventional clinical markers, such as d-dimer blood tests. The model maintained its diagnostic accuracy across age, sex, ethnicity and race. Using a 99% sensitivity threshold, the model was able to defer additional testing in 14% of patients as compared to 2% when using a model only incorporating age, sex, and biomarker data.
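    To make the idea of a “99% sensitivity threshold” concrete, here is a minimal sketch, in Python, of how such a deferral cutoff is typically chosen from a model’s risk scores on a validation set. The data, variable names and numbers here are hypothetical; this is not the authors’ model or code.

    ```python
    import numpy as np

    def deferral_threshold(scores, labels, target_sensitivity=0.99):
        """Choose a cutoff such that at least target_sensitivity of the
        true adverse outcomes score at or above it."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        pos_scores = np.sort(scores[labels == 1])
        # Allow at most (1 - sensitivity) of positives to fall below the cutoff.
        k = int(np.floor((1.0 - target_sensitivity) * len(pos_scores)))
        return pos_scores[k]

    # Hypothetical validation data: model risk scores and true 30-day outcomes.
    rng = np.random.default_rng(0)
    labels = rng.binomial(1, 0.08, size=5000)                 # ~8% adverse outcomes
    scores = np.clip(rng.normal(0.2 + 0.4 * labels, 0.15), 0.0, 1.0)

    thr = deferral_threshold(scores, labels)
    print(f"cutoff = {thr:.3f}, patients deferred = {np.mean(scores < thr):.1%}")
    ```

    The reported 14% versus 2% deferral rates then simply compare what fraction of patients fall below such a cutoff for the deep learning model versus a model that uses only age, sex and biomarkers.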
    “Analyzing the initial chest X-ray of these patients using our automated deep learning model, we were able to provide more accurate predictions regarding patient outcomes as compared to a model that uses age, sex, troponin or d-dimer information,” Dr. Kolossváry said. “Our results show that chest X-rays could be used to help triage chest pain patients in the emergency department.”
    According to Dr. Kolossváry, in the future such an automated model could analyze chest X-rays in the background, help select the patients who would benefit most from immediate medical attention, and help identify those who can be safely discharged from the emergency department.
    The study is titled “Deep Learning Analysis of Chest Radiographs to Triage Patients with Acute Chest Pain Syndrome.” Collaborating with Dr. Kolossváry were Vineet K. Raghu, Ph.D., John T. Nagurney, M.D., Udo Hoffmann, M.D., M.P.H., and Michael T. Lu, M.D., M.P.H.

  • Novel framework provides 'measuring stick' for assessing patient matching tools

    Accurate linking of an individual’s medical records from disparate sources within and between health systems, known as patient matching, plays a critical role in patient safety and quality of care, but has proven difficult to accomplish in the United States, the last developed country without a unique patient identifier. In the U.S., linking patient data is dependent on algorithms designed by researchers, vendors and others. Research scientists led by Regenstrief Institute Vice President for Data and Analytics Shaun Grannis, M.D., M.S., have developed an eight-point framework for evaluating the validity and performance of algorithms to match medical records to the correct patient.
    “The value of data standardization is well recognized. There are national healthcare provider IDs. There are facility IDs and object identifiers. There are billing codes. There are standard vocabularies for healthcare lab test results and medical observations — such as LOINC® here at Regenstrief. Patient identity is the last gaping hole in our health infrastructure,” said Dr. Grannis. “We are providing a framework to evaluate patient matching algorithms for accuracy.
    “We recognize that the need for patient matching is not going away and that we need standardized methods to uniquely identify patients,” said Dr. Grannis. “Current patient matching algorithms come in many different flavors, shapes and sizes. To be able to compare how one performs against the other, or even to understand how they might interact together, we have to have a standard way of assessment. We have produced a novel, robust framework for consistent and reproducible evaluation. Simply put, the framework we’ve developed at Regenstrief provides a ‘measuring stick’ for the effectiveness of patient matching tools.”
    Individuals increasingly receive care from multiple sources. While patient matching is complex, it is crucial to health information exchange. Is the William Jones seen at one healthcare system the same person as the William, Will or Willy Jones or perhaps Bill or Billy Jones receiving care at other facilities? Does Elizabeth Smith’s name appear at different medical offices or perhaps at a physical therapy or a dialysis facility as Liz or Beth? To which Juan J. Gomez do various lab test results belong? Typos, missing information and other data errors as well as typical variations add to the complexity.
    The framework’s eight-point approach to the creation of gold standard matching data sets necessary for record linkage encompasses technical areas including data preprocessing, blocking, record adjudication, linkage evaluation and reviewer characteristics. The authors note that the framework “can help record linkage method developers provide necessary transparency when creating and validating gold standard reference matching data sets. In turn, this transparency will support both the internal and external validity of recording linkage studies and improve the robustness of new record linkage strategies.”
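    As a rough illustration of two of those technical areas, blocking and pairwise record comparison, here is a toy Python sketch with made-up records and weights; it is not the framework itself and not any Regenstrief implementation.

    ```python
    from collections import defaultdict
    from difflib import SequenceMatcher

    def block_key(rec):
        # Blocking: only compare records that share a cheap, coarse key,
        # here the first letter of the last name plus the birth year.
        return (rec["last"][0].upper(), rec["dob"][:4])

    def similarity(a, b):
        # Pairwise comparison: fuzzy name match plus exact date-of-birth match.
        name_a = f"{a['first']} {a['last']}".lower()
        name_b = f"{b['first']} {b['last']}".lower()
        name_sim = SequenceMatcher(None, name_a, name_b).ratio()
        dob_match = 1.0 if a["dob"] == b["dob"] else 0.0
        return 0.7 * name_sim + 0.3 * dob_match   # toy weighting

    records = [
        {"id": 1, "first": "William",   "last": "Jones", "dob": "1980-04-02"},
        {"id": 2, "first": "Billy",     "last": "Jones", "dob": "1980-04-02"},
        {"id": 3, "first": "Elizabeth", "last": "Smith", "dob": "1975-11-30"},
    ]

    blocks = defaultdict(list)
    for rec in records:
        blocks[block_key(rec)].append(rec)

    for group in blocks.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                a, b = group[i], group[j]
                print(a["id"], b["id"], round(similarity(a, b), 2))
    ```

    Real systems compare many more fields, learn probabilistic weights and send borderline pairs to human reviewers for adjudication, which is exactly why the framework calls for transparent, standardized gold-standard data sets to evaluate them.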
    Measures and standards are ubiquitous. “When you go to a gas station pump, the measure of how much gas goes through is standardized so that we know exactly how much is flowing. Similarly, we need to have a common way of measuring and understanding how algorithms for patient matching work,” said Dr. Grannis. “Our eight-pronged approach helps to cover the waterfront of what needs to be evaluated. Laying out the framework and specifying the tasks and activities that need to be completed goes a long way toward standardizing patient matching.”
    In addition to playing a critical role in patient safety and quality of care, improved patient matching accuracy supports more cost-effective healthcare delivery in a variety of ways including reduction in the number of duplicate medical tests.

  • These chemists cracked the code to long-lasting Roman concrete

    MIT chemist Admir Masic really hoped his experiment wouldn’t explode.

    Masic and his colleagues were trying to re-create an ancient Roman technique for making concrete, a mix of cement, gravel, sand and water. The researchers suspected that the key was a process called “hot mixing,” in which dry granules of calcium oxide, also called quicklime, are mixed with volcanic ash to make the cement. Then water is added.

    Hot mixing, they thought, would ultimately produce a cement that wasn’t completely smooth and mixed, but instead contained small calcium-rich rocks. Those little rocks, ubiquitous in the walls of the Romans’ concrete buildings, might be the key to why those structures have withstood the ravages of time.

    That’s not how modern cement is made. The reaction of quicklime with water is highly exothermic, meaning that it can produce a lot of heat — and possibly an explosion.
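    The reaction in question is the slaking of lime, standard chemistry rather than a finding of the new study:

    $$ \mathrm{CaO} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{Ca(OH)_2}, \qquad \Delta H \approx -64\ \mathrm{kJ\,mol^{-1}} $$

    Released in a damp mix, that heat can drive local temperatures well past the boiling point of water, which fits the burst of water vapor the team observed and, they argue, helps create the calcium-rich inclusions at the heart of the self-healing idea.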

    “Everyone would say, ‘You are crazy,’” Masic says.

    But no big bang happened. Instead, the reaction produced only heat, a damp sigh of water vapor — and a Roman-like cement mixture bearing small white calcium-rich rocks.

    Researchers have been trying for decades to re-create the Roman recipe for concrete longevity — but with little success. The idea that hot mixing was the key was an educated guess.

    Masic and colleagues had pored over texts by Roman architect Vitruvius and historian Pliny, which offered some clues as to how to proceed. These texts cited, for example, strict specifications for the raw materials, such as that the limestone that is the source of the quicklime must be very pure, and that mixing quicklime with hot ash and then adding water could produce a lot of heat.

    The rocks were not mentioned, but the team had a feeling they were important.

    “In every sample we have seen of ancient Roman concrete, you can find these white inclusions,” bits of rock embedded in the walls. For many years, Masic says, the origin of those inclusions was unclear — researchers suspected incomplete mixing of the cement, perhaps. But these are the highly organized Romans we’re talking about. How likely is it that “every operator [was] not mixing properly and every single [building] has a flaw?”

    What if, the team suggested, these inclusions in the cement were actually a feature, not a bug? The researchers’ chemical analyses of such rocks embedded in the walls at the archaeological site of Privernum in Italy indicated that the inclusions were very calcium-rich.

    That suggested the tantalizing possibility that these rocks might be helping the buildings heal themselves from cracks due to weathering or even an earthquake. A ready supply of calcium was already on hand: It would dissolve, seep into the cracks and re-crystallize. Voila! Scar healed.

    But could the team observe this in action? Step one was to re-create the rocks via hot mixing and hope nothing exploded. Step two: Test the Roman-inspired cement. The team created concrete with and without the hot mixing process and tested them side by side. Each block of concrete was broken in half, the pieces placed a small distance apart. Then water was trickled through the crack to see how long it took before the seepage stopped.

    “The results were stunning,” Masic says. The blocks incorporating hot mixed cement healed within two to three weeks. The concrete produced without hot mixed cement never healed at all, the team reports January 6 in Science Advances.

    Cracking the recipe could be a boon to the planet. The Pantheon and its soaring, detailed concrete dome have stood nearly 2,000 years, for instance, while modern concrete structures have a lifespan of perhaps 150 years, and that’s a best case scenario (SN: 2/10/12). And the Romans didn’t have steel reinforcement bars shoring up their structures.

    More frequent replacements of concrete structures means more greenhouse gas emissions. Concrete manufacturing is a huge source of carbon dioxide to the atmosphere, so longer-lasting versions could reduce that carbon footprint. “We make 4 gigatons per year of this material,” Masic says. That manufacture produces as much as 1 metric ton of CO2 per metric ton of produced concrete, currently amounting to about 8 percent of annual global CO2 emissions.
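    Those figures hang together roughly as follows; this is an illustrative sanity check, not a calculation from the article:

    $$ 4\ \mathrm{Gt} \times \lesssim 1\ \mathrm{t\,CO_2\ per\ tonne} \;\approx\; \text{up to } 4\ \mathrm{Gt\,CO_2\ per\ year} $$

    Set against annual global emissions of very roughly 35 to 50 gigatons of CO2 (the figure depends on whether only fossil CO2 or all greenhouse gases are counted), that is indeed on the order of the quoted 8 percent.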

    Still, Masic says, the concrete industry is resistant to change. For one thing, there are concerns about introducing new chemistry into a tried-and-true mixture with well-known mechanical properties. But “the key bottleneck in the industry is the cost,” he says. Concrete is cheap, and companies don’t want to price themselves out of competition.

    The researchers hope that reintroducing this technique that has stood the test of time, and that could involve little added cost to manufacture, could answer both these concerns. In fact, they’re banking on it: Masic and several of his colleagues have created a startup they call DMAT that is currently seeking seed money to begin to commercially produce the Roman-inspired hot-mixed concrete. “It’s very appealing simply because it’s a thousands-of-years-old material.”