More stories

  • in

    AI could set a new bar for designing hurricane-resistant buildings

    Being able to withstand hurricane-force winds is the key to a long life for many buildings on the Eastern Seaboard and Gulf Coast of the U.S. Determining the right level of winds to design for is tricky business, but support from artificial intelligence may offer a simple solution.
    Equipped with 100 years of hurricane data and modern AI techniques, researchers at the National Institute of Standards and Technology (NIST) have devised a new method of digitally simulating hurricanes. The results of a study published today in Artificial Intelligence for the Earth Systems demonstrate that the simulations can accurately represent the trajectory and wind speeds of a collection of actual storms. The authors suggest that simulating numerous realistic hurricanes with the new approach can help to develop improved guidelines for the design of buildings in hurricane-prone regions.
    State and local laws that regulate building design and construction — more commonly known as building codes — point designers to standardized maps. On these maps, engineers can find the level of wind their structure must handle based on its location and its relative importance (i.e., the bar is higher for a hospital than for a self-storage facility). The wind speeds in the maps are derived from scores of hypothetical hurricanes simulated by computer models, which are themselves based on real-life hurricane records.
    “Imagine you had a second Earth, or a thousand Earths, where you could observe hurricanes for 100 years and see where they hit on the coast, how intense they are. Those simulated storms, if they behave like real hurricanes, can be used to create the data in the maps almost directly,” said NIST mathematical statistician Adam Pintar, a study co-author.
    The researchers who developed the latest maps did so by simulating the complex inner workings of hurricanes, which are influenced by physical parameters such as sea surface temperatures and the Earth’s surface roughness. However, the requisite data on these specific factors is not always readily available.
    More than a decade after those maps were developed, advances in AI-based tools and years of additional hurricane records have made a new approach possible, one that could yield more realistic hurricane wind maps down the road.

    NIST postdoctoral researcher Rikhi Bose, together with Pintar and NIST Fellow Emil Simiu, used these new techniques and resources to tackle the issue from a different angle. Rather than having their model mathematically build a storm from the ground up, the authors of the new study taught it to mimic actual hurricane data with machine learning, Pintar said.
    Studying for a physics exam by only looking at the questions and answers of previous assignments may not play out in a student’s favor, but for powerful AI-based techniques, this type of approach could be worthwhile.
    With enough quality information to study, machine-learning algorithms can construct models based on patterns they uncover within datasets that other methods may miss. Those models can then simulate specific behaviors, such as the wind strength and movement of a hurricane.
    In the new research, the study material came in the form of the National Hurricane Center’s Atlantic Hurricane Database (HURDAT2), which contains information about hurricanes going back more than 100 years, such as the coordinates of their paths and their wind speeds.
    The researchers split data on more than 1,500 storms into sets for training and testing their model. When challenged with concurrently simulating the trajectory and wind of historical storms it had not seen before, the model scored highly.
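    To make that setup concrete, here is a minimal sketch (not the NIST code) of splitting HURDAT2-style track records into training storms and held-out storms; the column names and values are hypothetical.

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical flattened HURDAT2-style table: one row per 6-hour fix,
    # with a storm identifier, latitude, longitude, and max sustained wind (knots).
    tracks = pd.DataFrame({
        "storm_id": ["AL011990"] * 3 + ["AL052005"] * 3,
        "lat":  [25.1, 26.0, 27.2, 23.4, 24.1, 25.3],
        "lon":  [-75.0, -76.1, -77.4, -80.2, -81.0, -81.9],
        "wind": [45, 60, 75, 35, 50, 65],
    })

    # Split by storm (not by row) so the test set contains storms the model
    # has never seen, mirroring the evaluation described in the article.
    rng = np.random.default_rng(0)
    storm_ids = tracks["storm_id"].unique()
    test_ids = set(rng.choice(storm_ids, size=max(1, len(storm_ids) // 5), replace=False))

    train = tracks[~tracks["storm_id"].isin(test_ids)]
    test = tracks[tracks["storm_id"].isin(test_ids)]

    # A model would then be fit on `train` to predict the next track point and
    # wind speed from the current state, and evaluated on the held-out storms.
    print(len(train), "training rows;", len(test), "held-out rows")
    ```

    Splitting by storm rather than by individual track point is what makes the held-out evaluation meaningful: the model must reproduce whole hurricanes it has never encountered.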

    “It performs very well. Depending on where you’re looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one, honestly,” Pintar said.
    They also used the model to generate sets of 100 years’ worth of hypothetical storms. It produced the simulations in a matter of seconds, and the authors saw a large degree of overlap with the general behavior of the HURDAT2 storms, suggesting that their model could rapidly produce collections of realistic storms.
    However, there were some discrepancies, such as in the Northeastern coastal states. In these regions, HURDAT2 data was sparse, and the model therefore generated less realistic storms.
    “Hurricanes are not as frequent in, say, Boston as in Miami, for example. The less data you have, the larger the uncertainty of your predictions,” Simiu said.
    As a next step, the team plans to use simulated hurricanes to develop coastal maps of extreme wind speeds as well as quantify uncertainty in those estimated speeds.
    Since the model’s understanding of storms is limited to historical data for now, it cannot simulate the effects that climate change will have on storms of the future. The traditional approach of simulating storms from the ground up is better suited to that task. However, in the short term, the authors are confident that wind maps based on their model — which is less reliant on elusive physical parameters than other models are — would better reflect reality.
    Within the next several years, they aim to produce and propose new maps for inclusion in building standards and codes.

  • in

    Machine learning model helps forecasters improve confidence in storm prediction

    When severe weather is brewing and life-threatening hazards like heavy rain, hail or tornadoes are possible, advance warning and accurate predictions are of utmost importance. Colorado State University weather researchers have given storm forecasters a powerful new tool to improve confidence in their forecasts and potentially save lives.
    Over the last several years, Russ Schumacher, professor in the Department of Atmospheric Science and Colorado State Climatologist, has led a team developing a sophisticated machine learning model for advancing skillful prediction of hazardous weather across the continental United States. First trained on historical records of excessive rainfall, the model is now smart enough to make accurate predictions of events like tornadoes and hail four to eight days in advance — the crucial sweet spot for forecasters to get information out to the public so they can prepare. The model is called CSU-MLP, or Colorado State University-Machine Learning Probabilities.
    Led by research scientist Aaron Hill, who has worked on refining the model for the last two-plus years, the team recently published a paper on its medium-range (four to eight days) forecasting ability in the American Meteorological Society journal Weather and Forecasting.
    Working with Storm Prediction Center forecasters
    The researchers have now teamed up with forecasters at the NOAA Storm Prediction Center in Norman, Oklahoma, to test the model and refine it based on practical considerations from actual weather forecasters. The tool is not a stand-in for the invaluable skill of human forecasters, but rather provides an agnostic, confidence-boosting measure to help forecasters decide whether to issue public warnings about potentially hazardous weather.
    “Our statistical models can benefit operational forecasters as a guidance product, not as a replacement,” Hill said.

    Israel Jirak, M.S. ’02, Ph.D. ’05, is science and operations officer at the Storm Prediction Center and co-author of the paper. He called the collaboration with the CSU team “a very successful research-to-operations project.”
    “They have developed probabilistic machine learning-based severe weather guidance that is statistically reliable and skillful while also being practically useful for forecasters,” Jirak said. The forecasters in Oklahoma are using the CSU guidance product daily, particularly when they need to issue medium-range severe weather outlooks.
    Nine years of historical weather data
    The model is trained on a very large dataset containing about nine years of detailed historical weather observations over the continental U.S. These data are combined with meteorological retrospective forecasts, which are model “re-forecasts” created from outcomes of past weather events. The CSU researchers pulled the environmental factors from those model forecasts and associated them with past events of severe weather like tornadoes and hail. The result is a model that can run in real time with current weather events and produce a probability of those types of hazards with a four- to eight-day lead time, based on current environmental factors like temperature and wind.
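    The pipeline described above (environmental predictors extracted from reforecasts, paired with past severe-weather reports, then a probability produced for new forecasts) can be sketched generically as follows. This is not the CSU-MLP itself; the classifier choice, feature count, and synthetic data are stand-ins.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Hypothetical training set: each row is one grid point / valid time from a
    # reforecast, with a few environmental predictors (e.g., instability, shear,
    # moisture) and a 0/1 label marking whether severe weather occurred nearby.
    X_train = rng.normal(size=(5000, 3))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
               + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # In real time, current forecast fields for days 4-8 would be supplied in the
    # same format; predict_proba yields a probability of the hazard of interest.
    X_today = rng.normal(size=(4, 3))
    print(model.predict_proba(X_today)[:, 1])
    ```

    In operation, the same trained model would be run on each day's forecast fields to produce the four- to eight-day hazard probabilities that forecasters consult.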
    Ph.D. student Allie Mazurek is working on the project and is seeking to understand which atmospheric data inputs are the most important to the model’s predictive capabilities. “If we can better decompose how the model is making its predictions, we can hopefully better diagnose why the model’s predictions are good or bad during certain weather setups,” she said.
    Hill and Mazurek are working to make the model not only more accurate, but also more understandable and transparent for the forecasters using it.
    For Hill, it’s most gratifying to know that years of work refining the machine learning tool are now making a difference in a public, operational setting.
    “I love fundamental research. I love understanding new things about our atmosphere. But having a system that is providing improved warnings and improved messaging around the threat of severe weather is extremely rewarding,” Hill said.

  • in

    Can a solid be a superfluid? Engineering a novel supersolid state from layered 2D materials

    A collaboration of Australian and European physicists predicts that layered electronic 2D semiconductors can host a curious quantum phase of matter called the supersolid.
    The supersolid is a very counterintuitive phase indeed. It is made up of particles that form a rigid crystal and yet, at the same time, flow without friction, since all the particles belong to the same single quantum state.
    A solid becomes ‘super’ when its quantum properties match the well-known quantum properties of superconductors. A supersolid simultaneously has two orders, solid and super: solid because of the spatially repeating pattern of particles, super because the particles can flow without resistance. “Although a supersolid is rigid, it can flow like a liquid without resistance,” explains lead author Dr Sara Conti (University of Antwerp).
    The study was conducted at UNSW (Australia), University of Antwerp (Belgium) and University of Camerino (Italy).
    A 50-Year Journey Towards the Exotic Supersolid
    Geoffrey Chester, a professor at Cornell University, predicted in 1970 that, at low temperatures, solid helium-4 under pressure should display crystalline solid order, with each helium atom at a specific point in a regularly ordered lattice, and, at the same time, Bose-Einstein condensation of the atoms, with every atom in the same single quantum state, so that they flow without resistance.

    However, in the five decades since, the Chester supersolid has not been unambiguously detected.
    Alternative approaches have reported supersolid-like phases in cold-atom systems in optical lattices. These are either clusters of condensates or condensates with varying density determined by the trapping geometries. These supersolid-like phases should be distinguished from the original Chester supersolid, in which each single particle is localised in its place in the crystal lattice purely by the forces acting between the particles.
    The new Australia-Europe study predicts that such a state could instead be engineered in two-dimensional (2D) electronic materials in a semiconductor structure, fabricated with two conducting layers separated by an insulating barrier of thickness d.
    One layer is doped with negatively-charged electrons and the other with positively-charged holes.
    The particles forming the supersolid are interlayer excitons, bound states of an electron and hole tied together by their strong electrical attraction. The insulating barrier prevents fast self-annihilation of the exciton bound pairs. Voltages applied to top and bottom metal ‘gates’ tune the average separation r0 between excitons.

    The research team predicts that excitons in this structure will form a supersolid over a wide range of layer separations and average separations between the excitons. The electrical repulsion between the excitons can constrain them into a fixed crystalline lattice.
    “A key novelty is that a supersolid phase with Bose-Einstein quantum coherence appears at layer separations much smaller than the separation predicted for the non-super exciton solid that is driven by the same electrical repulsion between excitons,” says co-corresponding author Prof David Neilson (University of Antwerp).
    “In this way, the supersolid pre-empts the non-super exciton solid. At still larger separations, the non-super exciton solid eventually wins, and the quantum coherence collapses.”
    “This is an extremely robust state, readily achievable in experimental setups,” adds co-corresponding author Prof Alex Hamilton (UNSW). “Ironically, the layer separations are relatively large and are easier to fabricate than the extremely small layer separations in such systems that have been the focus of recent experiments aimed at maximising the interlayer exciton binding energies.”
    As for detection, it is well known that a superfluid cannot be set into rotation until it can host a quantum vortex, analogous to a whirlpool. But forming this vortex requires a finite amount of energy, and hence a sufficiently strong rotational force. Up to that point, the measured rotational moment of inertia (the extent to which an object resists rotational acceleration) remains zero. In the same way, a supersolid can be identified by detecting such an anomaly in its rotational moment of inertia.
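    The rotational anomaly can be stated compactly with the standard non-classical rotational inertia relation used in supersolid searches (a textbook expression with generic symbols, not taken from this paper):

    ```latex
    % Superfluid (here, supersolid) fraction inferred from the missing moment of
    % inertia: below the vortex-nucleation threshold the condensed component does
    % not rotate, so the measured inertia falls short of the classical value.
    f_s \;=\; \frac{\rho_s}{\rho} \;\simeq\; 1 - \frac{I_{\mathrm{measured}}}{I_{\mathrm{classical}}},
    \qquad 0 < f_s \le 1 .
    ```

    A nonzero $f_s$ measured in a phase that retains crystalline order is the signature separating a supersolid from an ordinary solid.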
    The research team has reported the complete phase diagram of this system at low temperatures.
    “By changing the layer separation relative to the average exciton spacing, the strength of the exciton-exciton interactions can be tuned to stabilise either the superfluid, or the supersolid, or the normal solid,” says Dr Sara Conti.
    “The existence of a triple point is also particularly intriguing. At this point, the boundaries of supersolid and normal-solid melting, and the supersolid to normal-solid transition, all cross. There should be exciting physics coming from the exotic interfaces separating these domains, for example, Josephson tunnelling between supersolid puddles embedded in a normal background.”

  • in

    Magnon-based computation could signal computing paradigm shift

    Like electronics or photonics, magnonics is an engineering subfield that aims to advance information technologies when it comes to speed, device architecture, and energy consumption. A magnon corresponds to the specific amount of energy required to change the magnetization of a material via a collective excitation called a spin wave.
    Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which involve energy loss through heating (known as Joule heating) of the conductor used. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering, explains, energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar.
    “With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development,” Grundler says. “A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy.”
    This inefficiency, known as the memory wall or Von Neumann bottleneck, has had researchers searching for new computing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a “holy grail.”
    While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN PhD student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology’s support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and — crucially — to reverse the magnetization of the surface nanomagnets.
    “The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored,” Grundler explains.

    A route to in-memory computation
    The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude; that switching could then be used to write and read data.
    “We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system,” Grundler explains, adding that “nonvolatile” refers to the stable storage of data over long time periods without additional energy consumption.
    It’s this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.
    Optimization on the horizon
    Baumgaertl and Grundler have published the groundbreaking results in the journal Nature Communications, and the LMGN team is already working on optimizing their approach.
    “Now that we have shown that spin waves write data by switching the nanomagnets from states 0 to 1, we need to work on a process to switch them back again — this is known as toggle switching,” Grundler says.
    He also notes that theoretically, the magnonics approach could process data in the terahertz range of the electromagnetic spectrum (for comparison, current computers function in the slower gigahertz range). However, they still need to demonstrate this experimentally.
    “The promise of this technology for more sustainable computing is huge. With this publication, we are hoping to reinforce interest in wave-based computation, and attract more young researchers to the growing field of magnonics.”

  • in

    Could changes in the Fed’s interest rates affect pollution and the environment?

    Can monetary policy, such as the United States Federal Reserve raising interest rates, affect the environment? According to a new study by Florida Atlantic University’s College of Business, it can.
    Using a stylized dynamic aggregate demand-aggregate supply (AD-AS) model, researchers explored the consequences of traditional monetary tools — namely, changes in the short-term interest rate — for the environment. Specifically, they looked at how monetary policy impacts CO2 emissions in the short and long run. The AD-AS model conveys several interlocking relationships between the four macroeconomic goals of growth, unemployment, inflation and a sustainable balance of trade.
    For the study, researchers also used the Global Vector AutoRegressive (GVAR) methodology, which interconnects regions using an explicit economic integration variable, in this case, bilateral trade, allowing for spillover effects.
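    In its generic first-order form (a standard textbook GVAR specification, not necessarily the exact one estimated in the paper), each region's domestic variables depend on their own lags and on trade-weighted foreign variables:

    ```latex
    % x_{i,t}: region i's domestic variables (e.g., output, inflation, the policy
    % rate, CO2 emissions); x*_{i,t}: trade-weighted average of other regions'
    % variables, built from bilateral trade weights w_{ij}.
    x_{i,t} = a_i + \Phi_i\, x_{i,t-1} + \Lambda_{i0}\, x^{*}_{i,t} + \Lambda_{i1}\, x^{*}_{i,t-1} + \varepsilon_{i,t},
    \qquad
    x^{*}_{i,t} = \sum_{j \neq i} w_{ij}\, x_{j,t} .
    ```

    Stacking the regional models yields a single global system, which is what allows a U.S. interest-rate shock to propagate through the trade links into other regions' variables.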
    Joao Ricardo Faria, Ph.D., co-author and a professor in the Economics Department within FAU’s College of Business, and collaborators from Federal University of Ouro Preto and the University of São Paulo in Brazil, examined four regions for the study: U.S., United Kingdom, Japan and the Eurozone (all the European Union countries that incorporate the euro as their national currency).
    In addition, they used data from eight other countries to characterize the international economy. Their method explicitly models the interplay among these economies to assess not only the domestic impact of a policy shift, but also its repercussions for other economies.
    Results of the study, published in the journal Energy Economics, suggest that the impact of monetary policy on pollution is basically domestic: a monetary contraction in one region reduces that region’s own emissions, but the effect does not seem to spread to other economies. However, the findings do not imply that the international economy is irrelevant to determining one region’s emissions level.
    “The actions of a country, like the U.S., are not restricted to its borders. For example, a positive shock in the Federal Reserve’s monetary policy may cause adjustments in the whole system, including the carbon emissions of the other regions,” said Faria.
    The approach used in this study considered the U.S.’s own dynamics as well as the responses of other economies. Moreover, analysis of four distinct regions allowed researchers to verify and compare how domestic markets react to the same policy.
    The study also identified important differences across regions. For example, monetary policy does not seem to reduce short-run emissions in the U.K., or long-run emissions in the Eurozone. Moreover, the cointegration coefficient for Japan is much larger than those of the other regions, suggesting strong effects of monetary policy on CO2 emissions. Furthermore, cointegration analysis suggests a relationship between interest rates and emissions in the long run.
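    The “cointegration coefficient” can be read as the slope of an illustrative long-run relation of the form below (a schematic with generic symbols, not the paper's exact specification):

    ```latex
    % e_t: a region's CO2 emissions; r_t: its short-term interest rate.
    % beta is the long-run (cointegration) coefficient; u_t being stationary,
    % I(0), means deviations from the relation die out over time.
    e_t = \alpha + \beta\, r_t + u_t , \qquad u_t \sim I(0) .
    ```

    A larger magnitude of the coefficient, as reported for Japan, means a given change in the policy rate is associated with a larger long-run shift in emissions.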
    Statistical analyses also suggest that external factors are relevant to understanding each region’s fluctuations in emissions. A large fraction of the fluctuations in domestic CO2 emissions come from external sources.
    “Findings from our study suggest efforts to reduce emissions can benefit from internationally coordinated policies,” said Faria. “Thus, the main policy prescription is to increase international coordination and efforts to reduce CO2 emissions. We realize that achieving coordination is not an easy endeavor despite international efforts to reduce carbon emissions, such as the Paris Agreement. Our paper highlights the payoffs of coordinated policies. We hope it motivates future research on how to achieve successful coordination.”

  • in

    Preschoolers prefer to learn from a competent robot rather than an incompetent human

    Who do children prefer to learn from? Previous research has shown that even infants can identify the best informant. But would preschoolers prefer learning from a competent robot over an incompetent human?
    According to a new paper by Concordia researchers published in the Journal of Cognition and Development, the answer largely depends on age.
    The study compared two groups of preschoolers: one of three-year-olds, the other of five-year-olds. The children participated in Zoom meetings featuring a video of a young woman and a small humanoid robot called Nao (with a head, face, torso, arms and legs) sitting side by side. Between them were familiar objects that the robot would label correctly while the human would label them incorrectly, e.g., referring to a car as a book, a ball as a shoe and a cup as a dog.
    Next, the two groups of children were presented with unfamiliar items: the top of a turkey baster, a roll of twine and a silicone muffin container. Both the robot and the human used different nonsense terms like “mido,” “toma,” “fep” and “dax” to label the objects. The children were then asked what the object was called, endorsing either the label offered by the robot or by the human.
    While the three-year-olds showed no preference for one word over another, the five-year-olds were much more likely to state the term provided by the robot than the human.
    “We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot,” says the paper’s lead author, PhD candidate Anna-Elisabeth Baumann. Horizon Postdoctoral Fellow Elizabeth Goldman and undergraduate research assistant Alexandra Meltzer also contributed to the study. Professor and Concordia University Chair of Developmental Cybernetics Diane Poulin-Dubois in the Department of Psychology supervised the study.

    The researchers repeated the experiments with new groups of three- and five-year-olds, replacing the humanoid Nao with a small truck-shaped robot called Cozmo. The results resembled those observed with the human-like robot, suggesting that the robot’s morphology does not affect the children’s selective trust strategies.
    Baumann adds that, along with the labelling task, the researchers administered a naive biology task. The children were asked if biological organs or mechanical gears formed the internal parts of unfamiliar animals and robots. The three-year-olds appeared confused, assigning both biological and mechanical internal parts to the robots. However, the five-year-olds were much more likely to indicate that only mechanical parts belonged inside the robots.
    “This data tells us that the children will choose to learn from a robot even though they know it is not like them. They know that the robot is mechanical,” says Baumann.
    Being right is better than being human
    While there has been a substantial amount of literature on the benefits of using robots as teaching aides for children, the researchers note that most studies focus on a single robot informant or two robots pitted against each other. This study, they write, is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.
    Poulin-Dubois points out that this study builds on a previous paper she co-wrote with Goldman and Baumann. That paper shows that by age five, children treat robots similarly to how adults do, i.e., as depictions of social agents.
    “Older preschoolers know that robots have mechanical insides, but they still anthropomorphize them. Like adults, these children attribute certain human-like qualities to robots, such as the ability to talk, think and feel,” she says.
    “It is important to emphasize that we see robots as tools to study how children can learn from both human and non-human agents,” concludes Goldman. “As technology use increases, and as children interact with technological devices more, it is important for us to understand how technology can be a tool to help facilitate their learning.”

  • in

    First silicon-integrated ECRAM for a practical AI accelerator

    The transformative changes brought by deep learning and artificial intelligence are accompanied by immense costs. For example, OpenAI’s ChatGPT algorithm costs at least $100,000 every day to operate. This could be reduced with accelerators, or computer hardware designed to efficiently perform the specific operations of deep learning. However, such a device is only viable if it can be integrated with mainstream silicon-based computing hardware on the material level.
    This was preventing the implementation of one highly promising deep learning accelerator — arrays of electrochemical random-access memory, or ECRAM — until a research team at the University of Illinois Urbana-Champaign achieved the first material-level integration of ECRAMs onto silicon transistors. The researchers, led by graduate student Jinsong Cui and professor Qing Cao of the Department of Materials Science & Engineering, recently reported in Nature Electronics an ECRAM device designed and fabricated with materials that can be deposited directly onto silicon during fabrication, realizing the first practical ECRAM-based deep learning accelerator.
    “Other ECRAM devices have been made with the many difficult-to-obtain properties needed for deep learning accelerators, but ours is the first to achieve all these properties and be integrated with silicon without compatibility issues,” Cao said. “This was the last major barrier to the technology’s widespread use.”
    ECRAM is a memory cell, or a device that stores data and uses it for calculations in the same physical location. This nonstandard computing architecture eliminates the energy cost of shuttling data between the memory and the processor, allowing data-intensive operations to be performed very efficiently.
    ECRAM encodes information by shuffling mobile ions between a gate and a channel. Electrical pulses applied to a gate terminal either inject ions into or draw ions from a channel, and the resulting change in the channel’s electrical conductivity stores information. It is then read by measuring the electric current that flows across the channel. An electrolyte between the gate and the channel prevents unwanted ion flow, allowing ECRAM to retain data as a nonvolatile memory.
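    As a toy illustration of that write/read cycle (not a physical model of the Illinois device; the class name, step sizes, and conductance bounds are arbitrary), an ECRAM cell can be pictured as a bounded analog conductance that gate pulses nudge up or down and a small bias reads out:

    ```python
    # Toy model of an ECRAM-style analog memory cell: gate pulses shift ions into
    # or out of the channel, nudging its conductance between a low and high bound;
    # the stored state is read from the current at a small read voltage.
    class ToyECRAMCell:
        def __init__(self, g_min=1e-6, g_max=1e-5, step=2e-7):
            self.g_min, self.g_max, self.step = g_min, g_max, step
            self.g = g_min  # channel conductance in siemens (illustrative values)

        def pulse(self, n=1):
            """Apply n gate pulses; positive n injects ions, negative n removes them."""
            self.g = min(self.g_max, max(self.g_min, self.g + n * self.step))

        def read(self, v_read=0.1):
            """Return the read current I = G * V (Ohm's law) at a small bias."""
            return self.g * v_read

    cell = ToyECRAMCell()
    cell.pulse(10)       # potentiate: raise the conductance
    print(cell.read())   # the read current encodes the stored analog value
    cell.pulse(-5)       # depress: lower the conductance
    print(cell.read())
    ```

    The key point is that writing (pulsing the gate) and reading (sensing the channel current) are separate, low-energy operations performed on the same cell.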
    The research team selected materials compatible with silicon microfabrication techniques: tungsten oxide for the gate and channel, zirconium oxide for the electrolyte, and protons as the mobile ions. This allowed the devices to be integrated onto and controlled by standard microelectronics. Other ECRAM devices draw inspiration from neurological processes or even rechargeable battery technology and use organic substances or lithium ions, both of which are incompatible with silicon microfabrication.
    In addition, the Cao group device has numerous other features that make it ideal for deep learning accelerators. “While silicon integration is critical, an ideal memory cell must achieve a whole slew of properties,” Cao said. “The materials we selected give rise to many other desirable features.”
    Since the same material was used for the gate and channel terminals, injecting ions into and drawing ions from the channel are symmetric operations, simplifying the control scheme and significantly enhancing reliability. The channel reliably held ions for hours at a time, which is sufficient for training most deep neural networks. Since the ions were protons, the smallest ion, the devices switched quite rapidly. The researchers found that their devices lasted for over 100 million read-write cycles and were vastly more efficient than standard memory technology. Finally, since the materials are compatible with microfabrication techniques, the devices could be shrunk to the micro- and nanoscales, allowing for high density and computing power.
    The researchers demonstrated their device by fabricating arrays of ECRAMs on silicon microchips to perform matrix-vector multiplication, a mathematical operation crucial to deep learning. Matrix entries, or neural network weights, were stored in the ECRAMs, and the array performed the multiplication on the vector inputs, represented as applied voltages, by using the stored weights to change the resulting currents. This operation as well as the weight update was performed with a high level of parallelism.
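    At the array level, the matrix-vector product follows from Ohm's and Kirchhoff's laws: stored conductances act as the weights, applied voltages carry the input vector, and each output line's current sums the corresponding products. A schematic NumPy version with illustrative numbers only (not the paper's measurement setup):

    ```python
    import numpy as np

    # G[i, j]: conductance (siemens) of the ECRAM cell connecting input line j
    # to output line i; these conductances hold the neural-network weights.
    G = np.array([[1.0e-6, 5.0e-6, 2.0e-6],
                  [4.0e-6, 1.5e-6, 3.0e-6]])

    # Input vector encoded as voltages (volts) applied to the input lines.
    v = np.array([0.2, 0.1, 0.3])

    # Each output line i collects I_i = sum_j G[i, j] * v[j], so the whole
    # matrix-vector product is computed in one parallel analog step.
    i_out = G @ v
    print(i_out)  # output-line currents (amperes), digitized downstream
    ```

    A digital implementation would loop over every multiply-accumulate; the crossbar performs them all at once in the analog domain, which is where the speed and energy savings come from.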
    “Our ECRAM devices will be most useful for AI edge-computing applications sensitive to chip size and energy consumption,” Cao said. “That’s where this type of device has the most significant benefits compared to what is possible with silicon-based accelerators.”
    The researchers are patenting the new device, and they are working with semiconductor industry partners to bring this new technology to market. According to Cao, a prime application of this technology is in autonomous vehicles, which must rapidly learn their surrounding environment and make decisions with limited computational resources. He is collaborating with Illinois electrical & computer engineering faculty to integrate their ECRAMs with foundry-fabricated silicon chips, and with Illinois computer science faculty to develop software and algorithms that take advantage of ECRAM’s unique capabilities.

  • in

    Here’s why some Renaissance artists egged their oil paintings

    Art historians often wish that Renaissance painters could shell out secrets of the craft. Now, scientists may have cracked one using chemistry and physics.

    Around the turn of the 15th century in Italy, oil-based paints replaced egg-based tempera paints as the dominant medium. During this transition, artists including Leonardo da Vinci and Sandro Botticelli also experimented with paints made from oil and egg (SN: 4/30/14). But it has been unclear how adding egg to oil paints may have affected the artwork.  

    “Usually, when we think about art, not everybody thinks about the science which is behind it,” says chemical engineer Ophélie Ranquet of the Karlsruhe Institute of Technology in Germany.

    In the lab, Ranquet and colleagues whipped up two oil-egg recipes to compare with plain oil paint. One mixture contained fresh egg yolk mixed into oil paint, and had a similar consistency to mayonnaise. For the other blend, the scientists ground pigment into the yolk, dried it and mixed it with oil — a process the old masters might have used, according to the scant historical records that exist today. Each medium was subjected to a battery of tests that analyzed its mass, moisture, oxidation, heat capacity, drying time and more.

    In both concoctions, the yolk’s proteins, phospholipids and antioxidants helped slow paint oxidation, which can cause paint to turn yellow over time, the team reports March 28 in Nature Communications. 

    In the mayolike blend, the yolk created sturdy links between pigment particles, resulting in stiffer paint. Such consistency would have been ideal for techniques like impasto, a raised, thick style that adds texture to art. Egg additions also could have reduced wrinkling by creating a firmer paint consistency. Wrinkling sometimes happens with oil paints when the top layer dries faster than the paint underneath, and the dried film buckles over looser, still-wet paint.

    The hybrid mediums have some less than eggs-ellent qualities, though. For instance, the eggy oil paint can take longer to dry. If paints were too yolky, Renaissance artists would have had to wait a long time to add the next layer, Ranquet says.

    “The more we understand how artists select and manipulate their materials, the more we can appreciate what they’re doing, the creative process and the final product,” says Ken Sutherland, director of scientific research at the Art Institute of Chicago, who was not involved with the work.

    Research on historical art mediums can not only aid art preservation efforts, Sutherland says, but also help people gain a deeper understanding of the artworks themselves.