More stories

  • Earth’s oldest known wildfires raged 430 million years ago

    Bits of charcoal entombed in ancient rocks unearthed in Wales and Poland push back the earliest evidence for wildfires to around 430 million years ago. Besides breaking the previous record by about 10 million years, the finds help pin down how much oxygen was in Earth’s atmosphere at the time.

    The ancient atmosphere must have contained at least 16 percent oxygen, researchers report June 13 in Geology. That conclusion is based on modern-day lab tests that show how much oxygen it takes for a wildfire to take hold and spread.

    While oxygen makes up 21 percent of our air today, over the last 600 million years or so, oxygen levels in Earth’s atmosphere have fluctuated between 13 percent and 30 percent (SN: 12/13/05). Long-term models simulating past oxygen concentrations are based on processes such as the burial of coal swamps, mountain building, erosion and the chemical changes associated with them. But those models, some of which predict oxygen levels as low as 10 percent for this time period, provide broad-brush strokes of trends and may not capture brief spikes and dips, say Ian Glasspool and Robert Gastaldo, both paleobotanists at Colby College in Waterville, Maine.

    Charcoal, a remnant of wildfire, is physical evidence that provides, at the least, a minimum threshold for oxygen concentrations. That’s because oxygen is one of three ingredients needed to create a wildfire. The second, ignition, came from lightning in the ancient world, says Glasspool. The third, fuel, came from burgeoning plants and fungi 430 million years ago, during the Silurian Period. The predominant greenery consisted of low-growing plants just a couple of centimeters tall. Scattered among this diminutive ground cover were occasional knee-high to waist-high plants and Prototaxites fungi that towered up to nine meters tall. Before this time, most plants were single-celled and lived in the seas.

    Once plants left the ocean and began to thrive, wildfire followed. “Almost as soon as we have evidence of plants on land, we have evidence of wildfire,” says Glasspool.

    That evidence includes tiny chunks of partially charred plants — including charcoal as identified by its microstructure — as well as conglomerations of charcoal and associated minerals embedded within fossilized hunks of Prototaxites fungi. Those samples came from rocks of known ages that formed from sediments dumped just offshore of ancient landmasses. This wildfire debris was carried offshore in streams or rivers before it settled, accumulated and was preserved, the researchers suggest.

    Image: The microstructure of this fossilized and partially charred bit of plant, unearthed in Poland from sediments that are almost 425 million years old, reveals that it was burnt by some of Earth’s earliest known wildfires. Ian Glasspool/Colby College

    The discovery adds to previous evidence, including analyses of pockets of fluid trapped in halite minerals formed during the Silurian, that suggests that atmospheric oxygen during that time approached or even exceeded the 21 percent concentration seen today, the pair note.

    “The team has good evidence for charring,” says Lee Kump, a biogeochemist at Penn State who wasn’t involved in the new study. Although its evidence points to higher oxygen levels than some models suggest for that time, it’s possible that oxygen was a substantial component of the atmosphere even earlier than the Silurian, he says.

    “We can’t rule out that oxygen levels weren’t higher even further back,” says Kump. “It could be that plants from that era weren’t amenable to leaving a charcoal record.”

  • Methods from weather forecasting can be adapted to assess risk of COVID-19 exposure

    Techniques used in weather forecasting can be repurposed to provide individuals with a personalized assessment of their risk of exposure to COVID-19 or other viruses, according to new research published by Caltech scientists.
    The technique has the potential to be more effective and less intrusive than blanket lockdowns for combating the spread of disease, says Tapio Schneider, the Theodore Y. Wu Professor of Environmental Science and Engineering; senior research scientist at JPL, which Caltech manages for NASA; and lead author of the study, which was published in PLOS Computational Biology on June 23.
    “For this pandemic, it may be too late,” Schneider says, “but this is not going to be the last epidemic that we will face. This is useful for tracking other infectious diseases, too.”
    In principle, the idea is simple: Weather forecasting models ingest a lot of data — for example, measurements of wind speed and direction, temperature, and humidity from local weather stations, in addition to satellite data. They use the data to assess what the current state of the atmosphere is, forecast the weather evolution into the future, and then repeat the cycle by blending the forecast atmospheric state with new data. In the same way, disease risk assessment also harnesses various types of available data to make an assessment about an individual’s risk of exposure to or infection with disease, forecasts the spread of disease across a network of human contacts using an epidemiological model, and then repeats the cycle by blending the forecast with new data. Such assessments might use the results of an institution’s surveillance testing, data from wearable sensors, self-reported symptoms and close contacts as recorded by smartphones, and municipalities’ disease-reporting dashboards.
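    To make the cycle concrete, here is a minimal sketch of one forecast-and-blend pass in Python. The contact network, transmission probability, and test characteristics are invented for illustration; the study itself couples a far richer epidemiological model on real contact networks with the kind of data blending used in weather forecasting.

```python
import numpy as np

# Toy forecast-and-blend cycle for individual exposure risk.
# The network, rates, and test characteristics below are invented
# illustrations, not the study's actual model.

n_people = 5
risk = np.full(n_people, 0.01)        # prior probability each person is infected
contacts = [(0, 1), (1, 2), (3, 4)]   # hypothetical daily contact network
transmit_prob = 0.1                   # assumed per-contact transmission probability

def forecast(risk, contacts):
    """Forecast step: propagate risk along the contact network."""
    new_risk = risk.copy()
    for i, j in contacts:
        new_risk[i] = 1 - (1 - risk[i]) * (1 - transmit_prob * risk[j])
        new_risk[j] = 1 - (1 - risk[j]) * (1 - transmit_prob * risk[i])
    return new_risk

def blend(risk, person, test_positive, sensitivity=0.9, false_pos=0.05):
    """Blend step: Bayesian update of one person's risk from a test result."""
    prior = risk[person]
    if test_positive:
        post = sensitivity * prior / (sensitivity * prior + false_pos * (1 - prior))
    else:
        post = ((1 - sensitivity) * prior /
                ((1 - sensitivity) * prior + (1 - false_pos) * (1 - prior)))
    risk = risk.copy()
    risk[person] = post
    return risk

for day in range(3):
    risk = forecast(risk, contacts)                          # forecast
    risk = blend(risk, person=1, test_positive=(day == 1))   # blend new data
    print(f"day {day}: risk = {np.round(risk, 4)}")
```

    Each pass propagates risk along contacts (the forecast) and then folds a new observation into the estimate via Bayes’ rule (the blend), mirroring the assimilation cycle described above.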
    The research presented in PLOS Computational Biology is a proof of concept. Its end result, however, would be a smartphone app that provides an individual with a frequently updated numerical assessment (i.e., a percentage) reflecting their likelihood of having been exposed to or infected with a particular infectious agent, such as the virus that causes COVID-19.
    Such an app would be similar to existing COVID-19 exposure notification apps but more sophisticated and effective in its use of data, Schneider and his colleagues say. Those apps provide a binary exposure assessment (“yes, you have been exposed,” or, in the case of no exposure, radio silence); the new app described in the study would provide a more nuanced understanding of continually changing risks of exposure and infection as individuals come close to others and as data about infections is propagated across a continually evolving contact network.

  • Self-assembled, interlocked threads: Spinning yarn with no machine needed

    The spiral is pervasive throughout the universe — from the smallest DNA molecule to ferns and sunflowers, and from fingerprints to galaxies themselves. In science, the ubiquity of this structure is associated with parsimony — the idea that things will organize themselves in the simplest or most economical way.
    Researchers from the University of Pittsburgh and Princeton University unexpectedly discovered that this principle also applies to some non-biological systems that convert chemical energy into mechanical action — allowing two-dimensional polymer sheets to rise and rotate in spiral helices without the application of external power.
    This self-assembly into coherent three-dimensional structures represents the group’s latest contribution in the field of soft robotics and chemo-mechanical systems.
    The research was published this month in PNAS Nexus. The lead author is Raj Kumar Manna, who, together with co-author Oleg E. Shklyaev, is a postdoctoral associate in the group of Anna Balazs, Distinguished Professor of Chemical and Petroleum Engineering and the John A. Swanson Chair of Engineering in Pitt’s Swanson School of Engineering. Contributing author is Howard A. Stone, the Donald R. Dixon ’69 and Elizabeth W. Dixon Professor of Mechanical and Aerospace Engineering at Princeton.
    “Through computational modeling, we placed passive, uncoated polymer sheets around a circular, catalytic patch within a fluid-filled chamber. We added hydrogen peroxide to initiate a catalytic reaction, which then generated fluid flow. While one sheet alone did not spin in the solution, multiple sheets autonomously self-assembled into a tower-like structure,” Manna explained. “Then, as the tower experienced an instability, the sheets spontaneously formed an interweaving structure that rotates in the fluid.”
    As Balazs pointed out, “The whole thing resembles a thread of twisted yarn being formed by a rotating spindle, which was used to make fibers for weaving. Except, there is no spindle; the system naturally forms the intertwined, rotating structure.”
    Flow affects the sheet, which affects the flow

  • Ultra-thin film creates vivid 3D images with large field of view

    Researchers have developed a new ultra-thin film that can create detailed 3D images viewable under normal illumination without any special reading devices. The images appear to float on top of the film and exhibit smooth parallax, which means they can be clearly viewed from all angles. With additional development, the new glass-free approach could be used as a visual security feature or incorporated into virtual or augmented reality devices.
    “Our ultra-thin, integrated reflective imaging film creates an image that can be viewed from a wide range of angles and appears to have physical depth,” said research team leader Su Shen from Soochow University in China. “It can be easily laminated to any surface as a tag or sticker or integrated into a transparent substrate, making it suitable for use as a security feature on banknotes or identity cards.”
    In the Optica Publishing Group journal Optics Letters, the researchers describe their new imaging film. At just 25 microns thick, the film is about twice as thick as household plastic wrap. It uses a technology known as light-field imaging, which captures the direction and intensity of all rays of light within a scene to create a 3D image.
    “Achieving glass-free 3D imaging with a large field of view, smooth parallax and a wide, focusable depth range under natural viewing conditions is one of the most exciting challenges in optics,” said Shen. “Our approach offers an innovative way to achieve vivid 3D images that cause no viewing discomfort or fatigue, are easy to see with the naked eye and are aesthetically pleasing.”
    High-density recording
    Various technical schemes have been investigated for creating the ideal 3D viewing experience, but they tend to suffer from drawbacks such as a limited viewing angle or low light efficiency. To overcome these shortcomings, the researchers developed a reflective light-field imaging film and a new algorithm that together allow both the positional and angular information of the light field to be recorded with high density.
    The researchers also developed an economical self-releasing nanoimprint lithography approach that can achieve the precision needed for high optical performance while using low-cost materials. The film is patterned with an array of reflective focusing elements on one side that act much like tiny cameras, while the other side contains a micropattern array that encodes the image to be displayed.
    “The powerful microfabrication approach we used allowed us to make a reflective focusing element that was extremely compact — measuring just tens of microns,” said Shen. “This lets the light radiance be densely collected, creating a realistic 3D effect.”
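    As a rough illustration of the geometry at work, the Python sketch below computes where a pixel must sit beneath each focusing element so that every element directs light through a single point floating above the film, the basic encoding step of integral (light-field) imaging. The pitch, focal length, and depth values are assumptions, not the film’s actual design parameters.

```python
import numpy as np

# Integral-imaging encoding sketch: choose the micropattern offset under each
# focusing element so that the ray through the element's center passes through
# a point floating at height z above the film. All dimensions are assumptions.

p = 50e-6    # element pitch: 50 microns (hypothetical)
f = 40e-6    # element focal length / film thickness (hypothetical)
z = 2e-3     # desired floating depth of the image point: 2 mm
px = 0.0     # lateral position of the floating point

element_x = np.arange(-10, 11) * p    # positions of 21 focusing elements

# Similar triangles: a pixel at offset d behind an element at x sends a ray
# through (px, z) when d = -f * (px - x) / z.
offsets = -f * (px - element_x) / z

for x, d in zip(element_x[:3], offsets[:3]):
    print(f"element at {x * 1e6:+7.1f} um -> pattern offset {d * 1e6:+6.2f} um")
```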
    A realistic 3D image
    The researchers demonstrated their new film by using it to create a 3D image of a cubic die that could be viewed clearly from almost any viewpoint. The resulting image measures 8 x 8 millimeters with an image depth that ranges from 0.1 to 8.0 millimeters under natural lighting conditions. They have also designed and fabricated an imaging film with a floating logo that can be used as a decorative element, for example on the back of a mobile phone.
    The researchers say that their algorithm and nanopatterning technique could be extended to other applications by creating the nanopatterns on a transparent display screen instead of a film, for example. They are also working toward commercializing the fabrication process by developing a double-sided nanoimprinting machine that would make it easier to achieve the precise alignment required between the micropatterns on each side of the film.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Personal health trackers may include smart face mask, other wearables

    For years, automotive companies have developed intelligent sensors to provide real-time monitoring of a vehicle’s health, including engine oil pressure, tire pressure and air-fuel mixture. Together, these sensors can provide an early warning system for a driver to identify a potential problem before it may need to be repaired.
    Now, in a similar vein, Zheng Yan, an assistant professor in the University of Missouri College of Engineering, has recently published two studies demonstrating different ways to improve wearable bioelectronic devices and materials to provide better real-time monitoring of a person’s health, including vital signs.
    Developing a ‘smart’ face mask
    The onset of the COVID-19 pandemic has brought the idea of mask-wearing to the forefront of many people’s minds. In response, one focus of Yan’s lab has been to develop breathable soft bioelectronics. He said it was natural for him and his team to come up with the idea for integrating bioelectronics in a breathable face mask, which can monitor someone’s physiological status based on the nature of the person’s cough. Their findings were recently published in ACS Nano, a journal of the American Chemical Society.
    “Different respiratory problems lead to different cough frequencies and degrees,” Yan said. “Taking chronic obstructive pulmonary disease (COPD) as an example, the frequency of cough in the early morning is higher than that in the daytime and night. Our smart face mask can effectively monitor cough frequencies, which may assist physicians with knowing disease development and providing timely, customized interventions.”
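    The article does not describe the mask’s signal pipeline, but the counting step might hypothetically look like the Python sketch below: threshold a vibration trace and apply a refractory period so each burst registers as a single cough. Every signal and parameter here is invented for illustration.

```python
import numpy as np

# Hypothetical cough-counting step: threshold a 1 kHz vibration trace and
# apply a refractory period so each burst registers as one cough. The mask's
# real sensing and classification pipeline is not described in the article.

fs = 1000                                 # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)              # ten seconds of signal
signal = 0.05 * np.random.default_rng(1).standard_normal(t.size)
for start in (2.0, 5.5, 8.2):             # three synthetic "coughs"
    burst = (t >= start) & (t < start + 0.3)
    signal[burst] += np.sin(2 * np.pi * 40 * (t[burst] - start))

threshold = 0.5
refractory = int(0.5 * fs)                # ignore re-triggers within 0.5 s
events, last = [], -refractory
for i in np.flatnonzero(np.abs(signal) > threshold):
    if i - last > refractory:
        events.append(t[i])
        last = i
print(f"coughs detected: {len(events)} at t = {np.round(events, 2)} s")
```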
    In addition to monitoring someone’s physiological status, the mask can also help identify proper mask wearing in public places using a bioelectronic sensor, Yan said. At this time, the mask does not have the capability to provide automatic reminders, but the team would like to develop that function in the future.

  • Are babies the key to the next generation of artificial intelligence?

    Babies can help unlock the next generation of artificial intelligence (AI), according to Trinity College neuroscientists and colleagues who have just published new guiding principles for improving AI.
    The research, published today in the journal Nature Machine Intelligence, examines the neuroscience and psychology of infant learning and distills three principles to guide the next generation of AI, which will help overcome the most pressing limitations of machine learning.
    Dr Lorijn Zaadnoordijk, Marie Skłodowska-Curie Research Fellow at Trinity College, explained:
    “Artificial intelligence (AI) has made tremendous progress in the last decade, giving us smart speakers, autopilots in cars, ever-smarter apps, and enhanced medical diagnosis. These exciting developments in AI have been achieved thanks to machine learning which uses enormous datasets to train artificial neural network models. However, progress is stalling in many areas because the datasets that machines learn from must be painstakingly curated by humans. But we know that learning can be done much more efficiently, because infants don’t learn this way! They learn by experiencing the world around them, sometimes by even seeing something just once.”
    In their article “Lessons from infant learning for unsupervised machine learning,” Dr Lorijn Zaadnoordijk and Professor Rhodri Cusack, from the Trinity College Institute of Neuroscience, and Dr Tarek R. Besold from TU Eindhoven, the Netherlands, argue that better ways to learn from unstructured data are needed. For the first time, they make concrete proposals about which particular insights from infant learning can be fruitfully applied in machine learning and how exactly to apply them.
    Machines, they say, will need in-built preferences to shape their learning from the beginning. They will need to learn from richer datasets that capture how the world is looking, sounding, smelling, tasting and feeling. And, like infants, they will need to have a developmental trajectory, where experiences and networks change as they “grow up.”
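    As one hedged illustration of the third principle, a developmental trajectory, the toy Python sketch below trains a logistic classifier on easy, well-separated data before moving to harder, overlapping data. This curriculum-style loop is our reading of the principle, not code from the paper.

```python
import numpy as np

# Curriculum sketch: train a logistic classifier on easy (well-separated)
# data first, then progressively harder (overlapping) data. An illustrative
# reading of the "developmental trajectory" principle, not the paper's code.

rng = np.random.default_rng(0)

def make_data(n, margin):
    """Two Gaussian classes; a larger margin means an easier problem."""
    x = rng.standard_normal((n, 2))
    y = rng.integers(0, 2, n)
    x[y == 1] += margin               # shift class 1 away from class 0
    return x, y

w, b, lr = np.zeros(2), 0.0, 0.1

for margin in (4.0, 2.0, 1.0):        # easy -> hard schedule
    x, y = make_data(200, margin)
    for xi, yi in zip(x, y):
        pred = 1 / (1 + np.exp(-(xi @ w + b)))   # logistic prediction
        w += lr * (yi - pred) * xi               # stochastic gradient step
        b += lr * (yi - pred)

x_test, y_test = make_data(1000, 1.0)
acc = np.mean(((x_test @ w + b) > 0) == y_test)
print(f"accuracy on the hardest task: {acc:.2f}")
```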
    Dr. Tarek R. Besold, Researcher, Philosophy & Ethics group at TU Eindhoven, said:
    “As AI researchers we often draw metaphorical parallels between our systems and the mental development of human babies and children. It is high time to take these analogies more seriously and look at the rich knowledge of infant development from psychology and neuroscience, which may help us overcome the most pressing limitations of machine learning.”
    Professor Rhodri Cusack, The Thomas Mitchell Professor of Cognitive Neuroscience, Director of Trinity College Institute of Neuroscience, added:
    “Artificial neural networks were in part inspired by the brain. Similar to infants, they rely on learning, but current implementations are very different from human (and animal) learning. Through interdisciplinary research, babies can help unlock the next generation of AI.”
    Story Source:
    Materials provided by Trinity College Dublin. Note: Content may be edited for style and length.

  • Modeling a devastating childhood disease on a chip

    Millions of children in low- and middle-income nations suffer from environmental enteric dysfunction (EED), a chronic inflammatory disease of the intestine that is the second leading cause of death in children under five years old. EED is a devastating condition that is associated with malnutrition, stunted growth, and poor cognitive development, permanently impacting patients’ quality of life. In addition, oral vaccines are less effective in children with EED, leaving them vulnerable to otherwise preventable diseases. While some cases of EED are treatable by simply improving a patient’s diet, better nutrition doesn’t help all children. A lack of adequate nutrients and exposure to contaminated water and food contribute to EED, but the underlying mechanism of the disease remains unknown.
    Now, a team of researchers at the Wyss Institute at Harvard University has created an in vitro human model of EED in a microengineered Intestine Chip device, providing a window into the complex interplay between malnutrition and genetic factors driving the disease. Their EED Chips recapitulate several features of EED found in biopsies from human patients, including inflammation, intestinal barrier dysfunction, reduced nutrient absorption, and atrophy of the villi (tiny hair-like projections) on intestinal cells.
    They also found that depriving healthy Intestine Chips of two crucial nutrients — niacinamide (a vitamin) and tryptophan (an essential amino acid) — caused morphological, functional, and genetic changes similar to those found in EED patients, suggesting that their model could be used to identify and test the effects of potential treatments.
    “Functionally, there is something very wrong with these kids’ digestive system and its ability to absorb nutrients and fight infections, which you can’t cure simply by giving them the nutrients that are missing from their diet. Our EED model allowed us to decipher what has happened to the intestine, both physically and genetically, that so dramatically affects its normal function in patients with EED,” said co-first author Amir Bein, R.D., Ph.D., a former Senior Postdoctoral Research Fellow at the Wyss Institute who is now the VP of Biology at Quris Technologies.
    The research is published today in Nature Biomedical Engineering.
    Modeling a complex disease on-a-chip
    The EED Chip project grew out of conversations between the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., and the Bill & Melinda Gates Foundation, which has an established interest in supporting research to understand and treat enteric diseases. Recognizing that there had been no in vitro studies of EED to probe its molecular mechanisms, a Wyss team of more than 20 people set about creating a model of EED using the Human Organ Chip technology developed in Ingber’s lab.

  • Where once were black boxes, new LANTERN illuminates

    Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.
    The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built of deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
    Described in a new paper published in the Proceedings of the National Academy of Sciences, LANTERN shows the ability to predict the genetic edits needed to create useful differences in three different proteins. One is the spike-shaped protein from the surface of the SARS-CoV-2 virus that causes COVID-19; understanding how changes in the DNA can alter this spike protein might help epidemiologists predict the future of the pandemic. The other two are well-known lab workhorses: the LacI protein from the E. coli bacterium and the green fluorescent protein (GFP) used as a marker in biology experiments. Selecting these three subjects allowed the NIST team to show not only that their tool works, but also that its results are interpretable — an important characteristic for industry, which needs predictive methods that help with understanding of the underlying system.
    “We have an approach that is fully interpretable and that also has no loss in predictive power,” said Peter Tonner, a statistician and computational biologist at NIST and LANTERN’s main developer. “There’s a widespread assumption that if you want one of those things you can’t have the other. We’ve shown that sometimes, you can have both.”
    The problem the NIST team is tackling might be imagined as interacting with a complex machine that sports a vast control panel filled with thousands of unlabeled switches: The device is a gene, a strand of DNA that encodes a protein; the switches are base pairs on the strand. The switches all affect the device’s output somehow. If your job is to make the machine work differently in a specific way, which switches should you flip?
    Because the answer might require changes to multiple base pairs, scientists have to flip some combination of them, measure the result, then choose a new combination and measure again. The number of possible combinations is daunting.
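    A back-of-the-envelope count shows how quickly those combinations grow. For a hypothetical gene of 1,000 base pairs, where each edited position can take one of three alternative bases, the number of variants with exactly k edits is C(1000, k) * 3^k:

```python
from math import comb

# Count of variants with exactly k base-pair edits in a hypothetical
# 1,000-base-pair gene, where each edited position has 3 alternative bases.

L = 1000
for k in (1, 2, 3):
    print(f"variants with {k} edit(s): {comb(L, k) * 3**k:,}")
```

    Even three simultaneous edits already yield billions of candidate variants, which is why an interpretable predictive model like LANTERN is so useful.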