More stories

  • Can a solid be a superfluid? Engineering a novel supersolid state from layered 2D materials

    A collaboration of Australian and European physicists predicts that layered electronic 2D semiconductors can host a curious quantum phase of matter called the supersolid.
    The supersolid is a very counterintuitive phase indeed. It is made up of particles that form a rigid crystal and yet at the same time flow without friction, since all the particles belong to the same single quantum state.
    A solid becomes ‘super’ when its quantum properties match the well-known quantum properties of superconductors. A supersolid simultaneously has two orders, solid and super: solid because of the spatially repeating pattern of particles, super because the particles can flow without resistance. “Although a supersolid is rigid, it can flow like a liquid without resistance,” explains lead author Dr Sara Conti (University of Antwerp).
    The study was conducted at UNSW (Australia), University of Antwerp (Belgium) and University of Camerino (Italy).
    A 50-Year Journey Towards the Exotic Supersolid
    Geoffrey Chester, a professor at Cornell University, predicted in 1970 that at low temperatures solid helium-4 under pressure should display crystalline solid order, with each helium atom at a specific point in a regularly ordered lattice, and, at the same time, Bose-Einstein condensation of the atoms, with every atom in the same single quantum state so that they flow without resistance.

    However, in the five decades since, the Chester supersolid has not been unambiguously detected.
    Alternative approaches have reported supersolid-like phases in cold-atom systems in optical lattices. These are either clusters of condensates or condensates whose density variation is determined by the trapping geometry. Such phases should be distinguished from the original Chester supersolid, in which each particle is localised at its place in the crystal lattice purely by the forces acting between the particles.
    The new Australia-Europe study predicts that such a state could instead be engineered in two-dimensional (2D) electronic materials in a semiconductor structure, fabricated with two conducting layers separated by an insulating barrier of thickness d.
    One layer is doped with negatively-charged electrons and the other with positively-charged holes.
    The particles forming the supersolid are interlayer excitons, bound states of an electron and hole tied together by their strong electrical attraction. The insulating barrier prevents fast self-annihilation of the exciton bound pairs. Voltages applied to top and bottom metal ‘gates’ tune the average separation r0 between excitons.

    The research team predicts that excitons in this structure will form a supersolid over a wide range of layer separations and average separations between the excitons. The electrical repulsion between the excitons can constrain them into a fixed crystalline lattice.
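    For orientation (this is standard dipole electrostatics, not a formula quoted from the study): at in-plane distances r much larger than the layer spacing d, two interlayer excitons repel like parallel electric dipoles of length d,

    $$ V(r) \simeq \frac{e^{2} d^{2}}{4\pi\varepsilon\, r^{3}}, \qquad r \gg d, $$

    where ε is the permittivity of the surrounding semiconductor. The strength of this repulsion at the average spacing r0 grows with the layer separation d, which is why tuning d relative to r0 can drive the excitons between superfluid, supersolid and normal-solid behaviour, as described below.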
    “A key novelty is that a supersolid phase with Bose-Einstein quantum coherence appears at layer separations much smaller than the separation predicted for the non-super exciton solid that is driven by the same electrical repulsion between excitons,” says co-corresponding author Prof David Neilson (University of Antwerp).
    “In this way, the supersolid pre-empts the non-super exciton solid. At still larger separations, the non-super exciton solid eventually wins, and the quantum coherence collapses.”
    “This is an extremely robust state, readily achievable in experimental setups,” adds co-corresponding author Prof Alex Hamilton (UNSW). “Ironically, the layer separations are relatively large and are easier to fabricate than the extremely small layer separations in such systems that have been the focus of recent experiments aimed at maximising the interlayer exciton binding energies.”
    As for detection: it is well known that a superfluid cannot be set into rotation until it can host a quantum vortex, analogous to a whirlpool. But forming this vortex requires a finite amount of energy, and hence a sufficiently strong rotational force. Up to that point, the measured rotational moment of inertia (the extent to which an object resists rotational acceleration) remains zero. In the same way, a supersolid can be identified by detecting such an anomaly in its rotational moment of inertia.
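    One standard way to quantify that anomaly (the usual definition from torsional-oscillator experiments; the article itself gives no formula) is the non-classical fraction of the rotational inertia,

    $$ f_{\mathrm{NCRI}} = 1 - \frac{I_{\mathrm{measured}}}{I_{\mathrm{classical}}}, $$

    where I_classical is the moment of inertia expected if every particle co-rotated with its container. A nonzero f_NCRI below the vortex-formation threshold signals the frictionless, superfluid component of a supersolid.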
    The research team has reported the complete phase diagram of this system at low temperatures.
    “By changing the layer separation relative to the average exciton spacing, the strength of the exciton-exciton interactions can be tuned to stabilise either the superfluid, or the supersolid, or the normal solid,” says Dr Sara Conti.
    “The existence of a triple point is also particularly intriguing. At this point, the boundaries of supersolid and normal-solid melting, and the supersolid to normal-solid transition, all cross. There should be exciting physics coming from the exotic interfaces separating these domains, for example, Josephson tunnelling between supersolid puddles embedded in a normal background.”

  • Magnon-based computation could signal computing paradigm shift

    Like electronics or photonics, magnonics is an engineering subfield that aims to advance information technologies when it comes to speed, device architecture, and energy consumption. A magnon is the quantum of a spin wave, a collective excitation of a material’s magnetization, and corresponds to the specific amount of energy required to change that magnetization.
    Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which involve energy loss through heating (known as Joule heating) of the conductor used. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering, explains, energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar.
    “With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development,” Grundler says. “A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy.”
    This inefficiency, known as the memory wall or Von Neumann bottleneck, has had researchers searching for new computing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a “holy grail.”
    While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN PhD student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology’s support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and — crucially — to reverse the magnetization of the surface nanomagnets.
    “The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored,” Grundler explains.

    A route to in-memory computation
    The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude, and could then be used to write and read data.
    “We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system,” Grundler explains, adding that “nonvolatile” refers to the stable storage of data over long time periods without additional energy consumption.
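    As a minimal sketch of the switching behaviour described above (a toy threshold model with a made-up critical amplitude, not the device physics reported in the paper), the write operation amounts to a simple rule: a spin-wave pulse below the critical amplitude leaves a nanomagnet unchanged, while a pulse at or above it reverses the magnetization.

    ```python
    # Toy model of amplitude-threshold switching in a YIG-nanomagnet device.
    # A_C is a hypothetical critical spin-wave amplitude in arbitrary units.

    A_C = 1.0

    def apply_pulse(state: int, amplitude: float) -> int:
        """Return the nanomagnet state (0 or 1) after a spin-wave pulse."""
        return 1 - state if amplitude >= A_C else state

    bit = 0
    bit = apply_pulse(bit, 0.4)   # weak pulse: state unchanged (read-like probing)
    bit = apply_pulse(bit, 1.3)   # strong pulse: magnetization reversed (a write)
    print(bit)                    # -> 1
    ```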
    It’s this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.
    Optimization on the horizon
    Baumgaertl and Grundler have published the groundbreaking results in the journal Nature Communications, and the LMGN team is already working on optimizing their approach.
    “Now that we have shown that spin waves write data by switching the nanomagnets from states 0 to 1, we need to work on a process to switch them back again — this is known as toggle switching,” Grundler says.
    He also notes that theoretically, the magnonics approach could process data in the terahertz range of the electromagnetic spectrum (for comparison, current computers function in the slower gigahertz range). However, they still need to demonstrate this experimentally.
    “The promise of this technology for more sustainable computing is huge. With this publication, we are hoping to reinforce interest in wave-based computation, and attract more young researchers to the growing field of magnonics.”

  • Could changes in the Fed's interest rates affect pollution and the environment?

    Can monetary policy, such as the United States Federal Reserve raising interest rates, affect the environment? According to a new study by Florida Atlantic University’s College of Business, it can.
    Using a stylized dynamic aggregate demand-aggregate supply (AD-AS) model, researchers explored the consequences of traditional monetary tools — namely, changes in the short-term interest rate — for the environment. Specifically, they looked at how monetary policy impacts CO2 emissions in the short and long run. The AD-AS model conveys several interlocking relationships between the four macroeconomic goals of growth, unemployment, inflation and a sustainable balance of trade.
    For the study, researchers also used the Global Vector AutoRegressive (GVAR) methodology, which interconnects regions using an explicit economic integration variable, in this case, bilateral trade, allowing for spillover effects.
    Joao Ricardo Faria, Ph.D., co-author and a professor in the Economics Department within FAU’s College of Business, and collaborators from Federal University of Ouro Preto and the University of São Paulo in Brazil, examined four regions for the study: U.S., United Kingdom, Japan and the Eurozone (all the European Union countries that incorporate the euro as their national currency).
    In addition, they used data from eight other countries to characterize the international economy. Their method explicitly models the interplay among regions to assess not only the domestic impact of a policy shift but also its repercussions for other economies, as sketched below.
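    As a rough illustration of the GVAR idea (a toy sketch with invented numbers, not the authors' estimated model), each region's variable depends on its own past and on a trade-weighted average of its partners, which is how a shock in one region can spill over to the others.

    ```python
    import numpy as np

    # Toy GVAR-style model: one variable per region, own AR(1) dynamics plus a
    # "foreign" term built from bilateral trade weights. All numbers are made up.
    regions = ["US", "UK", "Japan", "Eurozone"]
    W = np.array([            # hypothetical trade-weight matrix, rows sum to 1
        [0.0, 0.3, 0.3, 0.4],
        [0.4, 0.0, 0.1, 0.5],
        [0.5, 0.1, 0.0, 0.4],
        [0.5, 0.3, 0.2, 0.0],
    ])
    phi = 0.7                 # persistence of each region's own dynamics
    lam = 0.2                 # sensitivity to the trade-weighted foreign variable

    def step(x, shock):
        """One period: x_t = phi * x_{t-1} + lam * (W @ x_{t-1}) + shock."""
        return phi * x + lam * (W @ x) + shock

    # A contractionary shock hits only the US, then propagates to its partners.
    x = step(np.zeros(4), shock=np.array([-1.0, 0.0, 0.0, 0.0]))
    for _ in range(5):
        x = step(x, shock=np.zeros(4))
    print(dict(zip(regions, np.round(x, 3))))
    ```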
    Results of the study, published in the journal Energy Economics, suggest that the impact of monetary policy on pollution is largely domestic: a monetary contraction in a region reduces that region’s own emissions, but the effect does not seem to spread to other economies. However, the findings do not imply that the international economy is irrelevant to determining one region’s emissions level.
    “The actions of a country, like the U.S., are not restricted to its borders. For example, a positive shock in the Federal Reserve’s monetary policy may cause adjustments in the whole system, including the carbon emissions of the other regions,” said Faria.
    The approach used in this study considered the U.S.’s own dynamics as well as the responses of other economies. Moreover, analysis of four distinct regions allowed researchers to verify and compare how domestic markets react to the same policy.
    The study also identified important differences across regions. For example, monetary policy does not seem to reduce short-run emissions in the U.K., or long-run emissions in the Eurozone. Moreover, the cointegration coefficient for Japan is much larger than those of the other regions, suggesting strong effects of monetary policy on CO2 emissions. Furthermore, cointegration analysis suggests a relationship between interest rates and emissions in the long run.
    Statistical analyses also suggest that external factors are relevant to understanding each region’s fluctuations in emissions. A large fraction of the fluctuations in domestic CO2 emissions come from external sources.
    “Findings from our study suggest efforts to reduce emissions can benefit from internationally coordinated policies,” said Faria. “Thus, the main policy prescription is to increase international coordination and efforts to reduce CO2 emissions. We realize that achieving coordination is not an easy endeavor despite international efforts to reduce carbon emissions, such as the Paris Agreement. Our paper highlights the payoffs of coordinated policies. We hope it motivates future research on how to achieve successful coordination.”

  • Preschoolers prefer to learn from a competent robot over an incompetent human

    Who do children prefer to learn from? Previous research has shown that even infants can identify the best informant. But would preschoolers prefer learning from a competent robot over an incompetent human?
    According to a new paper by Concordia researchers published in the Journal of Cognition and Development, the answer largely depends on age.
    The study compared two groups of preschoolers: one of three-year-olds, the other of five-year-olds. The children participated in Zoom meetings featuring a video of a young woman and a small robot with humanoid characteristics (head, face, torso, arms and legs) called Nao sitting side by side. Between them were familiar objects that the robot would label correctly while the human would label them incorrectly, e.g., referring to a car as a book, a ball as a shoe and a cup as a dog.
    Next, the two groups of children were presented with unfamiliar items: the top of a turkey baster, a roll of twine and a silicone muffin container. Both the robot and the human used different nonsense terms like “mido,” “toma,” “fep” and “dax” to label the objects. The children were then asked what the object was called, endorsing either the label offered by the robot or by the human.
    While the three-year-olds showed no preference for one word over another, the five-year-olds were much more likely to state the term provided by the robot than the one provided by the human.
    “We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot,” says the paper’s lead author, PhD candidate Anna-Elisabeth Baumann. Horizon Postdoctoral Fellow Elizabeth Goldman and undergraduate research assistant Alexandra Meltzer also contributed to the study. Professor and Concordia University Chair of Developmental Cybernetics Diane Poulin-Dubois in the Department of Psychology supervised the study.

    The researchers repeated the experiments with new groups of three- and five-year-olds, replacing the humanoid Nao with a small truck-shaped robot called Cozmo. The results resembled those observed with the human-like robot, suggesting that the robot’s morphology does not affect the children’s selective trust strategies.
    Baumann adds that, along with the labelling task, the researchers administered a naive biology task. The children were asked if biological organs or mechanical gears formed the internal parts of unfamiliar animals and robots. The three-year-olds appeared confused, assigning both biological and mechanical internal parts to the robots. However, the five-year-olds were much more likely to indicate that only mechanical parts belonged inside the robots.
    “This data tells us that the children will choose to learn from a robot even though they know it is not like them. They know that the robot is mechanical,” says Baumann.
    Being right is better than being human
    While there has been a substantial amount of literature on the benefits of using robots as teaching aides for children, the researchers note that most studies focus on a single robot informant or two robots pitted against each other. This study, they write, is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.
    Poulin-Dubois points out that this study builds on a previous paper she co-wrote with Goldman and Baumann. That paper shows that by age five, children treat robots similarly to how adults do, i.e., as depictions of social agents.
    “Older preschoolers know that robots have mechanical insides, but they still anthropomorphize them. Like adults, these children attribute certain human-like qualities to robots, such as the ability to talk, think and feel,” she says.
    “It is important to emphasize that we see robots as tools to study how children can learn from both human and non-human agents,” concludes Goldman. “As technology use increases, and as children interact with technological devices more, it is important for us to understand how technology can be a tool to help facilitate their learning.”

  • First silicon-integrated ECRAM for a practical AI accelerator

    The transformative changes brought by deep learning and artificial intelligence are accompanied by immense costs. For example, OpenAI’s ChatGPT algorithm costs at least $100,000 every day to operate. This could be reduced with accelerators, or computer hardware designed to efficiently perform the specific operations of deep learning. However, such a device is only viable if it can be integrated with mainstream silicon-based computing hardware on the material level.
    This was preventing the implementation of one highly promising deep learning accelerator — arrays of electrochemical random-access memory, or ECRAM — until a research team at the University of Illinois Urbana-Champaign achieved the first material-level integration of ECRAMs onto silicon transistors. The researchers, led by graduate student Jinsong Cui and professor Qing Cao of the Department of Materials Science & Engineering, recently reported in Nature Electronics an ECRAM device designed and fabricated with materials that can be deposited directly onto silicon during fabrication, realizing the first practical ECRAM-based deep learning accelerator.
    “Other ECRAM devices have been made with the many difficult-to-obtain properties needed for deep learning accelerators, but ours is the first to achieve all these properties and be integrated with silicon without compatibility issues,” Cao said. “This was the last major barrier to the technology’s widespread use.”
    ECRAM is a memory cell, or a device that stores data and uses it for calculations in the same physical location. This nonstandard computing architecture eliminates the energy cost of shuttling data between the memory and the processor, allowing data-intensive operations to be performed very efficiently.
    ECRAM encodes information by shuffling mobile ions between a gate and a channel. Electrical pulses applied to a gate terminal either inject ions into or draw ions from a channel, and the resulting change in the channel’s electrical conductivity stores information. It is then read by measuring the electric current that flows across the channel. An electrolyte between the gate and the channel prevents unwanted ion flow, allowing ECRAM to retain data as a nonvolatile memory.
    The research team selected materials compatible with silicon microfabrication techniques: tungsten oxide for the gate and channel, zirconium oxide for the electrolyte, and protons as the mobile ions. This allowed the devices to be integrated onto and controlled by standard microelectronics. Other ECRAM devices draw inspiration from neurological processes or even rechargeable battery technology and use organic substances or lithium ions, both of which are incompatible with silicon microfabrication.
    In addition, the Cao group device has numerous other features that make it ideal for deep learning accelerators. “While silicon integration is critical, an ideal memory cell must achieve a whole slew of properties,” Cao said. “The materials we selected give rise to many other desirable features.”
    Since the same material was used for the gate and channel terminals, injecting ions into and drawing ions from the channel are symmetric operations, simplifying the control scheme and significantly enhancing reliability. The channel reliably held ions for hours at a time, which is sufficient for training most deep neural networks. Since the ions were protons, the smallest ion, the devices switched quite rapidly. The researchers found that their devices lasted for over 100 million read-write cycles and were vastly more efficient than standard memory technology. Finally, since the materials are compatible with microfabrication techniques, the devices could be shrunk to the micro- and nanoscales, allowing for high density and computing power.
    The researchers demonstrated their device by fabricating arrays of ECRAMs on silicon microchips to perform matrix-vector multiplication, a mathematical operation crucial to deep learning. Matrix entries, or neural network weights, were stored in the ECRAMs, and the array performed the multiplication on the vector inputs, represented as applied voltages, by using the stored weights to change the resulting currents. This operation as well as the weight update was performed with a high level of parallelism.
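    A minimal numerical sketch of that operation (idealized crossbar arithmetic with invented values, not parameters from the paper): the stored conductances act as the weight matrix, the applied voltages are the input vector, and Ohm's and Kirchhoff's laws deliver the product as column currents read out in parallel.

    ```python
    import numpy as np

    # Idealized analog matrix-vector multiplication on an ECRAM crossbar.
    G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],   # conductances (siemens): rows are
                  [3.0e-6, 1.0e-6, 2.0e-6]])  # input lines, columns are output lines
    V = np.array([0.2, 0.1])                  # input vector encoded as row voltages

    I = G.T @ V                               # column currents = matrix-vector product
    print(I)                                  # amperes, read out in parallel
    ```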
    “Our ECRAM devices will be most useful for AI edge-computing applications sensitive to chip size and energy consumption,” Cao said. “That’s where this type of device has the most significant benefits compared to what is possible with silicon-based accelerators.”
    The researchers are patenting the new device, and they are working with semiconductor industry partners to bring this new technology to market. According to Cao, a prime application of this technology is in autonomous vehicles, which must rapidly learn their surrounding environment and make decisions with limited computational resources. He is collaborating with Illinois electrical & computer engineering faculty to integrate the ECRAMs with foundry-fabricated silicon chips, and with Illinois computer science faculty to develop software and algorithms that take advantage of ECRAM’s unique capabilities.

  • Here’s why some Renaissance artists egged their oil paintings

    Art historians often wish that Renaissance painters could shell out secrets of the craft. Now, scientists may have cracked one using chemistry and physics.

    Around the turn of the 15th century in Italy, oil-based paints replaced egg-based tempera paints as the dominant medium. During this transition, artists including Leonardo da Vinci and Sandro Botticelli also experimented with paints made from oil and egg (SN: 4/30/14). But it has been unclear how adding egg to oil paints may have affected the artwork.  

    “Usually, when we think about art, not everybody thinks about the science which is behind it,” says chemical engineer Ophélie Ranquet of the Karlsruhe Institute of Technology in Germany.

    In the lab, Ranquet and colleagues whipped up two oil-egg recipes to compare with plain oil paint. One mixture contained fresh egg yolk mixed into oil paint, and had a similar consistency to mayonnaise. For the other blend, the scientists ground pigment into the yolk, dried it and mixed it with oil — a process the old masters might have used, according to the scant historical records that exist today. Each medium was subjected to a battery of tests that analyzed its mass, moisture, oxidation, heat capacity, drying time and more.

    In both concoctions, the yolk’s proteins, phospholipids and antioxidants helped slow paint oxidation, which can cause paint to turn yellow over time, the team reports March 28 in Nature Communications. 

    In the mayolike blend, the yolk created sturdy links between pigment particles, resulting in stiffer paint. Such consistency would have been ideal for techniques like impasto, a raised, thick style that adds texture to art. Egg additions also could have reduced wrinkling by creating a firmer paint consistency. Wrinkling sometimes happens with oil paints when the top layer dries faster than the paint underneath, and the dried film buckles over looser, still-wet paint.

    The hybrid mediums have some less than eggs-ellent qualities, though. For instance, the eggy oil paint can take longer to dry. If paints were too yolky, Renaissance artists would have had to wait a long time to add the next layer, Ranquet says.

    “The more we understand how artists select and manipulate their materials, the more we can appreciate what they’re doing, the creative process and the final product,” says Ken Sutherland, director of scientific research at the Art Institute of Chicago, who was not involved with the work.

    Research on historical art mediums can not only aid art preservation efforts, Sutherland says, but also help people gain a deeper understanding of the artworks themselves.

  • Microplastics are in our bodies. Here’s why we don’t know the health risks

    Tiny particles of plastic have been found everywhere — from the deepest place on the planet, the Mariana Trench, to the top of Mount Everest. And now more and more studies are finding that microplastics, defined as plastic pieces less than 5 millimeters across, are also in our bodies.

    “What we are looking at is the biggest oil spill ever,” says Maria Westerbos, founder of the Plastic Soup Foundation, an Amsterdam-based nonprofit advocacy organization that works to reduce plastic pollution around the world. Nearly all plastics are made from fossil fuel sources. And microplastics are “everywhere,” she adds, “even in our bodies.”

    In recent years, microplastics have been documented in all parts of the human lung, in maternal and fetal placental tissues, in human breast milk and in human blood. Microplastics scientist Heather Leslie, formerly of Vrije Universiteit Amsterdam, and colleagues found microplastics in blood samples from 17 of 22 healthy adult volunteers in the Netherlands. The finding, published last year in Environment International, confirms what many scientists have long suspected: These tiny bits can get absorbed into the human bloodstream.

    “We went from expecting plastic particles to be absorbable and present in the human bloodstream to knowing that they are,” Leslie says.

    The findings aren’t entirely surprising; plastics are all around us. Durable, versatile and cheap to manufacture, they are in our clothes, cosmetics, electronics, tires, packaging and so many more items of daily use. And the types of plastic materials on the market continue to increase. “There were around 3,000 [plastic materials] when I started researching microplastics over a decade ago,” Leslie says. “Now there are over 9,600. That’s a huge number, each with its own chemical makeup and potential toxicity.”

    Though durable, plastics do degrade, by weathering from water, wind, sunlight or heat — as in ocean environments or in landfills — or by friction, in the case of car tires, which releases plastic particles along roadways during motion and braking.

    In addition to studying microplastic particles, researchers are also trying to get a handle on nanoplastics, particles which are less than 1 micrometer in length. “The large plastic objects in the environment will break down into micro- and nanoplastics, constantly raising particle numbers,” says toxicologist Dick Vethaak of the Institute for Risk Assessment Sciences at Utrecht University in the Netherlands, who collaborated with Leslie on the study finding microplastics in human blood.

    Nearly two decades ago, marine biologists began drawing attention to the accumulation of microplastics in the ocean and their potential to interfere with organism and ecosystem health (SN: 2/20/16, p. 20). But only in recent years have scientists started focusing on microplastics in people’s food and drinking water — as well as in indoor air.

    Plastic particles are also intentionally added to cosmetics like lipstick, lip gloss and eye makeup to improve their feel and finish, and to personal care products, such as face scrubs, toothpastes and shower gels, for their cleansing and exfoliating properties. When washed off, these microplastics enter the sewage system. They can end up in the sewage sludge from wastewater treatment plants, which is used to fertilize agricultural lands, or even in treated water released into waterways.

    What damage, if any, microplastics may do when they get into our bodies is not clear, but a growing community of researchers investigating these questions thinks there is reason for concern. Inhaled particles might irritate and damage the lungs, akin to the damage caused by other particulate matter. And although the composition of plastic particles varies, some contain chemicals that are known to interfere with the body’s hormones.

    Currently there are huge knowledge gaps in our understanding of how these particles are processed by the human body.

    How do microplastics get into our bodies?

    Research points to two main entry routes into the human body: We swallow them and we breathe them in.

    Evidence is growing that our food and water is contaminated with microplastics. A study in Italy, reported in 2020, found microplastics in everyday fruits and vegetables. Wheat and lettuce plants have been observed taking up microplastic particles in the lab; uptake from soil containing the particles is probably how they get into our produce in the first place.

    Sewage sludge can contain microplastics not only from personal care products, but also from washing machines. One study looking at sludge from a wastewater treatment plant in southwest England found that if all the treated sludge produced there were used to fertilize soils, a volume of microplastic particles equivalent to what is found in more than 20,000 plastic credit cards could potentially be released into the environment each month.

    On top of that, fertilizers are coated with plastic for controlled release, plastic mulch film is used as a protective layer for crops and water containing microplastics is used for irrigation, says Sophie Vonk, a researcher at the Plastic Soup Foundation.

    “Agricultural fields in Europe and North America are estimated to receive far higher quantities of microplastics than global oceans,” Vonk says.

    A recent pilot study commissioned by the Plastic Soup Foundation found microplastics in all blood samples collected from pigs and cows on Dutch farms, showing livestock are capable of absorbing some of the plastic particles from their feed, water or air. Of the beef and pork samples collected from farms and supermarkets as part of the same study, 75 percent showed the presence of microplastics. Multiple studies document that microplastic particles are also in fish muscle, not just the gut, and so are likely to be consumed when people eat seafood.

    Microplastics are in our drinking water, whether it’s from the tap or bottled. The particles may enter the water at the source, during treatment and distribution, or, in the case of bottled water, from its packaging.

    Results from studies attempting to quantify levels of human ingestion vary dramatically, but they suggest people might be consuming on the order of tens of thousands of microplastic particles per person per year. These estimates may change as more data come in, and they will likely vary depending on people’s diets and where they live. Plus, it is not yet clear how these particles are absorbed, distributed, metabolized and excreted by the human body, and if not excreted immediately, how long they might stick around.

    Babies might face particularly high exposures. A small study of six infants and 10 adults found that the infants had more microplastic particles in their feces than the adults did. Research suggests microplastics can enter the fetus via the placenta, and babies could also ingest the particles via breast milk. The use of plastic feeding bottles and teething toys adds to children’s microplastics exposure.

    Microplastic particles are also floating in the air. Research conducted in Paris to document microplastic levels in indoor air found concentrations ranging from three to 15 particles per cubic meter of air. Outdoor concentrations were much lower.
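    For scale, a back-of-envelope calculation (the breathing figure is a rough assumption, not a number from these studies): if an adult inhales very roughly 10 to 15 cubic meters of air per day, the indoor concentrations above would correspond to something on the order of

    $$ (3\ \text{to}\ 15\ \mathrm{particles/m^3}) \times (10\ \text{to}\ 15\ \mathrm{m^3/day}) \approx 30\ \text{to}\ 225\ \mathrm{particles\ per\ day}. $$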

    Airborne particles may turn out to be more of a concern than those in food. One study reported in 2018 compared the amount of microplastics within mussels harvested off Scotland’s coasts with the amount present in indoor air. Exposure to microplastic fibers from the air during the meal was far higher than exposure from eating the mussels themselves.

    Extrapolating from this research, immunologist Nienke Vrisekoop of the University Medical Center Utrecht says, “If I keep a piece of fish on the table for an hour, it has probably gathered more microplastics from the ambient air than it has from the ocean.”

    What’s more, a study of human lung tissue reported last year offers solid evidence that we are breathing in plastic particles. Microplastics showed up in 11 of 13 samples, including those from the upper, middle and lower lobes, researchers in England reported.

    Perhaps good news: Microplastics seem unable to penetrate the skin. “The epidermis holds off quite a lot of stuff from the outside world, including [nano]particles,” Leslie says. “Particles can go deep into your skin, but so far we haven’t observed them passing the barrier, unless the skin is damaged.”

    What do we know about the potential health risks?

    Studies in mice suggest microplastics are not benign. Research in these test animals shows that lab exposure to microplastics can disrupt the gut microbiome, lead to inflammation, lower sperm quality and testosterone levels, and negatively affect learning and memory.

    But some of these studies used concentrations that may not be relevant to real-world scenarios. Studies on the health effects of exposure in humans are just getting under way, so it could be years before scientists understand the actual impact in people.

    Immunologist Barbro Melgert of the University of Groningen in the Netherlands has studied the effects of nylon microfibers on human tissue grown to resemble lungs. Exposure to nylon fibers reduced both the number and size of airways that formed in these tissues by 67 percent and 50 percent, respectively. “We found that the cause was not the microfibers themselves but rather the chemicals released from them,” Melgert says.

    “Microplastics could be considered a form of air pollution,” she says. “We know air pollution particles tend to induce stress in our lungs, and it will probably be the same for microplastics.”

    Vrisekoop is studying how the human immune system responds to microplastics. Her unpublished lab experiments suggest immune cells don’t recognize microplastic particles unless they have blood proteins, viruses, bacteria or other contaminants attached. But it is likely that such bits will attach to microplastic particles out in the environment and inside the body.

    “If the microplastics are not clean … the immune cells [engulf] the particle and die faster because of it,” Vrisekoop says. “More immune cells then rush in.” This marks the start of an immune response to the particle, which could potentially trigger a strong inflammatory reaction or possibly aggravate existing inflammatory diseases of the lungs or gastrointestinal tract.

    Image caption: A study reported last year identified microplastic particles in 11 of 13 samples of human lung tissue. The plastics were found throughout the lungs, and their presence suggests that inhalation is one route for the particles to enter the body. (L.C. Jenner et al/Science of the Total Environment 2022)

    Some of the chemicals added to make plastic suitable for particular uses are also known to cause problems for humans: Bisphenol A, or BPA, is used to harden plastic and is a known endocrine disruptor that has been linked to developmental effects in children and problems with reproductive systems and metabolism in adults (SN: 7/18/09, p. 5). Phthalates, used to make plastic soft and flexible, are associated with adverse effects on fetal development and reproductive problems in adults along with insulin resistance and obesity. And flame retardants that make electronics less flammable are associated with endocrine, reproductive and behavioral effects.

    “Some of these chemical products that I worked on in the past [like the polybrominated diphenyl ethers used as flame retardants] have been phased out or are prohibited to use in new products now [in the European Union and the United States] because of their neurotoxic or disrupting effects,” Leslie says.

    What are the open questions?

    The first step in determining the risk of microplastics to human health is to better understand and quantify human exposure. Polyrisk — one of five large-scale research projects under CUSP, a multidisciplinary group of researchers and experts from 75 organizations across 21 European countries studying micro- and nanoplastics — is doing exactly that.

    Immunotoxicologist Raymond Pieters, of the Institute for Risk Assessment Sciences at Utrecht University and coordinator of Polyrisk, and colleagues are studying people’s inhalation exposure in a number of real-life scenarios: near a traffic light, for example, where cars are likely to be braking, versus a highway, where vehicles are continuously moving. Other scenarios under study include an indoor sports stadium, as well as occupational scenarios like the textile and rubber industry.

    Melgert wants to know how much microplastic is in our houses, what the particle sizes are and how much we breathe in. “There are very few studies looking at indoor levels [of microplastics],” she says. “We all have stuff in our houses — carpets, insulation made of plastic materials, curtains, clothes — that all give off fibers.”

    Vethaak, who co-coordinates MOMENTUM, a consortium of 27 research and industry partners from the Netherlands and seven other countries studying microplastics’ potential effects on human health, is quick to point out that “any measurement of the degree of exposure to plastic particles is likely an underestimation.” In addition to research on the impact of microplastics, the group is also looking at nanoplastics. Studying and analyzing these smallest of plastics in the environment and in our bodies is extremely challenging. “The analytical tools and techniques required for this are still being developed,” Vethaak says.

    Vethaak also wants to understand whether microplastic particles coated with bacteria and viruses found in the environment could spread these pathogens and increase infection rates in people. Studies have suggested that microplastics in the ocean can serve as safe havens for germs.

    Alongside knowing people’s level of exposure to microplastics, the second big question scientists want to understand is what if any level of real-world exposure is harmful. “This work is confounded by the multitude of different plastic particle types, given their variations in size, shape and chemical composition, which can affect uptake and toxicity,” Leslie says. “In the case of microplastics, it will take several more years to determine what the threshold dose for toxicity is.”

    Several countries have banned the use of microbeads in specific categories of products, including rinse-off cosmetics and toothpastes. But there are no regulations or policies anywhere in the world that address the release or concentrations of other microplastics — and there are very few consistent monitoring efforts. California has recently taken a step toward monitoring by approving the world’s first requirements for testing microplastics in drinking water sources. The testing will happen over the next several years.

    Pieters is very pragmatic in his outlook: “We know ‘a’ and ‘b,’” he says. “So we can expect ‘c,’ and ‘c’ would [imply] a risk for human health.”

    He is inclined to find ways to protect people now even if there is limited or uncertain scientific knowledge. “Why not take a stand for the precautionary principle?” he asks.

    For people who want to follow Pieters’ lead, there are ways to reduce exposure.

    “Ventilate, ventilate, ventilate,” Melgert says. She recommends not only proper ventilation, including opening your windows at home, but also regular vacuum cleaning and air purification. That can remove dust, which often contains microplastics, from surfaces and the air.

    Consumers can also choose to avoid cosmetics and personal care products containing microbeads. Buying clothes made from natural fabrics like cotton, linen and hemp, instead of from synthetic materials like acrylic and polyester, helps reduce the shedding of microplastics during wear and during the washing process.

    Specialized microplastics-removal devices, including laundry balls, laundry bags and filters that attach to washing machines, are designed to reduce the number of microfibers making it into waterways.

    Vethaak recommends not heating plastic containers in the microwave, even if they claim to be food grade, and not leaving plastic water bottles in the sun.

    Perhaps the biggest thing people can do is rely on plastics less. Reducing overall consumption will reduce plastic pollution, and so reduce microplastics sloughing into the air and water.

    Leslie recommends functional substitution: “Before you purchase something, think if you really need it, and if it needs to be plastic.”

    Westerbos remains hopeful that researchers and scientists from around the world can come together to find a solution. “We need all the brainpower we have to connect and work together to find a substitute to plastic that is not toxic and doesn’t last [in the environment] as long as plastic does,” she says.

  • AI 'brain' created from core materials for OLED TVs

    ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.
    A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED displays. The new device has proven to be excellent in terms of performance and power efficiency.
    Efficient AI operations, such as those of ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies fell short of meeting all the requirements needed to improve AI accuracy, such as linear and symmetric programming and device uniformity.
    The research team identified IGZO as a key material for AI computations that could be mass-produced and provide uniformity, durability, and computing accuracy. The compound comprises indium, gallium, zinc, and oxygen in a fixed ratio, and its excellent electron mobility and low leakage current have made it a standard backplane material for OLED displays.
    Using this material, the researchers developed a novel synapse device composed of two transistors interconnected through a storage node. The precise control of this node’s charging and discharging speed has enabled the AI semiconductor to meet the diverse performance metrics required for high-level performance. Furthermore, applying synaptic devices to a large-scale AI system requires the output current of synaptic devices to be minimized. The researchers confirmed the possibility of utilizing the ultra-thin film insulators inside the transistors to control the current, making them suitable for large-scale AI.
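    As a toy illustration of the “linear and symmetric programming” behaviour such a synaptic device is designed to provide (a sketch with hypothetical numbers, not the device model from the paper), each programming pulse ideally shifts the stored weight by the same fixed amount in either direction, which is what lets analog training reach high accuracy.

    ```python
    # Idealized linear, symmetric synapse programming. STEP and the weight range
    # are hypothetical values, not measured device parameters.

    STEP = 0.01
    W_MIN, W_MAX = 0.0, 1.0

    def program(weight: float, n_pulses: int) -> float:
        """Apply n_pulses pulses (positive = potentiate, negative = depress)."""
        return min(max(weight + STEP * n_pulses, W_MIN), W_MAX)

    w = 0.5
    w = program(w, +10)   # potentiation: 0.6
    w = program(w, -10)   # symmetric depression: back to 0.5
    print(w)
    ```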
    The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, which verifies its potential application in high-accuracy AI systems in the future.
    Professor Chung explained, “The significance of my research team’s achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI.”
    This study was published last week on the inside back cover of Advanced Electronic Materials and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT of Korea.