More stories

  • Biodegradable displays for sustainable electronics

    In the coming years, the increasing use of electronic devices in consumer goods and new technologies for the Internet of Things will increase the amount of electronic waste. To conserve resources and minimize waste volumes, eco-friendlier production and a more sustainable product lifecycle will be needed. Scientists at the Karlsruhe Institute of Technology (KIT) are now the first to produce displays whose biodegradability has been checked and certified by an independent office. The results are reported in the Journal of Materials Chemistry.
    “For the first time, we have demonstrated that it is possible to produce sustainable displays that are largely based on natural materials with the help of industrially relevant production methods. After use, these displays are not electronic waste, but can be composted. In combination with recycling and reuse, this might help minimize or completely prevent some of the environmental impacts of electronic waste,” says Manuel Pietsch, first author of the publication and a researcher at KIT’s Light Technology Institute (LTI), who is working at the Heidelberg InnovationLab.
    Low Energy Consumption, Simple Component Architecture
    The display’s operation is based on the electrochromic effect of the organic starting material. When a voltage is applied, light absorption is modified and the material changes its color. Electrochromic displays have low energy consumption and a simple component architecture compared with commercially available displays such as LED, LCD, and e-paper. Another advantage is that they can be produced by inkjet printing in a customized, inexpensive, and material-efficient way. Moreover, the process scales to high throughput. The materials used are mainly of natural origin or biocompatible. Sealing with gelatine makes the display adhesive and flexible, so that it can be worn directly on the skin.
    Use in Medical Diagnostics and Food Packaging
    The display is generally suited for short-lifecycle applications in various sectors. In medical diagnostics, for instance, where hygiene plays an important role, sensors and their indicators have to be cleaned or disposed of after use. The newly developed display need not be dumped as electronic waste; it is compostable. It can also be used for quality monitoring in food packaging, where reuse is not permitted. Digital printing allows the displays to be adapted to individual persons or complex shapes without any expensive modification of the process, which reduces the consumption of resources.
    “As far as we know, this is the first demonstration of a biodegradable display produced by inkjet printing. It will pave the way to sustainable innovations for other electronic components and to the production of eco-friendlier electronics,” says Gerardo Hernandez-Sosa, Head of LTI’s Printed Electronics Group at the Heidelberg InnovationLab.

    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT).

  • How one physicist is unraveling the mathematics of knitting

    Physicist Elisabetta Matsumoto is an avid knitter and has been since taking up the hobby as a child. During graduate school at the University of Pennsylvania in 2009, Matsumoto came across an unusually knotty stitch while knitting a pattern for a Japanese red dragon. “I have books with thousands of different stitch patterns, but the one in the red dragon wall hanging was one I had never seen,” she says. That got her thinking about the geometry of stitches and, eventually, led her to study the mathematics of knitting.
    There are a hundred or so basic stitches, Matsumoto says. By varying stitch combinations, a knitter can alter the elasticity, mechanical strength and 3-D structure of the resulting fabric. Yarn on its own isn’t very elastic. But when knitted, the yarn gives rise to fabric that can stretch by more than twice its length while the yarn itself barely stretches.
    Matsumoto, now at the Georgia Institute of Technology in Atlanta, is teasing out the mathematical rules that dictate how stitches impart such unique properties to fabrics. She hopes to develop a catalog of stitch types, their combinations and the resulting fabric properties. Knitters, scientists and manufacturers could all benefit from a dictionary of knits, she says.
    Matsumoto’s research builds on knot theory (SN: 10/31/08), a set of mathematical principles that define how knots form. These principles have helped explain how DNA folds and unfolds and how a molecule’s makeup and distribution in space impart it with physical and chemical characteristics (SN: 5/23/08; SN: 8/27/18). Matsumoto is using knot theory to understand how each stitch entangles with its neighbors. “The types of stitches, the differences in their geometries as well as the order in which you put those stitches together into a textile may determine [the fabric’s] properties,” she says.
    Making tiny changes, such as altering a couple of crossings in a knot, could have a huge impact on the mechanics of the textile. For instance, a fabric made of just one stitch type, such as a knit or purl, tends to curl at the edges. But combine the two stitch types in alternating rows or columns, and the fabric lies flat. And despite looking nearly identical, the fabrics have varying degrees of stretchiness, Matsumoto and grad student Shashank Markande reported in July in the Bridges 2020 Conference Proceedings.
    Matsumoto’s team is now training a computer to think like a knitter. Using yarn properties, mathematical stitch details and final knitted structures as inputs, a program can predict mechanical properties of fabrics. These predictions could someday help tailor materials for specific applications — from scaffolds for growing human tissue to wearable smart clothing (SN: 6/1/18) — and perhaps solve knotty problems of everyday life.
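    The program described above amounts to a learned mapping from stitch and yarn inputs to fabric properties. As a rough illustration of that idea only, here is a minimal Python sketch that fits a toy linear model from invented stitch features to an invented stretch ratio; every feature name and number is hypothetical, and Matsumoto's actual model is far richer than a linear fit.

    ```python
    import numpy as np

    # Toy training data (invented for illustration). Each row describes a swatch:
    # fraction of knit stitches, fraction of purl stitches, stitches per cm.
    X = np.array([
        [1.0, 0.0, 4.0],   # stockinette-like swatch
        [0.0, 1.0, 4.0],   # reverse stockinette-like swatch
        [0.5, 0.5, 4.0],   # alternating knit/purl (rib-like)
        [0.5, 0.5, 6.0],   # denser rib-like swatch
    ])
    y = np.array([1.6, 1.6, 2.4, 2.1])  # invented "maximum stretch ratio" labels

    # Least-squares fit with an intercept term.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_stretch(knit_frac, purl_frac, density):
        """Predict the stretch ratio of a hypothetical swatch from its features."""
        return np.array([knit_frac, purl_frac, density, 1.0]) @ coef

    print(predict_stretch(0.5, 0.5, 5.0))  # rib-like fabric, intermediate density
    ```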

  • Toddlers who use touchscreens may be more distractible

    Toddlers with high daily touchscreen use are quicker to look at objects when they appear and are less able to resist distraction than toddlers with no or low touchscreen use, according to new research from Birkbeck, University of London, King’s College London and the University of Bath.
    The research team say the findings are important for the growing debate around the role of screen time in toddlers’ development, especially given the increased levels of screen time seen during the COVID-19 pandemic.
    Lead researcher Professor Tim Smith, from Birkbeck’s Centre for Brain and Cognitive Development, said: “The use of smartphones and tablets by babies and toddlers has accelerated rapidly in recent years. The first few years of life are critical for children to learn how to control their attention and ignore distraction, early skills that are known to be important for later academic achievement. There has been growing concern that toddler touchscreen use may negatively impact their developing attention but previously there was no empirical evidence to support this.”
    To provide such evidence, Professor Smith’s TABLET Project, at Birkbeck’s Centre for Brain and Cognitive Development, recruited 12-month-old infants who had different levels of touchscreen usage. The study followed them over the next 2.5 years, bringing them into the lab three times, at 12 months, 18 months and 3.5 years. During each visit the toddlers took part in computer tasks with an eye-tracker to measure their attention. Objects appeared in different screen locations. How quickly toddlers looked at the objects and how well they could ignore distracting objects were measured.
    Professor Smith states: “We found that infants and toddlers with high touchscreen use were faster to look at objects when they appeared and were less able to ignore distracting objects compared to the low users.”
    Dr Ana Maria Portugal, main researcher on the project, points out: “We are currently unable to conclude that touchscreen use caused the differences in attention, as it could also be that children who are more distractible are more attracted to the attention-grabbing features of touchscreen devices than those who are not.”
    Co-investigator Dr Rachael Bedford, from the Department of Psychology at University of Bath commented: “What we need to know next is how this pattern of increased looking to distracting objects on screens relates to attention in the real-world: is it a positive sign that the children have adapted to the multitasking demands of their complex everyday environment or does it relate to difficulties during tasks that require concentration?”

    Story Source:
    Materials provided by University of Bath.

  • Domino effects and synchrony in seizure initiation

    Epilepsy, a neurological disease that causes recurring seizures with a wide array of effects, impacts approximately 50 million people across the world. This condition has been recognized for a long time — written records of epileptic symptoms date all the way back to 4000 B.C.E. But despite this long history of knowledge and treatment, the exact processes that occur in the brain during a seizure remain elusive.
    Scientists have observed distinctive patterns in the electrical activity of neuron groups in healthy brains. Networks of neurons move through states of similar behavior (synchronization) and dissimilar behavior (desynchronization) in a process that is associated with memory and attention. But in a brain with a neurological disorder like epilepsy, synchronization can grow to a dangerous extent when a collection of brain cells begins to emit excess electricity. “Synchronization is thought to be important for information processing,” Jennifer Creaser of the University of Exeter said. “But too much synchronization — such as what occurs in epileptic seizures or Parkinson’s disease — is associated with disease states and can impair brain function.”
    Measurements of epileptic seizures have revealed that desynchronization in brain networks often occurs before or during the early stages of a seizure. As the seizure progresses, networks become increasingly synchronized as additional regions of the brain get involved, leading to high levels of synchronization towards the seizure’s end. Understanding the interactions between the increased electrical activity during a seizure and changes in synchronization is an important step towards improving the diagnosis and treatment of epilepsy.
    Jennifer Creaser, Peter Ashwin (University of Exeter), and Krasimira Tsaneva-Atanasova (University of Exeter, Technical University of Munich, and Bulgarian Academy of Sciences) explored the mechanisms of synchronization that accompany seizure onset in a paper published in December in the SIAM Journal on Applied Dynamical Systems. In their study — which took place at the Engineering and Physical Sciences Research Council’s Centre for Predictive Modelling in Healthcare at the University of Exeter and University of Birmingham — the researchers used mathematical modeling to explore the interplay between groups of neurons in the brain that leads to transitions in synchronization during seizure onset. “Although this is a theoretical study of an idealized model, it is inspired by challenges posed by understanding transitions between healthy and pathological activity in the brain,” Ashwin said.
    The authors utilize an extended version of an existing mathematical model that represents the brain as a network connecting multiple nodes of neuron groups. The model network consists of bistable nodes, meaning that each node is able to switch between two stable states: resting (a quiescent state) and seizure (an active and oscillatory state). These nodes remain in their current state until they receive a stimulus that gives them a sufficient kick to escape to the other state. In the model, this stimulus comes from other connected nodes or appears in the form of “noise” — outside sources of neural activity, such as endocrine responses that are associated with an emotional state or physiological changes due to disease.
    The influence between neighboring nodes is governed by a coupling function that represents the way in which the nodes in the network communicate with each other. The first of the two possible types of coupling is amplitude coupling, which is governed by the “loudness” of the neighboring nodes. The second is phase coupling, which is related to the speed at which the neighbors are firing. Although the researchers needed to utilize a simple formulation on a small network to even make their analysis possible — a more complex and realistic system would be too computationally taxing — they expected their model to exhibit the same types of behaviors that clinical recordings of real brain activity have revealed.
    The nodes in the modeled system all begin in the healthy resting state. In previous research, the authors found that adding a small amount of noise to the system caused each node to transition to the active state — but the system’s geometry was such that returning to the resting state took much longer than leaving. Because of this, these escapes can spread sequentially as a “domino effect” when a number of nodes are connected. This leads to a cascade of escapes to the active state — much like a falling line of dominos — that spreads activity across the network.
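    To make the domino picture concrete, the sketch below simulates a small chain of bistable nodes with the Euler-Maruyama method. The double-well potential, the one-way coupling, and every parameter value are simplifying assumptions chosen for illustration; they stand in for, but are not, the authors' actual model equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Each node moves in a double well V(x) = x**4/4 - x**2/2, with stable states
    # near x = -1 ("resting") and x = +1 ("active"). Noise can kick a node over
    # the barrier, and an active node pushes its right-hand neighbour, so escapes
    # spread down the chain like falling dominoes.
    N = 5              # number of nodes (small network, as in the paper's analysis)
    dt = 0.01          # Euler-Maruyama time step
    sigma = 0.4        # noise strength (illustrative value)
    eps = 0.4          # coupling strength (illustrative value)
    x = -np.ones(N)    # all nodes start in the resting state

    escape_times = [None] * N
    t = 0.0
    while any(e is None for e in escape_times) and t < 2000.0:
        drift = x - x**3                             # -V'(x) for the double well
        push = np.zeros(N)
        push[1:] = eps * np.clip(x[:-1], 0.0, None)  # only active neighbours push
        x += (drift + push) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        t += dt
        for i in range(N):
            if escape_times[i] is None and x[i] > 0.5:
                escape_times[i] = round(t, 2)

    # Escape times are typically ordered along the chain: a sequential cascade.
    print("escape times:", escape_times)
    ```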
    Creaser, Ashwin, and Tsaneva-Atanasova’s new paper builds upon this previous research on the domino effect to explore the transitions into and out of synchrony that occur during cascades of escapes. The team used their model to identify the circumstances that bring about these changes in synchrony and investigate how the type of coupling in a network affects its behavior.
    When the model incorporated only amplitude coupling, it exhibited a new phenomenon in which the domino effect could accelerate or decelerate. However, this effect had no bearing on synchronization changes in the network; all of the nodes started and remained synchronized. But when the model incorporated more general amplitude and phase coupling, the authors found that the nodes’ synchrony could change between consecutive escapes during the domino effect. They then determined which conditions would cause changes in synchrony under phase-amplitude coupling. This change in synchrony throughout the sequence of escapes was the study’s most novel result.
    The results of this work could facilitate further studies on seizures and their management. “The mathematical modeling of seizure initiation and propagation can not only help to uncover seizures’ complex underlying mechanisms, but also provide a means for enabling in silico experiments to predict the outcome of manipulating the neural system,” Tsaneva-Atanasova said. Understanding the interplay between synchronized and desynchronized dynamics in brain networks could help identify clinically relevant measures for seizure treatment. For example, Creaser and Tsaneva-Atanasova recently served as the lead and senior author, respectively, on a paper that utilized a simpler version of the model to classify patterns of seizure onset that were recorded in a clinical setting. In the future, these kinds of modeling studies may lead to the personalization of seizure identification and treatment for individuals with epilepsy.

    Story Source:
    Materials provided by the Society for Industrial and Applied Mathematics. Original written by Jillian Kunze.

  • Simulating 800,000 years of California earthquake history to pinpoint risks

    Massive earthquakes are, fortunately, rare events. But that scarcity of information blinds us in some ways to their risks, especially when it comes to determining the risk for a specific location or structure.
    “We haven’t observed most of the possible events that could cause large damage,” explained Kevin Milner, a computer scientist and seismology researcher at the Southern California Earthquake Center (SCEC) at the University of Southern California. “Using Southern California as an example, we haven’t had a truly big earthquake since 1857 — that was the last time the southern San Andreas broke into a massive magnitude 7.9 earthquake. A San Andreas earthquake could impact a much larger area than the 1994 Northridge earthquake, and other large earthquakes can occur too. That’s what we’re worried about.”
    The traditional way of getting around this lack of data involves digging trenches to learn more about past ruptures, collating information from lots of earthquakes all around the world and creating a statistical model of hazard, or using supercomputers to simulate a specific earthquake in a specific place with a high degree of fidelity.
    However, a new framework for predicting the likelihood and impact of earthquakes over an entire region, developed by a team of researchers associated with SCEC over the past decade, has found a middle ground and perhaps a better way to ascertain risk.
    A new study led by Milner and Bruce Shaw of Columbia University, published in the Bulletin of the Seismological Society of America in January 2021, presents results from a prototype Rate-State earthquake simulator, or RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. Their results compare well with historical earthquakes and the results of other methods, and display a realistic distribution of earthquake probabilities.
    According to the developers, the new approach improves the ability to pinpoint how big an earthquake might occur in a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.

    “For the first time, we have a whole pipeline from start to finish where earthquake occurrence and ground-motion simulation are physics-based,” Milner said. “It can simulate up to 100,000s of years on a really complicated fault system.”
    Applying massive computer power to big problems
    RSQSim transforms mathematical representations of the geophysical forces at play in earthquakes — the standard model of how ruptures nucleate and propagate — into algorithms, and then solves them on some of the most powerful supercomputers on the planet. The computationally-intensive research was enabled over several years by government-sponsored supercomputers at the Texas Advanced Computing Center, including Frontera — the most powerful system at any university in the world — Blue Waters at the National Center for Supercomputing Applications, and Summit at the Oak Ridge Leadership Computing Facility.
    “One way we might be able to do better in predicting risk is through physics-based modeling, by harnessing the power of systems like Frontera to run simulations,” said Milner. “Instead of an empirical statistical distribution, we simulate the occurrence of earthquakes and the propagation of its waves.”
    “We’ve made a lot of progress on Frontera in determining what kind of earthquakes we can expect, on which fault, and how often,” said Christine Goulet, Executive Director for Applied Science at SCEC, also involved in the work. “We don’t prescribe or tell the code when the earthquakes are going to happen. We launch a simulation of hundreds of thousands of years, and just let the code transfer the stress from one fault to another.”
    The simulations began with the geological topography of California and simulated, over 800,000 virtual years, how stresses form and dissipate as tectonic forces act on the Earth. From these simulations, the framework generated a catalog — a record that an earthquake occurred at a certain place with a certain magnitude and attributes at a given time. The catalog that the SCEC team produced on Frontera and Blue Waters was among the largest ever made, Goulet said. The outputs of RSQSim were then fed into CyberShake, which again used computer models of geophysics to predict how much shaking (in terms of ground acceleration, velocity, and duration) would occur as a result of each quake.
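    Conceptually, such a catalog is just a very long list of simulated events, each with a time, location, and magnitude, from which occurrence rates can be read off directly. The Python sketch below shows the idea on an invented stand-in catalog; the magnitude distribution and all numbers are made up for illustration, whereas RSQSim derives its events from fault physics rather than by sampling a statistical distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in for a simulated catalog: 800,000 years of events with magnitudes
    # drawn from a truncated Gutenberg-Richter-like distribution (b = 1). This is
    # purely illustrative; a physics-based simulator produces the events itself.
    years = 800_000
    mags = 5.0 + rng.exponential(scale=1.0 / np.log(10.0), size=400_000)
    mags = mags[mags <= 8.2]  # truncate at an assumed maximum magnitude

    # Annual exceedance rates: "how often does a quake of at least magnitude M
    # occur?" -- the basic quantity a seismic hazard estimate is built from.
    for m in [6.0, 6.5, 7.0, 7.5, 8.0]:
        rate = np.sum(mags >= m) / years
        if rate > 0:
            print(f"M >= {m}: {rate:.5f} events/year (about 1 per {1/rate:,.0f} years)")
        else:
            print(f"M >= {m}: no events in this synthetic catalog")
    ```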

    “The framework outputs a full slip-time history: where a rupture occurs and how it grew,” Milner explained. “We found it produces realistic ground motions, which tells us that the physics implemented in the model is working as intended.” They have more work planned for validation of the results, which is critical before acceptance for design applications.
    The researchers found that the RSQSim framework produces rich, variable earthquakes overall — a sign it is producing reasonable results — while also generating repeatable source and path effects.
    “For lots of sites, the shaking hazard goes down relative to state-of-practice estimates,” Milner said. “But for a couple of sites that have special configurations of nearby faults or local geological features, like near San Bernardino, the hazard went up. We are working to better understand these results and to define approaches to verify them.”
    The work is helping to determine the probability of an earthquake occurring along any of California’s hundreds of earthquake-producing faults, the scale of earthquake that could be expected, and how it may trigger other quakes.
    Support for the project comes from the U.S. Geological Survey (USGS), National Science Foundation (NSF), and the W.M. Keck Foundation. Frontera is NSF’s leadership-class national resource. Compute time on Frontera was provided through a Large-Scale Community Partnership (LSCP) award to SCEC that allows hundreds of U.S. scholars access to the machine to study many aspects of earthquake science. LSCP awards provide extended allocations of up to three years to support long-lived research efforts. SCEC — which was founded in 1991 and has computed on TACC systems for over a decade — is a premier example of such an effort.
    The creation of the catalog required eight days of continuous computing on Frontera and used more than 3,500 processors in parallel. Simulating the ground shaking at 10 sites across California required a comparable amount of computing on Summit, the second fastest supercomputer in the world.
    “Adoption by the broader community will be understandably slow,” said Milner. “Because such results will impact safety, it is part of our due diligence to make sure these results are technically defensible by the broader community,” added Goulet. But research results such as these are important in order to move beyond generalized building codes that in some cases may inadequately represent the risk a region faces and in other cases may be too conservative.
    “The hope is that these types of models will help us better characterize seismic hazard so we’re spending our resources to build strong, safe, resilient buildings where they are needed the most,” Milner said.
    Video: https://www.youtube.com/watch?v=AdGctQsjKpU&feature=emb_logo

  • Adding or subtracting single quanta of sound

    Researchers perform experiments that can add or subtract a single quantum of sound — with surprising results when applied to noisy sound fields.
    Quantum mechanics tells us that physical objects can have both wave and particle properties. For instance, a single particle — or quantum — of light is known as a photon, and, in a similar fashion, a single quantum of sound is known as a phonon, which can be thought of as the smallest unit of sound energy.
    A team of researchers spanning Imperial College London, University of Oxford, the Niels Bohr Institute, University of Bath, and the Australian National University have performed an experiment that can add or subtract a single phonon to a high-frequency sound field using interactions with laser light.
    The team’s findings aid the development of future quantum technologies, such as hardware components in a future ‘quantum internet’, and help pave the way for tests of quantum mechanics on a more macroscopic scale. The details of their research are published today in the journal Physical Review Letters.
    To add or subtract a single quantum of sound, the team experimentally implement a technique proposed in 2013 that exploits correlations between photons and phonons created inside a resonator. More specifically, laser light is injected into a crystalline microresonator that supports both the light and the high-frequency sound waves.
    The two types of waves then couple to one another via an electromagnetic interaction that creates light at a new frequency. Then, to subtract a single phonon, the team detect a single photon that has been up-shifted in frequency. “Detecting a single photon gives us an event-ready signal that we have subtracted a single phonon,” says lead author of the project Georg Enzian.
    When the experiment is performed at a finite temperature, the sound field has random fluctuations from thermal noise. Thus, at any one time, the exact number of sound quanta present is unknown but on average there will be n phonons initially.
    What happens now when you add or subtract a single phonon? At first thought, you may expect this would simply change the average to n + 1 or n − 1, respectively. However, the actual outcome defies this intuition: quite counterintuitively, when you subtract a single phonon, the average number of phonons actually goes up to 2n.
    This surprising result where the mean number of quanta doubles has been observed for all-optical photon-subtraction experiments and is observed for the first time outside of optics here. “One way to think of the experiment is to imagine a claw machine that you often see in video arcades, except that you can’t see how many toys there are inside the machine. Before you agree to play, you’ve been told that on average there are n toys inside but the exact number changes randomly each time you play. Then, immediately after a successful grab with the claw, the average number of toys actually goes up to 2n,” describes Michael Vanner, Principal Investigator of the Quantum Measurement Lab at Imperial College London.
    It’s important to note that this result certainly does not violate energy conservation and comes about due to the statistics of thermal phonons.
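    A short numerical check makes the doubling plausible. For a thermal state the phonon-number distribution is geometric, and heralded subtraction post-selects on detecting an up-shifted photon, an event whose probability grows in proportion to the number of phonons present. The Python sketch below follows that standard statistical argument; it is a back-of-the-envelope check, not the team's actual analysis.

    ```python
    import numpy as np

    # The thermal (Bose-Einstein) phonon-number distribution with mean nbar is
    # geometric: P(n) = (1 / (1 + nbar)) * (nbar / (1 + nbar))**n.
    nbar = 3.0
    n = np.arange(400)  # truncation; the tail beyond n = 400 is negligible here
    P = (1.0 / (1.0 + nbar)) * (nbar / (1.0 + nbar)) ** n

    # Heralded subtraction succeeds with probability proportional to n, so the
    # conditional probability that the field *had* n phonons is ~ n * P(n),
    # and the state left behind has n - 1 phonons.
    w = n * P
    P_given_click = w / w.sum()
    mean_after = np.sum((n - 1) * P_given_click)

    print(mean_after)  # ~ 6.0, i.e. 2 * nbar: the counterintuitive doubling
    ```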
    The team’s results, combined with their recent experiment that reported strong coupling between light and sound in a microresonator, open a new path for quantum science and technology with sound waves.

    Story Source:
    Materials provided by Imperial College London.

  • A robot arm toting a Venus flytrap can grab delicate objects

    A new robotic grabber is ripped straight from the plant world. The device, made with a severed piece of a Venus flytrap, can grasp tiny, delicate objects, researchers report January 25 in Nature Electronics.
    Normally, the carnivorous Dionaea muscipula scores a meal when unsuspecting prey touches delicate hairs on one of the plant’s jawlike leaves, triggering the trap to snap shut (SN: 10/14/20). But by sticking electrodes to the leaves and applying a small electric voltage, researchers designed a method to force Venus flytraps to close. Even when cut from the plant, the leaves retained the ability to shut upon command for up to a day, say materials scientist Wenlong Li and colleagues at Nanyang Technological University in Singapore.
    Integrating soft, flexible plant material into robotics could aid in picking up fragile objects that would otherwise be damaged by clunky, rigid graspers, the researchers say. So, Li’s team attached a piece of a flytrap to a robotic arm and used a smartphone app to control the trap. In experiments, the robotic grabber clutched a piece of wire one-half of a millimeter in diameter. And when not strapped to the robotic arm, the dismembered plant also caught a slowly moving 1-gram weight.
    One drawback: The traps take hours to reopen, meaning this bot had better make the catch on the first try.
    Scientists controlled a Venus flytrap outfitted with electrodes, using a smartphone to direct it to grasp small objects like a wire and a moving weight.

  • Sport may fast-track numeracy skills for Indigenous children

    Greater sports participation among Aboriginal and Torres Strait Islander children is linked with better academic performance, according to new research from the University of South Australia.
    Conducted in partnership with the University of Sydney and the University of Technology Sydney, the world-first study found that Aboriginal and Torres Strait Islander children who played organised sports every year over four years had numeracy skills that were advanced by seven months compared with children who did less sport.
    The study used data from four successive waves of Australia’s Longitudinal Study of Indigenous Children, following 303 students (with a baseline age of five to six years old) to assess cumulative sports participation against academic performance in standardised NAPLAN and PAT outcomes.
    Sports participation has been linked with better cognitive function and memory in many child populations, but this is the first study to confirm the beneficial association between ongoing involvement in sport and academic performance among Aboriginal and Torres Strait Islander children.
    Lead researcher, UniSA’s Dr Dot Dumuid, says the study highlights the importance of sports as a strategy to help close the gap for Australia’s First Nations peoples.
    “Playing sport has always had strong cultural importance to Aboriginal and Torres Strait Islanders, so understanding how sports can boost numeracy among Indigenous children is a valuable step towards improving health and reducing disadvantage,” Dr Dumuid says.

    “When children play sport, they’re learning the social structures of a team, how to work within rules, how to focus their attention, and key strategies for success.
    “Interestingly, when children play sport, they’re not only activating parts of the brain that are involved in learning, but they’re also inadvertently practising mathematical computations such as ‘how much time is left in the game?’ and ‘how many points do we need to win?’, and it’s this that may well be contributing to improved numeracy.”
    Aboriginal and Torres Strait Islanders comprise a relatively large proportion of athletes in Australia’s leading sports teams. While representing only about three per cent of the population, they make up nine per cent of AFL players and 22 per cent of State of Origin players.
    Encouraging sports in Aboriginal and Torres Strait Islander communities could have many other benefits for health and wellbeing, says co-researcher and Professor of Indigenous Health Education at UTS, John Evans.
    “Playing sport creates a sense of belonging, and builds self-esteem, coherence and purpose,” Professor Evans says.
    “This is especially important for people living in rural and remote areas where opportunities for social interaction and structured activities can be limited.
    “If we can find ways to encourage greater participation among Aboriginal and Torres Strait Islander communities, while removing key barriers — such as financial costs and lack of transport — we could promote healthier living and more cohesive communities, while also boosting academic performance among Indigenous children.”

    Story Source:
    Materials provided by University of South Australia.