More stories

  • Aquatic robots can remove contaminant particles from water

    Corals in the ocean are made up of coral polyps: small, soft creatures with a stem and tentacles. The polyps nourish the corals and aid their survival by generating self-made currents through the motion of their soft bodies.
    Scientists from WMG at the University of Warwick, in a project led by Eindhoven University of Technology in the Netherlands, developed a 1cm by 1cm wireless artificial aquatic polyp that can remove contaminants from water. Apart from cleaning, this soft robot could also be used in medical diagnostic devices by helping to pick up and transport specific cells for analysis.
    In the paper, ‘An artificial aquatic polyp that wirelessly attracts, grasps, and releases objects’, researchers demonstrate how their artificial aquatic polyp moves under the influence of a magnetic field, while the tentacles are triggered by light. A rotating magnetic field under the device drives a rotating motion of the artificial polyp’s stem. This motion generates an attractive flow which can guide suspended targets, such as oil droplets, towards the artificial polyp.
    Once the targets are within reach, UV light can be used to activate the polyp’s tentacles, composed of photo-active liquid crystal polymers, which then bend towards the light enclosing the passing target in the polyp’s grasp. Target release is then possible through illumination with blue light.
    Dr Harkamaljot Kandail, from WMG, University of Warwick, was responsible for creating state-of-the-art 3D simulations of the artificial aquatic polyps. The simulations are important for understanding and elucidating how the stem and tentacles generate the flow fields that attract particles in the water.
    The simulations were then used to optimise the shape of the tentacles so that the floating particles could be grabbed quickly and efficiently.
    Dr Harkamaljot Kandail, from WMG, University of Warwick, comments:
    “Corals are such a valuable ecosystem in our oceans, I hope that the artificial aquatic polyps can be further developed to collect contaminant particles in real applications. The next stage for us to overcome before being able to do this is to successfully scale up the technology from laboratory to pilot scale. To do so we need to design an array of polyps which work harmoniously together where one polyp can capture the particle and pass it along for removal.”
    Marina Pilz Da Cunha, from the Eindhoven University of Technology, Netherlands, adds:
    “The artificial aquatic polyp serves as a proof of concept to demonstrate the potential of actuator assemblies and serves as an inspiration for future devices. It exemplifies how motion of different stimuli-responsive polymers can be harnessed to perform wirelessly controlled tasks in an aquatic environment.”

    Story Source:
    Materials provided by University of Warwick. Note: Content may be edited for style and length.

  • Math shows how brain stays stable amid internal noise and a widely varying world

    Whether you are playing Go in a park amid chirping birds, a gentle breeze and kids playing catch nearby or you are playing in a den with a ticking clock on a bookcase and a purring cat on the sofa, if the game situation is identical and clear, your next move likely would be, too, regardless of those different conditions. You’ll still play the same next move despite a wide range of internal feelings or even if a few neurons here and there are just being a little erratic. How does the brain overcome unpredictable and varying disturbances to produce reliable and stable computations? A new study by MIT neuroscientists provides a mathematical model showing how such stability inherently arises from several known biological mechanisms.
    More fundamental than the willful exertion of cognitive control over attention, the model the team developed describes an inclination toward robust stability that is built into neural circuits by virtue of the connections, or “synapses,” that neurons make with each other. The equations they derived and published in PLOS Computational Biology show that networks of neurons involved in the same computation will repeatedly converge toward the same patterns of electrical activity, or “firing rates,” even if they are sometimes arbitrarily perturbed by the natural noisiness of individual neurons or arbitrary sensory stimuli the world can produce.
    “How does the brain make sense of this highly dynamic, non-linear nature of neural activity?” said co-senior author Earl Miller, Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences (BCS) at MIT. “The brain is noisy, there are different starting conditions — how does the brain achieve a stable representation of information in the face of all these factors that can knock it around?”
    To find out, Miller’s lab, which studies how neural networks represent information, joined forces with BCS colleague and mechanical engineering Professor Jean-Jacques Slotine, who leads the Nonlinear Systems Laboratory at MIT. Slotine brought the mathematical method of “contraction analysis,” a concept developed in control theory, to the problem, along with tools his lab developed to apply the method. Contracting networks exhibit the property that trajectories starting from disparate points ultimately converge into one trajectory, like tributaries in a watershed. They do so even when the inputs vary with time. They are robust to noise and disturbance, and they allow many other contracting networks to be combined together without a loss of overall stability — much like the brain typically integrates information from many specialized regions.
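    The core property is easy to see in a toy system. The sketch below is purely illustrative and is not the authors’ neural-network model: it integrates a simple one-dimensional contracting system, dx/dt = -x + sin(t) (an example chosen here for demonstration), from two very different starting points and shows that the trajectories end up essentially identical.

```python
# Illustrative sketch only (not the model from the PLOS Computational Biology
# paper): a contracting system forgets its initial condition, so trajectories
# started far apart converge onto the same time-varying solution.
import math

def simulate(x0, t_end=20.0, dt=0.01):
    """Forward-Euler integration of dx/dt = -x + sin(t)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-x + math.sin(t))
        t += dt
    return x

a = simulate(x0=10.0)    # one "tributary"
b = simulate(x0=-7.5)    # a very different one
print(abs(a - b))        # tiny (~1e-8): both have merged onto the same trajectory
```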
    “In a system like the brain where you have [hundreds of billions] of connections the questions of what will preserve stability and what kinds of constraints that imposes on the system’s architecture become very important,” Slotine said.
    Math reflects natural mechanisms
    Leo Kozachkov, a graduate student in both Miller’s and Slotine’s labs, led the study by applying contraction analysis to the problem of the stability of computations in the brain. What he found is that the variables and terms in the resulting equations that enforce stability directly mirror properties and processes of synapses: inhibitory circuit connections can get stronger, excitatory circuit connections can get weaker, both kinds of connections are typically tightly balanced relative to each other, and neurons make far fewer connections than they could (each neuron, on average, could make roughly 10 million more connections than it does).

    “These are all things that neuroscientists have found, but they haven’t linked them to this stability property,” Kozachkov said. “In a sense, we’re synthesizing some disparate findings in the field to explain this common phenomenon.”
    The new study, which also involved Miller lab postdoc Mikael Lundqvist, was hardly the first to grapple with stability in the brain, but the authors argue it has produced a more advanced model by accounting for the dynamics of synapses and by allowing for wide variations in starting conditions. It also offers mathematical proofs of stability, Kozachkov added.
    Though focused on the factors that ensure stability, the authors noted, their model does not go so far as to doom the brain to inflexibility or determinism. The brain’s ability to change — to learn and remember — is just as fundamental to its function as its ability to consistently reason and formulate stable behaviors.
    “We’re not asking how the brain changes,” Miller said. “We’re asking how the brain keeps from changing too much.”
    Still, the team plans to keep iterating on the model, for instance by encompassing a richer accounting for how neurons produce individual spikes of electrical activity, not just rates of that activity.
    They are also working to compare the model’s predictions with data from experiments in which animals repeatedly performed tasks in which they needed to perform the same neural computations, despite experiencing inevitable internal neural noise and at least small sensory input differences.
    Finally, the team is considering how the models may inform understanding of different disease states of the brain. Aberrations in the delicate balance of excitatory and inhibitory neural activity in the brain are considered crucial in epilepsy, Kozachkov notes. A symptom of Parkinson’s disease, as well, entails a neurally rooted loss of motor stability. Miller adds that some patients with autism spectrum disorders struggle to stably repeat actions (e.g. brushing teeth) when external conditions vary (e.g. brushing in a different room).
    The National Institute of Mental Health, the Office of Naval Research, the National Science Foundation and the JPB Foundation supported the research.

  • Grasshopper jumping on Bloch sphere finds new quantum insights

    New research at the University of Warwick has (pardon the pun) put a new spin on a mathematical analogy involving a jumping grasshopper and its ideal lawn shape. This work could help us understand the spin states of quantum-entangled particles.
    The grasshopper problem was devised by physicists Olga Goulko (then at UMass Amherst), Adrian Kent and Damián Pitalúa-García (Cambridge). They asked for the ideal lawn shape that would maximize the chance that a grasshopper, starting from a random position on the lawn and jumping a fixed distance in a random direction, lands back on the lawn. Intuitively one might expect the answer to be a circular lawn, at least for small jumps. But Goulko and Kent actually proved otherwise: various shapes, from a cogwheel pattern to some disconnected patches of lawn, performed better for different jump sizes.
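    For readers who want to experiment, the planar setup is easy to probe numerically. The sketch below is an illustration only, not the authors’ optimisation method: it uses Monte Carlo sampling to estimate the landing probability for the naive circular lawn of unit area (the baseline that the cogwheel and disconnected shapes beat); the function name and the jump length d=0.3 are arbitrary choices for the example.

```python
# Monte Carlo estimate (illustrative only) of the chance that a grasshopper
# starting at a uniformly random point on a unit-area circular lawn and jumping
# a fixed distance d in a random direction lands back on the lawn.
import math, random

def grasshopper_success_prob(d, n_trials=200_000):
    r_lawn = 1.0 / math.sqrt(math.pi)          # radius of a disc with area 1
    hits = 0
    for _ in range(n_trials):
        # uniform random starting point inside the disc
        r = r_lawn * math.sqrt(random.random())
        a = random.uniform(0.0, 2.0 * math.pi)
        x, y = r * math.cos(a), r * math.sin(a)
        # jump of length d in a uniformly random direction
        phi = random.uniform(0.0, 2.0 * math.pi)
        x2, y2 = x + d * math.cos(phi), y + d * math.sin(phi)
        if x2 * x2 + y2 * y2 <= r_lawn * r_lawn:
            hits += 1
    return hits / n_trials

print(grasshopper_success_prob(d=0.3))   # the circular lawn is the baseline to beat
```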
    Beyond surprises about lawn shapes and grasshoppers, the research provided useful insight into Bell-type inequalities relating probabilities of the spin states of two separated quantum-entangled particles. The Bell inequality, proved by physicist John Stewart Bell in 1964 and later generalised in many ways, demonstrated that no combination of classical theories with Einstein’s special relativity is able to explain the predictions (and later actual experimental observations) of quantum theory.
    The next step was to test the grasshopper problem on a sphere. The Bloch sphere is a geometrical representation of the state space of a single quantum bit. A great circle on the Bloch sphere defines linear polarization measurements, which are easily implemented and commonly used in Bell and other cryptographic tests. Because of the antipodal symmetry for the Bloch sphere, a lawn covers half the total surface area, and the natural hypothesis would be that the ideal lawn is hemispherical. Researchers in the Department of Computer Science at the University of Warwick, in collaboration with Goulko and Kent, investigated this problem and found that it too requires non-intuitive lawn patterns. The main result is that the hemisphere is never optimal, except in the special case when the grasshopper needs exactly an even number of jumps to go around the equator. This research shows that there are previously unknown types of Bell inequalities.
    One of the paper’s authors — Dmitry Chistikov from the Centre for Discrete Mathematics and its Applications (DIMAP) and the Department of Computer Science, at the University of Warwick, commented:
    “Geometry on the sphere is fascinating. The sine rule, for instance, looks nicer for the sphere than the plane, but this didn’t make our job easy.”
    The other author from Warwick, Professor Mike Paterson FRS, said:
    “Spherical geometry makes the analysis of the grasshopper problem more complicated. Dmitry, being from the younger generation, used a 1948 textbook and pen-and-paper calculations, whereas I resorted to my good old Mathematica methods.”
    The paper, entitled ‘Globe-hopping’, is published in the Proceedings of the Royal Society A. It is interdisciplinary work involving mathematics and theoretical physics, with applications to quantum information theory.
    The research team of Dmitry Chistikov and Mike Paterson (both from the University of Warwick), Olga Goulko (Boise State University, USA), and Adrian Kent (Cambridge) say that the next steps toward even more insight into quantum spin-state probabilities are to look for the most grasshopper-friendly lawns on the sphere, or even to let the grasshopper boldly go jumping in three or more dimensions.

    Story Source:
    Materials provided by University of Warwick. Note: Content may be edited for style and length.

  • Scientists can’t agree on how clumpy the universe is

    The universe is surprisingly smooth.
    A new measurement reveals that the universe is less clumpy than predicted, physicists report in a series of papers posted July 30 at arXiv.org. The discrepancy could hint at something amiss with scientists’ understanding of the cosmos.
    To pin down the cosmic clumpiness, researchers studied the orientation of 21 million galaxies with the Kilo-Degree Survey at the Paranal Observatory in Chile. As light from those galaxies streams through the universe, its trajectory is bent by massive objects, a phenomenon called gravitational lensing. This lensing causes the elongated shapes of galaxies to appear slightly aligned, rather than oriented randomly.
    When combined with additional data from other sky surveys, that alignment quantifies how much the matter in the universe is clumped together. The researchers found that the universe is about 10 percent more homogeneous, or smoother, than predicted based on light released just after the Big Bang, the cosmic microwave background. Previous results had hinted at the discrepancy, but the new measurement strengthens the case that the disagreement is not a fluke (SN: 7/30/19).
    If the measurement is correct, the mismatch could hint at a hole in the standard model of cosmology, the theory that describes how the universe has changed over time. When combined with a similar puzzle over how fast the universe is expanding (SN: 7/15/20), physicists are beginning to suspect that the universe is putting them on notice.
    “It’s a bit of a riddle,” says cosmologist Hendrik Hildebrandt of Ruhr-Universität Bochum in Germany, a coauthor of the studies. “Is [the universe] just telling us ‘You’re stupid and you didn’t do your measurement right,’ or … ‘Hey, I’m more complicated than you thought’?”

  • Species may swim thousands of kilometers to escape ocean heat waves

    When an intense heat wave strikes a patch of ocean, overheated marine animals may have to swim thousands of kilometers to find cooler waters, researchers report August 5 in Nature.
    Such displacement, whether among fish, whales or turtles, can hinder both conservation efforts and fishery operations. “To properly manage those species, we need to understand where they are,” says Michael Jacox, a physical oceanographer with the National Oceanic and Atmospheric Administration based in Monterey, Calif.
    Marine heat waves —  defined as at least five consecutive days of unusually hot water for a given patch of ocean — have become increasingly common over the past century (SN: 4/10/18). Climate change has amped up the intensity of some of the most famous marine heat waves of recent years, such as the Pacific Ocean Blob from 2015 to 2016 and scorching waters in the Tasman Sea in 2017 (SN: 12/14/17; SN: 12/11/18).
    “We know that these marine heat waves are having lots of effects on the ecosystem,” Jacox says. For example, researchers have documented how the sweltering waters can bleach corals and wreak havoc on kelp forests. But the impacts on mobile species such as fish are only beginning to be studied (SN: 1/15/20).

    “We have seen species appearing far north of where we expect them,” Jacox says. For example, in 2015, the Blob drove hammerhead sharks — which normally stay close to the tropics, near Baja California in Mexico — to shift their range at least hundreds of kilometers north, where they were observed off the coast of Southern California.
    To see how far a mobile ocean dweller would need to flee to escape the heat, Jacox and colleagues compared ocean temperatures around the globe. First, they examined surface ocean temperatures from 1982 to 2019 compiled by NOAA from satellites, buoys and shipboard measurements. Then, for the same period, they identified marine heat waves occurring around the world, where water temperatures for a region lingered in the highest 10 percent ever recorded for that place and that time of year. Finally, they calculated how far a swimmer in an area with a heat wave has had to go to reach cooler waters, a distance the team dubs “thermal displacement.”
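    The core of that last step can be sketched in a few lines of code. The function below is a deliberately simplified illustration with made-up grid cells and function names, not NOAA’s actual analysis pipeline: given a handful of cells with sea surface temperatures, it returns the distance from a heat-wave cell to the nearest cell that is still below a chosen temperature threshold.

```python
# Simplified illustration of "thermal displacement" (toy inputs; not NOAA's
# pipeline): distance from a heat-wave cell to the nearest cooler water.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def thermal_displacement(cells, origin, threshold):
    """cells: list of (lat, lon, sst_celsius); origin: (lat, lon) of the
    heat-wave cell; threshold: temperature the displaced animal needs to find."""
    lat0, lon0 = origin
    distances = [haversine_km(lat0, lon0, lat, lon)
                 for lat, lon, sst in cells if sst < threshold]
    return min(distances) if distances else float("inf")

# toy neighbourhood around an overheated cell at (0 N, 0 E)
cells = [(0.0, 5.0, 29.5), (0.0, 10.0, 27.8), (5.0, 0.0, 28.9)]
print(thermal_displacement(cells, origin=(0.0, 0.0), threshold=28.0))  # ~1112 km
```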

    In higher-latitude regions, such as the Tasman Sea, relief tended to be much closer, within a few tens of kilometers of the overheated patch, the researchers found. So while ocean heat waves in that region might spell doom for firmly rooted corals and kelp, mobile species might fare better. “We were surprised that the displacements were so small,” Jacox says.
    But in the tropics, where ocean temperatures are more uniform, species may have had to travel thousands of kilometers to escape the heat.  
    Projecting how species might move around in the future due to marine heat waves gets increasingly complicated, the researchers found. That’s because over the next few decades, climate change is anticipated to cause not just an increase in frequency and intensity of marine heat waves, but also warming of all of Earth’s ocean waters (SN: 9/25/19). Furthermore, that rate of warming will vary from place to place. As a result, future thermal displacement could increase in some parts of the ocean relative to today, and decrease in others, writes marine ecologist Mark Payne of the Technical University of Denmark in Copenhagen, in a commentary in the same issue of Nature.
    That complexity highlights the task ahead for researchers trying to anticipate changes across ocean ecosystems as the waters warm, says Lewis Barnett, a Seattle-based NOAA fish biologist, who was not involved in the study. The new work provides important context for data being collected on fish stocks. For example, surveys of the Gulf of Alaska in 2017 noted a large decline in the abundance of valuable Pacific cod, now known to be linked to the Blob heatwave that had ended the year before.
    But there’s a lot more work to be done, Barnett says.
    The study focuses on surface ocean temperatures, but ocean conditions and dynamics are different in the deep ocean, he notes. Some species, too, move more easily between water depths than others. And heat tolerance also varies from species to species. Biologists are racing to understand these differences, and how hot waters can affect the life cycles and distributions of many different animals.
    The effects of marine heat waves might be ephemeral compared with the impacts of long-term climate change. But these extreme events offer a peek into the future, says Malin Pinsky, a marine ecologist at Rutgers University in New Brunswick, N.J., who was not involved in the study. “We can use these heat waves as lessons for how we’ll need to adapt.”

  • Predictions for the 2020 Atlantic hurricane season just got worse

    Chalk up one more way 2020 could be an especially stressful year: The Atlantic hurricane season now threatens to be even more severe than preseason forecasts predicted, and may be one of the busiest on record.
    With as many as 25 named storms now expected — twice the average number — 2020 is shaping up to be an “extremely active” season with more frequent, longer and stronger storms, the National Oceanic and Atmospheric Administration warns. Wind patterns and warmer-than-normal seawater have conspired to prime the Atlantic Ocean for a particularly fitful year — although it is not yet clear whether climate change had a hand in creating such hurricane-friendly conditions. “Once the season ends, we’ll study it within the context of the overall climate record,” Gerry Bell, lead seasonal hurricane forecaster at NOAA’s Climate Prediction Center, said during an Aug. 6 news teleconference.
    The 2020 hurricane season is already off to a rapid start, with a record-high nine named storms by early August, including two hurricanes. The average season, which runs June through November, sees two named storms by this time of year.
    “We are now entering the peak months of the Atlantic hurricane season, August through October,” National Weather Service Director Louis Uccellini said in the news teleconference. “Given the activity we have seen so far this season, coupled with the ongoing challenges that communities face in light of COVID-19, now is the time to organize your family plan and make necessary preparations.”

    Storms get names once they have sustained wind speeds of at least 63 kilometers per hour. In April, forecasters predicted there would be 18 named storms, with half reaching hurricane status (SN: 4/16/20). Now, NOAA anticipates that 2020 could deliver a total of 19 to 25 named storms. That would put this year in league with 2005, which boasted over two dozen named storms including Hurricane Katrina (SN: 8/23/15).
    Seven to 11 of this year’s named storms could become hurricanes, including three to six major hurricanes of Category 3 or higher, NOAA predicts. By contrast, the average season brings 12 named storms and six hurricanes, including three major ones.
    Given that heightened activity, NOAA projects that 2020 will have an Accumulated Cyclone Energy, or ACE, value between 140 and 230 percent of the norm. That value accounts for both the duration and intensity of all of a season’s named storms, and seasons that exceed 165 percent of the average ACE value qualify as “extremely active.”
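    For context, ACE has a simple standard definition: sum the squares of each storm’s maximum sustained wind speed (in knots), sampled every six hours while the storm is at tropical-storm strength or stronger, then divide by 10,000. The sketch below is a minimal illustration of that formula using a made-up wind record; it is not NOAA’s forecast calculation.

```python
# Minimal sketch of the standard ACE formula (toy wind record, not NOAA data):
# ACE = 1e-4 * sum of squared 6-hourly max sustained winds (knots) while the
# system is at tropical-storm strength (>= 35 kt) or stronger.
def accumulated_cyclone_energy(six_hourly_winds_kt):
    return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= 35)

# hypothetical short-lived hurricane: eight 6-hour wind records
print(accumulated_cyclone_energy([35, 45, 60, 70, 65, 50, 40, 30]))  # ~2.0
```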
    Researchers at Colorado State University released a similar prediction on August 5. They foresee  24 named storms in total, 12 of which could be hurricanes, including five major ones. The probability of at least one major hurricane making landfall in the continental United States before the season is up is 74 percent — compared with the average seasonal likelihood of 52 percent, the Colorado State researchers say.
    It’s hard to know how many storms in total will make landfall. But “when we do have more activity, there is a [trend] of more storms coming towards major landmasses — coming towards the U.S., coming towards Central America, and the Caribbean, and even sometimes up towards Canada,” says meteorologist Matthew Rosencrans of NOAA’s Climate Prediction Center in College Park, Md.
    Two main climate patterns are setting the stage for an extremely intense hurricane season, says Jhordanne Jones, an atmospheric scientist at Colorado State in Fort Collins. Warmer-than-normal sea surface temperatures in the tropical Atlantic are poised to fuel stronger storms. What’s more, there are hints that La Niña may develop around the height of Atlantic hurricane season. La Niña, the flip side of El Niño, is a naturally occurring climate cycle that brings cooler waters to the tropical Pacific, changing wind patterns over that ocean (SN: 1/26/15). The effects of that disturbance in air circulation can be felt across the globe, suppressing winds over the Atlantic that might otherwise pull tropical storms apart.

  • Updating Turing's model of pattern formation

    In 1952, Alan Turing published a study which described mathematically how systems composed of many living organisms can form rich and diverse arrays of orderly patterns. He proposed that this ‘self-organisation’ arises from instabilities in un-patterned systems, which can form as different species jostle for space and resources. So far, however, researchers have struggled to reproduce Turing patterns in laboratory conditions, raising serious doubts about the theory’s applicability. In a new study published in EPJ B, researchers led by Malbor Asllani at the University of Limerick, Ireland, have revisited Turing’s theory to prove mathematically how instabilities can occur through simple reactions, and in widely varied environmental conditions.

    The team’s results could help biologists to better understand the origins of many ordered structures in nature, from spots and stripes on animal coats, to clusters of vegetation in arid environments. In Turing’s original model, he introduced two diffusing chemical species to different points on a closed ring of cells. As they diffused across adjacent cells, these species ‘competed’ with each other as they interacted; eventually organising to form patterns. This pattern formation depended on the fact that the symmetry during this process could be broken to different degrees, depending on the ratio between the diffusion speeds of each species; a mechanism now named the ‘Turing instability.’ However, a significant drawback of Turing’s mechanism was that it relied on the unrealistic assumption that many chemicals diffuse at different paces.
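    To make the role of unequal diffusion concrete, the sketch below encodes the textbook linear-stability criterion for the classical Turing instability. This is the standard two-species result shown for context only, with arbitrary example parameter values; it is not the new paper’s asymmetric-transport analysis: a steady state that is stable without diffusion becomes unstable to patterning only when the two diffusion rates differ enough.

```python
# Textbook Turing-instability test for a two-species reaction-diffusion system
# (standard criterion, for context; not the analysis in the new EPJ B paper).
# fu, fv, gu, gv are Jacobian entries of the reaction kinetics at a homogeneous
# steady state; Du, Dv are the two species' diffusion coefficients.
import math

def classical_turing_instability(fu, fv, gu, gv, Du, Dv):
    trace, det = fu + gv, fu * gv - fv * gu
    if not (trace < 0 and det > 0):   # steady state must be stable without diffusion
        return False
    # diffusion-driven instability: some spatial mode can grow
    return Dv * fu + Du * gv > 2 * math.sqrt(Du * Dv * det)

# With equal diffusion the criterion can never be met; with a large enough
# ratio of diffusion rates the same kinetics produce patterns.
print(classical_turing_instability(1.0, -2.0, 1.0, -1.5, Du=1.0, Dv=1.0))   # False
print(classical_turing_instability(1.0, -2.0, 1.0, -1.5, Du=1.0, Dv=20.0))  # True
```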
    Through their calculations, Asllani’s team showed that in sufficiently large rings of cells, where diffusion asymmetry causes both species to travel in the same direction, the instabilities which generate ordered patterns will always arise — even when competing chemicals diffuse at the same rate. Once formed, the patterns will either remain stationary, or propagate steadily around the ring as waves. The team’s result addresses one of Turing’s key concerns about his own theory, and is an important step forward in our understanding of the innate drive for living systems to organise themselves.

    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

    Journal Reference:
    Malbor Asllani, Timoteo Carletti, Duccio Fanelli, Philip K. Maini. A universal route to pattern formation in multicellular systems. The European Physical Journal B, 2020; 93 (7) DOI: 10.1140/epjb/e2020-10206-3
