More stories

  •

    Mathematical patterns developed by Alan Turing help researchers understand bird behavior

    Scientists from the University of Sheffield have used mathematical modelling to understand why flocks of long-tailed tits segregate themselves into different parts of the landscape.
    The team tracked the birds around Sheffield’s Rivelin Valley, where the flocks eventually produced a pattern across the landscape; mathematical modelling then helped the team reveal the behaviours causing these patterns.
    The findings, published in the Journal of Animal Ecology, show that flocks of long-tailed tits are less likely to avoid places where they have interacted with relatives and more likely to avoid larger flocks, whilst preferring the centre of woodland.
    It was previously unknown why flocks of long-tailed tits live in separate parts of the same area, despite there being plenty of food to sustain multiple flocks and the birds not showing territorial behaviour.
    The equations used to understand the birds are similar to those developed by Alan Turing to describe how animals get their spotted and striped patterns. Turing’s famous mathematics indicates whether patterns will appear as an animal grows in the womb; here it is used to find out which behaviours lead to the patterns across the landscape.
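    The Turing mechanism these equations draw on can be sketched in a few lines: two substances diffuse at different rates and react, and the mismatch destabilises a uniform state into a pattern. The sketch below uses the textbook Gray-Scott system, not the study’s actual bird-movement equations, and all parameter values are illustrative.

    ```python
    import numpy as np

    def gray_scott_1d(n=200, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
        """Evolve a 1D Gray-Scott reaction-diffusion system on a periodic domain.

        Two chemicals diffuse at different rates and react; as Turing showed,
        this mismatch can destabilise a uniform state into a spatial pattern.
        """
        rng = np.random.default_rng(0)
        u = np.ones(n)
        v = np.zeros(n)
        mid = slice(n // 2 - 5, n // 2 + 5)          # seed a small perturbation
        u[mid], v[mid] = 0.5, 0.25
        v += 0.01 * rng.random(n)
        lap = lambda a: np.roll(a, 1) + np.roll(a, -1) - 2 * a  # discrete Laplacian
        for _ in range(steps):
            uvv = u * v * v
            u += dt * (Du * lap(u) - uvv + f * (1 - u))
            v += dt * (Dv * lap(v) + uvv - (f + k) * v)
        return u, v

    u, v = gray_scott_1d()
    print(f"pattern contrast in v: {v.max() - v.min():.3f}")  # nonzero once structure forms
    ```

    The same qualitative question — which interaction rules produce stable spatial structure — is what the researchers asked of the flock data.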
    Territorial animals often live in segregated areas that they aggressively defend, staying close to their dens. Before this study, these mathematical ideas had been used to understand the patterns made by territorial animals such as coyotes, meerkats and even human gangs. However, this study was the first to use the ideas on non-territorial animals with no den pinning them in place.
    Natasha Ellison, PhD student at the University of Sheffield who led the study, said: “Mathematical models help us understand nature in an extraordinary amount of ways and our study is a fantastic example of this.”
    “Long-tailed tits are too small to be fitted with GPS trackers like larger animals, so researchers follow these tiny birds on foot, listening for bird calls and identifying birds with binoculars. The field work is extremely time consuming and without the help of these mathematical models these behaviours wouldn’t have been discovered.”

    Story Source:
    Materials provided by University of Sheffield. Note: Content may be edited for style and length.

  •

    Classifying galaxies with artificial intelligence

    Astronomers have applied artificial intelligence (AI) to ultra-wide field-of-view images of the distant Universe captured by the Subaru Telescope, and have achieved a very high accuracy for finding and classifying spiral galaxies in those images. This technique, in combination with citizen science, is expected to yield further discoveries in the future.
    A research group consisting mainly of astronomers from the National Astronomical Observatory of Japan (NAOJ) applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to the telescope’s high sensitivity, as many as 560,000 galaxies were detected in the images. Processing this many galaxies one by one by eye for morphological classification would be extremely difficult. The AI enabled the team to perform the processing without human intervention.
    Automated processing techniques for extraction and judgment of features with deep-learning algorithms have been rapidly developed since 2012. Now they usually surpass humans in terms of accuracy and are used for autonomous vehicles, security cameras, and many other applications. Dr. Ken-ichi Tadaki, a Project Assistant Professor at NAOJ, came up with the idea that if AI can classify images of cats and dogs, it should be able to distinguish “galaxies with spiral patterns” from “galaxies without spiral patterns.” Indeed, using training data prepared by humans, the AI successfully classified the galaxy morphologies with an accuracy of 97.5%. Then applying the trained AI to the full data set, it identified spirals in about 80,000 galaxies.
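    The supervised workflow described here — train on human-labelled examples, then score held-out accuracy — can be sketched with a toy model. The two-number "features" and logistic-regression classifier below are made-up stand-ins for the team’s deep network and real images, chosen only to show the train/test shape of the task.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy stand-in for the classification workflow (not the NAOJ network):
    # each "galaxy" is reduced to two invented features (say, arm contrast
    # and azimuthal asymmetry), with spirals scoring higher on both.
    n = 1000
    spiral = rng.integers(0, 2, n)                     # 1 = spiral, 0 = non-spiral
    feats = rng.normal(0, 1, (n, 2)) + spiral[:, None] * 2.0

    # Logistic regression trained by gradient descent on a human-labelled split.
    train, test = slice(0, 800), slice(800, None)
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(feats[train] @ w + b)))  # predicted probabilities
        grad = p - spiral[train]                       # logistic-loss gradient
        w -= 0.1 * feats[train].T @ grad / 800
        b -= 0.1 * grad.mean()

    pred = (feats[test] @ w + b) > 0
    acc = (pred == spiral[test].astype(bool)).mean()
    print(f"held-out accuracy: {acc:.1%}")
    ```

    The real study followed the same pattern at scale: train on human classifications, validate (reaching 97.5%), then sweep the full 560,000-galaxy dataset.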
    Now that this technique has been proven effective, it can be extended to classify galaxies into more detailed classes, by training the AI on the basis of a substantial number of galaxies classified by humans. NAOJ is now running a citizen-science project “GALAXY CRUISE,” where citizens examine galaxy images taken with the Subaru Telescope to search for features suggesting that the galaxy is colliding or merging with another galaxy. The advisor of “GALAXY CRUISE,” Associate Professor Masayuki Tanaka has high hopes for the study of galaxies using artificial intelligence and says, “The Subaru Strategic Program is serious Big Data containing an almost countless number of galaxies. Scientifically, it is very interesting to tackle such big data with a collaboration of citizen astronomers and machines. By employing deep-learning on top of the classifications made by citizen scientists in GALAXY CRUISE, chances are, we can find a great number of colliding and merging galaxies.”

    Story Source:
    Materials provided by National Institutes of Natural Sciences. Note: Content may be edited for style and length.

  •

    Electronic components join forces to take up 10 times less space on computer chips

    Electronic filters are essential to the inner workings of our phones and other wireless devices. They eliminate or enhance specific input signals to achieve the desired output signals. They are essential, but take up space on the chips that researchers are on a constant quest to make smaller. A new study demonstrates the successful integration of the individual elements that make up electronic filters onto a single component, significantly reducing the amount of space taken up by the device.
    Researchers at the University of Illinois at Urbana-Champaign have ditched the conventional 2D on-chip lumped or distributed filter network design — composed of separate inductors and capacitors — for a single, space-saving 3D rolled membrane that contains both independently designed elements.
    The results of the study, led by electrical and computer engineering professor Xiuling Li, are published in the journal Advanced Functional Materials.
    “With the success that our team has had on rolled inductors and capacitors, it makes sense to take advantage of the 2D to 3D self-assembly nature of this fabrication process to integrate these different components onto a single self-rolling and space-saving device,” Li said.
    In the lab, the team uses a specialized etching and lithography process to pattern 2D circuitry onto very thin membranes. In the circuit, they join the capacitors and inductors together and with ground or signal lines, all in a single plane. The multilayer membrane can then be rolled into a thin tube and placed onto a chip, the researchers said.
    “The patterns, or masks, we use to form the circuitry on the 2D membrane layers can be tuned to achieve whatever kind of electrical interactions we need for a particular device,” said graduate student and co-author Mark Kraman. “Experimenting with different filter designs is relatively simple using this technique because we only need to modify that mask structure when we want to make changes.”
    The team tested the performance of the rolled components and found that under the current design, the filters were suitable for applications in the 1-10 gigahertz frequency range, the researchers said. While the designs are targeted for use in radio frequency communications systems, the team posits that other frequencies, including in the megahertz range, are also possible based on their ability to achieve high power inductors in past research.
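    For reference, the kind of lumped LC filtering the rolled membrane implements can be sketched with a textbook series-RLC band-pass response. The component values below are illustrative, chosen only to put the passband inside the 1-10 GHz range mentioned in the study; they are not from the paper.

    ```python
    import numpy as np

    # Textbook series-RLC band-pass filter: output taken across the resistor.
    # Illustrative values centring the passband near 5 GHz.
    R, L, C = 50.0, 1e-9, 1.013e-12           # ohms, henries, farads
    f = np.linspace(1e9, 10e9, 1000)          # the 1-10 GHz range from the study
    w = 2 * np.pi * f
    H = R / (R + 1j * (w * L - 1 / (w * C)))  # complex transfer function
    f0 = f[np.argmax(np.abs(H))]              # resonance: f0 = 1/(2*pi*sqrt(L*C))
    print(f"peak response near {f0/1e9:.2f} GHz, |H| = {np.abs(H).max():.3f}")
    ```

    Changing the mask patterns in the rolled process plays the same role as changing L and C here: it retunes which frequencies pass and which are rejected.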
    “We worked with several simple filter designs, but theoretically we can make any filter network combination using the same process steps,” said graduate student and lead author Mike Yang. “We took what was already out there to provide a new, easier platform to lump these components together closer than ever.”
    “Our way of integrating inductors and capacitors monolithically could bring passive electronic circuit integration to a whole new level,” Li said. “There is practically no limit to the complexity or configuration of circuits that can be made in this manner, all with one mask set.”

    Story Source:
    Materials provided by University of Illinois at Urbana-Champaign, News Bureau. Original written by Lois Yoksoulian. Note: Content may be edited for style and length.

  •

    Using air to amplify light

    “The idea had been going around my head for about 15 years, but I never had the time or the resources to do anything about it.” But now Luc Thévenaz, the head of the Fiber Optics Group in EPFL’s School of Engineering, has finally made it happen: his lab has developed a technology to amplify light inside the latest hollow-core optical fibers.
    Squaring the circle
    Today’s optical fibers usually have a solid glass core, so there’s no air inside. Light can travel along the fibers but loses half of its intensity after 15 kilometers. It keeps weakening until it can hardly be detected at 300 kilometers. So to keep the light moving, it has to be amplified at regular intervals.
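    Those loss figures can be sanity-checked with a quick calculation: halving over 15 kilometers corresponds to roughly 0.2 dB/km, the standard attenuation of solid-core fibre, which leaves only about a millionth of the light after 300 kilometers.

    ```python
    import math

    # Back-of-the-envelope check of the attenuation figures quoted above:
    # losing half the intensity over 15 km fixes the loss rate in dB/km.
    alpha = 10 * math.log10(2) / 15           # ~0.2 dB per km
    frac_300 = 10 ** (-alpha * 300 / 10)      # fraction remaining after 300 km
    print(f"attenuation ~{alpha:.2f} dB/km; after 300 km only {frac_300:.1e} remains")
    ```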
    Thévenaz’s approach is based on new hollow-core optical fibers that are filled with either air or gas. “The air means there’s less attenuation, so the light can travel over a longer distance. That’s a real advantage,” says the professor. But in a thin substance like air, the light is harder to amplify. “That’s the crux of the problem: light travels faster when there’s less resistance, but at the same time it’s harder to act on. Luckily, our discovery has squared that circle.”
    From infrared to ultraviolet
    So what did the researchers do? “We just added pressure to the air in the fiber to give us some controlled resistance,” explains Fan Yang, a postdoctoral researcher. “It works in a similar way to optical tweezers — the air molecules are compressed and form into regularly spaced clusters. This creates a sound wave that increases in amplitude and effectively diffracts the light from a powerful source towards the weakened beam so that it is amplified up to 100,000 times.” Their technique therefore makes the light considerably more powerful. “Our technology can be applied to any type of light, from infrared to ultraviolet, and to any gas,” he explains. Their findings have just been published in Nature Photonics.
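    For scale, the quoted 100,000-fold amplification corresponds to 50 dB of gain. In this kind of stimulated-scattering amplifier, gain grows exponentially along the fibre; the formula below is the generic textbook form G = exp(g_eff·L), not an expression taken from the paper.

    ```python
    import math

    # The quoted 100,000x amplification, expressed in decibels and as the
    # exponent of a generic exponential-gain amplifier G = exp(g_eff * L).
    G = 1e5
    gain_db = 10 * math.log10(G)      # 50 dB of gain
    exponent = math.log(G)            # dimensionless g_eff * L ~ 11.5
    print(f"{gain_db:.0f} dB of gain, gain exponent g_eff*L = {exponent:.1f}")
    ```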
    An extremely accurate thermometer
    Going forward, the technology could serve other purposes in addition to light amplification. Hollow-core or compressed-gas optical fibers could, for instance, be used to make extremely accurate thermometers. “We’ll be able to measure temperature distribution at any point along the fiber. So if a fire starts along a tunnel, we’ll know exactly where it began based on the increased temperature at a given point,” says Flavien Gyger, PhD student. The technology could also be used to create a temporary optical memory by stopping the light in the fiber for a microsecond — that’s ten times longer than is currently possible.

    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Valérie Geneux. Note: Content may be edited for style and length.

  •

    NIST's SAMURAI measures 5G communications channels precisely

    Engineers at the National Institute of Standards and Technology (NIST) have developed a flexible, portable measurement system to support design and repeatable laboratory testing of fifth-generation (5G) wireless communications devices with unprecedented accuracy across a wide range of signal frequencies and scenarios.
    The system is called SAMURAI, short for Synthetic Aperture Measurements of Uncertainty in Angle of Incidence. The system is the first to offer 5G wireless measurements with accuracy that can be traced to fundamental physical standards — a key feature because even tiny errors can produce misleading results. SAMURAI is also small enough to be transported to field tests.
    Mobile devices such as cellphones, consumer Wi-Fi devices and public-safety radios now mostly operate at electromagnetic frequencies below 3 gigahertz (GHz) with antennas that radiate equally in all directions. Experts predict 5G technologies could boost data rates a thousandfold by using higher, “millimeter-wave” frequencies above 24 GHz and highly directional, actively changing antenna patterns. Such active antenna arrays help to overcome losses of these higher-frequency signals during transmission. 5G systems also send signals over multiple paths simultaneously — so-called spatial channels — to increase speed and overcome interference.
    Many instruments can measure some aspects of directional 5G device and channel performance. But most focus on collecting quick snapshots over a limited frequency range to provide a general overview of a channel, whereas SAMURAI provides a detailed portrait. In addition, many instruments are so physically large that they can distort millimeter-wave signal transmissions and reception.
    Described at a conference on Aug. 7, SAMURAI is expected to help resolve many unanswered questions surrounding 5G’s use of active antennas, such as what happens when high data rates are transmitted across multiple channels at once. The system will help improve theory, hardware and analysis techniques to provide accurate channel models and efficient networks.
    “SAMURAI provides a cost-effective way to study many millimeter-wave measurement issues, so the technique will be accessible to academic labs as well as instrumentation metrology labs,” NIST electronics engineer Kate Remley said. “Because of its traceability to standards, users can have confidence in the measurements. The technique will allow better antenna design and performance verification, and support network design.”
    SAMURAI measures signals across a wide frequency range, currently up to 50 GHz, extending to 75 GHz in the coming year. The system got its name because it measures received signals at many points over a grid or virtual “synthetic aperture.” This allows reconstruction of incoming energy in three dimensions — including the angles of the arriving signals — which is affected by many factors, such as how the signal’s electric field reflects off of objects in the transmission path.
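    The angle-reconstruction idea can be illustrated with a minimal synthetic-aperture sketch: sample an incoming plane wave’s phase at many grid points along a line, then scan candidate angles for the best match. This is a textbook beamforming toy, not NIST’s pipeline; the 28 GHz frequency and 32-point grid are assumptions chosen for illustration.

    ```python
    import numpy as np

    # Idealised synthetic-aperture sketch: recover a plane wave's direction
    # of arrival from its phase across a line of measurement points.
    c = 3e8
    f = 28e9                                    # an illustrative mmWave frequency
    lam = c / f
    true_angle = np.deg2rad(25)
    pos = np.arange(32) * lam / 2               # half-wavelength grid spacing
    field = np.exp(1j * 2 * np.pi / lam * pos * np.sin(true_angle))

    # Scan candidate angles; the steering vector that best matches the
    # measured phases marks the direction of the incoming energy.
    angles = np.deg2rad(np.linspace(-90, 90, 721))
    steer = np.exp(-1j * 2 * np.pi / lam * pos[None, :] * np.sin(angles)[:, None])
    power = np.abs(steer @ field) ** 2
    est = np.rad2deg(angles[np.argmax(power)])
    print(f"estimated angle of arrival: {est:.1f} degrees")
    ```

    SAMURAI does the equivalent in three dimensions, with a robot arm tracing the grid and traceable calibration bounding the error at each point.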
    SAMURAI can be applied to a variety of tasks from verifying the performance of wireless devices with active antennas to measuring reflective channels in environments where metallic objects scatter signals. NIST researchers are currently using SAMURAI to develop methods for testing industrial Internet of Things devices at millimeter-wave frequencies.
    The basic components are two antennas to transmit and receive signals, instrumentation with precise timing synchronization to generate radio transmissions and analyze reception, and a six-axis robotic arm that positions the receive antenna to the grid points that form the synthetic aperture. The robot ensures accurate and repeatable antenna positions and traces out a variety of reception patterns in 3D space, such as cylindrical and hemispherical shapes. A variety of small metallic objects such as flat plates and cylinders can be placed in the test setup to represent buildings and other real-world impediments to signal transmission. To improve positional accuracy, a system of 10 cameras is also used to track the antennas and measure the locations of objects in the channel that scatter signals.
    The system is typically attached to an optical table measuring 5 feet by 14 feet (1.5 meters by 4.3 meters). But the equipment is portable enough to be used in mobile field tests and moved to other laboratory settings. Wireless communications research requires a mix of lab tests — which are well controlled to help isolate specific effects and verify system performance — and field tests, which capture the range of realistic conditions.
    Measurements can require hours to complete, so all aspects of the (stationary) channel are recorded for later analysis. These values include environmental factors such as temperature and humidity, location of scattering objects, and drift in accuracy of the measurement system.
    The NIST team developed SAMURAI with collaborators from the Colorado School of Mines in Golden, Colorado. Researchers have verified the basic operation and are now incorporating uncertainty due to unwanted reflections from the robotic arm, position error and antenna patterns into the measurements.

  •

    Aquatic robots can remove contaminant particles from water

    Corals in the ocean are made up of coral polyps, small soft creatures with a stem and tentacles. The polyps nourish the corals and aid their survival by generating self-made currents through the motion of their soft bodies.
    Scientists from WMG at the University of Warwick, in a project led by Eindhoven University of Technology in the Netherlands, have developed a 1cm by 1cm wireless artificial aquatic polyp, which can remove contaminants from water. Apart from cleaning, this soft robot could also be used in medical diagnostic devices by aiding in picking up and transporting specific cells for analysis.
    In the paper, ‘An artificial aquatic polyp that wirelessly attracts, grasps, and releases objects’ researchers demonstrate how their artificial aquatic polyp moves under the influence of a magnetic field, while the tentacles are triggered by light. A rotating magnetic field under the device drives a rotating motion of the artificial polyp’s stem. This motion results in the generation of an attractive flow which can guide suspended targets, such as oil droplets, towards the artificial polyp.
    Once the targets are within reach, UV light can be used to activate the polyp’s tentacles, composed of photo-active liquid crystal polymers, which then bend towards the light enclosing the passing target in the polyp’s grasp. Target release is then possible through illumination with blue light.
    Dr Harkamaljot Kandail, from WMG, University of Warwick, was responsible for creating state-of-the-art 3D simulations of the artificial aquatic polyps. The simulations are important for understanding and elucidating how the stem and tentacles generate the flow fields that attract the particles in the water.
    The simulations were then used to optimise the shape of the tentacles so that the floating particles could be grabbed quickly and efficiently.
    Dr Harkamaljot Kandail, from WMG, University of Warwick comments:
    “Corals are such a valuable ecosystem in our oceans, I hope that the artificial aquatic polyps can be further developed to collect contaminant particles in real applications. The next stage for us to overcome before being able to do this is to successfully scale up the technology from laboratory to pilot scale. To do so we need to design an array of polyps which work harmoniously together where one polyp can capture the particle and pass it along for removal.”
    Marina Pilz Da Cunha, from the Eindhoven University of Technology, Netherlands adds:
    “The artificial aquatic polyp serves as a proof of concept to demonstrate the potential of actuator assemblies and serves as an inspiration for future devices. It exemplifies how motion of different stimuli-responsive polymers can be harnessed to perform wirelessly controlled tasks in an aquatic environment.”

    Story Source:
    Materials provided by University of Warwick. Note: Content may be edited for style and length.

  •

    Math shows how brain stays stable amid internal noise and a widely varying world

    Whether you are playing Go in a park, amid chirping birds, a gentle breeze and kids playing catch nearby, or in a den with a ticking clock on a bookcase and a purring cat on the sofa, if the game situation is identical and clear, your next move likely will be, too. You’ll still play the same move despite a wide range of internal feelings, or even if a few neurons here and there are being a little erratic. How does the brain overcome unpredictable and varying disturbances to produce reliable and stable computations? A new study by MIT neuroscientists provides a mathematical model showing how such stability inherently arises from several known biological mechanisms.
    More fundamental than the willful exertion of cognitive control over attention, the model the team developed describes an inclination toward robust stability that is built into neural circuits by virtue of the connections, or “synapses,” that neurons make with each other. The equations they derived and published in PLOS Computational Biology show that networks of neurons involved in the same computation will repeatedly converge toward the same patterns of electrical activity, or “firing rates,” even if they are sometimes arbitrarily perturbed by the natural noisiness of individual neurons or arbitrary sensory stimuli the world can produce.
    “How does the brain make sense of this highly dynamic, non-linear nature of neural activity?” said co-senior author Earl Miller, Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences (BCS) at MIT. “The brain is noisy, there are different starting conditions — how does the brain achieve a stable representation of information in the face of all these factors that can knock it around?”
    To find out, Miller’s lab, which studies how neural networks represent information, joined forces with BCS colleague and mechanical engineering Professor Jean-Jacques Slotine, who leads the Nonlinear Systems Laboratory at MIT. Slotine brought the mathematical method of “contraction analysis,” a concept developed in control theory, to the problem along with tools his lab developed to apply the method. Contracting networks exhibit the property that trajectories starting from disparate points ultimately converge into one trajectory, like tributaries in a watershed. They do so even when the inputs vary with time. They are robust to noise and disturbance, and they allow many other contracting networks to be combined without a loss of overall stability — much like the brain typically integrates information from many specialized regions.
    “In a system like the brain where you have [hundreds of billions] of connections the questions of what will preserve stability and what kinds of constraints that imposes on the system’s architecture become very important,” Slotine said.
    Math reflects natural mechanisms
    Leo Kozachkov, a graduate student in both Miller’s and Slotine’s labs, led the study by applying contraction analysis to the problem of the stability of computations in the brain. What he found is that the variables and terms in the resulting equations that enforce stability directly mirror properties and processes of synapses: inhibitory circuit connections can get stronger, excitatory circuit connections can get weaker, both kinds of connections are typically tightly balanced relative to each other, and neurons make far fewer connections than they could (each neuron, on average, could make roughly 10 million more connections than it does).
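    A toy illustration of the contraction property (not the paper’s actual model): in a simple firing-rate network whose weights are kept weak and balanced, mirroring the synaptic properties listed above, any two trajectories converge no matter how differently they start.

    ```python
    import numpy as np

    # Toy contracting rate network: dx/dt = -x + W * tanh(x).
    # Weak, balanced random weights (spectral norm < 1) guarantee contraction,
    # so trajectories from different starting points collapse together.
    rng = np.random.default_rng(1)
    n = 50
    W = rng.normal(0, 0.3 / np.sqrt(n), (n, n))   # weak, balanced weights

    def run(x, steps=3000, dt=0.01):
        for _ in range(steps):
            x = x + dt * (-x + W @ np.tanh(x))    # explicit Euler integration
        return x

    x1 = run(rng.normal(0, 1, n))                 # two very different starts
    x2 = run(rng.normal(0, 1, n))
    print(f"distance between trajectories: {np.linalg.norm(x1 - x2):.2e}")
    ```

    The study’s contribution is showing that the biological features below enforce exactly this kind of condition in far richer, synaptically dynamic networks.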


    “These are all things that neuroscientists have found, but they haven’t linked them to this stability property,” Kozachkov said. “In a sense, we’re synthesizing some disparate findings in the field to explain this common phenomenon.”
    The new study, which also involved Miller lab postdoc Mikael Lundqvist, was hardly the first to grapple with stability in the brain, but the authors argue it has produced a more advanced model by accounting for the dynamics of synapses and by allowing for wide variations in starting conditions. It also offers mathematical proofs of stability, Kozachkov added.
    Though focused on the factors that ensure stability, the authors noted, their model does not go so far as to doom the brain to inflexibility or determinism. The brain’s ability to change — to learn and remember — is just as fundamental to its function as its ability to consistently reason and formulate stable behaviors.
    “We’re not asking how the brain changes,” Miller said. “We’re asking how the brain keeps from changing too much.”
    Still, the team plans to keep iterating on the model, for instance by encompassing a richer accounting for how neurons produce individual spikes of electrical activity, not just rates of that activity.
    They are also working to compare the model’s predictions with data from experiments in which animals repeatedly performed tasks in which they needed to perform the same neural computations, despite experiencing inevitable internal neural noise and at least small sensory input differences.
    Finally, the team is considering how the models may inform understanding of different disease states of the brain. Aberrations in the delicate balance of excitatory and inhibitory neural activity in the brain are considered crucial in epilepsy, Kozachkov notes. A symptom of Parkinson’s disease likewise entails a neurally rooted loss of motor stability. Miller adds that some patients with autism spectrum disorders struggle to stably repeat actions (e.g. brushing teeth) when external conditions vary (e.g. brushing in a different room).
    The National Institute of Mental Health, the Office of Naval Research, the National Science Foundation and the JPB Foundation supported the research.

  •

    Grasshopper jumping on Bloch sphere finds new quantum insights

    New research at the University of Warwick has (pardon the pun) put a new spin on a mathematical analogy involving a jumping grasshopper and its ideal lawn shape. This work could help us understand the spin states of quantum-entangled particles.
    The grasshopper problem was devised by physicists Olga Goulko (then at UMass Amherst), Adrian Kent and Damián Pitalúa-García (Cambridge). They asked for the ideal lawn shape that would maximize the chance that a grasshopper, starting from a random position on the lawn and jumping a fixed distance in a random direction, lands back on the lawn. Intuitively one might expect the answer to be a circular lawn, at least for small jumps. But Goulko and Kent actually proved otherwise: various shapes from a cogwheel pattern to some disconnected patches of lawn performed better for different jump sizes.
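    The setup is easy to reproduce numerically. The Monte Carlo sketch below estimates the grasshopper’s survival probability on a unit-area circular lawn for a given jump distance — the quantity the ideal lawn shape maximises (and which, as the proof shows, a disc does not). The jump distances are illustrative.

    ```python
    import numpy as np

    def disc_survival(d, trials=200_000, seed=0):
        """Monte Carlo chance that a jump of length d lands back on a
        unit-area circular lawn, starting from a uniform random point."""
        rng = np.random.default_rng(seed)
        R = 1 / np.sqrt(np.pi)                      # disc of area 1
        # uniform start points inside the disc
        r = R * np.sqrt(rng.random(trials))
        th = 2 * np.pi * rng.random(trials)
        x, y = r * np.cos(th), r * np.sin(th)
        # jump of fixed length d in a uniformly random direction
        phi = 2 * np.pi * rng.random(trials)
        x2, y2 = x + d * np.cos(phi), y + d * np.sin(phi)
        return np.mean(x2**2 + y2**2 <= R**2)

    for d in (0.01, 0.3, 0.5):
        print(f"jump {d}: survival on disc = {disc_survival(d):.3f}")
    ```

    Comparing such estimates across candidate lawn shapes is what reveals the cogwheels and disconnected patches beating the disc.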
    Beyond surprises about lawn shapes and grasshoppers, the research provided useful insight into Bell-type inequalities relating probabilities of the spin states of two separated quantum-entangled particles. The Bell inequality, proved by physicist John Stewart Bell in 1964 and later generalised in many ways, demonstrated that no combination of classical theories with Einstein’s special relativity is able to explain the predictions (and later actual experimental observations) of quantum theory.
    The next step was to test the grasshopper problem on a sphere. The Bloch sphere is a geometrical representation of the state space of a single quantum bit. A great circle on the Bloch sphere defines linear polarization measurements, which are easily implemented and commonly used in Bell and other cryptographic tests. Because of the antipodal symmetry for the Bloch sphere, a lawn covers half the total surface area, and the natural hypothesis would be that the ideal lawn is hemispherical. Researchers in the Department of Computer Science at the University of Warwick, in collaboration with Goulko and Kent, investigated this problem and found that it too requires non-intuitive lawn patterns. The main result is that the hemisphere is never optimal, except in the special case when the grasshopper needs exactly an even number of jumps to go around the equator. This research shows that there are previously unknown types of Bell inequalities.
    One of the paper’s authors — Dmitry Chistikov from the Centre for Discrete Mathematics and its Applications (DIMAP) and the Department of Computer Science, at the University of Warwick, commented:
    “Geometry on the sphere is fascinating. The sine rule, for instance, looks nicer for the sphere than the plane, but this didn’t make our job easy.”
    The other author from Warwick, Professor Mike Paterson FRS, said:
    “Spherical geometry makes the analysis of the grasshopper problem more complicated. Dmitry, being from the younger generation, used a 1948 textbook and pen-and-paper calculations, whereas I resorted to my good old Mathematica methods.”
    The paper, entitled ‘Globe-hopping’, is published in the Proceedings of the Royal Society A. It is interdisciplinary work involving mathematics and theoretical physics, with applications to quantum information theory.
    The research team, comprising Dmitry Chistikov and Mike Paterson (both University of Warwick), Olga Goulko (Boise State University, USA) and Adrian Kent (Cambridge), says that the next steps towards even more insight into quantum spin-state probabilities are to look for the most grasshopper-friendly lawns on the sphere, or even to let the grasshopper boldly go jumping in three or more dimensions.

    Story Source:
    Materials provided by University of Warwick. Note: Content may be edited for style and length.