More stories

  • Electronic components join forces to take up 10 times less space on computer chips

    Electronic filters are essential to the inner workings of our phones and other wireless devices. They eliminate or enhance specific input signals to achieve the desired output signals. They are essential, but take up space on the chips that researchers are on a constant quest to make smaller. A new study demonstrates the successful integration of the individual elements that make up electronic filters onto a single component, significantly reducing the amount of space taken up by the device.
    Researchers at the University of Illinois, Urbana-Champaign have ditched the conventional 2D on-chip lumped or distributed filter network design — composed of separate inductors and capacitors — for a single, space-saving 3D rolled membrane that contains both independently designed elements.
    The results of the study, led by electrical and computer engineering professor Xiuling Li, are published in the journal Advanced Functional Materials.
    “With the success that our team has had on rolled inductors and capacitors, it makes sense to take advantage of the 2D to 3D self-assembly nature of this fabrication process to integrate these different components onto a single self-rolling and space-saving device,” Li said.
    In the lab, the team uses a specialized etching and lithography process to pattern 2D circuitry onto very thin membranes. In the circuit, they join the capacitors and inductors together and with ground or signal lines, all in a single plane. The multilayer membrane can then be rolled into a thin tube and placed onto a chip, the researchers said.
    “The patterns, or masks, we use to form the circuitry on the 2D membrane layers can be tuned to achieve whatever kind of electrical interactions we need for a particular device,” said graduate student and co-author Mark Kraman. “Experimenting with different filter designs is relatively simple using this technique because we only need to modify that mask structure when we want to make changes.”
    The team tested the performance of the rolled components and found that, under the current design, the filters are suitable for applications in the 1-10 gigahertz frequency range. While the designs are targeted for use in radio frequency communications systems, the team posits that other frequencies, including the megahertz range, are also possible based on their ability to achieve high-power inductors in past research.
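    For a sense of scale, the passband of a simple LC section is set by its resonant frequency, f0 = 1/(2π√(LC)). The sketch below uses illustrative component values chosen for this example (the story does not quote the actual inductances and capacitances) to show how nanohenry inductors and picofarad-scale capacitors land in the 1-10 gigahertz range mentioned above.

    ```python
    import math

    def lc_resonance_hz(inductance_h, capacitance_f):
        """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of a simple LC section."""
        return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

    # Illustrative on-chip values (not taken from the paper): a few nanohenries
    # and a fraction of a picofarad put the resonance squarely in the GHz range.
    for L_nH, C_pF in [(5.0, 1.0), (2.0, 1.0), (1.0, 0.3)]:
        f0 = lc_resonance_hz(L_nH * 1e-9, C_pF * 1e-12)
        print(f"L = {L_nH} nH, C = {C_pF} pF  ->  f0 ≈ {f0 / 1e9:.1f} GHz")
    ```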
    “We worked with several simple filter designs, but theoretically we can make any filter network combination using the same process steps,” said graduate student and lead author Mike Yang. “We took what was already out there to provide a new, easier platform to lump these components together closer than ever.”
    “Our way of integrating inductors and capacitors monolithically could bring passive electronic circuit integration to a whole new level,” Li said. “There is practically no limit to the complexity or configuration of circuits that can be made in this manner, all with one mask set.”

    Story Source:
    Materials provided by University of Illinois at Urbana-Champaign, News Bureau. Original written by Lois Yoksoulian. Note: Content may be edited for style and length.

  • Using air to amplify light

    “The idea had been going around my head for about 15 years, but I never had the time or the resources to do anything about it.” But now Luc Thévenaz, the head of the Fiber Optics Group in EPFL’s School of Engineering, has finally made it happen: his lab has developed a technology to amplify light inside the latest hollow-core optical fibers.
    Squaring the circle
    Today’s optical fibers usually have a solid glass core, so there’s no air inside. Light can travel along the fibers but loses half of its intensity after 15 kilometers. It keeps weakening until it can hardly be detected at 300 kilometers. So to keep the light moving, it has to be amplified at regular intervals.
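    Those figures are just exponential attenuation at work: halving every 15 kilometers corresponds to roughly 0.2 dB/km, the typical loss of standard solid-core telecom fiber, and after 300 kilometers only about a millionth of the launched light remains. A quick check of the arithmetic (the 15 km and 300 km figures are the ones quoted above):

    ```python
    import math

    half_distance_km = 15.0   # intensity halves every 15 km, per the article
    loss_db_per_km = 10 * math.log10(2) / half_distance_km
    print(f"attenuation ≈ {loss_db_per_km:.2f} dB/km")        # ≈ 0.20 dB/km

    for d_km in (15, 150, 300):
        remaining = 0.5 ** (d_km / half_distance_km)
        print(f"after {d_km:3d} km: {remaining:.1e} of the launched intensity")
    # after 300 km, 2**-20 ≈ 1e-6 of the light is left, hence the need for
    # periodic amplification along the link
    ```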
    Thévenaz’s approach is based on new hollow-core optical fibers that are filled with either air or gas. “The air means there’s less attenuation, so the light can travel over a longer distance. That’s a real advantage,” says the professor. But in a thin substance like air, the light is harder to amplify. “That’s the crux of the problem: light travels faster when there’s less resistance, but at the same time it’s harder to act on. Luckily, our discovery has squared that circle.”
    From infrared to ultraviolet
    So what did the researchers do? “We just added pressure to the air in the fiber to give us some controlled resistance,” explains Fan Yang, postdoctoral student. “It works in a similar way to optical tweezers — the air molecules are compressed and form into regularly spaced clusters. This creates a sound wave that increases in amplitude and effectively diffracts the light from a powerful source towards the weakened beam so that it is amplified up to 100,000 times.” Their technique therefore makes the light considerably more powerful. “Our technology can be applied to any type of light, from infrared to ultraviolet, and to any gas,” he explains. Their findings have just been published in Nature Photonics.
    An extremely accurate thermometer
    Going forward, the technology could serve other purposes in addition to light amplification. Hollow-core or compressed-gas optical fibers could, for instance, be used to make extremely accurate thermometers. “We’ll be able to measure temperature distribution at any point along the fiber. So if a fire starts along a tunnel, we’ll know exactly where it began based on the increased temperature at a given point,” says Flavien Gyger, PhD student. The technology could also be used to create a temporary optical memory by stopping the light in the fiber for a microsecond — that’s ten times longer than is currently possible.

    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Valérie Geneux. Note: Content may be edited for style and length.

  • NIST's SAMURAI measures 5G communications channels precisely

    Engineers at the National Institute of Standards and Technology (NIST) have developed a flexible, portable measurement system to support design and repeatable laboratory testing of fifth-generation (5G) wireless communications devices with unprecedented accuracy across a wide range of signal frequencies and scenarios.
    The system is called SAMURAI, short for Synthetic Aperture Measurements of Uncertainty in Angle of Incidence. It is the first to offer 5G wireless measurements with accuracy that can be traced to fundamental physical standards — a key feature because even tiny errors can produce misleading results. SAMURAI is also small enough to be transported to field tests.
    Mobile devices such as cellphones, consumer Wi-Fi devices and public-safety radios now mostly operate at electromagnetic frequencies below 3 gigahertz (GHz) with antennas that radiate equally in all directions. Experts predict 5G technologies could boost data rates a thousandfold by using higher, “millimeter-wave” frequencies above 24 GHz and highly directional, actively changing antenna patterns. Such active antenna arrays help to overcome losses of these higher-frequency signals during transmission. 5G systems also send signals over multiple paths simultaneously — so-called spatial channels — to increase speed and overcome interference.
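    A rough way to see why those directional arrays matter is the frequency term in the free-space path-loss formula. Taking 28 GHz as a representative millimeter-wave band (the article only says "above 24 GHz"), the extra loss relative to a 3 GHz link over the same distance is

    \[
    \Delta\text{FSPL} = 20 \log_{10}\!\left(\frac{28\ \text{GHz}}{3\ \text{GHz}}\right) \approx 19.4\ \text{dB},
    \]

    assuming simple isotropic antennas at both ends; beamforming gain from an active antenna array is what recovers that deficit.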
    Many instruments can measure some aspects of directional 5G device and channel performance. But most focus on collecting quick snapshots over a limited frequency range to provide a general overview of a channel, whereas SAMURAI provides a detailed portrait. In addition, many instruments are so physically large that they can distort millimeter-wave signal transmissions and reception.
    Described at a conference on Aug. 7, SAMURAI is expected to help resolve many unanswered questions surrounding 5G’s use of active antennas, such as what happens when high data rates are transmitted across multiple channels at once. The system will help improve theory, hardware and analysis techniques to provide accurate channel models and efficient networks.
    “SAMURAI provides a cost-effective way to study many millimeter-wave measurement issues, so the technique will be accessible to academic labs as well as instrumentation metrology labs,” NIST electronics engineer Kate Remley said. “Because of its traceability to standards, users can have confidence in the measurements. The technique will allow better antenna design and performance verification, and support network design.”
    SAMURAI measures signals across a wide frequency range, currently up to 50 GHz, extending to 75 GHz in the coming year. The system got its name because it measures received signals at many points over a grid or virtual “synthetic aperture.” This allows reconstruction of incoming energy in three dimensions — including the angles of the arriving signals — which is affected by many factors, such as how the signal’s electric field reflects off of objects in the transmission path.
    SAMURAI can be applied to a variety of tasks from verifying the performance of wireless devices with active antennas to measuring reflective channels in environments where metallic objects scatter signals. NIST researchers are currently using SAMURAI to develop methods for testing industrial Internet of Things devices at millimeter-wave frequencies.
    The basic components are two antennas to transmit and receive signals, instrumentation with precise timing synchronization to generate radio transmissions and analyze reception, and a six-axis robotic arm that positions the receive antenna to the grid points that form the synthetic aperture. The robot ensures accurate and repeatable antenna positions and traces out a variety of reception patterns in 3D space, such as cylindrical and hemispherical shapes. A variety of small metallic objects such as flat plates and cylinders can be placed in the test setup to represent buildings and other real-world impediments to signal transmission. To improve positional accuracy, a system of 10 cameras is also used to track the antennas and measure the locations of objects in the channel that scatter signals.
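    For a sense of the positioning tolerances involved, synthetic apertures are usually sampled at half-wavelength spacing or finer to avoid spatial aliasing. The sketch below applies that standard rule of thumb at SAMURAI's frequencies; NIST's actual grid spacing is not stated in the article.

    ```python
    # Half-wavelength sample spacing for a synthetic aperture at millimeter-wave
    # frequencies (the usual spatial-Nyquist rule of thumb, not NIST's published spec).
    C = 299_792_458.0  # speed of light, m/s

    for f_ghz in (24, 50, 75):
        wavelength_mm = C / (f_ghz * 1e9) * 1e3
        print(f"{f_ghz} GHz: wavelength ≈ {wavelength_mm:.1f} mm, "
              f"half-wavelength spacing ≈ {wavelength_mm / 2:.1f} mm")
    # At 50 GHz the wavelength is about 6 mm, so grid points sit roughly 3 mm
    # apart, which is why repeatable robotic positioning and camera tracking matter.
    ```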
    The system is typically attached to an optical table measuring 5 feet by 14 feet (1.5 meters by 4.3 meters). But the equipment is portable enough to be used in mobile field tests and moved to other laboratory settings. Wireless communications research requires a mix of lab tests — which are well controlled to help isolate specific effects and verify system performance — and field tests, which capture the range of realistic conditions.
    Measurements can require hours to complete, so all aspects of the (stationary) channel are recorded for later analysis. These values include environmental factors such as temperature and humidity, location of scattering objects, and drift in accuracy of the measurement system.
    The NIST team developed SAMURAI with collaborators from the Colorado School of Mines in Golden, Colorado. Researchers have verified the basic operation and are now incorporating uncertainty due to unwanted reflections from the robotic arm, position error and antenna patterns into the measurements.

  • Aquatic robots can remove contaminant particles from water

    Corals in the ocean are made up of coral polyps: small, soft creatures with a stem and tentacles. The polyps nourish the coral and aid its survival by generating self-made currents through the motion of their soft bodies.
    Scientists from WMG at the University of Warwick, led by Eindhoven University of Technology in the Netherlands, developed a 1 cm by 1 cm wireless artificial aquatic polyp, which can remove contaminants from water. Apart from cleaning, this soft robot could also be used in medical diagnostic devices by aiding in picking up and transporting specific cells for analysis.
    In the paper, ‘An artificial aquatic polyp that wirelessly attracts, grasps, and releases objects’ researchers demonstrate how their artificial aquatic polyp moves under the influence of a magnetic field, while the tentacles are triggered by light. A rotating magnetic field under the device drives a rotating motion of the artificial polyp’s stem. This motion results in the generation of an attractive flow which can guide suspended targets, such as oil droplets, towards the artificial polyp.
    Once the targets are within reach, UV light can be used to activate the polyp’s tentacles, composed of photo-active liquid crystal polymers, which then bend towards the light enclosing the passing target in the polyp’s grasp. Target release is then possible through illumination with blue light.
    Dr Harkamaljot Kandail, from WMG, University of Warwick, was responsible for creating state-of-the-art 3D simulations of the artificial aquatic polyps. The simulations are important for understanding and elucidating how the stem and tentacles generate the flow fields that attract particles in the water.
    The simulations were then used to optimise the shape of the tentacles so that the floating particles could be grabbed quickly and efficiently.
    Dr Harkamaljot Kandail, from WMG, University of Warwick, comments:
    “Corals are such a valuable ecosystem in our oceans, I hope that the artificial aquatic polyps can be further developed to collect contaminant particles in real applications. The next stage for us to overcome before being able to do this is to successfully scale up the technology from laboratory to pilot scale. To do so we need to design an array of polyps which work harmoniously together where one polyp can capture the particle and pass it along for removal.”
    Marina Pilz Da Cunha, from the Eindhoven University of Technology, Netherlands adds:
    “The artificial aquatic polyp serves as a proof of concept to demonstrate the potential of actuator assemblies and serves as an inspiration for future devices. It exemplifies how motion of different stimuli-responsive polymers can be harnessed to perform wirelessly controlled tasks in an aquatic environment.”

    Story Source:
    Materials provided by University of Warwick. Note: Content may be edited for style and length.

  • Math shows how brain stays stable amid internal noise and a widely varying world

    Whether you are playing Go in a park amid chirping birds, a gentle breeze and kids playing catch nearby or you are playing in a den with a ticking clock on a bookcase and a purring cat on the sofa, if the game situation is identical and clear, your next move likely would be, too, regardless of those different conditions. You’ll still play the same next move despite a wide range of internal feelings or even if a few neurons here and there are just being a little erratic. How does the brain overcome unpredictable and varying disturbances to produce reliable and stable computations? A new study by MIT neuroscientists provides a mathematical model showing how such stability inherently arises from several known biological mechanisms.
    More fundamental than the willful exertion of cognitive control over attention, the model the team developed describes an inclination toward robust stability that is built into neural circuits by virtue of the connections, or “synapses,” that neurons make with each other. The equations they derived and published in PLOS Computational Biology show that networks of neurons involved in the same computation will repeatedly converge toward the same patterns of electrical activity, or “firing rates,” even if they are sometimes arbitrarily perturbed by the natural noisiness of individual neurons or arbitrary sensory stimuli the world can produce.
    “How does the brain make sense of this highly dynamic, non-linear nature of neural activity?” said co-senior author Earl Miller, Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences (BCS) at MIT. “The brain is noisy, there are different starting conditions — how does the brain achieve a stable representation of information in the face of all these factors that can knock it around?”
    To find out, Miller’s lab, which studies how neural networks represent information, joined forces with BCS colleague and mechanical engineering Professor Jean-Jacques Slotine, who leads the Nonlinear Systems Laboratory at MIT. Slotine brought to the problem the mathematical method of “contraction analysis,” a concept developed in control theory, along with tools his lab developed to apply it. Contracting networks exhibit the property that trajectories starting from disparate points ultimately converge onto one trajectory, like tributaries in a watershed. They do so even when the inputs vary with time. They are robust to noise and disturbance, and they allow many other contracting networks to be combined together without a loss of overall stability — much like the brain typically integrates information from many specialized regions.
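    In its simplest form (the original Lohmiller-Slotine statement, given here without the generalized metrics the study itself employs), contraction analysis says that for a system \(\dot{x} = f(x, t)\), if the symmetric part of the Jacobian is uniformly negative definite, then any two trajectories converge to each other exponentially fast, whatever their starting points:

    \[
    \tfrac{1}{2}\!\left(\frac{\partial f}{\partial x} + \frac{\partial f}{\partial x}^{\!\top}\right) \preceq -\beta I
    \quad (\beta > 0)
    \quad\Longrightarrow\quad
    \lVert \delta x(t) \rVert \le \lVert \delta x(0) \rVert \, e^{-\beta t}.
    \]

    Any perturbation \(\delta x\) between nearby trajectories, whether it comes from neural noise or a different initial state, therefore shrinks away exponentially, which is exactly the stability property the study formalizes for synaptically coupled networks.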
    “In a system like the brain where you have [hundreds of billions] of connections the questions of what will preserve stability and what kinds of constraints that imposes on the system’s architecture become very important,” Slotine said.
    Math reflects natural mechanisms
    Leo Kozachkov, a graduate student in both Miller’s and Slotine’s labs, led the study by applying contraction analysis to the problem of the stability of computations in the brain. What he found is that the variables and terms in the resulting equations that enforce stability directly mirror properties and processes of synapses: inhibitory circuit connections can get stronger, excitatory circuit connections can get weaker, both kinds of connections are typically tightly balanced relative to each other, and neurons make far fewer connections than they could (each neuron, on average, could make roughly 10 million more connections than it does).

    “These are all things that neuroscientists have found, but they haven’t linked them to this stability property,” Kozachkov said. “In a sense, we’re synthesizing some disparate findings in the field to explain this common phenomenon.”
    The new study, which also involved Miller lab postdoc Mikael Lundqvist, was hardly the first to grapple with stability in the brain, but the authors argue it has produced a more advanced model by accounting for the dynamics of synapses and by allowing for wide variations in starting conditions. It also offers mathematical proofs of stability, Kozachkov added.
    Though focused on the factors that ensure stability, the authors noted, their model does not go so far as to doom the brain to inflexibility or determinism. The brain’s ability to change — to learn and remember — is just as fundamental to its function as its ability to consistently reason and formulate stable behaviors.
    “We’re not asking how the brain changes,” Miller said. “We’re asking how the brain keeps from changing too much.”
    Still, the team plans to keep iterating on the model, for instance by encompassing a richer accounting for how neurons produce individual spikes of electrical activity, not just rates of that activity.
    They are also working to compare the model’s predictions with data from experiments in which animals repeatedly performed tasks in which they needed to perform the same neural computations, despite experiencing inevitable internal neural noise and at least small sensory input differences.
    Finally, the team is considering how the models may inform understanding of different disease states of the brain. Aberrations in the delicate balance of excitatory and inhibitory neural activity in the brain are considered crucial in epilepsy, Kozachkov notes. A symptom of Parkinson’s disease, as well, entails a neurally rooted loss of motor stability. Miller adds that some patients with autism spectrum disorders struggle to stably repeat actions (e.g. brushing teeth) when external conditions vary (e.g. brushing in a different room).
    The National Institute of Mental Health, the Office of Naval Research, the National Science Foundation and the JPB Foundation supported the research.

  • Grasshopper jumping on Bloch sphere finds new quantum insights

    New research at the University of Warwick has (pardon the pun) put a new spin on a mathematical analogy involving a jumping grasshopper and its ideal lawn shape. This work could help us understand the spin states of quantum-entangled particles.
    The grasshopper problem was devised by physicists Olga Goulko (then at UMass Amherst), Adrian Kent and Damián Pitalúa-García (Cambridge). They asked for the ideal lawn shape that would maximize the chance that a grasshopper, starting from a random position on the lawn and jumping a fixed distance in a random direction, lands back on the lawn. Intuitively one might expect the answer to be a circular lawn, at least for small jumps. But Goulko and Kent actually proved otherwise: various shapes from a cogwheel pattern to some disconnected patches of lawn performed better for different jump sizes.
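    As a concrete illustration of the setup (not the authors' method, which analyzes and optimizes lawn shapes directly), the retention probability of any candidate lawn can be estimated by simple Monte Carlo sampling. A minimal sketch for a disc-shaped lawn of unit area:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stay_probability_disc(jump, n_samples=1_000_000):
        """Probability that a grasshopper starting uniformly at random on a
        disc-shaped lawn of unit area lands back on the lawn after one jump
        of fixed length `jump` in a uniformly random direction."""
        R = 1.0 / np.sqrt(np.pi)                    # disc of area 1
        r = R * np.sqrt(rng.random(n_samples))      # uniform points inside the disc
        phi = 2 * np.pi * rng.random(n_samples)
        x, y = r * np.cos(phi), r * np.sin(phi)
        theta = 2 * np.pi * rng.random(n_samples)   # random jump direction
        x2, y2 = x + jump * np.cos(theta), y + jump * np.sin(theta)
        return np.mean(x2**2 + y2**2 <= R**2)

    for d in (0.1, 0.3, 0.6):
        print(f"jump {d}: P(stay on disc) ≈ {stay_probability_disc(d):.3f}")
    ```

    Estimates like these are what the cogwheel and disconnected-patch lawns outperform, depending on the jump size.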
    Beyond surprises about lawn shapes and grasshoppers, the research provided useful insight into Bell-type inequalities relating probabilities of the spin states of two separated quantum-entangled particles. The Bell inequality, proved by physicist John Stewart Bell in 1964 and later generalised in many ways, demonstrated that no combination of classical theories with Einstein’s special relativity is able to explain the predictions (and later actual experimental observations) of quantum theory.
    The next step was to test the grasshopper problem on a sphere. The Bloch sphere is a geometrical representation of the state space of a single quantum bit. A great circle on the Bloch sphere defines linear polarization measurements, which are easily implemented and commonly used in Bell and other cryptographic tests. Because of the antipodal symmetry for the Bloch sphere, a lawn covers half the total surface area, and the natural hypothesis would be that the ideal lawn is hemispherical. Researchers in the Department of Computer Science at the University of Warwick, in collaboration with Goulko and Kent, investigated this problem and found that it too requires non-intuitive lawn patterns. The main result is that the hemisphere is never optimal, except in the special case when the grasshopper needs exactly an even number of jumps to go around the equator. This research shows that there are previously unknown types of Bell inequalities.
    One of the paper’s authors — Dmitry Chistikov from the Centre for Discrete Mathematics and its Applications (DIMAP) and the Department of Computer Science, at the University of Warwick, commented:
    “Geometry on the sphere is fascinating. The sine rule, for instance, looks nicer for the sphere than the plane, but this didn’t make our job easy.”
    The other author from Warwick, Professor Mike Paterson FRS, said:
    “Spherical geometry makes the analysis of the grasshopper problem more complicated. Dmitry, being from the younger generation, used a 1948 textbook and pen-and-paper calculations, whereas I resorted to my good old Mathematica methods.”
    The paper, entitled ‘Globe-hopping’, is published in the Proceedings of the Royal Society A. It is interdisciplinary work involving mathematics and theoretical physics, with applications to quantum information theory.
    The research team of Dmitry Chistikov and Mike Paterson (both from the University of Warwick), Olga Goulko (Boise State University, USA) and Adrian Kent (Cambridge) says that the next steps toward even more insight into quantum spin-state probabilities are to look for the most grasshopper-friendly lawns on the sphere, or even to let the grasshopper boldly go jumping in three or more dimensions.

    Story Source:
    Materials provided by University of Warwick. Note: Content may be edited for style and length.

  • Scientists can’t agree on how clumpy the universe is

    The universe is surprisingly smooth.
    A new measurement reveals that the universe is less clumpy than predicted, physicists report in a series of papers posted July 30 at arXiv.org. The discrepancy could hint at something amiss with scientists’ understanding of the cosmos.
    To pin down the cosmic clumpiness, researchers studied the orientation of 21 million galaxies with the Kilo-Degree Survey at the Paranal Observatory in Chile. As light from those galaxies streams through the universe, its trajectory is bent by massive objects, a phenomenon called gravitational lensing. This lensing causes the elongated shapes of galaxies to appear slightly aligned, rather than oriented randomly.
    When combined with additional data from other sky surveys, that alignment quantifies how much the matter in the universe is clumped together. The researchers found that the universe is about 10 percent more homogeneous, or smoother, than predicted based on light released just after the Big Bang, the cosmic microwave background. Previous results had hinted at the discrepancy, but the new measurement strengthens the case that the disagreement is not a fluke (SN: 7/30/19).
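    For reference, weak-lensing surveys such as the Kilo-Degree Survey typically condense this clumpiness into a single parameter combining the amplitude of matter fluctuations, σ8, and the matter density, Ωm:

    \[
    S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3},
    \]

    and the tension described here is that the lensing value of S8 comes out below the value inferred from the cosmic microwave background.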
    If the measurement is correct, the mismatch could hint at a hole in the standard model of cosmology, the theory that describes how the universe has changed over time. When combined with a similar puzzle over how fast the universe is expanding (SN: 7/15/20), physicists are beginning to suspect that the universe is putting them on notice.
    “It’s a bit of a riddle,” says cosmologist Hendrik Hildebrandt of Ruhr-Universität Bochum in Germany, a coauthor of the studies. “Is [the universe] just telling us ‘You’re stupid and you didn’t do your measurement right,’ or … ‘Hey, I’m more complicated than you thought’?”

  • Species may swim thousands of kilometers to escape ocean heat waves

    When an intense heat wave strikes a patch of ocean, overheated marine animals may have to swim thousands of kilometers to find cooler waters, researchers report August 5 in Nature.
    Such displacement, whether among fish, whales or turtles, can hinder both conservation efforts and fishery operations. “To properly manage those species, we need to understand where they are,” says Michael Jacox, a physical oceanographer with the National Oceanic and Atmospheric Administration based in Monterey, Calif.
    Marine heat waves — defined as at least five consecutive days of unusually hot water for a given patch of ocean — have become increasingly common over the past century (SN: 4/10/18). Climate change has amped up the intensity of some of the most famous marine heat waves of recent years, such as the Pacific Ocean Blob from 2015 to 2016 and scorching waters in the Tasman Sea in 2017 (SN: 12/14/17; SN: 12/11/18).
    “We know that these marine heat waves are having lots of effects on the ecosystem,” Jacox says. For example, researchers have documented how the sweltering waters can bleach corals and wreak havoc on kelp forests. But the impacts on mobile species such as fish are only beginning to be studied (SN: 1/15/20).

    “We have seen species appearing far north of where we expect them,” Jacox says. For example, in 2015, the Blob drove hammerhead sharks — which normally stay close to the tropics, near Baja California in Mexico — to shift their range at least hundreds of kilometers north, where they were observed off the coast of Southern California.
    To see how far a mobile ocean dweller would need to flee to escape the heat, Jacox and colleagues compared ocean temperatures around the globe. First, they examined surface ocean temperatures from 1982 to 2019 compiled by NOAA from satellites, buoys and shipboard measurements. Then, for the same period, they identified marine heat waves occurring around the world, where water temperatures for a region lingered in the highest 10 percent ever recorded for that place and that time of year. Finally, they calculated how far a swimmer in an area with a heat wave has had to go to reach cooler waters, a distance the team dubs “thermal displacement.”
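    A conceptual sketch of that last step (not the authors' code; the helper name and grid layout here are hypothetical): given gridded sea-surface temperatures and the local heat-wave threshold, the thermal displacement for a cell is the distance to the nearest cell that is cooler than that threshold.

    ```python
    import numpy as np

    def thermal_displacement_km(sst, threshold, lat, lon, i, j):
        """Distance (km) from heat-wave cell (i, j) to the nearest grid cell whose
        current temperature is below that cell's heat-wave threshold. `sst` and
        `threshold` are 2-D arrays on a regular lat/lon grid; the published study
        uses NOAA's 1982-2019 record plus land masking and more careful geodesy."""
        cooler = sst < threshold[i, j]               # cells offering relief
        if not cooler.any():
            return np.nan
        lat2, lon2 = np.meshgrid(lat, lon, indexing="ij")
        r_earth = 6371.0                             # km
        p1, p2 = np.radians(lat[i]), np.radians(lat2)
        dphi = p2 - p1
        dlmb = np.radians(lon2 - lon[j])
        a = np.sin(dphi / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2)**2
        dist = 2 * r_earth * np.arcsin(np.sqrt(a))   # haversine distance
        return dist[cooler].min()
    ```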

    In higher-latitude regions, such as the Tasman Sea, relief tended to be much closer, within a few tens of kilometers of the overheated patch, the researchers found. So while ocean heat waves in that region might spell doom for firmly rooted corals and kelp, mobile species might fare better. “We were surprised that the displacements were so small,” Jacox says.
    But in the tropics, where ocean temperatures are more uniform, species may have had to travel thousands of kilometers to escape the heat.  
    Projecting how species might move around in the future due to marine heat waves gets increasingly complicated, the researchers found. That’s because over the next few decades, climate change is anticipated to cause not just an increase in frequency and intensity of marine heat waves, but also warming of all of Earth’s ocean waters (SN: 9/25/19). Furthermore, that rate of warming will vary from place to place. As a result, future thermal displacement could increase in some parts of the ocean relative to today, and decrease in others, writes marine ecologist Mark Payne of the Technical University of Denmark in Copenhagen, in a commentary in the same issue of Nature.
    That complexity highlights the task ahead for researchers trying to anticipate changes across ocean ecosystems as the waters warm, says Lewis Barnett, a Seattle-based NOAA fish biologist, who was not involved in the study. The new work provides important context for data being collected on fish stocks. For example, surveys of the Gulf of Alaska in 2017 noted a large decline in the abundance of valuable Pacific cod, now known to be linked to the Blob heat wave that had ended the year before.
    But there’s a lot more work to be done, Barnett says.
    The study focuses on surface ocean temperatures, but ocean conditions and dynamics are different in the deep ocean, he notes. Some species, too, move more easily between water depths than others. And heat tolerance also varies from species to species. Biologists are racing to understand these differences, and how hot waters can affect the life cycles and distributions of many different animals.
    The effects of marine heat waves might be ephemeral compared with the impacts of long-term climate change. But these extreme events offer a peek into the future, says Malin Pinsky, a marine ecologist at Rutgers University in New Brunswick, N.J., who was not involved in the study. “We can use these heat waves as lessons for how we’ll need to adapt.”