More stories

  •

    Silicon chips combine light and ultrasound for better signal processing

    The continued growth of wireless and cellular data traffic relies heavily on light waves. Microwave photonics is the field of technology that is dedicated to the distribution and processing of electrical information signals using optical means. Compared with traditional solutions based on electronics alone, microwave photonic systems can handle massive amounts of data. Therefore, microwave photonics has become increasingly important as part of 5G cellular networks and beyond. A primary task of microwave photonics is the realization of narrowband filters: the selection of specific data, at specific frequencies, out of immense volumes that are carried over light.
    Many microwave photonic systems are built of discrete, separate components and long optical fiber paths. However, the cost, size, power consumption and production volume requirements of advanced networks call for a new generation of microwave photonic systems that are realized on a chip. Integrated microwave photonic filters, particularly in silicon, are highly sought after. There is, however, a fundamental challenge: Narrowband filters require that signals are delayed for comparatively long durations as part of their processing.
    “Since the speed of light is so fast,” says Prof. Avi Zadok from Bar-Ilan University, Israel, “we run out of chip space before the necessary delays are accommodated. The required delays may reach over 100 nanoseconds. Such delays may appear short in terms of daily experience; however, the optical paths that support them are over ten meters long! We cannot possibly fit such long paths into a silicon chip. Even if we could somehow fold that many meters into a certain layout, the optical power losses that come with them would be prohibitive.”
    These long delays require a different type of wave, one that travels much more slowly. In a study recently published in the journal Optica, Zadok and his team from the Faculty of Engineering and Institute of Nanotechnology and Advanced Materials at Bar-Ilan University, together with collaborators from the Hebrew University of Jerusalem and Tower Semiconductors, suggest a solution. They brought light and ultrasonic waves together to realize ultra-narrow filters of microwave signals in silicon integrated circuits. The concept allows great freedom in filter design.
    Bar-Ilan University doctoral student Moshe Katzman explains: “We’ve learned how to convert the information of interest from the form of light waves to ultrasonic, surface acoustic waves, and then back to optics. The surface acoustic waves travel at a speed that is 100,000 times slower. We can accommodate the delays that we need on our silicon chip, within less than a millimeter, and with losses that are very reasonable.”
    Acoustic waves have been used to process information for sixty years; however, their chip-level integration alongside light waves has proven tricky. Moshe Katzman continues: “Over the last decade we have seen landmark demonstrations of how light and ultrasound waves can be brought together in a chip device to make up excellent microwave photonic filters. However, the platforms used were more specialized. Part of the appeal of our solution is its simplicity. The fabrication of devices is based on routine protocols of silicon waveguides. We are not doing anything fancy here.” The realized filters are very narrowband: the spectral width of the filters’ passbands is only 5 MHz.
    In order to realize narrowband filters, the information-carrying surface acoustic wave is imprinted upon the output light wave multiple times. Doctoral student Maayan Priel elaborates: “The acoustic signal crosses the light path up to 12 times, depending on the choice of layout. Each such event imprints a replica of our signal of interest on the optical wave. Due to the slow acoustic speed, these events are separated by long delays. Their overall summation is what makes the filters work.” As part of their research, the team reports complete control over each replica, toward the realization of arbitrary filter responses. Maayan Priel concludes: “The freedom to design the response of the filters makes the most of the integrated microwave-photonic platform.”
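    The filtering principle Priel describes, summing many long-delayed replicas of one signal, is a tapped delay line whose frequency response can be sketched directly. A minimal numerical illustration: the tap count matches the article's "up to 12 crossings", but the total delay and frequency grid are illustrative assumptions, not the team's actual design.

```python
import numpy as np

# Tapped-delay-line (FIR comb) filter: n_taps replicas of the signal,
# each delayed by a fixed interval, are summed on the optical carrier.
# Assumed numbers: 12 taps sharing a 100 ns total delay (illustrative).
n_taps = 12
tap_delay = 100e-9 / n_taps            # delay between adjacent replicas, s

freqs = np.linspace(0, 2e9, 4001)      # scan 0-2 GHz in 0.5 MHz steps
# |H(f)| = |sum_k exp(-2j*pi*f*k*tap_delay)| / n_taps
H = np.abs(sum(np.exp(-2j * np.pi * freqs * k * tap_delay)
               for k in range(n_taps))) / n_taps

# Passbands occur wherever all replicas add in phase (f * tap_delay is an
# integer); longer delays and more taps make each passband narrower.
```

    More taps and longer per-tap delays narrow the passbands, which is why the slow acoustic speed, packing a long delay into little space, matters.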
    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

  •

    Ultra-sensitive light detector gives self-driving tech a jolt

    Realizing the potential of self-driving cars hinges on technology that can quickly sense and react to obstacles and other vehicles in real time. Engineers from The University of Texas at Austin and the University of Virginia created a first-of-its-kind light-detecting device that can amplify weak signals bouncing off faraway objects more accurately than current technology allows, giving autonomous vehicles a fuller picture of what’s happening on the road.
    The new device is not only more sensitive than other light detectors; it also eliminates the inconsistency, or noise, associated with the detection process. Such noise can cause systems to miss signals and put autonomous vehicle passengers at risk.
    “Autonomous vehicles send out laser signals that bounce off objects to tell you how far away you are. Not much light comes back, so if your detector is putting out more noise than the signal coming in you get nothing,” said Joe Campbell, professor of electrical and computer engineering at the University of Virginia School of Engineering.
    Researchers around the globe are working on devices, known as avalanche photodiodes, to meet these needs. But what makes this new device stand out is its staircase-like alignment. It includes physical steps in energy that electrons roll down, multiplying along the way and creating a stronger electrical current for light detection as they go.
    In 2015, the researchers created a single-step staircase device. In this new discovery, detailed in Nature Photonics, they’ve shown, for the first time, a staircase avalanche photodiode with multiple steps.
    “The electron is like a marble rolling down a flight of stairs,” said Seth Bank, professor in the Cockrell School’s Department of Electrical and Computer Engineering who led the research with Campbell, a former professor in the Cockrell School from 1989 to 2006 and UT Austin alumnus (B.S., Physics, 1969). “Each time the marble rolls off a step, it drops and crashes into the next one. In our case, the electron does the same thing, but each collision releases enough energy to actually free another electron. We may start with one electron, but falling off each step doubles the number of electrons: 1, 2, 4, 8, and so on.”
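    Bank's marble analogy implies an idealized gain of 2^N after N steps: deterministic doubling, which is what keeps the noise low. A toy sketch of that arithmetic (idealized counting, not device physics):

```python
# Idealized staircase multiplication: each step deterministically doubles
# the electron count, so N steps yield a gain of 2**N. (In conventional
# avalanche photodiodes, multiplication is random, which adds excess noise.)
def staircase_gain(n_steps: int, seed_electrons: int = 1) -> int:
    electrons = seed_electrons
    for _ in range(n_steps):
        electrons *= 2          # each collision frees one extra electron
    return electrons

gains = [staircase_gain(n) for n in range(4)]   # 1, 2, 4, 8, as in the quote
```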
    The new pixel-sized device is ideal for Light Detection and Ranging (lidar) receivers, which require high-resolution sensors that detect optical signals reflected from distant objects. Lidar is an important part of self-driving car technology, and it also has applications in robotics, surveillance and terrain mapping.

  •

    These cognitive exercises help young children boost their math skills, study shows

    Young children who practice visual working memory and reasoning tasks improve their math skills more than children who focus on spatial rotation exercises, according to a large study by researchers at Karolinska Institutet in Sweden. The findings support the notion that training spatial cognition can enhance academic performance and that when it comes to math, the type of training matters. The study is published in the journal Nature Human Behaviour.
    “In this large, randomized study we found that when it comes to enhancing mathematical learning in young children, the type of cognitive training performed plays a significant role,” says corresponding author Torkel Klingberg, professor in the Department of Neuroscience, Karolinska Institutet. “It is an important finding because it provides strong evidence that cognitive training transfers to an ability that is different from the one you practiced.”
    Numerous studies have linked spatial ability — that is, the capacity to understand and remember spatial relations among objects — to performance in science, technology, engineering and mathematics. As a result, some employers in these fields use spatial ability tests to vet candidates during the hiring process. This has also fueled an interest in spatial cognition training, which focuses on improving one’s ability to memorize and manipulate various shapes and objects and to spot patterns in recurring sequences. Some schools today include spatial exercises as part of their tutoring.
    However, previous studies assessing the effect of spatial training on academic performance have had mixed results, with some showing significant improvement and others no effect at all. Thus, there is a need for large, randomized studies to determine if and to what extent spatial cognition training actually improves performance.
    In this study, more than 17,000 Swedish schoolchildren between the ages of six and eight completed cognitive training via an app for either 20 or 33 minutes per day over the course of seven weeks. In the first week, the children were given identical exercises, after which they were randomly split into one of five training plans. In all groups, children spent about half of their time on mathematical number line tasks. The remaining time was randomly allotted to different proportions of cognitive training in the form of rotation tasks (2D mental rotation and tangram puzzle), visual working memory tasks or non-verbal reasoning tasks (see examples below for details). The children’s math performance was tested in the first, fifth and seventh week.
    The researchers found that all groups improved their mathematical performance, but that reasoning training had the largest positive impact, followed by working memory tasks. Both reasoning and memory training significantly outperformed rotation training when it came to mathematical improvement. The researchers also observed that the benefits of cognitive training could differ threefold between individuals, which could explain the differing results of previous studies, since the individual characteristics of study participants tend to affect outcomes.
    The researchers note there were some limitations to the study, including the lack of a passive control group that would allow for an estimation of the absolute effect size. Also, this study did not include a group of students who received math training only.
    “While it is likely that for any given test, training on that particular skill is the most time-effective way to improve test results, our study offers a proof of principle that spatial cognitive training transfers to academic abilities,” Torkel Klingberg says. “Given the wide range of areas associated with spatial cognition, it is possible that training transfers to multiple areas and we believe this should be included in any calculation by teachers and policymakers of how time-efficient spatial training is relative to training for a particular test.”
    The researchers received funding from the Swedish Research Council. Torkel Klingberg holds an unpaid position as chief scientific officer for Cognition Matters, the non-profit foundation that owns Vektor, the cognitive training app used in this study.
    Examples of training tasks in the study:
    In a number line task, a person is asked to identify the correct position of a number on a line bounded by a start and an end point. Difficulty is typically moderated by removing spatial cues, for example ticks on the number line, and progresses to include mathematical problems such as addition, subtraction and division.
    In a visual working memory task, a person is asked to recollect visual objects. In this study, the children reproduced a sequence of dots on a grid by touching the screen. Difficulty was increased by adding more items.
    In a non-verbal reasoning task, a person is asked to complete sequences of spatial patterns. In this study, the children were asked to choose the correct image to fill a blank space based on previous sequences. Difficulty was increased by adding new dimensions such as colors, shapes and dots.
    In a rotation task, a person is asked to figure out what an object would look like if rotated. In this study, the children were asked to rotate a 2D object to fit various angles. Difficulty was moderated by increasing the angle of rotation or the complexity of the object being rotated.
    Story Source:
    Materials provided by Karolinska Institutet. Note: Content may be edited for style and length.

  •

    Walking in their shoes: Using virtual reality to elicit empathy in healthcare providers

    Research has shown that empathy enables healthcare workers to provide appropriate support and make fewer mistakes. This increases patient satisfaction and enhances patient outcomes, resulting in better overall care. In an upcoming issue of the Journal of Medical Imaging and Radiation Sciences, published by Elsevier, multidisciplinary clinicians and researchers from Dalhousie University performed an integrative review to synthesize the findings regarding virtual reality (VR) as a pedagogical tool for eliciting empathetic behavior in medical radiation technologists (MRTs).
    Informally, empathy is often described as the capacity to put oneself in the shoes of another. Empathy is essential to patient-centered care and crucial to the development of therapeutic relationships between carers (healthcare providers, healthcare students, and informal caregivers such as parents, spouses, friends, family, clergy, social workers, and fellow patients) and care recipients. Currently, there is a need for effective tools and approaches that are standardizable, low-risk, safe-to-fail, and easily repeatable, and that can assist in eliciting empathetic behavior.
    This research synthesis looked at studies investigating VR experiences ranging from a single eight-minute session to 20-25 minute sessions delivered on two separate days. It covered both immersive VR environments, in which participants assumed the role of a care recipient, and non-immersive VR environments, in which participants assumed the role of a care provider in a simulated care setting. Together, the two types of studies helped researchers understand both what it is like to have a specific disease or need and what it is like to practice interacting with virtual care recipients.
    “Although the studies we looked at don’t definitively show VR can help sustain empathy behaviors over time, there is a lot of promise for research and future applications in this area,” explained lead author Megan Brydon, MSc, BHSc, RTNM, IWK Health Centre, Halifax, Nova Scotia, Canada.
    The authors conclude that VR may provide an effective and wide-ranging tool for learning care recipients’ perspectives, and that future studies should seek to determine which VR experiences are the most effective in evoking empathetic behaviors. They recommend that these studies employ higher-order designs that are better able to control for bias.
    Story Source:
    Materials provided by Elsevier. Note: Content may be edited for style and length.

  •

    ‘Tree farts’ contribute about a fifth of greenhouse gases from ghost forests

    If a tree farts in the forest, does it make a sound? No, but it does add a smidge of greenhouse gas to the atmosphere.

    Gases released by dead trees — dubbed “tree farts” — account for roughly one-fifth of the greenhouse gases emitted by skeletal, marshy forests along the coast of North Carolina, researchers report online May 10 in Biogeochemistry. While these emissions pale in comparison with other sources, an accurate accounting is necessary to get a full picture of where climate-warming gases come from.

    A team of ecologists went sniffing for tree farts in ghost forests, which form when saltwater from rising sea levels poisons a woodland, leaving behind a marsh full of standing dead trees. These phantom ecosystems are expected to expand with climate change, but it’s unclear exactly how they contribute to the world’s carbon budget.

    “The emergence of ghost forests is one of the biggest changes happening in response to sea level rise,” says Keryn Gedan, a coastal ecologist at George Washington University in Washington, D.C., who was not involved in the work. “As forests convert to wetlands, we expect over long timescales that’s going to represent a substantial carbon sink,” she says, since wetlands store more carbon than forests. But in the short term, dead trees decay and stop taking up carbon dioxide through photosynthesis, “so that’s going to be a major greenhouse gas source.”

    To better understand how ghost forests pass gas into the atmosphere, the researchers measured greenhouse gases wafting off dead trees and soil in five ghost forests on the Albemarle-Pamlico Peninsula in North Carolina. “It’s kind of eerie” out there, says Melinda Martinez, a wetland ecologist at North Carolina State University in Raleigh.

    But Martinez ain’t afraid of no ghost forest. In 2018 and 2019, she measured CO2, methane and nitrous oxide emissions from dead trees using a portable gas analyzer she toted on her back. “I definitely looked like a ghostbuster,” she says.

    Wetland ecologist Melinda Martinez totes a portable gas analyzer on her back to measure the “tree farts” emitted by a ghost forest tree. A tube connects the gas analyzer to an airtight seal around the trunk of the tree. (Image: M. Ardón)

    Soils gave off most of the greenhouse gases from the ghost forests. Each square meter of ground emitted an average 416 milligrams of CO2, 5.9 milligrams of methane and 0.1 milligrams of nitrous oxide per hour. On average, dead trees released about 116 milligrams of CO2, 0.3 milligrams of methane and 0.04 milligrams of nitrous oxide per square meter per hour — totaling about one-fourth the soil’s emissions.
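    The reported averages let the article's comparisons be checked by simple arithmetic. The sketch below sums the raw milligram fluxes without weighting each gas by its warming potential, which is an assumption on our part (the study may weight gases differently):

```python
# Average hourly greenhouse-gas fluxes per square meter (from the study).
soil = {"co2": 416.0, "ch4": 5.9, "n2o": 0.1}       # mg / m^2 / h
trees = {"co2": 116.0, "ch4": 0.3, "n2o": 0.04}

tree_total = sum(trees.values())                    # ~116.3 mg / m^2 / h
soil_total = sum(soil.values())                     # ~422.0 mg / m^2 / h

ratio_to_soil = tree_total / soil_total             # ~0.28
share_of_total = tree_total / (tree_total + soil_total)  # ~0.22
```

    The ~0.28 and ~0.22 figures recover "about one-fourth the soil's emissions" and the headline's "about a fifth" of the ghost forests' total.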

    Measuring greenhouse gases from the trees is “kind of measuring the last breath of these forests,” says Marcelo Ardón, an ecosystems ecologist and biogeochemist at North Carolina State University. The dead trees “don’t emit a ton, but they are important” to a ghost forest’s overall emissions.

    Ardón coined the term “tree farts” to describe the dead trees’ greenhouse gas emissions. “I have an 8-year-old and an 11-year-old, and fart jokes are what we talk about,” he explains. But the analogy has a biological basis, too. Actual farts are caused by microbes in the body; the greenhouse gases emitted by ghost forests are created by microbes in the soil and trees.

    In the grand scheme of carbon emissions, ghost forests’ role may be minor. Tree farts, for instance, have nothing on cow burps (SN: 11/18/15). A single dairy cow can emit up to 27 grams of methane — a far more potent greenhouse gas than CO2 — per hour. But accounting for even minor sources of carbon is important for fine-tuning our understanding of the global carbon budget, says Martinez (SN: 10/1/19). So it would behoove scientists not to turn up their noses at ghost tree farts.

  •

    Envisioning safer cities with AI

    Artificial intelligence is providing new opportunities in a range of fields, from business to industrial design to entertainment. But how about civil engineering and city planning? How might machine- and deep-learning help us create safer, more sustainable, and resilient built environments?
    A team of researchers from the NSF NHERI SimCenter, a computational modeling and simulation center for the natural hazards engineering community based at the University of California, Berkeley, has developed a suite of tools called BRAILS — Building Recognition using AI at Large-Scale — that can automatically identify characteristics of buildings in a city and even detect the risks that a city’s structures would face in an earthquake, hurricane, or tsunami.
    Charles (Chaofeng) Wang, a postdoctoral researcher at the University of California, Berkeley, and the lead developer of BRAILS, says the project grew out of a need to quickly and reliably characterize the structures in a city.
    “We want to simulate the impact of hazards on all of the buildings in a region, but we don’t have a description of the building attributes,” Wang said. “For example, in the San Francisco Bay area, there are millions of buildings. Using AI, we are able to get the needed information. We can train neural network models to infer building information from images and other sources of data.”
    BRAILS uses machine learning, deep learning, and computer vision to extract information about the built environment. It is envisioned as a tool for architects, engineers and planning professionals to more efficiently plan, design, and manage buildings and infrastructure systems.
    The SimCenter recently released BRAILS version 2.0, which includes modules to predict a larger spectrum of building characteristics. These include occupancy class (commercial, single-family, or multi-family), roof type (flat, gabled, or hipped), foundation elevation, year built, number of floors, and whether a building has a “soft story” — a civil engineering term for a structure whose ground floor has large openings (like storefronts) that may make it more prone to collapse during an earthquake.
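    The output of this kind of pipeline is a per-building inventory assembled from model predictions. The sketch below illustrates the idea only: the classifier is a stub standing in for trained neural networks, the input fields and address are made up, and none of this is the actual BRAILS API.

```python
# Hypothetical per-building record assembled from (stubbed) model scores;
# the label sets mirror the attribute classes listed in the article.
from dataclasses import dataclass

ROOF_TYPES = ("flat", "gabled", "hipped")
OCCUPANCY = ("commercial", "single-family", "multi-family")

@dataclass
class BuildingRecord:
    address: str
    roof_type: str
    occupancy: str
    soft_story: bool    # large ground-floor openings -> earthquake risk

def classify(features: dict) -> BuildingRecord:
    """Turn hypothetical model scores for one building into a record."""
    roof = ROOF_TYPES[features["roof_score"].index(max(features["roof_score"]))]
    occ = OCCUPANCY[features["occ_score"].index(max(features["occ_score"]))]
    return BuildingRecord(features["address"], roof, occ,
                          features["opening_ratio"] > 0.5)

record = classify({
    "address": "123 Example St",          # made-up example input
    "roof_score": [0.1, 0.7, 0.2],        # softmax-like outputs from a CNN
    "occ_score": [0.8, 0.1, 0.1],
    "opening_ratio": 0.6,                 # fraction of ground-floor openings
})
```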

  •

    Magnetically propelled cilia power climbing soft robots and microfluidic pumps

    The rhythmic motions of hair-like cilia move liquids around cells or propel the cells themselves. In nature, cilia flap independently, and mimicking these movements with artificial materials requires complex mechanisms. Now, researchers reporting in ACS Applied Materials & Interfaces have made artificial cilia that move in a wave-like fashion when a rotating magnetic field is applied, making them suitable for versatile climbing soft robots and microfluidic devices.
    Replicating movements found in nature — for example, the small, whip-like movements of cilia — could help researchers create better robots or microscopic devices. As cilia vibrate sequentially, they produce a traveling wave that moves water more efficiently, and with a better pumping speed, than when the cilia all move at the same time. Previous researchers have recreated these wave-like movements, but the artificial cilia were expensive, needed sophisticated moving parts and were too large for micro-scale devices. So Shuaizhong Zhang, Jaap den Toonder and colleagues wanted to create microscale cilia that would move in a wave when a magnetic field was applied, pumping water quickly over them or acting as a soft robot that can crawl and climb.
    The researchers infused a polymer with carbonyl iron powder particles and poured the mixture into a series of identical 50 µm-wide cylindrical holes. While the polymer cured, the team placed magnets underneath the mold, slightly altering the particles’ alignments and magnetic properties in adjacent cilia. To test the artificial cilia’s ability to move in water and glycerol, the researchers applied a rotating magnetic field. As magnets moved around the array, the cilia whipped back and forth, generating flow at a rate better than that of most artificial cilia. Finally, the researchers flipped the array over, and it scuttled across a flat surface, reaching a maximum speed that, relative to its size, is comparable to a human’s running speed; the robot reversed when the magnetic field flipped direction. The soft robot crawled up and down a 45-degree incline, climbed vertical surfaces, walked upside down and carried an object 10 times its own weight. The researchers say that because these artificial cilia are magnetically propelled and unconnected to any other device, they could be used to produce microfluidic pumps and agile soft robots for biomedical applications.
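    The wave-like motion arises because each cilium responds to the rotating field with a fixed phase lag relative to its neighbor, the lag being imprinted by the altered magnetization during molding. A minimal kinematic sketch of such a metachronal wave (all parameters illustrative, not taken from the paper):

```python
import numpy as np

# Metachronal wave: cilium i oscillates like its neighbor, delayed by a
# constant phase step set during magnetization (numbers are illustrative).
n_cilia = 8
phase_step = 2 * np.pi / n_cilia        # lag between adjacent cilia

def tip_angle(i, t):
    """Deflection of cilium i as the field completes one rotation (period 1)."""
    return np.sin(2 * np.pi * t - i * phase_step)

t = np.linspace(0.0, 1.0, 200)
# A snapshot across the array traces a wave; in time, cilium i+1 simply
# repeats cilium i's motion delayed by 1/n_cilia of a field rotation.
snapshot = [tip_angle(i, 0.25) for i in range(n_cilia)]
```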
    The authors acknowledge funding from a European Research Council (ERC) Advanced Grant and the China Scholarship Council.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  •

    Researchers use 'hole-y' math and machine learning to study cellular self-assembly

    The field of mathematical topology is often described in terms of donuts and pretzels.
    To most of us, the two differ in the way they taste or in their compatibility with morning coffee. But to a topologist, the only difference between them is that one has a single hole and the other has three. There’s no way to stretch or contort a donut to make it look like a pretzel — at least not without ripping it or pasting different parts together, both of which are verboten in topology. The different numbers of holes make the two shapes fundamentally, inexorably different.
    In recent years, researchers have drawn on mathematical topology to help explain a range of phenomena like phase transitions in matter, aspects of Earth’s climate and even how zebrafish form their iconic stripes. Now, a Brown University research team is working to use topology in yet another realm: training computers to classify how human cells organize into tissue-like architectures.
    In a study published in the May 7 issue of the journal Soft Matter, the researchers demonstrate a machine learning technique that measures the topological traits of cell clusters. They showed that the system can accurately categorize cell clusters and infer the motility and adhesion of the cells that comprise them.
    “You can think of this as topology-informed machine learning,” said Dhananjay Bhaskar, a recent Ph.D. graduate who led the work. “The hope is that this can help us to avoid some of the pitfalls that affect the accuracy of machine learning algorithms.”
    Bhaskar developed the algorithm with Ian Y. Wong, an assistant professor in Brown’s School of Engineering, and William Zhang, a Brown undergraduate.
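    A core ingredient of topology-informed features is counting the components and holes of a point cloud at a given distance scale. The toy sketch below computes only the simplest invariant, the number of connected components (Betti-0), at a single scale via union-find; real pipelines compute persistent homology across all scales, and the coordinates here are made up.

```python
# Count connected components (Betti-0) of a 2-D point cloud: points closer
# than `scale` are linked, and components are tracked with union-find.
def betti0(points, scale):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= scale ** 2:
                parent[find(i)] = find(j)   # merge the two components
    return len({find(i) for i in range(len(points))})

# Two well-separated mock "cell clusters":
cells = [(0, 0), (0.4, 0.1), (0.2, 0.5), (5.0, 5.0), (5.3, 5.2)]
small_scale = betti0(cells, 1.0)    # clusters stay separate -> 2 components
large_scale = betti0(cells, 10.0)   # everything merges -> 1 component
```

    Tracking how such counts change as the scale grows is what gives topological features their robustness to noise, which is the "topology-informed" idea Bhaskar describes.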