More stories

  • Studying chaos with one of the world's fastest cameras

    There are things in life that can be predicted reasonably well. The tides rise and fall. The moon waxes and wanes. A billiard ball bounces around a table according to orderly geometry.
    And then there are things that defy easy prediction: The hurricane that changes direction without warning. The splashing of water in a fountain. The graceful disorder of branches growing from a tree.
    These phenomena and others like them can be described as chaotic systems, and are notable for exhibiting behavior that is predictable at first, but grows increasingly random with time.
    Because of the large role that chaotic systems play in the world around us, scientists and mathematicians have long sought to better understand them. Now, Caltech’s Lihong Wang, the Bren Professor in the Andrew and Peggy Cherng Department of Medical Engineering, has developed a new tool that might help in this quest.
    In the latest issue of Science Advances, Wang describes how he has used an ultrafast camera of his own design that recorded video at one billion frames per second to observe the movement of laser light in a chamber specially designed to induce chaotic reflections.
    “Some cavities are non-chaotic, so the path the light takes is predictable,” Wang says. But in the current work, he and his colleagues have used that ultrafast camera as a tool to study a chaotic cavity, “in which the light takes a different path every time we repeat the experiment.”
    The camera makes use of a technology called compressed ultrafast photography (CUP), which Wang has demonstrated in other research to be capable of speeds as fast as 70 trillion frames per second. The speed at which a CUP camera takes video makes it capable of seeing light — the fastest thing in the universe — as it travels.

    But CUP cameras have another feature that makes them uniquely suited for studying chaotic systems. Unlike a traditional camera that shoots one frame of video at a time, a CUP camera essentially shoots all of its frames at once. This allows the camera to capture the entirety of a laser beam’s chaotic path through the chamber all in one go.
    That matters because in a chaotic system, the behavior is different every time. If the camera only captured part of the action, the behavior that was not recorded could never be studied, because it would never occur in exactly the same way again. It would be like trying to photograph a bird, but with a camera that can only capture one body part at a time; furthermore, every time the bird landed near you, it would be a different species. Although you could try to assemble all your photos into one composite bird image, that cobbled-together bird would have the beak of a crow, the neck of a stork, the wings of a duck, the tail of a hawk, and the legs of a chicken. Not exactly useful.
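    To make the idea of capturing every frame in a single exposure concrete, here is a minimal toy sketch in the spirit of compressed ultrafast photography: each time frame of a scene is masked, sheared by its time index, and summed into one 2D snapshot, from which the frames can later be estimated. The mask-and-shear forward model, the array sizes, and the crude adjoint-style reconstruction below are illustrative assumptions, not the actual CUP system.

```python
import numpy as np

# Toy snapshot-imaging model in the spirit of CUP (illustrative assumptions only).
rng = np.random.default_rng(0)
nx, ny, nt = 32, 32, 16                                    # frame width, height, time bins

scene = (rng.random((nt, ny, nx)) < 0.02).astype(float)    # sparse dynamic scene ("light")
mask = rng.integers(0, 2, size=(ny, nx)).astype(float)     # pseudorandom encoding mask

def forward(video):
    """Mask each frame, shear it by its time index, and sum everything into one snapshot."""
    snapshot = np.zeros((ny + nt, nx))
    for t in range(nt):
        snapshot[t:t + ny, :] += mask * video[t]
    return snapshot

measurement = forward(scene)                               # one exposure encodes all nt frames

# Crude reconstruction: un-shear and re-mask (the adjoint of the forward operator).
# A real CUP pipeline would solve a regularized inverse problem instead.
recon = np.stack([mask * measurement[t:t + ny, :] for t in range(nt)])
print("snapshot:", measurement.shape, "-> reconstructed video:", recon.shape)
```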
    Wang says that the ability of his CUP camera to capture the chaotic movement of light may breathe new life into the study of optical chaos, which has applications in physics, communications, and cryptography.
    “It was a really hot field some time ago, but it’s died down, maybe because we didn’t have the tools we needed,” he says. “The experimentalists lost interest because they couldn’t do the experiments, and the theoreticians lost interest because they couldn’t validate their theories experimentally. This was a fun demonstration to show people in that field that they finally have an experimental tool.”
    The paper describing the research, titled “Real-time observation and control of optical chaos,” appears in the January 13 issue of Science Advances. Co-authors are Linran Fan, formerly of Caltech and now an assistant professor at the Wyant College of Optical Sciences at the University of Arizona; and Xiaodong Yan and Han Wang of the University of Southern California.
    Funding for the research was provided by the Army Research Office Young Investigator Program, the Air Force Office of Scientific Research, the National Science Foundation, and the National Institutes of Health.

  • Pivotal discovery in quantum and classical information processing

    Scientists tame photon-magnon interaction.
    Working with theorists in the University of Chicago’s Pritzker School of Molecular Engineering, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have achieved a first-of-its-kind level of control. They demonstrated a novel approach that allows real-time control of the interactions between microwave photons and magnons, potentially leading to advances in electronic devices and quantum signal processing.
    Microwave photons are elementary particles forming the electromagnetic waves that we use for wireless communications. On the other hand, magnons are the elementary particles forming what scientists call “spin waves” — wave-like disturbances in an ordered array of microscopic aligned spins that can occur in certain magnetic materials.
    Microwave photon-magnon interaction has emerged in recent years as a promising platform for both classical and quantum information processing. Yet, this interaction had proved impossible to manipulate in real time, until now.
    “Before our discovery, controlling the photon-magnon interaction was like shooting an arrow into the air,” said Xufeng Zhang, an assistant scientist in the Center for Nanoscale Materials, a DOE User Facility at Argonne, and the corresponding author of this work. “One has no control at all over that arrow once in flight.”
    The team’s discovery has changed that. “Now, it is more like flying a drone, where we can guide and control its flight electronically,” said Zhang.
    Through clever engineering, the team employs an electrical signal to periodically alter the magnon vibrational frequency and thereby induce an effective magnon-photon interaction. The result is a first-ever microwave-magnonic device with on-demand tunability.
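    As a rough way to picture how a periodic drive can switch a coupling on and off, here is a toy coupled-mode simulation. The equations, parameter values, and the choice of modulating the magnon frequency at exactly the photon-magnon detuning are illustrative assumptions rather than the device’s actual model: with the modulation off, the detuned modes barely exchange energy; with it on, energy swaps back and forth.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model (assumed for illustration): photon mode `a` and magnon mode `b` are detuned,
# so a weak static coupling g barely transfers energy. Periodically modulating the magnon
# frequency at the detuning creates a sideband that bridges the gap, switching on an
# effective photon-magnon interaction; turning the modulation off switches it back off.

wa, wb = 2 * np.pi * 1.00, 2 * np.pi * 1.20   # mode frequencies (arbitrary units)
g = 2 * np.pi * 0.01                          # static coupling, much smaller than the detuning
delta = wb - wa                               # photon-magnon detuning
depth = 2 * np.pi * 0.30                      # frequency-modulation depth

def rhs(t, y, modulate):
    a, b = y[0] + 1j * y[1], y[2] + 1j * y[3]
    wb_t = wb + (depth * np.cos(delta * t) if modulate else 0.0)
    da = -1j * wa * a - 1j * g * b
    db = -1j * wb_t * b - 1j * g * a
    return [da.real, da.imag, db.real, db.imag]

y0 = [1.0, 0.0, 0.0, 0.0]                     # all energy starts in the photon mode
t_eval = np.linspace(0, 400, 4000)

for modulate in (False, True):
    sol = solve_ivp(rhs, (0, 400), y0, t_eval=t_eval, args=(modulate,), rtol=1e-8)
    peak = (sol.y[2] ** 2 + sol.y[3] ** 2).max()      # peak magnon occupation
    print(f"modulation {'on ' if modulate else 'off'}: peak magnon occupation ~ {peak:.3f}")
```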
    The team’s device can control the strength of the photon-magnon interaction at any point as information is being transferred between photons and magnons. It can even completely turn the interaction on and off. With this tuning capability, scientists can process and manipulate information in ways that far surpass present-day hybrid magnonic devices.
    “Researchers have been searching for a way to control this interaction for the past few years,” noted Zhang. The team’s discovery opens a new direction for magnon-based signal processing and should lead to electronic devices with new capabilities. It may also enable important applications for quantum signal processing, where microwave-magnonic interactions are being explored as a promising candidate for transferring information between different quantum systems.

    Story Source:
    Materials provided by DOE/Argonne National Laboratory. Note: Content may be edited for style and length.

  • Researchers use deep learning to identify gene regulation at single-cell level

    Scientists at the University of California, Irvine have developed a new deep-learning framework that predicts gene regulation at the single-cell level.
    Deep learning, a family of machine-learning methods based on artificial neural networks, has revolutionized applications such as image interpretation, natural language processing and autonomous driving. In a study published recently in Science Advances, UCI researchers describe how the technique can also be successfully used to observe gene regulation at the cellular level. Until now, that process had been limited to tissue-level analysis.
    According to co-senior author Xiaohui Xie, UCI professor of computer science, the framework enables the study of transcription factor binding at the cellular level, which was previously impossible due to the intrinsic noise and sparsity of single-cell data. A transcription factor is a protein that controls the transcription of genetic information from DNA to RNA; TFs regulate genes to ensure they’re expressed in proper sequence and at the right time in cells.
    “The breakthrough was in realizing that we could leverage deep learning and massive datasets of tissue-level TF binding profiles to understand how TFs regulate target genes in individual cells through specific signals,” Xie said.
    By training a neural network on large-scale genomic and epigenetic datasets, and by drawing on the expertise of collaborators across three departments, the researchers were able to identify novel gene regulations for individual cells or cell types.
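    As a purely hypothetical sketch of that general strategy (not the UCI framework itself, whose architecture and training data are described in the paper), one could train a classifier on tissue-level transcription-factor-binding examples and then query it on sparser, noisier single-cell profiles. Every feature, label, and shape below is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical illustration only: features, labels, and noise model are all synthetic.
rng = np.random.default_rng(1)
n_loci, n_features = 5000, 64

X_tissue = rng.normal(size=(n_loci, n_features))             # tissue-level training features
y_binding = (X_tissue[:, :4].sum(axis=1) > 0).astype(int)    # stand-in TF-binding labels

# Small neural network trained on the (plentiful, less noisy) tissue-level data.
model = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300, random_state=0)
model.fit(X_tissue, y_binding)

# Single-cell profiles are sparser and noisier; simulate that by masking most entries
# and adding noise, then ask the trained model for per-locus binding probabilities.
X_cell = X_tissue[:100] * (rng.random((100, n_features)) < 0.3)
X_cell = X_cell + rng.normal(scale=0.5, size=X_cell.shape)
cell_binding_prob = model.predict_proba(X_cell)[:, 1]
print("predicted binding probability, first 5 loci:", np.round(cell_binding_prob[:5], 2))
```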
    “Our capability of predicting whether certain transcriptional factors are binding to DNA in a specific cell or cell type at a particular time provides a new way to tease out small populations of cells that could be critical to understanding and treating diseases,” said co-senior author Qing Nie, UCI Chancellor’s Professor of mathematics and director of the campus’s National Science Foundation-Simons Center for Multiscale Cell Fate Research, which supported the project.
    He said that scientists can use the deep-learning framework to identify key signals in cancer stem cells — a small cell population that is difficult to specifically target in treatment or even quantify.
    “This interdisciplinary project is a prime example of how researchers with different areas of expertise can work together to solve complex biological questions through machine-learning techniques,” Nie added.

    Story Source:
    Materials provided by University of California, Irvine. Note: Content may be edited for style and length.

  • Trapping light without back reflections

    Researchers demonstrate a new technique for suppressing back reflections of light, leading to better signal quality for sensing and information technology.
    Microresonators are small glass structures in which light can circulate and build up in intensity. Due to material imperfections, some amount of light is reflected backwards, which disturbs their function.
    Researchers have now demonstrated a method for suppressing these unwanted back reflections. Their findings can help improve a multitude of microresonator-based applications, from measurement technology, such as the sensors used in drones, to optical information processing in fibre networks and computers.
    The results of the team, spanning the Max Planck Institute for the Science of Light (Germany), Imperial College London, and the National Physical Laboratory (UK), were recently published in the Nature-family journal Light: Science and Applications.
    Researchers and engineers are discovering many uses and applications for optical microresonators, a type of device often referred to as a light trap. One limitation of these devices is that they have some amount of back reflection, or backscattering, of light due to material and surface imperfections. The back-reflected light negatively impacts the usefulness of the tiny glass structures. To reduce the unwanted backscattering, the British and German scientists took inspiration from noise-cancelling headphones, using optical rather than acoustic interference.
    “In these headphones, out-of-phase sound is played to cancel out undesirable background noise,” says lead author Andreas Svela from the Quantum Measurement Lab at Imperial College London. “In our case, we are introducing out-of-phase light to cancel out the back reflected light,” Svela continues.
    To generate the out-of-phase light, the researchers position a sharp metal tip close to the microresonator surface. Just like the intrinsic imperfections, the tip also causes light to scatter backwards, but there is an important difference: The phase of the reflected light can be chosen by controlling the position of the tip. With this control, the added backscattered light’s phase can be set so it annihilates the intrinsic back reflected light — the researchers produce darkness from light.
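    The cancellation condition itself is ordinary destructive interference between two complex field amplitudes. The sketch below uses made-up values rather than anything measured in the experiment: it scans the phase of the tip-induced backscattering against a fixed intrinsic amplitude and reports the best suppression in decibels.

```python
import numpy as np

# Illustrative values only: intrinsic backscattered field with arbitrary amplitude and phase.
intrinsic = 1.0 * np.exp(1j * 0.7)

def total_backscatter(tip_amplitude, tip_phase):
    """Coherent sum of the intrinsic and tip-induced backscattered fields."""
    return intrinsic + tip_amplitude * np.exp(1j * tip_phase)

# Scan the tip phase at matched amplitude and find the deepest cancellation.
phases = np.linspace(0, 2 * np.pi, 3601)
powers = np.abs(total_backscatter(1.0, phases)) ** 2
best_phase = phases[np.argmin(powers)]

suppression_db = 10 * np.log10(powers.min() / np.abs(intrinsic) ** 2)
print(f"best tip phase: {best_phase:.3f} rad, suppression: {suppression_db:.1f} dB")
```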
    “It is an unintuitive result: by introducing an additional scatterer, we can reduce the total backscattering,” says co-author and principal investigator Pascal Del’Haye at the Max Planck Institute for the Science of Light. The published paper shows a record suppression of more than 30 decibels compared to the intrinsic back reflections. In other words, the unwanted light is less than a thousandth of what it was prior to applying the method.
    “These findings are exciting as the technique can be applied to a wide range of existing and future microresonator technologies,” comments principal investigator Michael Vanner from the Quantum Measurement Lab at Imperial College London.
    For example, the method can be used to improve gyroscopes, sensors that help drones navigate, or to improve portable optical spectroscopy systems, opening up scenarios such as built-in sensors in smartphones for detecting dangerous gases or checking the quality of groceries. Furthermore, optical components and networks with better signal quality allow us to transmit more information, faster.

    Story Source:
    Materials provided by Imperial College London. Note: Content may be edited for style and length.

  • Nanosheet-based electronics could be one drop away

    Scientists at Japan’s Nagoya University and the National Institute for Materials Science have found that a simple one-drop approach is cheaper and faster for tiling functional nanosheets together in a single layer. If the process, described in the journal ACS Nano, can be scaled up, it could advance development of next-generation oxide electronics.
    “Drop casting is one of the most versatile and cost-effective methods for depositing nanomaterials on a solid surface,” says Nagoya University materials scientist Minoru Osada, the study’s corresponding author. “But it has serious drawbacks, one being the so-called coffee-ring effect: a pattern left by particles once the liquid they are in evaporates. We found, to our great surprise, that controlled convection by a pipette and a hotplate causes uniform deposition rather than the ring-like pattern, suggesting a new possibility for drop casting.”
    The process Osada describes is surprisingly simple, especially when compared to currently available tiling techniques, which can be costly, time-consuming, and wasteful. The scientists found that dropping a solution containing 2D nanosheets with a simple pipette onto a substrate heated on a hotplate to a temperature of about 100°C, followed by removal of the solution, causes the nanosheets to come together in about 30 seconds to form a tile-like layer.
    Analyses showed that the nanosheets were uniformly distributed over the substrate’s surface, with limited gaps. This is probably a result of surface tension driving how particles disperse, and the shape of the deposited droplet changing as the solution evaporates.
    The scientists used the process to deposit particle solutions of titanium dioxide, calcium niobate, ruthenium oxide, and graphene oxide. They also tried different sizes and shapes of a variety of substrates, including silicon, silicon dioxide, quartz glass, and polyethylene terephthalate (PET). They found they could control the surface tension and evaporation rate of the solution by adding a small amount of ethanol.
    Furthermore, the team successfully used this process to deposit multiple layers of tiled nanosheets, fabricating functional nanocoatings with various features: conducting, semiconducting, insulating, magnetic and photochromic.
    “We expect that our solution-based process using 2D nanosheets will have a great impact on environmentally benign manufacturing and oxide electronics,” says Osada. This could lead to next-generation transparent and flexible electronics, optoelectronics, magnetoelectronics, and power harvesting devices.

    Story Source:
    Materials provided by Nagoya University. Note: Content may be edited for style and length.

  • Artificial intelligence puts focus on the life of insects

    Scientists are combining artificial intelligence and advanced computer technology with biological know-how to identify insects with unprecedented speed. This opens up new possibilities for describing unknown species and for tracking the life of insects across space and time.
    Insects are the most diverse group of animals on Earth and only a small fraction of these have been found and formally described. In fact, there are so many species that discovering all of them in the near future is unlikely.
    This enormous diversity among insects also means that they have very different life histories and roles in the ecosystems.
    For instance, a hoverfly in Greenland lives a very different life than a mantid in the Brazilian rainforest. But even within each of these two groups, numerous species exist each with their own special characteristics and ecological roles.
    To examine the biology of each species and its interactions with other species, it is necessary to catch, identify, and count a lot of insects. It goes without saying that this is a very time-consuming process, which to a large degree, has constrained the ability of scientists to gain insights into how external factors shape the life of insects.
    A new study published in the Proceedings of the National Academy of Sciences shows how advanced computer technology and artificial intelligence can quickly and efficiently identify and count insects. It is a huge step forward in scientists’ efforts to understand how this important group of animals changes through time, for example in response to loss of habitat and climate change.

    Deep Learning
    “With the help of advanced camera technology, we can now collect millions of photos at our field sites. When we, at the same time, teach the computer to tell the different species apart, the computer can quickly identify the different species in the images and count how many it found of each of them. It is a game-changer compared to having a person with binoculars in the field or in front of the microscope in the lab who manually identifies and counts the animals,” explains senior scientist Toke T. Høye from the Department of Bioscience and the Arctic Research Centre at Aarhus University, who headed the new study. The international team behind the study included biologists, statisticians, and mechanical, electrical and software engineers.
    The methods described in the paper fall under the umbrella term deep learning, a form of artificial intelligence mostly used in other areas of research, such as the development of driverless cars. But now the researchers have demonstrated that the technology can be an alternative to the laborious task of manually observing insects in their natural environment, as well as the tasks of sorting and identifying insect samples.
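    As a purely illustrative sketch of the identify-and-count step (not the study’s actual pipeline, and with synthetic feature vectors standing in for camera images), one can train a small neural-network classifier to tell a few species apart and then tally its predictions over a batch of detections.

```python
import numpy as np
from collections import Counter
from sklearn.neural_network import MLPClassifier

# Everything here is synthetic and illustrative: three made-up "species", each a cluster
# in a 32-dimensional feature space standing in for image-derived features.
rng = np.random.default_rng(0)
species = ["hoverfly", "beetle", "moth"]
centers = rng.normal(size=(len(species), 32))

def sample(n_per_species, noise=0.4):
    X = np.vstack([c + noise * rng.normal(size=(n_per_species, 32)) for c in centers])
    y = np.repeat(np.arange(len(species)), n_per_species)
    return X, y

X_train, y_train = sample(200)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Pretend these are detections cropped from one night of camera images, then count them.
X_new, _ = sample(50)
counts = Counter(species[label] for label in model.predict(X_new))
print("individuals counted per species:", dict(counts))
```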
    “We can use deep learning to find the needle in the haystack, so to speak: the specimen of a rare or undescribed species among all the specimens of widespread and common species. In the future, the computer can do all the trivial work, and we can focus on the most demanding tasks, such as describing new species that were until now unknown to the computer, and interpreting the wealth of new results we will have,” explains Toke T. Høye.
    And there are indeed many tasks ahead when it comes to entomology, the research field devoted to insects and other invertebrates. One challenge is the lack of good databases against which to compare unknown species with those that have already been described; another is that a proportionally larger share of researchers concentrate on well-known animals such as birds and mammals. With deep learning, the researchers expect to be able to advance knowledge about insects rapidly and considerably.

    Long time series are necessary
    To understand how insect populations change through time, observations need to be made in the same place and in the same way over a long time. Long time series of data are necessary.
    Some species become more numerous and others more rare, but to understand the mechanisms that cause these changes, it is critical that the same observations are made year after year.
    An easy method is to mount cameras in the same location and take pictures of the same local area. For instance, cameras can take a picture every minute. This will yield piles of data, which over the years can reveal how insects respond to warmer climates or to the changes caused by land management. Such data can become an important tool in ensuring a proper balance between human use and protection of natural resources.
    “There are still challenges ahead before these new methods can become widely available, but our study points to a number of results from other research disciplines, which can help solve the challenges for entomology. Here, a close interdisciplinary collaboration among biologists and engineers is critical,” says Toke T. Høye.

    Story Source:
    Materials provided by Aarhus University. Original written by Peter Bondo. Note: Content may be edited for style and length.

  • Why independent cultures think alike when it comes to categories: It's not in the brain

    Imagine you gave the exact same art pieces to two different groups of people and asked them to curate an art show. The art is radical and new. The groups never speak with one another, and they organize and plan all the installations independently. On opening night, imagine your surprise when the two art shows are nearly identical. How did these groups categorize and organize all the art the same way when they never spoke with one another?
    The dominant hypothesis is that people are born with categories already in their brains, but a study from the Network Dynamics Group (NDG) at the Annenberg School for Communication has discovered a novel explanation. In an experiment in which people were asked to categorize unfamiliar shapes, individuals and small groups created many different unique categorization systems while large groups created systems nearly identical to one another.
    “If people are all born seeing the world the same way, we would not observe so many differences in how individuals organize things,” says senior author Damon Centola, Professor of Communication, Sociology, and Engineering at the University of Pennsylvania. “But this raises a big scientific puzzle. If people are so different, why do anthropologists find the same categories, for instance for shapes, colors, and emotions, arising independently in many different cultures? Where do these categories come from and why is there so much similarity across independent populations?”
    To answer this question, the researchers assigned participants to various sized groups, ranging from 1 to 50, and then asked them to play an online game in which they were shown unfamiliar shapes that they then had to categorize in a meaningful way. All of the small groups invented wildly different ways of categorizing the shapes. Yet, when large groups were left to their own devices, each one independently invented a nearly identical category system.
    “If I assign an individual to a small group, they are much more likely to arrive at a category system that is very idiosyncratic and specific to them,” says lead author and Annenberg alum Douglas Guilbeault (Ph.D. ’20), now an Assistant Professor at the Haas School of Business at the University of California, Berkeley. “But if I assign that same individual to a large group, I can predict the category system that they will end up creating, regardless of whatever unique viewpoint that person happens to bring to the table.”
    “Even though we predicted it,” Centola adds, “I was nevertheless stunned to see it really happen. This result challenges many long-held ideas about culture and how it forms.”
    The explanation is connected to previous work conducted by the NDG on tipping points and how people interact within networks. As options are suggested within a network, certain ones begin to be reinforced as they are repeated through individuals’ interactions with one another, and eventually a particular idea has enough traction to take over and become dominant. This only applies to large enough networks, but according to Centola, even just 50 people is enough to see this phenomenon occur.
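    A minimal “naming game” simulation, a standard model of convention formation used here only to illustrate that reinforcement dynamic (it is not the study’s actual experiment), shows how repeated pairwise interactions drive one arbitrary option to dominate a whole population.

```python
import random
from collections import Counter

# Minimal naming-game sketch: agents repeatedly pair up and propose labels; a matched
# label is reinforced (both agents drop their alternatives), a missed label is remembered.
def simulate(n_agents, n_rounds=20000, seed=0):
    rng = random.Random(seed)
    inventories = [[] for _ in range(n_agents)]   # each agent's remembered labels
    invented = 0
    for _ in range(n_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:              # speaker invents a label if it has none
            inventories[speaker].append(f"label-{invented}")
            invented += 1
        label = rng.choice(inventories[speaker])
        if label in inventories[hearer]:          # success: both collapse to the shared label
            inventories[speaker] = [label]
            inventories[hearer] = [label]
        else:                                     # failure: hearer remembers the new label
            inventories[hearer].append(label)
    top_label, count = Counter(inv[0] for inv in inventories if inv).most_common(1)[0]
    return invented, count / n_agents

for n in (2, 10, 50):
    invented, share = simulate(n, seed=n)
    print(f"{n:>2} agents: {invented} labels invented, {share:.0%} converge on one label")
```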
    Centola and Guilbeault say they plan to build on their findings and apply them to a variety of real-world problems. One current study involves content moderation on Facebook and Twitter. Can the process of categorizing free speech versus hate speech (and thus what should be allowed versus removed) be improved if done in networks rather than by solitary individuals? Another current study is investigating how to use network interactions among physicians and other health care professionals to decrease the likelihood that patients will be incorrectly diagnosed or treated due to prejudice or bias, like racism or sexism. These topics are explored in Centola’s forthcoming book, CHANGE: How to Make Big Things Happen (Little, Brown & Co., 2021).
    “Many of the worst social problems reappear in every culture, which leads some to believe these problems are intrinsic to the human condition,” says Centola. “Our research shows that these problems are intrinsic to the social experiences humans have, not necessarily to humans themselves. If we can alter that social experience, we can change the way people organize things, and address some of the world’s greatest problems.”
    This study was partially funded by a Dissertation Award granted to Guilbeault by the Institute for Research on Innovation and Science at the University of Michigan.

    Story Source:
    Materials provided by University of Pennsylvania. Note: Content may be edited for style and length.

  • Tweaking AI software to function like a human brain improves computer's learning ability

    Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed such a model to mirror human visual learning.
    In the journal Frontiers in Computational Neuroscience, Maximilian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, a postdoctoral scholar at UC Berkeley, explain how the new approach vastly improves the ability of AI software to quickly learn new visual concepts.
    “Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” says Riesenhuber. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
    Humans can quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to “see” many examples of the same object to know what it is, Riesenhuber explains.
    The big change needed was in designing software to identify relationships between entire visual categories, instead of trying the more standard approach of identifying an object using only low-level and intermediate information, such as shape and color, Riesenhuber says.
    “The computational power of the brain’s hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects,” he says.

    Riesenhuber and Rule found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.
    Rule explains, “Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter.”
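    A stripped-down illustration of that idea (a synthetic sketch, not the authors’ model) is to describe a new category by its similarity to a bank of previously learned concept prototypes and then compare those similarity profiles, rather than comparing raw low-level features.

```python
import numpy as np

# Synthetic stand-ins: random "prototypes" play the role of previously learned concepts.
rng = np.random.default_rng(0)
n_features, n_known_concepts = 256, 20
known_prototypes = rng.normal(size=(n_known_concepts, n_features))

def concept_profile(x):
    """Describe an input by its (normalized) similarity to each previously learned concept."""
    sims = known_prototypes @ x
    return sims / np.linalg.norm(sims)

# A brand-new concept (say, "platypus") seen only once: store its concept profile.
new_example = rng.normal(size=n_features)
new_profile = concept_profile(new_example)

# Compare a noisy second view of the same concept and an unrelated input to that profile.
same_concept = new_example + 0.3 * rng.normal(size=n_features)
unrelated = rng.normal(size=n_features)

for name, x in [("same concept", same_concept), ("unrelated input", unrelated)]:
    score = float(concept_profile(x) @ new_profile)   # cosine similarity of concept profiles
    print(f"{name:>15}: similarity to the one-shot concept = {score:.2f}")
```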
    The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition. The anterior temporal lobe of the brain is thought to contain “abstract” concept representations that go beyond shape. These complex neural hierarchies for visual recognition allow humans to learn new tasks and, crucially, leverage prior learning.
    “By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe,” Riesenhuber says.
    Despite advances in AI, the human visual system is still the gold standard in terms of ability to generalize from few examples, robustly deal with image variations, and comprehend scenes, the scientists say.
    “Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber concludes.
    This work was supported in part by Lawrence Livermore National Laboratory and by National Science Foundation Graduate Research Fellowship grants (1026934 and 1232530).