More stories

  • Nanosheet-based electronics could be one drop away

    Scientists at Japan’s Nagoya University and the National Institute for Materials Science have found that a simple one-drop approach is cheaper and faster for tiling functional nanosheets together in a single layer. If the process, described in the journal ACS Nano, can be scaled up, it could advance development of next-generation oxide electronics.
    “Drop casting is one of the most versatile and cost-effective methods for depositing nanomaterials on a solid surface,” says Nagoya University materials scientist Minoru Osada, the study’s corresponding author. “But it has serious drawbacks, one being the so-called coffee-ring effect: a pattern left by particles once the liquid they are in evaporates. We found, to our great surprise, that controlled convection by a pipette and a hotplate causes uniform deposition rather than the ring-like pattern, suggesting a new possibility for drop casting.”
    The process Osada describes is surprisingly simple, especially when compared to currently available tiling techniques, which can be costly, time-consuming, and wasteful. The scientists found that dropping a solution containing 2D nanosheets with a simple pipette onto a substrate heated on a hotplate to a temperature of about 100°C, followed by removal of the solution, causes the nanosheets to come together in about 30 seconds to form a tile-like layer.
    Analyses showed that the nanosheets were uniformly distributed over the substrate’s surface, with limited gaps. This is probably a result of surface tension driving how particles disperse, and the shape of the deposited droplet changing as the solution evaporates.
    The scientists used the process to deposit particle solutions of titanium dioxide, calcium niobate, ruthenium oxide, and graphene oxide. They also tried different sizes and shapes of a variety of substrates, including silicon, silicon dioxide, quartz glass, and polyethylene terephthalate (PET). They found they could control the surface tension and evaporation rate of the solution by adding a small amount of ethanol.
    Furthermore, the team successfully used this process to deposit multiple layers of tiled nanosheets, fabricating functional nanocoatings with various features: conducting, semiconducting, insulating, magnetic and photochromic.
    “We expect that our solution-based process using 2D nanosheets will have a great impact on environmentally benign manufacturing and oxide electronics,” says Osada. This could lead to next-generation transparent and flexible electronics, optoelectronics, magnetoelectronics, and power harvesting devices.

    Story Source:
    Materials provided by Nagoya University. Note: Content may be edited for style and length.

  • Artificial intelligence puts focus on the life of insects

    Scientists are combining artificial intelligence and advanced computer technology with biological know-how to identify insects at superhuman speed. This opens up new possibilities for describing unknown species and for tracking the life of insects across space and time.
    Insects are the most diverse group of animals on Earth and only a small fraction of these have been found and formally described. In fact, there are so many species that discovering all of them in the near future is unlikely.
    This enormous diversity among insects also means that they have very different life histories and roles in the ecosystems.
    For instance, a hoverfly in Greenland lives a very different life than a mantid in the Brazilian rainforest. But even within each of these two groups, numerous species exist each with their own special characteristics and ecological roles.
    To examine the biology of each species and its interactions with other species, it is necessary to catch, identify, and count a lot of insects. It goes without saying that this is a very time-consuming process, which, to a large degree, has constrained the ability of scientists to gain insights into how external factors shape the life of insects.
    A new study published in the Proceedings of the National Academy of Sciences shows how advanced computer technology and artificial intelligence can quickly and efficiently identify and count insects. It is a huge step forward in scientists’ ability to understand how this important group of animals changes through time — for example in response to habitat loss and climate change.

    Deep Learning
    “With the help of advanced camera technology, we can now collect millions of photos at our field sites. When we, at the same time, teach the computer to tell the different species apart, the computer can quickly identify the different species in the images and count how many it found of each of them. It is a game-changer compared to having a person with binoculars in the field or in front of the microscope in the lab who manually identifies and counts the animals,” explains senior scientist Toke T. Høye from the Department of Bioscience and the Arctic Research Centre at Aarhus University, who headed the new study. The international team behind the study included biologists, statisticians, and mechanical, electrical, and software engineers.
    The methods described in the paper go by the umbrella term deep learning and are forms of artificial intelligence mostly used in other areas of research such as in the development of driverless cars. But now the researchers have demonstrated how the technology can be an alternative to the laborious task of manually observing insects in their natural environment as well as the tasks of sorting and identifying insect samples.
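    The core recipe is standard image classification. As a rough illustration (our own sketch, not the study’s published code), a small convolutional network can be trained on labeled insect crops and then used to tally each species in new camera images; the species names and image sizes below are made-up assumptions:

    ```python
    # Minimal sketch (not the study's pipeline): classify insect image crops
    # with a small CNN in PyTorch and count detections per species.
    import torch
    import torch.nn as nn

    SPECIES = ["hoverfly", "beetle", "moth"]  # hypothetical label set

    class InsectNet(nn.Module):
        def __init__(self, n_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):            # x: (batch, 3, 64, 64) image crops
            h = self.features(x)
            return self.classifier(h.flatten(1))

    model = InsectNet(len(SPECIES)).eval()   # untrained here; a stand-in
    crops = torch.rand(8, 3, 64, 64)         # stand-in for detected crops
    with torch.no_grad():
        preds = model(crops).argmax(dim=1)
    counts = {s: int((preds == i).sum()) for i, s in enumerate(SPECIES)}
    print(counts)                            # per-species tally for the image
    ```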
    “We can use deep learning to find the needle in the haystack, so to speak — the specimen of a rare or undescribed species among all the specimens of widespread and common species. In the future, the computer can take over all the trivial work, and we can focus on the most demanding tasks, such as describing species that were until now unknown to the computer, and interpreting the wealth of new results we will have,” explains Toke T. Høye.
    There are indeed many tasks ahead in research on insects and other invertebrates, a field known as entomology. One challenge is the lack of good databases for comparing unknown species to those that have already been described; another is that a proportionally larger share of researchers concentrate on well-known animals such as birds and mammals. With deep learning, the researchers expect to be able to advance knowledge about insects considerably faster.

    Long time series are necessary
    To understand how insect populations change through time, observations need to be made in the same place and in the same way over a long time. Long time series of data are essential.
    Some species become more numerous and others more rare, but to understand the mechanisms that cause these changes, it is critical that the same observations are made year after year.
    An easy method is to mount cameras in the same location and take pictures of the same local area, for instance one picture every minute. This produces piles of data which, over the years, can reveal how insects respond to a warmer climate or to changes caused by land management. Such data can become an important tool for ensuring a proper balance between human use and the protection of natural resources.
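    To make that concrete, here is a minimal sketch (our illustration, not the study’s software) of how per-image species counts, such as those produced by a classifier like the one sketched above, could be rolled up into a yearly time series; the records are invented:

    ```python
    # Aggregate timestamped per-image species counts into yearly totals so
    # population trends can be compared across years. Data are made up.
    from collections import defaultdict
    from datetime import datetime

    # (timestamp, species, count) records, e.g. from one image per minute
    detections = [
        ("2020-06-01T12:00", "hoverfly", 4),
        ("2020-06-01T12:01", "hoverfly", 5),
        ("2021-06-01T12:00", "hoverfly", 2),
    ]

    yearly = defaultdict(int)
    for stamp, species, count in detections:
        year = datetime.fromisoformat(stamp).year
        yearly[(species, year)] += count

    for (species, year), total in sorted(yearly.items()):
        print(f"{species} {year}: {total} individuals observed")
    ```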
    “There are still challenges ahead before these new methods can become widely available, but our study points to a number of results from other research disciplines, which can help solve the challenges for entomology. Here, a close interdisciplinary collaboration among biologists and engineers is critical,” says Toke T. Høye.

    Story Source:
    Materials provided by Aarhus University. Original written by Peter Bondo. Note: Content may be edited for style and length.

  • Why independent cultures think alike when it comes to categories: It's not in the brain

    Imagine you gave the exact same art pieces to two different groups of people and asked them to curate an art show. The art is radical and new. The groups never speak with one another, and they organize and plan all the installations independently. On opening night, imagine your surprise when the two art shows are nearly identical. How did these groups categorize and organize all the art the same way when they never spoke with one another?
    The dominant hypothesis is that people are born with categories already in their brains, but a study from the Network Dynamics Group (NDG) at the Annenberg School for Communication has discovered a novel explanation. In an experiment in which people were asked to categorize unfamiliar shapes, individuals and small groups created many different unique categorization systems while large groups created systems nearly identical to one another.
    “If people are all born seeing the world the same way, we would not observe so many differences in how individuals organize things,” says senior author Damon Centola, Professor of Communication, Sociology, and Engineering at the University of Pennsylvania. “But this raises a big scientific puzzle. If people are so different, why do anthropologists find the same categories, for instance for shapes, colors, and emotions, arising independently in many different cultures? Where do these categories come from and why is there so much similarity across independent populations?”
    To answer this question, the researchers assigned participants to various sized groups, ranging from 1 to 50, and then asked them to play an online game in which they were shown unfamiliar shapes that they then had to categorize in a meaningful way. All of the small groups invented wildly different ways of categorizing the shapes. Yet, when large groups were left to their own devices, each one independently invented a nearly identical category system.
    “If I assign an individual to a small group, they are much more likely to arrive at a category system that is very idiosyncratic and specific to them,” says lead author and Annenberg alum Douglas Guilbeault (Ph.D. ’20), now an Assistant Professor at the Haas School of Business at the University of California, Berkeley. “But if I assign that same individual to a large group, I can predict the category system that they will end up creating, regardless of whatever unique viewpoint that person happens to bring to the table.”
    “Even though we predicted it,” Centola adds, “I was nevertheless stunned to see it really happen. This result challenges many long-held ideas about culture and how it forms.”
    The explanation is connected to previous work conducted by the NDG on tipping points and how people interact within networks. As options are suggested within a network, certain ones begin to be reinforced as they are repeated through individuals’ interactions with one another, and eventually a particular idea has enough traction to take over and become dominant. This only applies to large enough networks, but according to Centola, even just 50 people is enough to see this phenomenon occur.
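    The dynamics Centola describes resemble the naming-game models studied in network science. The toy simulation below (our sketch, not the study’s code) shows the reinforcement effect: paired agents propose labels, successful matches prune both agents’ inventories, and one convention eventually dominates the group:

    ```python
    # Toy naming-game simulation illustrating convention formation in a
    # group of ~50 agents; repeated successes reinforce one name.
    import random

    random.seed(1)
    N = 50                               # group size; ~50 suffices per Centola
    agents = [[] for _ in range(N)]      # each agent's inventory of names

    def play_round():
        speaker, hearer = random.sample(range(N), 2)
        if not agents[speaker]:                  # speaker invents a name
            agents[speaker].append(f"name{speaker}")
        name = random.choice(agents[speaker])
        if name in agents[hearer]:               # success: both collapse
            agents[speaker] = [name]             # to the winning name
            agents[hearer] = [name]
        else:                                    # failure: hearer learns it
            agents[hearer].append(name)

    for step in range(20000):
        play_round()

    names = {n for inv in agents for n in inv}
    print(f"distinct names remaining: {len(names)}")  # converges toward 1
    ```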
    Centola and Guilbeault say they plan to build on their findings and apply them to a variety of real-world problems. One current study involves content moderation on Facebook and Twitter. Can the process of categorizing free speech versus hate speech (and thus what should be allowed versus removed) be improved if done in networks rather than by solitary individuals? Another current study is investigating how to use network interactions among physicians and other health care professionals to decrease the likelihood that patients will be incorrectly diagnosed or treated due to prejudice or bias, like racism or sexism. These topics are explored in Centola’s forthcoming book, CHANGE: How to Make Big Things Happen (Little, Brown & Co., 2021).
    “Many of the worst social problems reappear in every culture, which leads some to believe these problems are intrinsic to the human condition,” says Centola. “Our research shows that these problems are intrinsic to the social experiences humans have, not necessarily to humans themselves. If we can alter that social experience, we can change the way people organize things, and address some of the world’s greatest problems.”
    This study was partially funded by a Dissertation Award granted to Guilbeault by the Institute for Research on Innovation and Science at the University of Michigan.

    Story Source:
    Materials provided by University of Pennsylvania. Note: Content may be edited for style and length.

  • Tweaking AI software to function like a human brain improves computer's learning ability

    Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning.
    In the journal Frontiers in Computational Neuroscience, Maximilian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, a postdoctoral scholar at UC Berkeley, explain how the new approach vastly improves the ability of AI software to quickly learn new visual concepts.
    “Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” says Riesenhuber. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
    Humans can quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to “see” many examples of the same object to know what it is, Riesenhuber explains.
    The big change needed was in designing software to identify relationships between entire visual categories, instead of trying the more standard approach of identifying an object using only low-level and intermediate information, such as shape and color, Riesenhuber says.
    “The computational power of the brain’s hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects,” he says.

    Riesenhuber and Rule found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.
    Rule explains, “Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter.”
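    As a rough illustration of this idea (our own sketch, not the authors’ model), a new concept can be encoded as a vector of similarities to previously learned concept prototypes, so that even a single example yields a usable representation; all vectors below are random stand-ins for real features:

    ```python
    # One-shot learning sketch: describe a new example by its similarity to
    # known concept prototypes, then match queries in that concept space.
    import numpy as np

    rng = np.random.default_rng(0)
    prototypes = {"duck": rng.normal(size=64),      # previously learned
                  "beaver": rng.normal(size=64),    # high-level concepts
                  "sea_otter": rng.normal(size=64)}

    def concept_code(x):
        """Represent x as similarities to known concepts ('a platypus looks
        a bit like a duck, a beaver, and a sea otter')."""
        return np.array([np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p))
                         for p in prototypes.values()])

    platypus_example = rng.normal(size=64)          # a single training example
    platypus_code = concept_code(platypus_example)  # concept learned one-shot

    query = platypus_example + 0.1 * rng.normal(size=64)
    similarity = np.dot(concept_code(query), platypus_code)
    print(f"match score against the one-shot concept: {similarity:.2f}")
    ```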
    The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition. The anterior temporal lobe of the brain is thought to contain “abstract” concept representations that go beyond shape. These complex neural hierarchies for visual recognition allow humans to learn new tasks and, crucially, leverage prior learning.
    “By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe,” Riesenhuber says.
    Despite advances in AI, the human visual system is still the gold standard in terms of ability to generalize from few examples, robustly deal with image variations, and comprehend scenes, the scientists say.
    “Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber concludes.
    This work was supported in part by Lawrence Livermore National Laboratory and by National Science Foundation Graduate Research Fellowship grants (1026934 and 1232530).

  • Comprehensive characterization of vascular structure in plants

    The leaf vasculature of plants plays a key role in transporting solutes from where they are made — for example from the plant cells driving photosynthesis — to where they are stored or used. Sugars and amino acids are transported from the leaves to the roots and the seeds via the conductive pathways of the phloem.
    Phloem is the part of the tissue in vascular plants that comprises the sieve elements — where actual translocation takes place — and the companion cells as well as the phloem parenchyma cells. The leaf veins consist of at least seven distinct cell types, with specific roles in transport, metabolism and signalling.
    Little is known about the vascular cells in leaves, in particular the phloem parenchyma. Two teams led by Alexander von Humboldt professors from Düsseldorf and Tübingen, a colleague from Urbana-Champaign in Illinois, USA, and a chair of bioinformatics from Düsseldorf have presented the first comprehensive analysis of the vascular cells in the leaves of thale cress (Arabidopsis thaliana) using single-cell sequencing.
    The team led by Alexander von Humboldt Professor Dr. Marja Timmermans of Tübingen University was the first to use single-cell sequencing in plants, applying it to characterise root cells. In collaboration with Prof. Timmermans’ group, researchers working with Alexander von Humboldt Professor Dr. Wolf Frommer in Düsseldorf succeeded for the first time in isolating leaf vascular cells and creating an atlas of all the RNA molecules (the transcriptome) of the leaf vasculature. By analysing the metabolic pathways, they were able to define the roles of the different cell types.
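    For readers unfamiliar with the method, a typical single-cell RNA-seq clustering analysis looks like the following sketch using the open-source scanpy library; this is a generic workflow, not the authors’ actual pipeline, and the input file name is hypothetical:

    ```python
    # Generic single-cell RNA-seq clustering workflow with scanpy:
    # a sketch of the kind of analysis involved, not the authors' code.
    import scanpy as sc

    adata = sc.read_h5ad("leaf_vasculature_cells.h5ad")  # cells x genes

    sc.pp.filter_cells(adata, min_genes=200)      # drop low-quality cells
    sc.pp.normalize_total(adata, target_sum=1e4)  # normalize sequencing depth
    sc.pp.log1p(adata)                            # variance-stabilizing transform
    sc.pp.highly_variable_genes(adata, n_top_genes=2000)

    sc.tl.pca(adata, n_comps=50)                  # reduce dimensionality
    sc.pp.neighbors(adata, n_neighbors=15)        # build cell-cell graph
    sc.tl.leiden(adata)                           # cluster into putative cell types
    sc.tl.rank_genes_groups(adata, "leiden")      # marker genes per cluster,
                                                  # e.g. SWEET or UmamiT transcripts
    sc.tl.umap(adata)                             # 2D embedding for visualization
    sc.pl.umap(adata, color="leiden")
    ```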
    Among other things, the research team demonstrated for the first time that transcripts of the sugar (SWEET) and amino acid (UmamiT) transporters are found in the phloem parenchyma cells, which move these compounds from where they are produced to the vascular system. The compounds are subsequently actively imported into the sieve element/companion cell complex via a second group of transporters (SUT and AAP) and then exported from the source leaf.
    These extensive investigations involved close collaborations with HHU bioinformatics researchers in Prof. Dr. Martin Lercher’s working group. Together they were able to determine that phloem parenchyma and companion cells have complementary metabolic pathways and are therefore in a position to control the composition of the phloem sap.
    First author Dr. Ji-Yun Kim, a work group leader at HHU, explains: “Our analysis provides completely new insights into the leaf vasculature and the role and relationship of the individual leaf cell types.” Institute Head Prof. Frommer adds: “The cooperation between the four working groups made it possible to use new methods to gain insights for the first time into these important cells in plant transport pathways and thus to obtain a basis for a better understanding of plant metabolism.”

    Story Source:
    Materials provided by Heinrich-Heine University Duesseldorf. Original written by Arne Claussen. Note: Content may be edited for style and length.

  • Researchers report quantum-limit-approaching chemical sensing chip

    University at Buffalo researchers are reporting an advancement of a chemical sensing chip that could lead to handheld devices that detect trace chemicals — everything from illicit drugs to pollution — as quickly as a breathalyzer identifies alcohol.
    The chip, which also may have uses in food safety monitoring, anti-counterfeiting and other fields where trace chemicals are analyzed, is described in a study that appears on the cover of the Dec. 17 edition of the journal Advanced Optical Materials.
    “There is a great need for portable and cost-effective chemical sensors in many areas, especially drug abuse,” says the study’s lead author Qiaoqiang Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences.
    The work builds upon previous research Gan’s lab led that involved creating a chip that traps light at the edges of gold and silver nanoparticles.
    When biological or chemical molecules land on the chip’s surface, some of the captured light interacts with the molecules and is “scattered” into light of new energies. This effect occurs in recognizable patterns that act as fingerprints of chemical or biological molecules, revealing information about what compounds are present.
    Because all chemicals have unique light-scattering signatures, the technology could eventually be integrated into a handheld device for detecting drugs in blood, breath, urine and other biological samples. It could also be incorporated into other devices to identify chemicals in the air or from water, as well as other surfaces.
    The sensing method is called surface-enhanced Raman spectroscopy (SERS).
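    The identification step amounts to comparing a measured spectrum against a library of reference fingerprints. The sketch below (our illustration, not UB’s software) matches synthetic spectra by cosine similarity:

    ```python
    # Match a measured Raman-style spectrum against reference fingerprints
    # by cosine similarity. All spectra here are synthetic stand-ins.
    import numpy as np

    wavenumbers = np.linspace(400, 1800, 700)        # spectral axis, cm^-1

    def peak(center, width=12.0):
        return np.exp(-((wavenumbers - center) ** 2) / (2 * width ** 2))

    library = {                                      # hypothetical fingerprints
        "compound_A": peak(520) + 0.6 * peak(1001),
        "compound_B": peak(780) + 0.8 * peak(1350),
    }

    rng = np.random.default_rng(0)
    measured = library["compound_B"] + 0.05 * rng.normal(size=wavenumbers.size)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best = max(library, key=lambda name: cosine(measured, library[name]))
    print(f"best match: {best}")                     # -> compound_B
    ```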
    While effective, the chip the Gan group previously created wasn’t uniform in its design. Because the gold and silver nanoparticles were spaced unevenly, molecules could be difficult to identify, especially if they landed on different locations of the chip.
    Gan and a team of researchers — featuring members of his lab at UB, and researchers from the University of Shanghai for Science and Technology in China, and King Abdullah University of Science and Technology in Saudi Arabia — have been working to remedy this shortcoming.
    The team used four molecules (BZT, 4-MBA, BPT, and TPT), each of a different length, in the fabrication process to control the size of the gaps between the gold and silver nanoparticles. The updated fabrication process is based upon two techniques, atomic layer deposition and self-assembled monolayers, as opposed to electron-beam lithography, the more common and expensive method for making SERS chips.
    The result is a SERS chip with unprecedented uniformity that is relatively inexpensive to produce. More importantly, it approaches quantum-limit sensing capabilities, says Gan, something that was a challenge for conventional SERS chips.
    “We think the chip will have many uses in addition to handheld drug detection devices,” says the first author of this work, Nan Zhang, PhD, a postdoctoral researcher in Gan’s lab. “For example, it could be used to assess air and water pollution or the safety of food. It could be useful in the security and defense sectors, and it has tremendous potential in health care.”

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon. Note: Content may be edited for style and length.

  • 2D compound shows unique versatility

    An atypical two-dimensional sandwich has the tasty part on the outside for scientists and engineers developing multifunctional nanodevices.
    An atom-thin layer of semiconducting antimony paired with ferroelectric indium selenide would display distinct properties depending on which side faces up and on the direction of its polarization, which can be set by an external electric field.
    The field could be used to stabilize indium selenide’s polarization, a long-sought property that tends to be wrecked by internal fields in materials like perovskites but would be highly useful for solar energy applications.
    Calculations by Rice materials theorist Boris Yakobson, lead author and researcher Jun-Jie Zhang, and graduate student Dongyang Zhu show that switching the material’s polarization with an external electric field makes it either a simple insulator with a band gap suitable for visible-light absorption or a topological insulator, a material that conducts electrons only along its surface.
    Turning the field inward would make the material good for solar panels. Turning it outward could make it useful as a spintronic device for quantum computing.
    The lab’s study appears in the American Chemical Society journal Nano Letters.
    “The ability to switch at will the material’s electronic band structure is a very attractive knob,” Yakobson said. “The strong coupling between ferroelectric state and topological order can help: the applied voltage switches the topology through the ferroelectric polarization, which serves as an intermediary. This provides a new paradigm for device engineering and control.”
    Weakly bound by the van der Waals force, the layers change their physical configuration when exposed to an electric field. That changes the compound’s band gap, and the change is not trivial, Zhang said.
    “The central selenium atoms shift along with switching ferroelectric polarization,” he said. “This kind of switching in indium selenide has been observed in recent experiments.”
    Unlike other structures proposed and ultimately made by experimentalists — boron buckyballs are a good example — the switching material may be relatively simple to make, according to the researchers.
    “As opposed to typical bulk solids, easy exfoliation of van der Waals crystals along the low surface energy plane realistically allows their reassembly into heterobilayers, opening new possibilities like the one we discovered here,” Zhang said.

    Story Source:
    Materials provided by Rice University. Note: Content may be edited for style and length.

  • Using light to revolutionize artificial intelligence

    An international team of researchers, including Professor Roberto Morandotti of the Institut national de la recherche scientifique (INRS), has introduced a new photonic processor that could revolutionize artificial intelligence, as reported in the journal Nature.
    Artificial neural networks, layers of interconnected artificial neurons, are of great interest for machine learning tasks such as speech recognition and medical diagnosis. But electronic computing hardware is nearing the limit of its capabilities, even as the demand for greater computing power keeps growing.
    The researchers turned to photons instead of electrons to carry information at the speed of light. Not only can photons process information much faster than electrons, they are also the basis of the current Internet, where it is important to avoid the so-called electronic bottleneck (the conversion of an optical signal into an electronic signal, and vice versa).
    Increased Computing Speed
    The proposed optical neural network is capable of recognizing and processing large-scale data and images at ultra-high computing speeds, beyond ten trillion operations per second. Professor Morandotti, an expert in integrated photonics, explains how an optical frequency comb, a light source comprised of many equally spaced frequency modes, was integrated into a computer chip and used as a power-efficient source for optical computing.
    This device performs a type of matrix-vector multiplication known as a convolution for image-processing applications. It shows promising results for real-time, massive-data machine learning tasks, such as identifying faces in camera feeds or pathologies in clinical scans. The approach is scalable and trainable for much more complex networks serving demanding applications such as unmanned vehicles and real-time video recognition, allowing, in the not-so-distant future, full integration with the up-and-coming Internet of Things.
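    The underlying operation is easy to state in code: a convolution can be unrolled into exactly the kind of matrix-vector product such hardware computes. The sketch below (our illustration, with toy sizes, not the photonic implementation) checks the equivalence numerically:

    ```python
    # Express a 2D convolution as a matrix-vector multiplication (im2col)
    # and verify it against a direct sliding-window computation.
    import numpy as np

    image = np.arange(16.0).reshape(4, 4)    # toy 4x4 image
    kernel = np.array([[1.0, 0.0],           # toy 2x2 filter
                       [0.0, -1.0]])

    # im2col: unroll each 2x2 patch into a row, so the convolution becomes
    # a single matrix-vector multiplication.
    patches = np.array([image[i:i+2, j:j+2].ravel()
                        for i in range(3) for j in range(3)])
    conv_as_matvec = patches @ kernel.ravel()        # shape (9,)

    # Cross-check against the direct sliding-window convolution.
    direct = np.array([[np.sum(image[i:i+2, j:j+2] * kernel)
                        for j in range(3)] for i in range(3)])
    assert np.allclose(conv_as_matvec.reshape(3, 3), direct)
    print(conv_as_matvec.reshape(3, 3))
    ```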

    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.