More stories

  • Sustainable chemistry at the quantum level

    Developing catalysts for sustainable fuel and chemical production requires a kind of Goldilocks effect: some catalysts are not effective enough, while others are too expensive to be practical. Catalyst testing also takes a lot of time and resources. New breakthroughs in computational quantum chemistry, however, hold promise for discovering catalysts that are “just right” thousands of times faster than standard approaches.
    University of Pittsburgh Associate Professor John A. Keith and his lab group at the Swanson School of Engineering are using new quantum chemistry computing procedures to categorize hypothetical electrocatalysts that are “too slow” or “too expensive,” far more thoroughly and quickly than was considered possible a few years ago. Keith is also the Richard King Mellon Faculty Fellow in Energy in the Swanson School’s Department of Chemical and Petroleum Engineering.
    The Keith Group’s research compilation, “Computational Quantum Chemical Explorations of Chemical/Material Space for Efficient Electrocatalysts,” was featured this month in Interface, a quarterly magazine of The Electrochemical Society.
    “For decades, catalyst development was the result of trial and error: years-long development and testing in the lab, giving us a basic understanding of how catalytic processes work. Today, computational modeling provides us with new insight into these reactions at the molecular level,” Keith explained. “Most exciting, however, is computational quantum chemistry, which can simulate the structures and dynamics of many atoms at a time. Coupled with the growing field of machine learning, we can more quickly and precisely predict and simulate catalytic models.”
    In the article, Keith explained a three-pronged approach for predicting novel electrocatalysts: 1) analyzing hypothetical reaction paths; 2) predicting ideal electrochemical environments; and 3) high-throughput screening powered by alchemical perturbation density functional theory and machine learning. The article explains how these approaches can transform how engineers and scientists develop electrocatalysts needed for society.
    “These emerging computational methods can allow researchers to be more than a thousand times as effective at discovering new systems compared to standard protocols,” Keith said. “For centuries chemistry and materials science relied on traditional Edisonian models of laboratory exploration, which bring far more failures than successes and thus a lot of wasted time and resources. Traditional computational quantum chemistry has accelerated these efforts, but the newest methods supercharge them. This helps researchers better pinpoint the undiscovered catalysts society desperately needs for a sustainable future.”
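    As a rough illustration of the screening idea only, the sketch below (written in Python with scikit-learn, a choice made here rather than in the article) trains a cheap surrogate model on a small number of “expensive” calculations and then ranks a large pool of hypothetical candidates. The descriptors, the stand-in target value and the choice of model are placeholders, not the Keith Group's actual alchemical perturbation or machine-learning workflow.

```python
# Illustrative sketch only: surrogate-model screening of hypothetical catalysts.
# Descriptors, the stand-in "overpotential" target, and the regressor are all
# placeholders, not the Keith Group's actual APDFT/machine-learning workflow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical descriptor vectors for 10,000 candidate electrocatalysts
candidates = rng.uniform(size=(10_000, 5))

# Pretend only 200 of them received an expensive quantum-chemistry calculation
train_idx = rng.choice(len(candidates), size=200, replace=False)
X_train = candidates[train_idx]
y_train = np.abs(X_train @ rng.uniform(size=5) - 1.2)   # synthetic stand-in target

# Train a cheap surrogate on the expensive results...
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# ...then screen the whole pool cheaply and keep the most promising few
predicted = surrogate.predict(candidates)
top = np.argsort(predicted)[:20]
print("most promising candidate indices:", top)
```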

    Story Source:
    Materials provided by University of Pittsburgh. Note: Content may be edited for style and length.

  • Spray-on clear coatings for cheaper smart windows

    Researchers have developed a spray-on method for making conductive clear coatings, or transparent electrodes. Fast, scalable and based on cheaper materials, the new approach could simplify the fabrication of smart windows and low-emissivity glass. It can also be optimised to produce coatings tailored to the requirements of different applications of transparent electrodes, like touchscreen displays, LED lighting and solar panels.

  • Break it down: A new way to address common computing problem

    In this era of big data, some problems in scientific computing are so large and complex, and contain so much information, that solving them would be beyond the reach of most computers.
    Now, researchers at the McKelvey School of Engineering at Washington University in St. Louis have developed a new algorithm for solving a common class of problem — known as linear inverse problems — by breaking them down into smaller tasks, each of which can be solved in parallel on standard computers.
    The research, from the lab of Jr-Shin Li, professor in the Preston M. Green Department of Electrical & Systems Engineering, was published July 30 in the journal Scientific Reports.
    In addition to providing a framework for solving this class of problems, the approach, called Parallel Residual Projection (PRP), also delivers enhanced security and mitigates privacy concerns.
    Linear inverse problems take observational data and try to find a model that describes it. In their simplest form, they may look familiar: 2x+y = 1, x-y = 3. Many a high school student has solved for x and y without the help of a supercomputer.
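    For a system this small, any linear algebra routine recovers the model directly. A minimal sketch (in Python with NumPy, a language choice made here rather than in the paper) writes those two equations in the standard matrix form and solves them in one call:

```python
# The toy system from the article, 2x + y = 1 and x - y = 3,
# written as A @ v = b and solved directly.
import numpy as np

A = np.array([[2.0,  1.0],
              [1.0, -1.0]])
b = np.array([1.0, 3.0])

v = np.linalg.solve(A, b)   # v = [x, y]
print(v)                    # -> [ 1.33333333 -1.66666667]
```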
    And as more researchers in different fields collect increasing amounts of data in order to gain deeper insights, these equations continue to grow in size and complexity.

    “We developed a computational framework to solve for the case when there are thousands or millions of such equations and variables,” Li said.
    The project was conceived while the lab was working on research problems from other fields involving big data. Li’s lab had been working with a biologist researching the network of neurons involved in the sleep-wake cycle.
    “In the context of network inference, looking at a network of neurons, the inverse problem looks like this,” said Vignesh Narayanan, a research associate in Li’s lab:
    Given the data recorded from a bunch of neurons, what is the ‘model’ that describes how these neurons are connected with each other?
    “In an earlier work from our lab, we showed that this inference problem can be formulated as a linear inverse problem,” Narayanan said.

    If the system has a few hundred nodes — in this case, the nodes are the neurons — the matrix which describes the interaction among neurons could be millions by millions; that’s huge.
    “Storing this matrix itself exceeds the memory of a common desktop,” said Wei Miao, a PhD student in Li’s lab.
    Add to that the fact that such complex systems are often dynamic, as is our understanding of them. “Say we already have a solution, but now I want to consider interaction of some additional cells,” Miao said. Instead of starting a new problem and solving it from scratch, PRP adds flexibility and scalability. “You can manipulate the problem any way you want.”
    Even if you do happen to have a supercomputer, Miao said, “There is still a chance that by breaking down the big problem, you can solve it faster.”
    In addition to breaking down a complex problem and solving the pieces in parallel on different machines, the computational framework also, importantly, consolidates the results and computes an accurate solution to the initial problem.
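    To make that idea concrete, here is a deliberately simplified sketch, not the published PRP algorithm: a consistent linear system is split into row blocks, each block computes a correction from its own residual (work that could run on separate machines), and the corrections are then consolidated into a single updated solution. The splitting scheme and the averaging step are illustrative assumptions only.

```python
# Toy illustration of "split, solve in parallel, consolidate" for a consistent
# linear system A x = b. This is NOT the published Parallel Residual Projection
# algorithm; the block splitting and the averaging update are assumptions made
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # number of unknowns
A = rng.standard_normal((400, n))
x_true = rng.standard_normal(n)
b = A @ x_true                             # consistent right-hand side

blocks = np.array_split(np.arange(A.shape[0]), 8)   # 8 row blocks ("workers")
x = np.zeros(n)
for sweep in range(200):
    # Each block forms its own residual and a local correction (parallelisable)
    corrections = [
        np.linalg.lstsq(A[rows], b[rows] - A[rows] @ x, rcond=None)[0]
        for rows in blocks
    ]
    # Consolidation step: average the block corrections into one update
    x = x + np.mean(corrections, axis=0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```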
    An unintentional benefit of PRP is enhanced data security and privacy. When credit card companies use algorithms to investigate fraud, or a hospital wants to analyze its massive database, “No one wants to give all of that access to one individual,” Narayanan said.
    “This was an extra benefit that we didn’t even strive for,” Narayanan said. More

  • How thoughts could one day control electronic prostheses, wirelessly

    Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.
    The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.
    The current generation of these devices records enormous amounts of neural activity, then transmits these brain signals through wires to a computer. But when researchers tried to create wireless brain-computer interfaces to do this, transmitting the data took so much power that the devices generated too much heat to be safe for the patient.
    Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, has shown how it would be possible to create a wireless device capable of gathering and transmitting accurate neural signals while using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients a freer range of motion.
    Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.
    The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.
    To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.
    As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
    The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

    Story Source:
    Materials provided by Stanford School of Engineering. Note: Content may be edited for style and length.

  • Understanding why some children enjoy TV more than others

    Children’s own temperament could be driving the amount of TV they watch — according to new research from the University of East Anglia and Birkbeck, University of London.
    New findings published today show that the brain responses of 10-month-old babies could predict whether they would enjoy watching fast-paced TV shows six months later.
    The research team says that the findings are important for the ongoing debate around early TV exposure.
    Lead researcher Dr Teodora Gliga, from UEA’s School of Psychology, said: “The sensory environment surrounding babies and young children is really complex and cluttered, but the ability to pay attention to something is one of the first developmental milestones in babies.
    “Even before they can ask questions, children vary greatly in how driven they are to explore their surroundings and engage with new sights or sounds.
    “We wanted to find out why babies appear to be so different in the way that they seek out new visual sensory stimulation — such as being attracted to shiny objects, bright colours or moving images on TV.

    “There have been various theories to explain these differences, with some suggesting that infants who are less sensitive will seek less stimulation, others suggesting that some infants are simply faster at processing information — an ability which could drive them to seek out new stimulation more frequently.
    “In this study we bring support for a third theory by showing that a preference for novelty makes some infants seek more varied stimulation.”
    Using a brain imaging method known as electroencephalography (EEG), the research team studied brain activity in 48 10-month-old babies while they watched a 40-second clip from the Disney movie Fantasia on repeat.
    They studied how the children’s brain waves responded to random interruptions to the movie, in the form of a black and white checkerboard suddenly flashing on screen.
    Dr Gliga said: “As the babies watched the repeated video clip, EEG responses told us that they learned its content. We expected that, as the video became less novel and therefore engaged their attention less, they would start noticing the checkerboard.

    “But some of the babies started responding to the checkerboard earlier on while still learning about the video — suggesting that these children had had enough of the old information.
    “Conversely, others remained engaged with the video even when there was not much to learn from it,” she added.
    Parents and carers were also asked to fill in a questionnaire about their babies’ sensory behaviours — including whether they enjoyed watching fast-paced brightly-coloured TV shows. This was followed up with a second similar questionnaire six months later.
    Dr Gliga said: “It was very interesting to find that brain responses at 10 months, indicating how quickly infants switched their attention from the repeated video to the checkerboard, predicted whether they would enjoy watching fast-paced TV shows six months later.
    “These findings are important for the ongoing debate on early TV exposure since they suggest that children’s temperament may drive differences in TV exposure.
    “It is unlikely that our findings are explained by early TV exposure since parents reported that only a small proportion of 10-month-olds were watching TV shows,” she added.
    Elena Serena Piccardi, from Birkbeck, University of London, said: “The next part of our research will aim to understand exactly what drives these individual differences in attention to novelty, including the role that early environments may have.
    “Exploration and discovery are essential for children’s learning and cognitive development. Yet, different children may benefit from different environments for their learning. As such, this research will help us understand how individualized environments may nurture children’s learning, promote their cognitive development and, ultimately, support achievement of their full potential.”
    The research was led by UEA in collaboration with Birkbeck, University of London and Cambridge University. It was funded by the Medical Research Council.

  • Recovering data: Neural network model finds small objects in dense images

    In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method that can accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. Employing a neural network approach designed to detect patterns, the NIST model has many possible applications in modern life.
    NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.
    “The purpose of the project was to recover the lost data in journal articles,” NIST computer scientist Adele Peskin explained. “But the study of small, dense object detection has a lot of other applications. Object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate.”
    The researchers took the data from journal articles dating as far back as the early 1900s in a database of metallic properties at NIST’s Thermodynamics Research Center (TRC). Often the results were presented only in graphical format, sometimes drawn by hand and degraded by scanning or photocopying. The researchers wanted to extract the locations of data points to recover the original, raw data for additional analysis. Until now such data have been extracted manually.
    The images present data points with a variety of different markers, mainly circles, triangles, and squares, both filled and open, of varying size and clarity. Such geometrical markers are often used to label data in a scientific graph. Text, numbers and other symbols, which can falsely appear to be data points, were manually removed from a subset of the figures with graphics editing software before training the neural networks.
    Accurately detecting and localizing the data markers was a challenge for several reasons. The markers are inconsistent in clarity and exact shape; they may be open or filled and are sometimes fuzzy or distorted. Some circles appear extremely circular, for example, whereas others do not have enough pixels to fully define their shape. In addition, many images contain very dense patches of overlapping circles, squares, and triangles.
    The researchers sought to create a network model that identified plot points at least as accurately as manual detection — within 5 pixels of the actual location on a plot size of several thousand pixels per side.
    As described in a new journal paper, NIST researchers adopted a network architecture called U-Net, originally developed by German researchers for analyzing biomedical images. First the image dimensions are contracted to reduce spatial information, and then layers of feature and context information are added to build up precise, high-resolution results.
    To help train the network to classify marker shapes and locate their centers, the researchers experimented with four ways of marking the training data with masks, using different-sized center markings and outlines for each geometric object.
    The researchers found that adding more information to the masks, such as thicker outlines, increased the accuracy of classifying object shapes but reduced the accuracy of pinpointing their locations on the plots. In the end, the researchers combined the best aspects of several models to get the best classification and smallest location errors. Altering the masks turned out to be the best way to improve network performance, more effective than other approaches such as small changes at the end of the network.
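    For readers unfamiliar with the architecture, the sketch below is a deliberately tiny U-Net-style encoder/decoder written in PyTorch. The depth, channel counts and four-class output (background plus circle, triangle and square markers) are illustrative assumptions, not the configuration reported in the NIST paper.

```python
# Minimal U-Net-style encoder/decoder for per-pixel classification of plot
# markers. Depth, channel counts and the four output classes are illustrative
# assumptions, not the NIST model's actual configuration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4):             # background + 3 marker shapes
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # 1/2 resolution
        bottom = self.bottom(self.pool(e2))      # 1/4 resolution (context)
        d2 = self.dec2(torch.cat([self.up2(bottom), e2], dim=1))  # skip link
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))      # skip link
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # one grayscale plot crop
print(logits.shape)                               # torch.Size([1, 4, 128, 128])
```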
    The network’s best performance — an accuracy of 97% in locating object centers — was possible only for a subset of images in which plot points were originally represented by very clear circles, triangles, and squares. The performance is good enough for the TRC to use the neural network to recover data from plots in newer journal papers.
    Although NIST researchers currently have no plans for follow-up studies, the neural network model “absolutely” could be applied to other image analysis problems, Peskin said.

  • Droplet spread from humans doesn’t always follow airflow

    If aerosol transmission of COVID-19 is confirmed to be significant, we will need to reconsider guidelines on social distancing, ventilation systems and shared spaces. Researchers in the U.K. believe a better understanding of droplet behaviors and their different dispersion mechanisms is also needed. In a new article, the group presents a model that demarcates differently sized droplets. This has implications for understanding airborne diseases, because the dispersion tests revealed the absence of intermediate-sized droplets.