More stories

  • Spray-on clear coatings for cheaper smart windows

    Researchers have developed a spray-on method for making conductive clear coatings, or transparent electrodes. Fast, scalable and based on cheaper materials, the new approach could simplify the fabrication of smart windows and low-emissivity glass. It can also be optimised to produce coatings tailored to the requirements of different applications of transparent electrodes, like touchscreen displays, LED lighting and solar panels.

  • Break it down: A new way to address a common computing problem

    In this era of big data, some problems in scientific computing are so large, so complex and contain so much information that attempting to solve them would be too big a task for most computers.
    Now, researchers at the McKelvey School of Engineering at Washington University in St. Louis have developed a new algorithm for solving a common class of problem — known as linear inverse problems — by breaking them down into smaller tasks, each of which can be solved in parallel on standard computers.
    The research, from the lab of Jr-Shin Li, professor in the Preston M. Green Department of Electrical & Systems Engineering, was published July 30 in the journal Scientific Reports.
    In addition to providing a framework for solving this class of problems, the approach, called Parallel Residual Projection (PRP), also delivers enhanced security and mitigates privacy concerns.
    Linear inverse problems take observational data and try to find a model that describes it. In their simplest form, they may look familiar: 2x+y = 1, x-y = 3. Many a high school student has solved for x and y without the help of a supercomputer.
    And as more researchers in different fields collect increasing amounts of data in order to gain deeper insights, these equations continue to grow in size and complexity.


    “We developed a computational framework to solve for the case when there are thousands or millions of such equations and variables,” Li said.
    The project was conceived while the lab was working on big-data research problems from other fields. Li’s lab had been collaborating with a biologist researching the network of neurons that deal with the sleep-wake cycle.
    “In the context of network inference, looking at a network of neurons, the inverse problem looks like this,” said Vignesh Narayanan, a research associate in Li’s lab: “Given the data recorded from a bunch of neurons, what is the ‘model’ that describes how these neurons are connected with each other?”
    “In an earlier work from our lab, we showed that this inference problem can be formulated as a linear inverse problem,” Narayanan said.


    If the system has a few hundred nodes — in this case, the nodes are the neurons — the matrix that describes the interactions among the neurons could be millions by millions; that’s huge.
    “Storing this matrix itself exceeds the memory of a common desktop,” said Wei Miao, a PhD student in Li’s lab.
    Add to that the fact that such complex systems are often dynamic, as is our understanding of them. “Say we already have a solution, but now I want to consider interaction of some additional cells,” Miao said. Instead of starting a new problem and solving it from scratch, PRP adds flexibility and scalability. “You can manipulate the problem any way you want.”
    Even if you do happen to have a supercomputer, Miao said, “There is still a chance that by breaking down the big problem, you can solve it faster.”
    In addition to breaking a complex problem into pieces and solving them in parallel on different machines, the computational framework also, importantly, consolidates the partial results and computes an accurate solution to the initial problem.
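
    To make that concrete, here is a minimal sketch (in Python) of the break-it-down-and-consolidate idea for a linear system Ax = b. It is written in the spirit of the description above, not as the published PRP algorithm: the block partitioning, relaxation factor and stopping rule are all illustrative assumptions.

    ```python
    import numpy as np

    def parallel_residual_projection(A, b, n_blocks=4, relax=1.0, iters=500):
        """Solve a consistent linear system Ax = b by averaging block-wise
        residual projections; each block's projection could run on a
        separate machine, and only the small updates are consolidated."""
        m, n = A.shape
        x = np.zeros(n)
        blocks = np.array_split(np.arange(m), n_blocks)
        for _ in range(iters):
            updates = []
            for rows in blocks:  # independent: one task per block/machine
                Ai, bi = A[rows], b[rows]
                r = bi - Ai @ x  # this block's residual
                # Project x onto the solution set of this block's equations.
                updates.append(Ai.T @ np.linalg.solve(Ai @ Ai.T, r))
            x = x + relax * np.mean(updates, axis=0)  # consolidate results
            if np.linalg.norm(b - A @ x) < 1e-10:
                break
        return x

    # The toy system from the article: 2x + y = 1, x - y = 3.
    A = np.array([[2.0, 1.0], [1.0, -1.0]])
    b = np.array([1.0, 3.0])
    print(parallel_residual_projection(A, b, n_blocks=2))  # ~[ 1.333, -1.667]
    ```

    Each block’s update depends only on its own rows of A and entries of b, so no single worker ever needs to see the whole dataset — the property behind the privacy benefit discussed below.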
    An unintentional benefit of PRP is enhanced data security and privacy. When credit card companies use algorithms to research fraud, or a hospital wants to analyze its massive database, “No one wants to give all of that access to one individual,” Narayanan said.
    “This was an extra benefit that we didn’t even strive for,” Narayanan said.

  • How thoughts could one day control electronic prostheses, wirelessly

    Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.
    The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.
    The current generation of these devices records enormous amounts of neural activity, then transmits these brain signals through wires to a computer. But when researchers tried to create wireless brain-computer interfaces to do the same job, transmitting the data took so much power that the devices generated too much heat to be safe for the patient.
    Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, has shown how it would be possible to create a wireless device capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wired systems. These wireless devices would look more natural than the wired models and give patients a freer range of motion.
    Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.
    The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.
    To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.
    As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
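
    A back-of-the-envelope sketch shows why transmitting a small, task-specific subset of signals matters for power. The numbers below are generic assumptions for illustration, not figures from the Stanford study: raw broadband streaming is compared with sending only binned event counts per electrode.

    ```python
    # Hypothetical data-rate comparison for a wireless neural implant.
    # All values are illustrative assumptions, not the study's numbers.
    N_ELECTRODES = 96        # e.g. a Utah-style electrode array
    RAW_RATE_HZ = 30_000     # typical broadband sampling rate per electrode
    RAW_BITS = 16            # bits per raw sample
    BIN_RATE_HZ = 50         # one event count per electrode per 20 ms bin
    COUNT_BITS = 8           # bits per event count

    raw_bps = N_ELECTRODES * RAW_RATE_HZ * RAW_BITS
    reduced_bps = N_ELECTRODES * BIN_RATE_HZ * COUNT_BITS

    print(f"raw broadband: {raw_bps / 1e6:.1f} Mbit/s")
    print(f"event counts:  {reduced_bps / 1e3:.1f} kbit/s")
    print(f"reduction:     {raw_bps / reduced_bps:.0f}x less data to transmit")
    ```

    Under these assumptions the reduced stream is over a thousand times smaller, which is the kind of saving that makes a low-heat wireless transmitter plausible.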
    The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

    Story Source:
    Materials provided by Stanford School of Engineering.

  • Understanding why some children enjoy TV more than others

    Children’s own temperament could be driving the amount of TV they watch — according to new research from the University of East Anglia and Birkbeck, University of London.
    New findings published today show that the brain responses of 10-month-old babies could predict whether they would enjoy watching fast-paced TV shows six months later.
    The research team says that the findings are important for the ongoing debate around early TV exposure.
    Lead researcher Dr Teodora Gliga, from UEA’s School of Psychology, said: “The sensory environment surrounding babies and young children is really complex and cluttered, but the ability to pay attention to something is one of the first developmental milestones in babies.
    “Even before they can ask questions, children vary greatly in how driven they are to explore their surroundings and engage with new sights or sounds.
    “We wanted to find out why babies appear to be so different in the way that they seek out new visual sensory stimulation — such as being attracted to shiny objects, bright colours or moving images on TV.


    “There have been various theories to explain these differences, with some suggesting that infants who are less sensitive will seek less stimulation, and others suggesting that some infants are simply faster at processing information — an ability which could drive them to seek out new stimulation more frequently.
    “In this study we bring support for a third theory by showing that a preference for novelty makes some infants seek more varied stimulation.”
    Using a brain imaging method known as electroencephalography (EEG), the research team studied brain activity in 48 10-month-old babies while they watched a 40-second clip from the Disney movie Fantasia on repeat.
    They studied how the children’s brain waves responded to random interruptions to the movie — in the form of a black and white chequerboard suddenly flashing on screen.
    Dr Gliga said: “As the babies watched the repeated video clip, EEG responses told us that they learned its content. We expected that, as the video became less novel and therefore engaged their attention less, they would start noticing the chequerboard.


    “But some of the babies started responding to the chequerboard earlier on, while still learning about the video — suggesting that these children had had enough of the old information.
    “Conversely, others remained engaged with the video even when there was not much to learn from it,” she added.
    Parents and carers were also asked to fill in a questionnaire about their babies’ sensory behaviours — including whether they enjoyed watching fast-paced brightly-coloured TV shows. This was followed up with a second similar questionnaire six months later.
    Dr Gliga said: “It was very interesting to find that brain responses at 10 months, indicating how quickly infants switched their attention from the repeated video to the chequerboard, predicted whether they would enjoy watching fast-paced TV shows six months later.
    “These findings are important for the ongoing debate on early TV exposure since they suggest that children’s temperament may drive differences in TV exposure.
    “It is unlikely that our findings are explained by early TV exposure since parents reported that only a small proportion of 10-month-olds were watching TV shows,” she added.
    Elena Serena Piccardi, from Birkbeck, University of London, said: “The next part of our research will aim to understand exactly what drives these individual differences in attention to novelty, including the role that early environments may have.
    “Exploration and discovery are essential for children’s learning and cognitive development. Yet, different children may benefit from different environments for their learning. As such, this research will help us understand how individualized environments may nurture children’s learning, promote their cognitive development and, ultimately, support achievement of their full potential.”
    The research was led by UEA in collaboration with Birkbeck, University of London and Cambridge University. It was funded by the Medical Research Council.

  • Penguin poop spotted from space ups the tally of emperor penguin colonies

    Patches of penguin poop spotted in new high-resolution satellite images of Antarctica reveal a handful of small, previously overlooked emperor penguin colonies.
    Eight new colonies, plus three newly confirmed, bring the total to 61 — about 20 percent more colonies than previously thought, researchers report August 5 in Remote Sensing in Ecology and Conservation. That’s the good news, says Peter Fretwell, a geographer at the British Antarctic Survey in Cambridge, England.
    The bad news, he says, is that the new colonies tend to be in regions highly vulnerable to climate change, including a few out on the sea ice. One newly discovered group lives about 180 kilometers from shore, on sea ice ringing a shoaled iceberg. The study is the first to describe such offshore breeding sites for the penguins.

    Penguin guano shows up as a reddish-brown stain against white snow and ice (SN: 3/2/18). Before 2016, Fretwell and BAS penguin biologist Phil Trathan hunted for the telltale stains in images from NASA’s Landsat satellites, which have a resolution of 30 meters by 30 meters.
    [Image caption] Emperor penguins turned a ring of sea ice around an iceberg into a breeding site. The previously unknown colony was found near Ninnis Bank, a spot 180 kilometers offshore, thanks to a brown smudge (arrow) left by penguin poop. Credit: P.T. Fretwell and P.N. Trathan/Remote Sensing in Ecology and Conservation 2020
    The launch of the European Space Agency’s Sentinel satellites, with a much finer resolution of 10 meters by 10 meters, “makes us able to see things in much greater detail, and pick out much smaller things,” such as tinier patches of guano representing smaller colonies, Fretwell says. The new colony tally therefore ups the estimated emperor penguin population by only about 10 percent at most, or 55,000 birds.
    Unlike other penguins, emperors (Aptenodytes forsteri) live their entire lives at sea, foraging and breeding on the sea ice. That increases their vulnerability to future warming: Even moderate greenhouse gas emissions scenarios are projected to melt much of the fringing ice around Antarctica (SN: 4/30/20). Previous work has suggested this ice loss could decrease emperor penguin populations by about 31 percent over the next 60 years, an assessment that is shifting the birds’ conservation status from near threatened to vulnerable.

  • Recovering data: Neural network model finds small objects in dense images

    In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method that can accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. Employing a neural network approach designed to detect patterns, the NIST model has many possible applications in modern life.
    NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.
    “The purpose of the project was to recover the lost data in journal articles,” NIST computer scientist Adele Peskin explained. “But the study of small, dense object detection has a lot of other applications. Object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate.”
    The researchers took the data from journal articles dating as far back as the early 1900s in a database of metallic properties at NIST’s Thermodynamics Research Center (TRC). Often the results were presented only in graphical format, sometimes drawn by hand and degraded by scanning or photocopying. The researchers wanted to extract the locations of data points to recover the original, raw data for additional analysis. Until now such data have been extracted manually.
    The images present data points with a variety of different markers, mainly circles, triangles, and squares, both filled and open, of varying size and clarity. Such geometrical markers are often used to label data in a scientific graph. Text, numbers and other symbols, which can falsely appear to be data points, were manually removed from a subset of the figures with graphics editing software before training the neural networks.
    Accurately detecting and localizing the data markers was a challenge for several reasons. The markers are inconsistent in clarity and exact shape; they may be open or filled and are sometimes fuzzy or distorted. Some circles appear extremely circular, for example, whereas others do not have enough pixels to fully define their shape. In addition, many images contain very dense patches of overlapping circles, squares, and triangles.
    The researchers sought to create a network model that identified plot points at least as accurately as manual detection — within 5 pixels of the actual location on a plot size of several thousand pixels per side.
    As described in a new journal paper, NIST researchers adopted a network architecture originally developed by German researchers for analyzing biomedical images, called U-Net. First the image dimensions are contracted to reduce spatial information, and then layers of feature and context information are added to build up precise, high-resolution results.
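
    As a rough illustration of that contract-then-expand design, here is a minimal U-Net-style encoder-decoder sketch in Python/PyTorch. It follows the general architecture described above, not the NIST model; the layer widths, the depth and the four-class output head are assumptions.

    ```python
    # Minimal U-Net-style sketch: contract to gather context, then expand
    # with skip connections to recover precise, high-resolution output.
    import torch
    import torch.nn as nn

    def double_conv(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU, as in the original U-Net design.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, n_classes=4):  # e.g. background, circle, triangle, square
            super().__init__()
            self.enc1 = double_conv(1, 16)
            self.enc2 = double_conv(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = double_conv(32, 64)
            self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec2 = double_conv(64, 32)   # 32 upsampled + 32 skip channels
            self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = double_conv(32, 16)   # 16 upsampled + 16 skip channels
            self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

        def forward(self, x):
            e1 = self.enc1(x)                   # full resolution features
            e2 = self.enc2(self.pool(e1))       # 1/2 resolution
            b = self.bottleneck(self.pool(e2))  # 1/4 resolution, most context
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)                # back to full resolution

    x = torch.randn(1, 1, 256, 256)  # one grayscale plot image
    print(TinyUNet()(x).shape)       # torch.Size([1, 4, 256, 256])
    ```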
    To help train the network to classify marker shapes and locate their centers, the researchers experimented with four ways of marking the training data with masks, using different-sized center markings and outlines for each geometric object.
    The researchers found that adding more information to the masks, such as thicker outlines, increased the accuracy of classifying object shapes but reduced the accuracy of pinpointing their locations on the plots. In the end, the researchers combined the best aspects of several models to get the best classification and smallest location errors. Altering the masks turned out to be the best way to improve network performance, more effective than other approaches such as small changes at the end of the network.
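
    The mask experiment can be pictured with a small, hypothetical sketch: build a training mask for one marker, with the center-dot radius and outline thickness as the knobs being varied. The names and values here are illustrative, not the NIST code.

    ```python
    # Hypothetical training-mask generator for one plot marker.
    import numpy as np

    def disk(h, w, cy, cx, r):
        """Boolean mask of a filled disk of radius r centered at (cy, cx)."""
        yy, xx = np.ogrid[:h, :w]
        return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

    def make_mask(h, w, cy, cx, marker_r, center_r=2, outline_px=0):
        """Label image: 1 = center marking, 2 = optional outline ring."""
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[disk(h, w, cy, cx, center_r)] = 1
        if outline_px:
            ring = disk(h, w, cy, cx, marker_r) & \
                   ~disk(h, w, cy, cx, marker_r - outline_px)
            mask[ring] = 2
        return mask

    # Thicker outlines help shape classification but blur the center estimate.
    print(make_mask(16, 16, 8, 8, marker_r=6, center_r=2, outline_px=2))
    ```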
    The network’s best performance — an accuracy of 97% in locating object centers — was possible only for a subset of images in which plot points were originally represented by very clear circles, triangles, and squares. The performance is good enough for the TRC to use the neural network to recover data from plots in newer journal papers.
    Although NIST researchers currently have no plans for follow-up studies, the neural network model “absolutely” could be applied to other image analysis problems, Peskin said.

  • Droplet spread from humans doesn’t always follow airflow

    If aerosol transmission of COVID-19 is confirmed to be significant, we will need to reconsider guidelines on social distancing, ventilation systems and shared spaces. Researchers in the U.K. believe a better understanding of droplet behaviors and their different dispersion mechanisms is also needed. In a new article, the group presents a model that demarcates differently sized droplets. This has implications for understanding airborne diseases, because the dispersion tests revealed the absence of intermediate-sized droplets.

  • Consumers don't fully trust smart home technologies

    Smart home technologies are marketed to enhance your home and make life easier. However, UK consumers are not convinced that they can trust the privacy and security of these technologies, a study by WMG, University of Warwick, has shown.
    The ‘smart home’ can be defined as the integration of Internet-enabled, digital devices with sensors and machine learning in the home. The aim of smart home devices is to provide enhanced entertainment services, easier management of the home, domestic chores and protection from domestic risks. They can be found in devices such as smart speakers and hubs, lighting, sensors, door locks and cameras, central heating thermostats and domestic appliances.
    To better understand consumers’ perceptions of the desirability of the smart home, researchers from WMG and the Department of Computer Science at the University of Warwick carried out a nationally representative survey of UK consumers designed to measure adoption and acceptability, focusing on awareness, ownership, experience, trust, satisfaction and intention to use.
    The article ‘Trust in the smart home: Findings from a nationally representative survey in the UK’, published in the journal PLOS ONE, reports the results. The main finding is that businesses’ promises of added meaning and value from adopting the smart home have not yet won consumers over: respondents highlighted concerns about risks to privacy and security.
    Researchers sent 2,101 participants a survey, with questions to assess:
    – Awareness of the Internet of Things (IoT)
    – Current ownership of smart home devices
    – Experiences of their use of smart home devices


    – Trust in the reliability and competence of the devices
    – Trust in privacy
    – Trust in security
    – Satisfaction and intention to use the devices in the future, and intention to recommend them to others

    The findings suggest consumers had anxiety about the likelihood of a security incident: overall, people tend to mildly agree that they are likely to risk a privacy or security breach when using smart home devices. In other words, they are unconvinced that their privacy and security will not be at risk when they use smart home devices.


    It also emerged that, when asked to evaluate the impact of a privacy breach, people tend to disagree that the impact would be low, suggesting they expect a privacy breach to have significant consequences. This expectation emerged as a prominent factor weighing against adoption of smart home technology.
    Other interesting results highlight:
    – More females than males have adopted smart home devices over the last year, possibly because they tend to run the household and find the technology helpful
    – Young people (ages 18-24) were the earliest adopters of smart home technology; however, older people (ages 65+) also adopted it early, possibly because they have more disposable income and fewer responsibilities — e.g. no mortgage, no dependent children
    – People aged 65 and over are less willing than younger people to use smart home devices in case of unauthorised data collection, indicating younger people are less aware of privacy breaches
    – Less well-educated people are the least interested in using smart home devices in the future; these groups might constitute market segments lost to smart home adoption unless their concerns are specifically addressed and targeted by policymakers and businesses.

    Dr Sara Cannizzaro, from WMG, University of Warwick, comments: “Our study underlines how businesses and policymakers will need to work together to act on the sociotechnical affordances of smart home technology in order to increase consumers’ trust. This intervention is necessary if barriers to adoption and acceptability of the smart home are to be addressed now and in the future.
    “Proof of cybersecurity and low risk to privacy breaches will be key in smart home technology companies persuading a number of consumers to invest in their technology.”
    Professor Rob Procter, from the Department of Computer Science at the University of Warwick, adds: “Businesses are still actively promoting positive visions of what the smart home means for consumers (e.g., convenience, economy, home security)… However, at the same time, as we see from our survey results, consumers are actively comparing their interactional experiences against these visions and are coming up with different interpretations and meanings from those that business is trying to promote.”