More stories

  • Understanding why some children enjoy TV more than others

    Children’s own temperament could be driving the amount of TV they watch, according to new research from the University of East Anglia and Birkbeck, University of London.
    New findings published today show that the brain responses of 10-month-old babies could predict whether they would enjoy watching fast-paced TV shows six months later.
    The research team says that the findings are important for the ongoing debate around early TV exposure.
    Lead researcher Dr Teodora Gliga, from UEA’s School of Psychology, said: “The sensory environment surrounding babies and young children is really complex and cluttered, but the ability to pay attention to something is one of the first developmental milestones in babies.
    “Even before they can ask questions, children vary greatly in how driven they are to explore their surroundings and engage with new sights or sounds.
    “We wanted to find out why babies appear to be so different in the way that they seek out new visual sensory stimulation — such as being attracted to shiny objects, bright colours or moving images on TV.

    “There have been various theories to explain these differences, with some suggesting that infants who are less sensitive will seek less stimulation, others suggesting that some infants are simply faster at processing information — an ability which could drive them to seek out new stimulation more frequently.
    “In this study we bring support for a third theory by showing that a preference for novelty makes some infants seek more varied stimulation.”
    Using a brain imaging method known as electroencephalography (EEG), the research team studied brain activity in 48 ten-month-old babies while they watched a 40-second clip from the Disney movie Fantasia on repeat.
    They studied how the children’s brain waves responded to random interruptions to the movie, in the form of a black-and-white checkerboard suddenly flashing on screen.
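    In outline, this kind of EEG analysis epochs the recording around each interruption and averages the response. Below is a minimal sketch using the open-source MNE-Python library; the file name, trigger channel and event code are hypothetical stand-ins, not details taken from the study.

    ```python
    # Minimal sketch: epoch infant EEG around checkerboard onsets and average.
    # The recording file, stim channel and event code are hypothetical.
    import mne

    raw = mne.io.read_raw_fif("infant_eeg_raw.fif", preload=True)
    events = mne.find_events(raw, stim_channel="STI 014")  # checkerboard triggers

    # Epoch from 200 ms before to 800 ms after each checkerboard onset.
    epochs = mne.Epochs(raw, events, event_id={"checkerboard": 1},
                        tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

    # The averaged (evoked) response indexes how strongly the interruption
    # captured the infant's attention at that point in the session.
    evoked = epochs["checkerboard"].average()
    evoked.plot()
    ```
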
    Dr Gliga said: “As the babies watched the repeated video clip, EEG responses told us that they learned its content. We expected that, as the video became less novel and therefore engaged their attention less, they would start noticing the checkerboard.

    “But some of the babies started responding to the checkerboard earlier on while still learning about the video — suggesting that these children had had enough of the old information.
    “Conversely, others remained engaged with the video even when there was not much to learn from it,” she added.
    Parents and carers were also asked to fill in a questionnaire about their babies’ sensory behaviours — including whether they enjoyed watching fast-paced, brightly coloured TV shows. This was followed up with a second similar questionnaire six months later.
    Dr Gliga said: “It was very interesting to find that brain responses at 10 months, indicating how quickly infants switched their attention from the repeated video to the checkerboard, predicted whether they would enjoy watching fast-paced TV shows six months later.
    “These findings are important for the ongoing debate on early TV exposure since they suggest that children’s temperament may drive differences in TV exposure.
    “It is unlikely that our findings are explained by early TV exposure since parents reported that only a small proportion of 10-month-olds were watching TV shows,” she added.
    Elena Serena Piccardi, from Birkbeck, University of London, said: “The next part of our research will aim to understand exactly what drives these individual differences in attention to novelty, including the role that early environments may have.
    “Exploration and discovery are essential for children’s learning and cognitive development. Yet, different children may benefit from different environments for their learning. As such, this research will help us understand how individualized environments may nurture children’s learning, promote their cognitive development and, ultimately, support achievement of their full potential.”
    The research was led by UEA in collaboration with Birkbeck, University of London and the University of Cambridge. It was funded by the Medical Research Council.

  • Recovering data: Neural network model finds small objects in dense images

    In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method that can accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. The NIST model employs a neural network approach designed to detect patterns and has many possible applications in modern life.
    NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.
    “The purpose of the project was to recover the lost data in journal articles,” NIST computer scientist Adele Peskin explained. “But the study of small, dense object detection has a lot of other applications. Object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate.”
    The researchers took the data from journal articles dating as far back as the early 1900s in a database of metallic properties at NIST’s Thermodynamics Research Center (TRC). Often the results were presented only in graphical format, sometimes drawn by hand and degraded by scanning or photocopying. The researchers wanted to extract the locations of data points to recover the original, raw data for additional analysis. Until now such data have been extracted manually.
    The images present data points with a variety of different markers, mainly circles, triangles, and squares, both filled and open, of varying size and clarity. Such geometrical markers are often used to label data in a scientific graph. Text, numbers and other symbols, which can falsely appear to be data points, were manually removed from a subset of the figures with graphics editing software before training the neural networks.
    Accurately detecting and localizing the data markers was a challenge for several reasons. The markers are inconsistent in clarity and exact shape; they may be open or filled and are sometimes fuzzy or distorted. Some circles appear extremely circular, for example, whereas others do not have enough pixels to fully define their shape. In addition, many images contain very dense patches of overlapping circles, squares, and triangles.
    The researchers sought to create a network model that identified plot points at least as accurately as manual detection — within 5 pixels of the actual location on a plot size of several thousand pixels per side.
    As described in a new journal paper, NIST researchers adopted a network architecture originally developed by German researchers for analyzing biomedical images, called U-Net. First the image dimensions are contracted to reduce spatial information, and then layers of feature and context information are added to build up precise, high-resolution results.
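    For readers unfamiliar with the architecture, here is a toy U-Net-style network in PyTorch: a contracting path that reduces spatial resolution, then an expanding path with a skip connection that rebuilds a precise, per-pixel output. The depth, channel widths and class count are illustrative assumptions, not the paper's exact configuration.

    ```python
    # Toy U-Net-style network (illustrative, not the NIST model).
    import torch
    import torch.nn as nn

    def block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU: the basic U-Net building block.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, n_classes=4):  # e.g. background, circle, triangle, square
            super().__init__()
            self.enc1, self.enc2 = block(1, 32), block(32, 64)
            self.pool = nn.MaxPool2d(2)                        # contracting path
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # expanding path
            self.dec1 = block(64, 32)                          # after the skip connection
            self.head = nn.Conv2d(32, n_classes, 1)            # per-pixel class scores

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))                # reduced spatial resolution
            d1 = self.up(e2)
            d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip: re-inject detail
            return self.head(d1)

    logits = TinyUNet()(torch.randn(1, 1, 256, 256))  # -> shape (1, 4, 256, 256)
    ```
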
    To help train the network to classify marker shapes and locate their centers, the researchers experimented with four ways of marking the training data with masks, using different-sized center markings and outlines for each geometric object.
    The researchers found that adding more information to the masks, such as thicker outlines, increased the accuracy of classifying object shapes but reduced the accuracy of pinpointing their locations on the plots. In the end, the researchers combined the best aspects of several models to get the best classification and smallest location errors. Altering the masks turned out to be the best way to improve network performance, more effective than other approaches such as small changes at the end of the network.
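    As a hypothetical illustration of the masking idea, the snippet below paints a small filled disc at each marker centre (for localization) plus an optional outline ring (to carry shape information); the radii and label values are assumptions, not the study's settings. Thicker outlines put more pixels of shape evidence into the mask, consistent with the trade-off between classification and localization accuracy reported above.

    ```python
    # Hypothetical mask generation for training (all values illustrative).
    import numpy as np
    import cv2

    def make_mask(shape_hw, centers, center_radius=3, outline_radius=9, outline=True):
        """Label mask: 0 = background, 1 = marker centre, 2 = marker outline."""
        mask = np.zeros(shape_hw, dtype=np.uint8)
        for (x, y) in centers:
            if outline:
                cv2.circle(mask, (x, y), outline_radius, color=2, thickness=2)
            cv2.circle(mask, (x, y), center_radius, color=1, thickness=-1)  # filled disc
        return mask

    mask = make_mask((512, 512), [(100, 200), (300, 150)])
    ```
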
    The network’s best performance — an accuracy of 97% in locating object centers — was possible only for a subset of images in which plot points were originally represented by very clear circles, triangles, and squares. The performance is good enough for the TRC to use the neural network to recover data from plots in newer journal papers.
    Although NIST researchers currently have no plans for follow-up studies, the neural network model “absolutely” could be applied to other image analysis problems, Peskin said.

  • Droplet spread from humans doesn’t always follow airflow

    If aerosol transmission of COVID-19 is confirmed to be significant, we will need to reconsider guidelines on social distancing, ventilation systems and shared spaces. Researchers in the U.K. believe a better understanding of droplet behaviors and their different dispersion mechanisms is also needed. In a new article, the group presents a model that demarcates differently sized droplets. This has implications for understanding airborne diseases, because the dispersion tests revealed the absence of intermediate-sized droplets.

  • Consumers don’t fully trust smart home technologies

    Smart home technologies are marketed to enhance your home and make life easier. However, UK consumers are not convinced that they can trust the privacy and security of these technologies, a study by WMG, University of Warwick has shown.
    The ‘smart home’ can be defined as the integration of Internet-enabled, digital devices with sensors and machine learning in the home. The aim of smart home devices is to provide enhanced entertainment services, easier management of the home and domestic chores, and protection from domestic risks. They can be found in devices such as smart speakers and hubs, lighting, sensors, door locks and cameras, central heating thermostats and domestic appliances.
    To better understand consumers’ perceptions of the desirability of the smart home, researchers from WMG and the Department of Computer Science, University of Warwick have carried out a nationally representative survey of UK consumers designed to measure adoption and acceptability, focusing on awareness, ownership, experience, trust, satisfaction and intention to use.
    The article ‘Trust in the smart home: Findings from a nationally representative survey in the UK’, published in the journal PLOS ONE, reveals their results. The main finding is that businesses’ promise of added meaning and value from adopting the smart home has not yet been accepted by consumers, who instead highlight concerns about risks to privacy and security.
    Researchers sent 2,101 participants a survey, with questions to assess:
    – Awareness of the Internet of Things (IoT)
    – Current ownership of smart home devices
    – Experiences of their use of smart home devices

    – Trust in the reliability and competence of the devices
    – Trust in privacy
    – Trust in security
    – Satisfaction and intention to use the devices in the future, and intention to recommend them to others

    The findings suggest consumers had anxiety about the likelihood of a security incident: overall, people tend to mildly agree that they are likely to risk privacy as well as security breaches when using smart home devices. In other words, they are unconvinced that their privacy and security will not be at risk when they use smart home devices.
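    As a minimal sketch of how such Likert-style agreement scores can be summarized (the column names and responses below are hypothetical, not the survey data):

    ```python
    # Summarizing 5-point Likert responses per trust dimension with pandas.
    # 1 = strongly disagree ... 5 = strongly agree; the data are made up.
    import pandas as pd

    responses = pd.DataFrame({
        "trust_reliability": [4, 3, 4, 5, 2],
        "trust_privacy":     [2, 3, 2, 1, 3],
        "trust_security":    [2, 2, 3, 2, 3],
        "intend_to_use":     [4, 3, 3, 5, 2],
    })

    # Mean agreement per dimension; means below the scale midpoint (3)
    # indicate that, on average, respondents lean towards distrust.
    print(responses.mean().round(2))
    ```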

    It also emerged that when asked to evaluate the impact of a privacy breach, people tend to disagree that its impact would be low, suggesting they expect the impact of a privacy breach to be significant. This emerged as a prominent factor influencing adoption, making people less likely to take up smart home technology.
    Other interesting results highlight:
    – More females than males have adopted smart home devices over the last year, possibly as they tend to run the house and find the technology helpful
    – Young people (ages 18-24) were the earliest adopters of smart home technology; however, older people (ages 65+) also adopted it early, possibly because they have more disposable income and fewer responsibilities (e.g. no mortgage, no dependent children)
    – People aged 65 and over are less willing than younger people to use smart home devices because of concerns about unauthorised data collection, suggesting younger people are less wary of privacy breaches
    – Less well-educated people are the least interested in using smart home devices in the future, and they might constitute market segments that will be lost to smart home adoption unless their concerns are specifically addressed and targeted by policymakers and businesses.

    Dr Sara Cannizzaro, from WMG, University of Warwick, comments: “Our study underlines how businesses and policymakers will need to work together to act on the sociotechnical affordances of smart home technology in order to increase consumers’ trust. This intervention is necessary if barriers to adoption and acceptability of the smart home are to be addressed now and in the future.
    “Proof of cybersecurity and low risk to privacy breaches will be key in smart home technology companies persuading a number of consumers to invest in their technology.”
    Professor Rob Procter, from the Department of Computer Science, University of Warwick, adds: “Businesses are still actively promoting positive visions of what the smart home means for consumers (e.g., convenience, economy, home security)… However, at the same time, as we see from our survey results, consumers are actively comparing their interactional experiences against these visions and are coming up with different interpretations and meanings from those that business is trying to promote.”

  • ‘Deepfakes’ ranked as most serious AI crime threat

    Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report.
    The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern — based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop.
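    The sort of multi-criteria ranking described above can be pictured with a small sketch; the threat names, scores and equal weighting below are purely illustrative, not the study's data or method.

    ```python
    # Illustrative multi-criteria ranking of AI-enabled crime threats.
    from dataclasses import dataclass

    @dataclass
    class Threat:
        name: str
        harm: float           # harm caused (0-10)
        profit: float         # potential criminal gain (0-10)
        achievability: float  # how easy to carry out (0-10)
        defeatability: float  # how difficult to stop (0-10)

        @property
        def concern(self) -> float:
            # Simple equal-weight aggregate of the four criteria.
            return (self.harm + self.profit + self.achievability + self.defeatability) / 4

    threats = [
        Threat("Audio/visual impersonation (deepfakes)", 9, 8, 7, 9),
        Threat("Driverless vehicles as weapons", 9, 2, 4, 6),
        Threat("Burglar bots", 3, 4, 5, 2),
    ]

    for t in sorted(threats, key=lambda t: t.concern, reverse=True):
        print(f"{t.concern:4.1f}  {t.name}")
    ```
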
    Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims — from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.
    Aside from fake content, five other AI-enabled crimes were judged to be of high concern. These were using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news.
    Senior author Professor Lewis Griffin (UCL Computer Science) said: “As the capabilities of AI-based technologies expand, so too does their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
    Researchers compiled the 20 AI-enabled crimes from academic papers, news and current affairs reports, and fiction and popular culture. They then gathered 31 people with expertise in AI for two days of discussions to rank the severity of the potential crimes. The participants were drawn from academia, the private sector, the police, the government and state security agencies.
    Crimes that were of medium concern included the sale of items and services fraudulently labelled as “AI,” such as security screening and targeted advertising. These would be easy to achieve, with potentially large profits.
    Crimes of low concern included burglar bots — small robots used to gain entry into properties through access points such as letterboxes or cat flaps — which were judged to be easy to defeat, for instance through letterbox cages, and AI-assisted stalking, which, although extremely damaging to individuals, could not operate at scale.
    First author Dr Matthew Caldwell (UCL Computer Science) said: “People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.
    “Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
    Professor Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, which funded the study, said: “We live in an ever-changing world which creates new opportunities — good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur. This report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.”

  • AI and single-cell genomics

    Traditional single-cell sequencing methods help to reveal insights about cellular differences and functions — but they do this with static snapshots only rather than time-lapse films. This limitation makes it difficult to draw conclusions about the dynamics of cell development and gene activity. The recently introduced method “RNA velocity” aims to reconstruct the developmental trajectory of a cell on a computational basis (leveraging ratios of unspliced and spliced transcripts). This method, however, is applicable to steady-state populations only. Researchers were therefore looking for ways to extend the concept of RNA velocity to dynamic populations which are of crucial importance to understand cell development and disease response.
    Single-cell velocity
    Researchers from the Institute of Computational Biology at Helmholtz Zentrum München and the Department of Mathematics at TUM developed “scVelo” (single-cell velocity). The method estimates RNA velocity with an AI-based model by solving the full gene-wise transcriptional dynamics. This allows them to generalize the concept of RNA velocity to a wide variety of biological systems including dynamic populations.
    “We have used scVelo to reveal cell development in the endocrine pancreas, in the hippocampus, and to study dynamic processes in lung regeneration — and this is just the beginning,” says Volker Bergen, main creator of scVelo and first author of the corresponding study in Nature Biotechnology.
    With scVelo, researchers can estimate reaction rates of RNA transcription, splicing and degradation without the need for additional experimental data. These rates can help to better understand cell identity and phenotypic heterogeneity. The introduction of a latent time reconstructs the unknown developmental time to position the cells along the trajectory of the underlying biological process. That is particularly useful for better understanding cellular decision making. Moreover, scVelo reveals regulatory changes and putative driver genes therein. This helps to understand not only how but also why cells are developing the way they do.
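    A minimal sketch of the published scVelo workflow, run here on the package's bundled tutorial dataset (the pancreas data shown is scVelo's example, not necessarily the data behind the findings above):

    ```python
    # scVelo dynamical workflow on the bundled pancreas example dataset.
    import scvelo as scv

    adata = scv.datasets.pancreas()                        # example data

    scv.pp.filter_and_normalize(adata, n_top_genes=2000)   # basic preprocessing
    scv.pp.moments(adata)                                  # moments of spliced/unspliced counts

    scv.tl.recover_dynamics(adata)                         # fit full gene-wise transcriptional dynamics
    scv.tl.velocity(adata, mode="dynamical")               # RNA velocity from the fitted model
    scv.tl.velocity_graph(adata)

    scv.tl.latent_time(adata)                              # shared latent time along the trajectory
    scv.pl.scatter(adata, color="latent_time")
    scv.pl.velocity_embedding_stream(adata, basis="umap")  # streamlines of inferred development
    ```
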
    Empowering personalized treatments
    AI-based tools like scVelo pave the way for personalized treatments. Going from static snapshots to full dynamics allows researchers to move from descriptive towards predictive models. In the future, this might help to better understand disease progression such as tumor formation, or to unravel cell signaling in response to cancer treatment.
    “scVelo has been downloaded almost 60,000 times since its release last year. It has become a stepping-stone tool towards the kinetic foundation for single-cell transcriptomics,” adds Prof. Fabian Theis, who conceived the study and serves as Director of the Institute of Computational Biology at Helmholtz Zentrum München and Chair for Mathematical Modeling of Biological Systems at TUM.

    Story Source:
    Materials provided by Helmholtz Zentrum München – German Research Center for Environmental Health.

  • Simplified circuit design could revolutionize how wearables are manufactured

    Researchers have demonstrated the use of a ground-breaking circuit design that could transform manufacturing processes for wearable technology.
    Silicon-based electronics have rapidly become smaller and more efficient over a short period of time, leading to major advances in devices such as mobile phones. However, large-area electronics, such as display screens, have not seen similar advances because they rely on a device called the thin-film transistor (TFT), which has serious limitations.
    In a study published in IEEE Sensors Journal, researchers from the University of Surrey, University of Cambridge and the National Research Institute in Rome have demonstrated the use of a pioneering circuit design that uses an alternative type of device, the source-gated transistor (SGT), to create compact circuit blocks.
    In the study, the researchers showed that two SGTs can deliver the same functionality as today’s designs that use roughly 12 TFTs, improving performance, reducing waste and making the new process far more cost-effective.
    The research team believe that the new fabrication process could result in a generation of ultralightweight, flexible electronics for wearables and sensors.
    Dr Radu Sporea, lead author of the study and Lecturer in Semiconductor Devices at the University of Surrey, said: “We are entering what may be another golden age of electronics, with the arrival of 5G and IoT enabled devices. However, the way we have manufactured many of our electronics has increasingly become overcomplicated and has hindered the performance of many devices.
    “Our design offers a much simpler build process than regular thin-film transistors. Source-gated transistor circuits may also be cheaper to manufacture on a large scale because their simplicity means there is less waste in the form of rejected components. This elegant design of large-area electronics could result in future phones, fitness trackers or smart sensors that are energy efficient, thinner and far more flexible than the ones we are able to produce today.”

    Story Source:
    Materials provided by the University of Surrey.

  • Language may undermine women in science and tech

    Despite decades of positive messaging to encourage women and girls to pursue education tracks and careers in STEM, women remain far behind their male counterparts in these fields. A new study at Carnegie Mellon University examined 25 languages to explore the gender stereotypes in language that undermine efforts to support equality across STEM career paths. The results are available in the August 3rd issue of Nature Human Behaviour.
    Molly Lewis, special faculty at CMU, and her research partner, Gary Lupyan, associate professor at the University of Wisconsin-Madison, set out to examine the effect of language on career stereotypes by gender. They found that implicit gender associations are strongly predicted by the language we speak. Their work suggests that linguistic associations may be causally related to people’s implicit judgements of what women can accomplish.
    “Young children have strong gender stereotypes, as do older adults, and the question is where do these biases come from,” said Lewis, first author on the study. “No one has looked at implicit language — simple language that co-occurs over a large body of text — that could give information about stereotypical norms in our culture across different languages.”
    In general, the team examined how words co-occur with women compared to men. For example, how often ‘woman’ is associated with ‘home,’ ‘children’ and ‘family,’ whereas ‘man’ is associated with ‘work,’ ‘career’ and ‘business.’
    “What’s not obvious is that a lot of information that is contained in language, including information about cultural stereotypes, [occurs not as] direct statements but in large-scale statistical relationships between words,” said Lupyan, senior author on the study. “Even without encountering direct statements, it is possible to learn that there is a stereotype embedded in the language of women being better at some things and men at others.”
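    Those large-scale statistical relationships can be probed with pretrained word embeddings; the sketch below compares how close career and family words sit to gendered words. The word lists and aggregate score are simple illustrations, not the study's measure.

    ```python
    # Probing a career-vs-family gender association in pretrained embeddings.
    # Word lists and the aggregate score are illustrative only.
    import gensim.downloader as api
    import numpy as np

    vectors = api.load("glove-wiki-gigaword-100")  # pretrained English embeddings

    male   = ["he", "man", "his"]
    female = ["she", "woman", "her"]
    career = ["work", "career", "business", "office"]
    family = ["home", "children", "family", "parents"]

    def mean_sim(targets, attributes):
        # Average cosine similarity between every pair of words in two lists.
        return np.mean([vectors.similarity(t, a) for t in targets for a in attributes])

    # Positive score: career words sit closer to male words, family words to female.
    score = ((mean_sim(male, career) - mean_sim(female, career))
             + (mean_sim(female, family) - mean_sim(male, family)))
    print(f"career-family gender association: {score:.3f}")
    ```
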
    They found that languages with a stronger embedded gender association are more clearly associated with career stereotypes. They also found a positive relationship between gender-marked occupation terms and the strength of these gender stereotypes.

    Previous work has shown that children begin to absorb the gender stereotypes of their culture by the age of two. The team examined statistics on the gender associations embedded in 25 languages and related the results to an international dataset of gender bias (the Implicit Association Test).
    Surprisingly, they found that the median age of a country’s population influences the results: countries with a larger older population have a stronger bias in career-gender associations.
    “The consequences of these results are pretty profound,” said Lewis. “The results suggest that if you speak a language that is really biased then you are more likely to have a gender stereotype that associates men with career and women with family.”
    She suggests children’s books be written and designed to avoid gender-biased language statistics. These results also have implications for algorithmic fairness research aimed at eliminating gender bias in computer algorithms.
    “Our study shows that language statistics predict people’s implicit biases — languages with greater gender biases tend to have speakers with greater gender biases,” Lupyan said. “The results are correlational, but that the relationship persists under various controls [and] does suggest a causal influence.”
    Lewis notes that the Implicit Association Test used in this study has been criticized for low reliability and limited external validity. She stresses that additional work using longitudinal analyses and experimental designs is necessary to explore language statistics and implicit associations with gender stereotypes.
    Lewis and Lupyan received funding for the project, titled “Gender stereotypes are reflected in the distributional structure of 25 languages,” from the National Science Foundation.

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Stacy Kish.