More stories

  • Break it down: A new way to address a common computing problem

    In this era of big data, some problems in scientific computing are so large and complex, and contain so much information, that solving them is beyond the reach of most computers.
    Now, researchers at the McKelvey School of Engineering at Washington University in St. Louis have developed a new algorithm for solving a common class of problems — known as linear inverse problems — by breaking them down into smaller tasks, each of which can be solved in parallel on standard computers.
    The research, from the lab of Jr-Shin Li, professor in the Preston M. Green Department of Electrical & Systems Engineering, was published July 30 in the journal Scientific Reports.
    In addition to providing a framework for solving this class of problems, the approach, called Parallel Residual Projection (PRP), also delivers enhanced security and mitigates privacy concerns.
    Linear inverse problems are those that take observational data and try to find a model that describes it. In their simplest form, they may look familiar: 2x + y = 1, x - y = 3. Many a high school student has solved for x and y without the help of a supercomputer.
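    For a system this small, an off-the-shelf solver finds the answer in one call (a quick illustration in Python/NumPy, not the method introduced in the paper):

```python
import numpy as np

# The toy system from the article:  2x + y = 1,  x - y = 3
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 3.0])

x, y = np.linalg.solve(A, b)
print(x, y)  # x = 4/3 ≈ 1.33, y = -5/3 ≈ -1.67
```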
    And as more researchers in different fields collect increasing amounts of data in order to gain deeper insights, these equations continue to grow in size and complexity.

    “We developed a computational framework to solve for the case when there are thousands or millions of such equations and variables,” Li said.
    The project was conceived while Li’s lab was working on big-data research problems from other fields. The lab had been collaborating with a biologist researching the network of neurons that deals with the sleep-wake cycle.
    “In the context of network inference, looking at a network of neurons, the inverse problem looks like this,” said Vignesh Narayanan, a research associate in Li’s lab:
    “Given the data recorded from a bunch of neurons, what is the ‘model’ that describes how these neurons are connected with each other?”
    “In an earlier work from our lab, we showed that this inference problem can be formulated as a linear inverse problem,” Narayanan said.

    If the system has a few hundred nodes — in this case, the nodes are the neurons — the matrix which describes the interaction among neurons could be millions by millions; that’s huge.
    “Storing this matrix itself exceeds the memory of a common desktop,” said Wei Miao, a PhD student in Li’s lab.
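    A quick back-of-the-envelope calculation shows why (assuming double-precision entries; the numbers are purely illustrative):

```python
n = 1_000_000          # a "millions by millions" interaction matrix
bytes_per_entry = 8    # 64-bit floating point
print(n * n * bytes_per_entry / 1e12, "TB")  # 8.0 TB, far beyond desktop memory
```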
    Add to that the fact that such complex systems are often dynamic, as is our understanding of them. “Say we already have a solution, but now I want to consider interaction of some additional cells,” Miao said. Instead of starting a new problem and solving it from scratch, PRP adds flexibility and scalability. “You can manipulate the problem any way you want.”
    Even if you do happen to have a supercomputer, Miao said, “There is still a chance that by breaking down the big problem, you can solve it faster.”
    In addition to breaking a complex problem down and solving the pieces in parallel on different machines, the computational framework, importantly, also consolidates the partial results to compute an accurate solution to the original problem.
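    The general flavor of this divide-and-consolidate strategy can be sketched as a block-parallel residual-projection iteration for a linear system A x = b. The code below is a hedged illustration of that general idea under simple assumptions (equal row blocks, averaging as the consolidation step); it is not the authors' published PRP algorithm.

```python
import numpy as np

def parallel_residual_projection(A, b, n_blocks=8, iters=2000, tol=1e-8):
    """Illustrative block-parallel solver for A x = b.

    Each block of rows computes a least-squares correction from its own share
    of the residual (these steps are independent and could run on separate
    machines); a coordinator then consolidates the corrections by averaging.
    """
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(m), n_blocks)
    for _ in range(iters):
        corrections = []
        for rows in blocks:                      # in practice: one task per worker
            r_block = b[rows] - A[rows] @ x      # local residual
            dx, *_ = np.linalg.lstsq(A[rows], r_block, rcond=None)
            corrections.append(dx)
        x = x + np.mean(corrections, axis=0)     # consolidation step
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Demo on a random consistent system with 200 equations and 50 unknowns
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = A @ rng.standard_normal(50)
x_hat = parallel_residual_projection(A, b)
print(np.linalg.norm(b - A @ x_hat))  # residual is near zero for this consistent system
```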
    An unintended benefit of PRP is enhanced data security and privacy. When credit card companies use algorithms to detect fraud, or a hospital wants to analyze its massive database, “No one wants to give all of that access to one individual,” Narayanan said.
    “This was an extra benefit that we didn’t even strive for,” Narayanan said.

  • How thoughts could one day control electronic prostheses, wirelessly

    Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.
    The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.
    The current generation of these devices records enormous amounts of neural activity, then transmits these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, transmitting the data took so much power that the devices generated too much heat to be safe for the patient.
    Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, has shown how it would be possible to create a wireless device capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.
    Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.
    The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.
    To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.
    As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
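    To get a rough sense of why transmitting only a curated subset of signals can cut power, consider the data rates involved; transmit power scales roughly with the data rate. All of the numbers below are assumptions for illustration only, not figures from the study.

```python
# Hypothetical back-of-the-envelope comparison (assumed values throughout)
channels = 96            # assumed electrode count
full_rate_hz = 30_000    # assumed broadband sampling rate per channel
bits_per_sample = 12     # assumed ADC resolution
full_bps = channels * full_rate_hz * bits_per_sample

# Transmitting only low-bandwidth, action-specific features
# (e.g., threshold-crossing counts) instead of raw broadband data:
feature_rate_hz = 1_000  # assumed feature update rate
feature_bits = 8         # assumed bits per feature
reduced_bps = channels * feature_rate_hz * feature_bits

print(f"full: {full_bps/1e6:.1f} Mb/s, reduced: {reduced_bps/1e6:.2f} Mb/s "
      f"({full_bps/reduced_bps:.0f}x less data to send)")
```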
    The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

    Story Source:
    Materials provided by Stanford School of Engineering. Note: Content may be edited for style and length.

  • Understanding why some children enjoy TV more than others

    Children’s own temperament could be driving the amount of TV they watch — according to new research from the University of East Anglia and Birkbeck, University of London.
    New findings published today show that the brain responses of 10-month-old babies could predict whether they would enjoy watching fast-paced TV shows six months later.
    The research team says that the findings are important for the ongoing debate around early TV exposure.
    Lead researcher Dr Teodora Gliga, from UEA’s School of Psychology, said: “The sensory environment surrounding babies and young children is really complex and cluttered, but the ability to pay attention to something is one of the first developmental milestones in babies.
    “Even before they can ask questions, children vary greatly in how driven they are to explore their surroundings and engage with new sights or sounds.
    “We wanted to find out why babies appear to be so different in the way that they seek out new visual sensory stimulation — such as being attracted to shiny objects, bright colours or moving images on TV.

    “There have been various theories to explain these differences, with some suggesting that infants who are less sensitive will seek less stimulation, others suggesting that some infants are simply faster at processing information — an ability which could drive them to seek out new stimulation more frequently.
    “In this study we bring support for a third theory by showing that a preference for novelty makes some infants seek more varied stimulation.”
    Using a brain imaging method known as electroencephalography (EEG), the research team studied brain activity in 48 10-month-old babies while they watched a 40-second clip from the Disney movie Fantasia on repeat.
    They studied how the children’s brain waves responded to random interruptions to the movie — in the form of a black and white chequerboard suddenly flashing on screen.
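    In generic terms, such an analysis time-locks the EEG to each chequerboard onset and averages the surrounding epochs. The snippet below is a simplified sketch of that idea, not the study’s actual analysis pipeline.

```python
import numpy as np

def average_evoked_response(eeg, onset_samples, pre=50, post=200):
    """Average EEG epochs time-locked to stimulus onsets.

    eeg: array of shape (n_channels, n_samples)
    onset_samples: sample indices where the chequerboard appeared
    Returns the mean epoch of shape (n_channels, pre + post).
    """
    epochs = [eeg[:, s - pre:s + post] for s in onset_samples
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)

# Toy data: 32 channels, 10,000 samples, chequerboard shown at three time points
rng = np.random.default_rng(0)
evoked = average_evoked_response(rng.standard_normal((32, 10_000)), [1000, 4000, 7000])
print(evoked.shape)  # (32, 250)
```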
    Dr Gliga said: “As the babies watched the repeated video clip, EEG responses told us that they learned its content. We expected that, as the video became less novel and therefore engaged their attention less, they would start noticing the checkerboard.

    “But some of the babies started responding to the checkerboard earlier on while still learning about the video — suggesting that these children had had enough of the old information.
    “Conversely, others remained engaged with the video even when there was not much to learn from it,” she added.
    Parents and carers were also asked to fill in a questionnaire about their babies’ sensory behaviours — including whether they enjoyed watching fast-paced brightly-coloured TV shows. This was followed up with a second similar questionnaire six months later.
    Dr Gliga said: “It was very interesting to find that brain responses at 10 months, indicating how quickly infants switched their attention from the repeated video to the checkerboard, predicted whether they would enjoy watching fast-paced TV shows six months later.
    “These findings are important for the ongoing debate on early TV exposure since they suggest that children’s temperament may drive differences in TV exposure.
    “It is unlikely that our findings are explained by early TV exposure since parents reported that only a small proportion of 10-month-olds were watching TV shows,” she added.
    Elena Serena Piccardi, from Birkbeck, University of London, said: “The next part of our research will aim to understand exactly what drives these individual differences in attention to novelty, including the role that early environments may have.
    “Exploration and discovery are essential for children’s learning and cognitive development. Yet, different children may benefit from different environments for their learning. As such, this research will help us understand how individualized environments may nurture children’s learning, promote their cognitive development and, ultimately, support achievement of their full potential.”
    The research was led by UEA in collaboration with Birkbeck, University of London and Cambridge University. It was funded by the Medical Research Council.

  • Recovering data: Neural network model finds small objects in dense images

    In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method that can accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. Employing a neural network approach designed to detect patterns, the NIST model has many possible applications in modern life.
    NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.
    “The purpose of the project was to recover the lost data in journal articles,” NIST computer scientist Adele Peskin explained. “But the study of small, dense object detection has a lot of other applications. Object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate.”
    The researchers took the data from journal articles dating as far back as the early 1900s in a database of metallic properties at NIST’s Thermodynamics Research Center (TRC). Often the results were presented only in graphical format, sometimes drawn by hand and degraded by scanning or photocopying. The researchers wanted to extract the locations of data points to recover the original, raw data for additional analysis. Until now such data have been extracted manually.
    The images present data points with a variety of different markers, mainly circles, triangles, and squares, both filled and open, of varying size and clarity. Such geometrical markers are often used to label data in a scientific graph. Text, numbers and other symbols, which can falsely appear to be data points, were manually removed from a subset of the figures with graphics editing software before training the neural networks.
    Accurately detecting and localizing the data markers was a challenge for several reasons. The markers are inconsistent in clarity and exact shape; they may be open or filled and are sometimes fuzzy or distorted. Some circles appear extremely circular, for example, whereas others do not have enough pixels to fully define their shape. In addition, many images contain very dense patches of overlapping circles, squares, and triangles.
    The researchers sought to create a network model that identified plot points at least as accurately as manual detection — within 5 pixels of the actual location on a plot size of several thousand pixels per side.
    As described in a new journal paper, NIST researchers adopted a network architecture originally developed by German researchers for analyzing biomedical images, called U-Net. First the image dimensions are contracted to reduce spatial information, and then layers of feature and context information are added to build up precise, high-resolution results.
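    The U-Net pattern can be sketched compactly. The code below is a minimal, generic PyTorch illustration of that architecture family, not NIST’s actual network or training code.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU, as in the original U-Net design
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=4, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)              # contracting path
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)       # expanding path with skips
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)        # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One forward pass on a dummy grayscale plot image (sides must be divisible by 4)
model = TinyUNet(in_ch=1, n_classes=4)  # e.g. background, circle, triangle, square
out = model(torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 4, 256, 256])
```

    The contracting path reduces spatial resolution while the expanding path restores it, and the skip connections carry the fine detail needed to localize small markers.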
    To help train the network to classify marker shapes and locate their centers, the researchers experimented with four ways of marking the training data with masks, using different-sized center markings and outlines for each geometric object.
    The researchers found that adding more information to the masks, such as thicker outlines, increased the accuracy of classifying object shapes but reduced the accuracy of pinpointing their locations on the plots. In the end, the researchers combined the best aspects of several models to get the best classification and smallest location errors. Altering the masks turned out to be the best way to improve network performance, more effective than other approaches such as small changes at the end of the network.
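    For concreteness, generating a marker-center training mask might look something like the toy helper below. This is a hypothetical illustration only; it does not reproduce the four mask designs compared in the study.

```python
import numpy as np

def make_center_mask(height, width, centers, radius=3):
    """Toy mask: a filled disk of `radius` pixels, labeled by shape class,
    at each marker center (a hypothetical helper, not NIST's code)."""
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width), dtype=np.uint8)
    for (cy, cx), label in centers:
        mask[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = label
    return mask

# Example: two circles (label 1) and one triangle (label 2) on a 64x64 plot
mask = make_center_mask(64, 64, [((10, 12), 1), ((40, 50), 1), ((30, 30), 2)])
print(np.unique(mask))  # [0 1 2]
```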
    The network’s best performance — an accuracy of 97% in locating object centers — was possible only for a subset of images in which plot points were originally represented by very clear circles, triangles, and squares. The performance is good enough for the TRC to use the neural network to recover data from plots in newer journal papers.
    Although NIST researchers currently have no plans for follow-up studies, the neural network model “absolutely” could be applied to other image analysis problems, Peskin said.

  • Droplet spread from humans doesn’t always follow airflow

    If aerosol transmission of COVID-19 is confirmed to be significant, we will need to reconsider guidelines on social distancing, ventilation systems and shared spaces. Researchers in the U.K. believe a better understanding of droplet behaviors and their different dispersion mechanisms is also needed. In a new article, the group presents a model that demarcates differently sized droplets. This has implications for understanding airborne diseases, because the dispersion tests revealed the absence of intermediate-sized droplets.

  • Consumers don't fully trust smart home technologies

    Smart home technologies are marketed to enhance your home and make life easier. However, UK consumers are not convinced that they can trust the privacy and security of these technologies, a study by WMG, University of Warwick, has shown.
    The ‘smart home’ can be defined as the integration of Internet-enabled, digital devices with sensors and machine learning in the home. The aim of smart home devices is to provide enhanced entertainment services, easier management of the home and domestic chores, and protection from domestic risks. They can be found in devices such as smart speakers and hubs, lighting, sensors, door locks and cameras, central heating thermostats and domestic appliances.
    To better understand consumers’ perceptions of the desirability of the smart home, researchers from WMG and the Department of Computer Science at the University of Warwick carried out a nationally representative survey of UK consumers designed to measure adoption and acceptability, focusing on awareness, ownership, experience, trust, satisfaction and intention to use.
    The article ‘Trust in the smart home: Findings from a nationally representative survey in the UK’, published in the journal PLOS ONE, reports the results. The main finding is that businesses’ promise of added meaning and value from adopting the smart home has not yet won consumers over: respondents highlighted concerns about risks to privacy and security.
    Researchers sent 2101 participants a survey, with questions to assess:
    – Awareness of the Internet of Things (IoT)
    – Current ownership of smart home devices
    – Experiences of their use of smart home devices
    – Trust in the reliability and competence of the devices
    – Trust in privacy
    – Trust in security
    – Satisfaction and intention to use the devices in the future, and intention to recommend them to others

    The findings suggest consumers were anxious about the likelihood of a security incident: overall, people tended to mildly agree that they are likely to risk privacy as well as security breaches when using smart home devices. In other words, they are unconvinced that their privacy and security will not be at risk when they use these devices.

    It also emerged that, when asked to evaluate the impact of a privacy breach, people tended to disagree that its impact would be low, suggesting they expect the impact of a privacy breach to be significant. This expectation was a prominent factor influencing whether or not they would adopt smart home technology, making adoption less likely.
    Other interesting results highlight:
    – More females than males have adopted smart home devices over the last year, possibly as they tend to run the house and find the technology helpful
    – Young people (ages 18-24) were the earliest adopters of smart home technology; however, older people (ages 65+) also adopted it early, possibly as they have more disposable income and fewer responsibilities — e.g. no mortgage, no dependent children
    – People aged 65 and over are less willing to use smart home devices in case of unauthorised data collection compared to younger people, indicating younger people are less aware of privacy breaches
    – Less well-educated people are the least interested in using smart home devices in the future; these groups might constitute market segments that will be lost to smart home adoption unless their concerns are specifically addressed and targeted by policymakers and businesses.

    Dr Sara Cannizzaro, from WMG, University of Warwick, comments: “Our study underlines how businesses and policymakers will need to work together to act on the sociotechnical affordances of smart home technology in order to increase consumers’ trust. This intervention is necessary if barriers to adoption and acceptability of the smart home are to be addressed now and in the future.
    “Proof of cybersecurity and low risk to privacy breaches will be key in smart home technology companies persuading a number of consumers to invest in their technology.”
    Professor Rob Procter, from the Department of Computer Science at the University of Warwick, adds: “Businesses are still actively promoting positive visions of what the smart home means for consumers (e.g., convenience, economy, home security)… However, at the same time, as we see from our survey results, consumers are actively comparing their interactional experiences against these visions and are coming up with different interpretations and meanings from those that business is trying to promote.”

  • 'Deepfakes' ranked as most serious AI crime threat

    Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report.
    The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern — based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop.
    Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims — from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.
    Aside from fake content, five other AI-enabled crimes were judged to be of high concern. These were using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news.
    Senior author Professor Lewis Griffin (UCL Computer Science) said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
    Researchers compiled the 20 AI-enabled crimes from academic papers, news and current affairs reports, and fiction and popular culture. They then gathered 31 people with expertise in AI for two days of discussions to rank the severity of the potential crimes. The participants were drawn from academia, the private sector, the police, the government and state security agencies.
    Crimes that were of medium concern included the sale of items and services fraudulently labelled as “AI,” such as security screening and targeted advertising. These would be easy to achieve, with potentially large profits.
    Crimes of low concern included burglar bots — small robots used to gain entry into properties through access points such as letterboxes or cat flaps — which were judged to be easy to defeat, for instance through letterbox cages, and AI-assisted stalking, which, although extremely damaging to individuals, could not operate at scale.
    First author Dr Matthew Caldwell (UCL Computer Science) said: “People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.
    “Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
    Professor Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, which funded the study, said: “We live in an ever-changing world which creates new opportunities — good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur. This report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.”

  • AI and single-cell genomics

    Traditional single-cell sequencing methods help to reveal insights about cellular differences and functions — but they do this with static snapshots only rather than time-lapse films. This limitation makes it difficult to draw conclusions about the dynamics of cell development and gene activity. The recently introduced method “RNA velocity” aims to reconstruct the developmental trajectory of a cell on a computational basis (leveraging ratios of unspliced and spliced transcripts). This method, however, is applicable to steady-state populations only. Researchers were therefore looking for ways to extend the concept of RNA velocity to dynamic populations which are of crucial importance to understand cell development and disease response.
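    The core idea behind RNA velocity can be written as a simple kinetic model per gene; the snippet below uses the standard, generic formulation rather than code from the new study.

```python
# Standard RNA velocity kinetics for a single gene:
#   du/dt = alpha - beta * u      (transcription produces unspliced mRNA u)
#   ds/dt = beta * u - gamma * s  (splicing produces spliced mRNA s, which degrades)
# A gene's "velocity" is ds/dt, inferred from the observed levels of u and s.
def rna_velocity(u, s, beta=1.0, gamma=0.3):  # illustrative rate constants
    return beta * u - gamma * s

# A surplus of unspliced transcripts implies the gene is being up-regulated:
print(rna_velocity(u=2.0, s=4.0))  # 0.8 > 0, so spliced mRNA is set to increase
```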
    Single-cell velocity
    Researchers from the Institute of Computational Biology at Helmholtz Zentrum München and the Department of Mathematics at TUM developed “scVelo” (single-cell velocity). The method estimates RNA velocity with an AI-based model by solving the full gene-wise transcriptional dynamics. This allows them to generalize the concept of RNA velocity to a wide variety of biological systems including dynamic populations.
    “We have used scVelo to reveal cell development in the endocrine pancreas, in the hippocampus, and to study dynamic processes in lung regeneration — and this is just the beginning,” says Volker Bergen, main creator of scVelo and first author of the corresponding study in Nature Biotechnology.
    With scVelo, researchers can estimate reaction rates of RNA transcription, splicing and degradation without the need for additional experimental data. These rates can help to better understand cell identity and phenotypic heterogeneity. The introduction of a latent time reconstructs the unknown developmental time to position the cells along the trajectory of the underlying biological process. That is particularly useful for better understanding cellular decision-making. Moreover, scVelo reveals regulatory changes and putative driver genes therein. This helps to understand not only how, but also why, cells are developing the way they do.
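    In practice, the published scVelo workflow looks roughly like the sketch below, based on the package’s documentation; exact arguments and defaults may differ between versions.

```python
import scvelo as scv

# Example dataset with spliced/unspliced counts (pancreatic endocrinogenesis)
adata = scv.datasets.pancreas()

# Preprocessing: gene filtering, normalization, and first/second moments
scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
scv.pp.moments(adata, n_pcs=30, n_neighbors=30)

# Dynamical model: recover full gene-wise transcriptional dynamics,
# then estimate velocities and a shared latent time
scv.tl.recover_dynamics(adata)
scv.tl.velocity(adata, mode="dynamical")
scv.tl.velocity_graph(adata)
scv.tl.latent_time(adata)

# Visualize the inferred developmental flow
scv.pl.velocity_embedding_stream(adata, basis="umap", color="latent_time")
```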
    Empowering personalized treatments
    AI-based tools like scVelo give rise to personalized treatments. Going from static snapshots to full dynamics allows researchers to move from descriptive towards predictive models. In the future, this might help to better understand disease progression such as tumor formation, or to unravel cell signaling in response to cancer treatment.
    “scVelo has been downloaded almost 60,000 times since its release last year. It has become a stepping-stone tool towards the kinetic foundation for single-cell transcriptomics,” adds Prof. Fabian Theis, who conceived the study and serves as Director of the Institute of Computational Biology at Helmholtz Zentrum München and Chair for Mathematical Modeling of Biological Systems at TUM.

    Story Source:
    Materials provided by Helmholtz Zentrum München – German Research Center for Environmental Health. Note: Content may be edited for style and length.