More stories

  • Penguin poop spotted from space ups the tally of emperor penguin colonies

    Patches of penguin poop spotted in new high-resolution satellite images of Antarctica reveal a handful of small, previously overlooked emperor penguin colonies.
    Eight new colonies, plus three newly confirmed, bring the total to 61 — about 20 percent more colonies than previously thought, researchers report August 5 in Remote Sensing in Ecology and Conservation. That’s the good news, says Peter Fretwell, a geographer at the British Antarctic Survey in Cambridge, England.
    The bad news, he says, is that the new colonies tend to be in regions highly vulnerable to climate change, including a few out on the sea ice. One newly discovered group lives about 180 kilometers from shore, on sea ice ringing a shoaled iceberg. The study is the first to describe such offshore breeding sites for the penguins.

    Penguin guano shows up as a reddish-brown stain against white snow and ice (SN: 3/2/18). Before 2016, Fretwell and BAS penguin biologist Phil Trathan hunted for the telltale stains in images from NASA’s Landsat satellites, which have a resolution of 30 meters by 30 meters.
    Emperor penguins turned a ring of sea ice around an iceberg into a breeding site. The previously unknown colony was found near Ninnis Bank, a spot 180 kilometers offshore, thanks to a brown smudge (arrow) left by penguin poop. Image: P.T. Fretwell and P.N. Trathan/Remote Sensing in Ecology and Conservation 2020
    The launch of the European Space Agency’s Sentinel satellites, with a much finer resolution of 10 meters by 10 meters, “makes us able to see things in much greater detail, and pick out much smaller things,” such as tinier patches of guano representing smaller colonies, Fretwell says. The new colony tally therefore ups the estimated emperor penguin population by only about 10 percent at most, or 55,000 birds.
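    In practice, a guano-stain search like this is essentially a color-thresholding problem. The sketch below is a minimal illustration only, not the BAS team's actual pipeline: the band file names, threshold values, and simple band-ratio rule are all assumptions chosen to show the idea of flagging reddish-brown pixels against bright snow and ice.

```python
# Minimal sketch: flag reddish-brown "guano-like" pixels in Sentinel-2
# 10 m visible bands (B04 = red, B03 = green, B02 = blue).
# File names and thresholds are illustrative assumptions.
import numpy as np
import rasterio  # common geospatial raster library

def guano_candidates(red, green, blue):
    """Return a boolean mask of pixels that look reddish-brown, not snow."""
    brightness = (red + green + blue) / 3.0
    reddish = (red > 1.2 * blue) & (red > 1.1 * green)  # brownish hue
    not_snow = brightness < 0.6 * brightness.max()      # darker than the ice
    return reddish & not_snow

with rasterio.open("S2_B04_red.tif") as r, \
     rasterio.open("S2_B03_green.tif") as g, \
     rasterio.open("S2_B02_blue.tif") as b:
    mask = guano_candidates(r.read(1).astype(float),
                            g.read(1).astype(float),
                            b.read(1).astype(float))

# Each Sentinel-2 visible pixel covers 10 m x 10 m = 100 square meters.
print(f"candidate pixels: {int(mask.sum())} (~{int(mask.sum()) * 100} m^2)")
```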
    Unlike other penguins, emperors (Aptenodytes forsteri) live their entire lives at sea, foraging and breeding on the sea ice. That increases their vulnerability to future warming: Even moderate greenhouse gas emissions scenarios are projected to melt much of the fringing ice around Antarctica (SN: 4/30/20). Previous work has suggested this ice loss could decrease emperor penguin populations by about 31 percent over the next 60 years, an assessment that is shifting the birds’ conservation status from near threatened to vulnerable. More

  • Recovering data: Neural network model finds small objects in dense images

    In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method that can accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. The model, which employs a neural network approach designed to detect patterns, has many possible applications in modern life.
    NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.
    “The purpose of the project was to recover the lost data in journal articles,” NIST computer scientist Adele Peskin explained. “But the study of small, dense object detection has a lot of other applications. Object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate.”
    The researchers took the data from journal articles dating as far back as the early 1900s in a database of metallic properties at NIST’s Thermodynamics Research Center (TRC). Often the results were presented only in graphical format, sometimes drawn by hand and degraded by scanning or photocopying. The researchers wanted to extract the locations of data points to recover the original, raw data for additional analysis. Until now such data have been extracted manually.
    The images present data points with a variety of different markers, mainly circles, triangles, and squares, both filled and open, of varying size and clarity. Such geometrical markers are often used to label data in a scientific graph. Text, numbers and other symbols, which can falsely appear to be data points, were manually removed from a subset of the figures with graphics editing software before training the neural networks.
    Accurately detecting and localizing the data markers was a challenge for several reasons. The markers are inconsistent in clarity and exact shape; they may be open or filled and are sometimes fuzzy or distorted. Some circles are cleanly drawn, for example, whereas others do not have enough pixels to fully define their shape. In addition, many images contain very dense patches of overlapping circles, squares and triangles.
    The researchers sought to create a network model that identified plot points at least as accurately as manual detection — within 5 pixels of the actual location on a plot size of several thousand pixels per side.
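    As a rough illustration of that acceptance criterion (this is not NIST's evaluation code, and the coordinates are invented), checking predicted marker centers against manually selected ones might look like this:

```python
# Hypothetical check of center-localization error against a 5-pixel tolerance.
import numpy as np

predicted = np.array([[102.0, 340.5], [871.2, 55.0]])  # model output (row, col)
manual    = np.array([[100.0, 342.0], [870.0, 53.5]])  # hand-picked centers

errors = np.linalg.norm(predicted - manual, axis=1)    # Euclidean pixel distance
print(errors)                  # e.g. [2.5 1.9]
print((errors <= 5.0).mean())  # fraction of markers within tolerance
```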
    As described in a new journal paper, NIST researchers adopted U-Net, a network architecture originally developed by German researchers for analyzing biomedical images. First the image dimensions are contracted to reduce spatial information, and then layers of feature and context information are added to build up precise, high-resolution results.
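    To make that contract-then-expand idea concrete, here is a minimal U-Net-style sketch in PyTorch. It is not the NIST model: the depth, channel widths, and four-class output (circle, triangle, square, background) are illustrative assumptions.

```python
# Minimal U-Net-style encoder-decoder: contract to gather context, then
# expand with skip connections to recover precise, high-resolution masks.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4):  # circle / triangle / square / background
        super().__init__()
        self.down1 = double_conv(1, 16)
        self.down2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.conv2 = double_conv(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.conv1 = double_conv(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        c1 = self.down1(x)              # contracting path
        c2 = self.down2(self.pool(c1))
        b = self.bottom(self.pool(c2))
        u2 = self.conv2(torch.cat([self.up2(b), c2], dim=1))  # skip connection
        u1 = self.conv1(torch.cat([self.up1(u2), c1], dim=1))
        return self.head(u1)            # (N, n_classes, H, W) mask logits

logits = TinyUNet()(torch.zeros(1, 1, 64, 64))  # one grayscale 64x64 plot crop
print(logits.shape)  # torch.Size([1, 4, 64, 64])
```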
    To help train the network to classify marker shapes and locate their centers, the researchers experimented with four ways of marking the training data with masks, using different-sized center markings and outlines for each geometric object.
    The researchers found that adding more information to the masks, such as thicker outlines, increased the accuracy of classifying object shapes but reduced the accuracy of pinpointing their locations on the plots. In the end, the researchers combined the best aspects of several models to get the best classification and smallest location errors. Altering the masks turned out to be the best way to improve network performance, more effective than other approaches such as small changes at the end of the network.
    The network’s best performance — an accuracy of 97% in locating object centers — was possible only for a subset of images in which plot points were originally represented by very clear circles, triangles, and squares. The performance is good enough for the TRC to use the neural network to recover data from plots in newer journal papers.
    Although NIST researchers currently have no plans for follow-up studies, the neural network model “absolutely” could be applied to other image analysis problems, Peskin said. More

  • Droplet spread from humans doesn’t always follow airflow

    If aerosol transmission of COVID-19 is confirmed to be significant, we will need to reconsider guidelines on social distancing, ventilation systems and shared spaces. Researchers in the U.K. believe a better understanding of droplet behaviors and their different dispersion mechanisms is also needed. In a new article, the group presents a model that distinguishes the behavior of differently sized droplets. This has implications for understanding airborne diseases, because the group's dispersion tests revealed an absence of intermediate-sized droplets. More

  • Consumers don't fully trust smart home technologies

    Smart home technologies are marketed to enhance your home and make life easier. However, UK consumers are not convinced that they can trust the privacy and security of these technologies, a study by WMG, University of Warwick, has shown.
    The ‘smart home’ can be defined as the integration of Internet-enabled digital devices with sensors and machine learning in the home. Smart home devices aim to provide enhanced entertainment services, easier management of the home and domestic chores, and protection from domestic risks. They include smart speakers and hubs, lighting, sensors, door locks and cameras, central heating thermostats and domestic appliances.
    To better understand consumers’ perceptions of the desirability of the smart home, researchers from WMG and the Department of Computer Science at the University of Warwick carried out a nationally representative survey of UK consumers designed to measure adoption and acceptability, focusing on awareness, ownership, experience, trust, satisfaction and intention to use.
    The article, ‘Trust in the smart home: Findings from a nationally representative survey in the UK’, published in the journal PLOS ONE, reports the results. The main finding is that the added meaning and value businesses promise from adopting the smart home have not yet won consumers over: respondents highlighted concerns about risks to privacy and security.
    Researchers sent 2,101 participants a survey with questions to assess:
    – Awareness of the Internet of Things (IoT)
    – Current ownership of smart home devices
    – Experiences of their use of smart home devices
    – Trust in the reliability and competence of the devices
    – Trust in privacy
    – Trust in security
    – Satisfaction, intention to use the devices in the future, and intention to recommend them to others

    The findings suggest consumers were anxious about the likelihood of a security incident: overall, people tended to mildly agree that they risk privacy and security breaches when using smart home devices. In other words, they are unconvinced that their privacy and security will not be at risk when they use smart home devices.
    It also emerged that, when asked to evaluate the impact of a privacy breach, people tended to disagree that its impact would be low, suggesting they expect the impact of a privacy breach to be significant. This expectation emerged as a prominent factor weighing against adoption of smart home technology.
    Other interesting results:
    – More females than males have adopted smart home devices over the last year, possibly as they tend to run the house and find the technology helpful
    – Young people (ages 18-24) were the earliest adopters of smart home technology; however, older people (ages 65+) also adopted it early, possibly because they have more disposable income and fewer responsibilities — e.g. no mortgage, no dependent children
    – People aged 65 and over are less willing than younger people to use smart home devices in case of unauthorised data collection, indicating younger people are less aware of privacy breaches
    – Less well-educated people are the least interested in using smart home devices in the future, suggesting they might constitute market segments lost to smart home adoption unless their concerns are specifically addressed and targeted by policymakers and businesses.

    Dr Sara Cannizzaro, from WMG, University of Warwick, comments: “Our study underlines how businesses and policymakers will need to work together to act on the sociotechnical affordances of smart home technology in order to increase consumers’ trust. This intervention is necessary if barriers to adoption and acceptability of the smart home are to be addressed now and in the future.
    “Proof of cybersecurity and low risk to privacy breaches will be key in smart home technology companies persuading a number of consumers to invest in their technology.”
    Professor Rob Procter, from the Department of Computer Science at the University of Warwick, adds: “Businesses are still actively promoting positive visions of what the smart home means for consumers (e.g., convenience, economy, home security)… However, at the same time, as we see from our survey results, consumers are actively comparing their interactional experiences against these visions and are coming up with different interpretations and meanings from those that business is trying to promote.” More

  • ‘The End of Everything’ explores the ways the universe could perish

    The End of Everything
    Katie Mack
    Scribner, $26
    Eventually, the universe will end. And it won’t be pretty.
    The universe is expanding at an accelerating clip, and that evolution, physicists expect, will lead the cosmos to a conclusion. Scientists don’t know quite what that end will look like, but they have plenty of ideas. In The End of Everything, theoretical astrophysicist Katie Mack provides a tour of the admittedly bleak possibilities. But far from being depressing, Mack’s account mixes a sense of reverence for the wonders of physics with an irreverent sense of humor and a disarming dose of candor.
    Some potential finales are violent: If the universe’s expansion were to reverse, the cosmos collapsing inward in a Big Crunch, extremely energetic swells of radiation would ignite the surfaces of stars, exploding them. Another version of the end is quieter but no less terrifying: The universe’s expansion could continue forever. That end, Mack writes, “like immortality, only sounds good until you really think about it.” Endless expansion would beget a state known as “heat death” — a barren universe that has reached a uniform temperature throughout (SN: 10/2/09). Stars will have burned out, and black holes will have evaporated until no organized structures exist. Nothing meaningful will happen anymore because energy can no longer flow from one place to another. In such a universe, time ceases to have meaning.
    Perhaps more merciful than the purgatory of heat death is the possibility of a Big Rip, in which the universe’s expansion accelerates faster and faster, until stars and planets are torn apart, molecules are shredded and the very fabric of space is ripped apart.

    These potential endings are all many billions of years into the future — or perhaps much further off. But there’s also the possibility that the universe could end abruptly at any moment. That demise would not be a result of expansion or contraction, but due to a phenomenon called vacuum decay. If the universe turns out to be fundamentally unstable, a tiny bubble of the cosmos could convert to a more stable state. Then, the edge of that bubble would expand across the cosmos at the speed of light, obliterating anything in its path with no warning. In a passage a bit reminiscent of a Kurt Vonnegut story, Mack writes, “Maybe it’s for the best that you don’t see it coming.”
    Already known for her engaging Twitter personality, public lectures and popular science writing, Mack has well-honed scientific communication chops. Her evocative writing about some of the most violent processes in the universe, mixed with her obvious glee at the unfathomable grandness of it all, should both satisfy longtime physics fans and inspire younger generations of physicists.
    Reading Mack’s prose feels like learning physics from a brilliant, quirky friend. The book is sprinkled with plenty of informal quips: “I’m not going to sugarcoat this. The universe is frickin’ weird.” Readers will find themselves good-naturedly rolling their eyes at some of the goofy footnotes and nerdy pop-culture references. At the same time, the book delves deep into gritty physics details, thoroughly explaining important concepts like the cosmic microwave background — the oldest light in the universe — and tackling esoteric topics in theoretical physics. Throughout, Mack does an excellent job of recognizing where points of confusion might trip up a reader and offers clarity instead.
    Mack continues a long-standing tradition of playfulness among physicists. That’s how we got stuck with somewhat cheesy names for certain fundamental particles, such as “charm” and “strange” quarks, for example. But she also brings an emotional openness that is uncommon among scientists. Sometimes this is conveyed by declarations in all caps about how amazing the universe is. But other times, it comes when Mack makes herself vulnerable by leveling with the reader about how unnerving this topic is: “I’m trying not to get hung up on it … the end of this great experiment of existence. It’s the journey, I repeat to myself. It’s the journey.”
    Yes, this is a dark subject. Yes, the universe will end, and everything that has ever happened, from the tiniest of human kindnesses to the grandest of cosmic explosions, will one day be erased from the record. Mack struggles with what the inevitable demise of everything means for humankind. By contemplating the end times, we can refine our understanding of the universe, but we can’t change its fate.
    More

  • 'Deepfakes' ranked as most serious AI crime threat

    Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report.
    The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern — based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop.
    Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims — from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.
    Aside from fake content, five other AI-enabled crimes were judged to be of high concern. These were using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news.
    Senior author Professor Lewis Griffin (UCL Computer Science) said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
    Researchers compiled the 20 AI-enabled crimes from academic papers, news and current affairs reports, and fiction and popular culture. They then gathered 31 experts in AI for two days of discussions to rank the severity of the potential crimes. The participants were drawn from academia, the private sector, the police, the government and state security agencies.
    Crimes that were of medium concern included the sale of items and services fraudulently labelled as “AI,” such as security screening and targeted advertising. These would be easy to achieve, with potentially large profits.
    Crimes of low concern included burglar bots — small robots used to gain entry into properties through access points such as letterboxes or cat flaps — which were judged to be easy to defeat, for instance through letterbox cages, and AI-assisted stalking, which, although extremely damaging to individuals, could not operate at scale.
    First author Dr Matthew Caldwell (UCL Computer Science) said: “People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.
    “Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
    Professor Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, which funded the study, said: “We live in an ever-changing world which creates new opportunities — good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur. This report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.” More

  • AI and single-cell genomics

    Traditional single-cell sequencing methods help to reveal insights about cellular differences and functions, but they deliver only static snapshots rather than time-lapse films. This limitation makes it difficult to draw conclusions about the dynamics of cell development and gene activity. The recently introduced method “RNA velocity” aims to reconstruct a cell’s developmental trajectory computationally, by leveraging the ratio of unspliced to spliced transcripts. This method, however, is applicable only to steady-state populations. Researchers were therefore looking for ways to extend the concept of RNA velocity to dynamic populations, which are of crucial importance for understanding cell development and disease response.
    Single-cell velocity
    Researchers from the Institute of Computational Biology at Helmholtz Zentrum München and the Department of Mathematics at TUM developed “scVelo” (single-cell velocity). The method estimates RNA velocity with an AI-based model by solving the full gene-wise transcriptional dynamics. This generalizes the concept of RNA velocity to a wide variety of biological systems, including dynamic populations.
    “We have used scVelo to reveal cell development in the endocrine pancreas, in the hippocampus, and to study dynamic processes in lung regeneration — and this is just the beginning,” says Volker Bergen, main creator of scVelo and first author of the corresponding study in Nature Biotechnology.
    With scVelo, researchers can estimate the reaction rates of RNA transcription, splicing and degradation without needing additional experimental data. These rates can help to better characterize cell identity and phenotypic heterogeneity. The method’s introduction of a latent time reconstructs the unknown developmental time to position cells along the trajectory of the underlying biological process, which is particularly useful for understanding cellular decision making. Moreover, scVelo reveals regulatory changes and putative driver genes, helping to explain not only how but also why cells develop the way they do.
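    For readers who want to try it, scVelo is an open-source Python package, and its documented dynamical-model workflow on the bundled pancreas dataset looks roughly like the sketch below (exact argument names and defaults may differ between scvelo versions):

```python
# Sketch of the scvelo dynamical-model workflow on its bundled pancreas data.
import scvelo as scv

adata = scv.datasets.pancreas()                  # example endocrine pancreas data
scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
scv.pp.moments(adata, n_pcs=30, n_neighbors=30)  # smoothed spliced/unspliced counts

# The dynamical model fits full gene-wise transcriptional dynamics, i.e. the
# transcription, splicing and degradation rates described above.
scv.tl.recover_dynamics(adata)
scv.tl.velocity(adata, mode="dynamical")
scv.tl.velocity_graph(adata)
scv.tl.latent_time(adata)                        # shared latent developmental time

scv.pl.velocity_embedding_stream(adata, basis="umap", color="latent_time")
```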
    Empowering personalized treatments
    AI-based tools like scVelo open the door to personalized treatments. Going from static snapshots to full dynamics allows researchers to move from descriptive towards predictive models. In the future, this might help to better understand disease progression, such as tumor formation, or to unravel cell signaling in response to cancer treatment.
    “scVelo has been downloaded almost 60,000 times since its release last year. It has become a stepping-stone tool towards the kinetic foundation for single-cell transcriptomics,” adds Prof. Fabian Theis, who conceived the study and serves as Director of the Institute of Computational Biology at Helmholtz Zentrum München and Chair for Mathematical Modeling of Biological Systems at TUM.

    Story Source:
    Materials provided by Helmholtz Zentrum München – German Research Center for Environmental Health. Note: Content may be edited for style and length. More

  • Simplified circuit design could revolutionize how wearables are manufactured

    Researchers have demonstrated the use of a ground-breaking circuit design that could transform manufacturing processes for wearable technology.
    Silicon-based electronics have become dramatically smaller and more efficient over a short period of time, leading to major advances in devices such as mobile phones. However, large-area electronics, such as display screens, have not seen similar advances because they rely on a device, the thin-film transistor (TFT), that has serious limitations.
    In a study published by IEEE Sensors Journal, researchers from the University of Surrey, University of Cambridge and the National Research Institute in Rome have demonstrated the use of a pioneering circuit design that uses an alternative type of device, the source-gated transistor (SGT), to create compact circuit blocks.
    In the study, the researchers showed that they are able to achieve the same functionality from two SGTs as would normally be the case from today’s devices that use roughly 12 TFTs — improving performance, reducing waste and making the new process far more cost effective.
    The research team believe that the new fabrication process could result in a generation of ultralightweight, flexible electronics for wearables and sensors.
    Dr Radu Sporea, lead author of the study and Lecturer in Semiconductor Devices at the University of Surrey, said: “We are entering what may be another golden age of electronics, with the arrival of 5G and IoT-enabled devices. However, the way we have manufactured many of our electronics has become increasingly overcomplicated, and this has hindered the performance of many devices.
    “Our design offers a much simpler build process than regular thin-film transistors. Source-gated transistor circuits may also be cheaper to manufacture on a large scale because their simplicity means there is less waste in the form of rejected components. This elegant design of large-area electronics could result in future phones, fitness trackers or smart sensors that are energy efficient, thinner and far more flexible than the ones we are able to produce today.”

    Story Source:
    Materials provided by University of Surrey. Note: Content may be edited for style and length. More