More stories

  • Boosting fiber optics communications with advanced quantum-enhanced receiver

    Fiber optic technology is the holy grail of high-speed, long-distance telecommunications. Still, with the continuing exponential growth of internet traffic, researchers are warning of a capacity crunch.
    In AVS Quantum Science, by AIP Publishing, researchers from the National Institute of Standards and Technology and the University of Maryland show how quantum-enhanced receivers could play a critical role in addressing this challenge.
    The scientists developed a method to enhance receivers based on quantum physics properties to dramatically increase network performance while significantly reducing the bit error rate (BER) and energy consumption.
    Fiber optic technology relies on receivers to detect optical signals and convert them into electrical signals. The conventional detection process, largely as a result of random light fluctuations, produces “shot noise,” which decreases detection ability and increases the BER.
    To compensate for this problem, signals must continually be amplified as the light pulses weaken along the optical cable, but there is a limit: adequate amplification can no longer be maintained once signals become barely perceptible.
    Quantum-enhanced receivers that process up to two bits of classical information and can overcome the shot noise have been demonstrated to improve detection accuracy in laboratory environments. In these and other quantum receivers, a separate reference beam with single-photon detection feedback is used so that the reference pulse eventually cancels out the input signal, eliminating the shot noise.
    The researchers’ enhanced receiver, however, can decode as many as four bits per pulse, because it does a better job of distinguishing among different input states.
    To accomplish more efficient detection, they developed a modulation method and implemented a feedback algorithm that takes advantage of the exact times of single-photon detections. No single measurement is perfect, but the “holistically” designed communication system yields increasingly accurate results on average.
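    The release does not spell out the team’s protocol, but the feedback idea it describes can be sketched in code. Below is a toy Monte Carlo, in Python, of a displacement receiver for 16 phase-encoded coherent states (four bits per pulse): the reference field is aimed at the currently most likely input state, photon detections update a Bayesian posterior, and the reference is re-aimed after every detection window. The constellation size, mean photon number and time slicing are illustrative assumptions, not the NIST/UMD team’s actual parameters.

    ```python
    # Toy Monte Carlo of a feedback quantum receiver for M-ary
    # phase-shift-keyed coherent pulses (M = 16 -> 4 bits per pulse).
    # All names and values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    M = 16                 # 16 phase states -> 4 bits per pulse
    MEAN_PHOTONS = 3.0     # |alpha|^2, photons per pulse (assumed)
    SLICES = 200           # time resolution of the feedback loop

    # PSK constellation of candidate input amplitudes
    alphas = np.sqrt(MEAN_PHOTONS) * np.exp(2j * np.pi * np.arange(M) / M)

    def receive(true_k):
        """Decode one pulse; return the index of the decoded state."""
        posterior = np.full(M, 1.0 / M)      # flat prior over the M states
        beta = alphas[np.argmax(posterior)]  # reference (displacement) field
        dt = 1.0 / SLICES
        for _ in range(SLICES):
            # Detections are Poissonian with rate |alpha_true - beta|^2:
            # when beta matches the input, the light is nulled.
            n = rng.poisson(np.abs(alphas[true_k] - beta) ** 2 * dt)
            # Bayesian update: likelihood of n counts per hypothesis
            rates = np.abs(alphas - beta) ** 2 * dt
            posterior *= np.exp(-rates) * rates ** n
            posterior /= posterior.sum()
            # Feedback: re-aim the reference at the leading hypothesis
            beta = alphas[np.argmax(posterior)]
        return int(np.argmax(posterior))

    trials = 2000
    errors = sum(receive(k := rng.integers(M)) != k for _ in range(trials))
    print(f"symbol error rate over {trials} pulses: {errors / trials:.3f}")
    ```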
    “We studied the theory of communications and the experimental techniques of quantum receivers to come up with a practical telecommunication protocol that takes maximal advantage of the quantum measurement,” author Sergey Polyakov said. “With our protocol, because we want the input signal to contain as few photons as possible, we maximize the chance that the reference pulse updates to the right state after the very first photon detection, so at the end of the measurement, the BER is minimized.”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Fixed network of smartphones provides earthquake early warning in Costa Rica

    Earthquake early warnings can be delivered successfully using a small network of off-the-shelf smartphones attached to building baseboards, according to a study conducted in Costa Rica last year.
    In his presentation at the Seismological Society of America (SSA)’s 2021 Annual Meeting, Ben Brooks of the U.S. Geological Survey said the ASTUTI (Alerta Sismica Temprana Utilizando Teléfonos Inteligentes) network of more than 80 stations performed comparably to scientific-grade warning systems.
    During six months of ASTUTI operation, there were 13 earthquakes that caused noticeable shaking in Costa Rica, including in the city of San Jose where the network was deployed. The system was able to detect and alert on five of these earthquakes, Brooks and his colleagues determined when they “replayed” the seismic events to test their network.
    Alerts for the system are triggered when shaking exceeds a certain threshold, equivalent to slightly less than what would be expected for a magnitude 5 earthquake, as measured by the accelerometers that are already built into the phones, Brooks said.
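    As a rough illustration of this kind of triggering logic, the Python sketch below flags a station when the accelerometer magnitude deviates from gravity by more than a fixed fraction of g, and issues a network alert only when several stations agree. The threshold value and the two-station confirmation rule are assumptions for illustration, not ASTUTI’s actual parameters.

    ```python
    # Minimal sketch of threshold triggering on smartphone accelerometers,
    # in the spirit of the ASTUTI approach. Values are assumptions.
    import math

    G = 9.81                  # gravity, m/s^2
    TRIGGER_G = 0.02          # assumed trigger threshold, as a fraction of g

    def station_triggered(samples):
        """samples: iterable of (ax, ay, az) accelerations in m/s^2."""
        for ax, ay, az in samples:
            # Deviation of total acceleration from 1 g ~ ground shaking
            shaking = abs(math.sqrt(ax*ax + ay*ay + az*az) - G) / G
            if shaking > TRIGGER_G:
                return True
        return False

    def network_alert(stations, min_stations=2):
        """Alert only when several stations trigger, suppressing the false
        positives that a single bumped phone would otherwise cause."""
        return sum(station_triggered(s) for s in stations) >= min_stations

    # Example: one quiet phone, two phones recording ~0.03 g of shaking
    quiet = [(0.0, 0.0, 9.81)] * 100
    shaken = [(0.3, 0.2, 10.1)] * 100
    print(network_alert([quiet, shaken, shaken]))   # True
    ```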
    In simulations of the magnitude 7.6 Nicoya earthquake that took place in 2012 in Costa Rica, ASTUTI would have delivered its first alerts on average nine to 13 seconds after the event.
    “The performance level over the six months is encouraging,” Brooks said. “Cascadia events in the Pacific Northwest are similar to the Costa Rican subduction zone, and latencies for ShakeAlert in Cascadia are about 10 seconds, so it’s comparable.”
    ASTUTI demonstrates the possibilities of lower-cost earthquake early warning for regions that lack scientific-grade network stations such as those behind ShakeAlert, he noted.

  • Forensics puzzle cracked via fluid mechanical principles

    In 2009, music producer Phil Spector was convicted of the 2003 murder of actress Lana Clarkson, who was shot in the face at very short range. Spector was dressed in white, yet no bloodstains were found on his clothing, even though significant backward blood spatter occurred.
    How could his clothing remain clean if he was the shooter? This real-life forensic puzzle inspired University of Illinois at Chicago and Iowa State University researchers to explore the fluid physics involved.
    In Physics of Fluids, from AIP Publishing, the researchers present theoretical results revealing an interaction of the incoming vortex ring of propellant muzzle gases with backward blood spatter.
    A detailed analytical theory of such turbulent self-similar vortex rings was given by this group in earlier work and is linked mathematically to the theory of quantum oscillators.
    “In our previous work, we determined the physical mechanism of backward spatter as an inevitable instability triggered by acceleration of a denser fluid, blood, toward a lighter fluid, air,” said Alexander Yarin, a distinguished professor at the University of Illinois at Chicago. “This is the so-called Rayleigh-Taylor instability, which is responsible for water dripping from a ceiling.”
    Backward spatter droplets, splashed out by the penetrating bullet, fly from the victim toward the shooter. So the researchers zeroed in on how these blood droplets interact with a turbulent vortex ring of muzzle gases moving from the shooter toward the victim.
    They predict that backward blood spatter droplets can be entrained by the approaching turbulent vortex ring, incorporated and swept along within its flow, and even turned around.
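    A one-dimensional toy model, sketched below in Python, illustrates the entrainment mechanism: a droplet launched toward the shooter relaxes toward the local gas velocity of the oncoming muzzle-gas flow through a drag term, decelerates, and reverses direction. This is only a cartoon under assumed numbers; the authors’ analysis rests on their full self-similar turbulent vortex ring theory.

    ```python
    # One-dimensional toy of droplet entrainment by an oncoming gas flow.
    # All numbers are illustrative assumptions.
    TAU = 2e-3       # assumed droplet velocity relaxation time, s
    U_GAS = 15.0     # assumed local gas speed toward the victim, m/s
    V0 = -4.0        # droplet launch speed toward the shooter, m/s

    dt, steps = 1e-4, 400
    x, v = 0.0, V0
    for n in range(steps):
        v += (U_GAS - v) / TAU * dt   # drag relaxes v toward the gas velocity
        x += v * dt
        if n % 100 == 0:
            print(f"t={n*dt*1e3:4.1f} ms  v={v:+6.2f} m/s  x={x*100:+6.2f} cm")
    # The droplet decelerates, reverses, and is swept back toward the victim,
    # consistent with backward-spatter droplets landing behind the victim.
    ```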
    “This means that such droplets can even land behind the victim, along with the forward spatter caused by the penetrating bullet,” said Yarin. “With a certain position of the shooter relative to the victim, it is possible for the shooter’s clothing to remain practically free of bloodstains.”
    The physical understanding reached in this work will be helpful in forensic analysis of cases such as Clarkson’s murder.
    “Presumably, many forensic puzzles of this type can be solved based on sound fluid mechanical principles,” said Yarin.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • AI agent helps identify material properties faster

    A team headed by Dr. Phillip M. Maffettone (currently at the National Synchrotron Light Source II in Upton, USA) and Professor Andrew Cooper from the Department of Chemistry and Materials Innovation Factory at the University of Liverpool joined forces with the Bochum-based group headed by Lars Banko and Professor Alfred Ludwig from the Chair of Materials Discovery and Interfaces, together with Yury Lysogorskiy from the Interdisciplinary Centre for Advanced Materials Simulation. The international team published its report in the journal Nature Computational Science on 19 April 2021.
    Previously manual, time-consuming, error-prone
    Efficient analysis of X-ray diffraction (XRD) data plays a crucial role in the discovery of new materials, for example for the energy systems of the future. It is used to analyse the crystal structures of new materials in order to find out which applications they might be suitable for. XRD measurements have already been significantly accelerated in recent years through automation and provide large amounts of data when measuring material libraries. “However, XRD analysis techniques are still largely manual, time-consuming, error-prone and not scalable,” says Alfred Ludwig. “In order to discover and optimise new materials faster in the future using autonomous high-throughput experiments, new methods are required.”
    In their publication, the team shows how artificial intelligence can be used to make XRD data analysis faster and more accurate. The solution is an AI agent called Crystallography Companion Agent (XCA), which collaborates with the scientists. XCA can perform autonomous phase identifications from XRD data as the data are being measured. The agent is suitable for both organic and inorganic material systems. This is enabled by the large-scale simulation of physically correct X-ray diffraction data that is used to train the algorithm.
    Expert discussion is simulated
    What is more, a unique feature of the agent that the team has adapted for the current task is that it overcomes the overconfidence of traditional neural networks: such networks make a final decision even if the data do not support a definite conclusion, whereas a scientist would communicate their uncertainty and discuss results with other researchers. “This process of decision-making in the group is simulated by an ensemble of neural networks, similar to a vote among experts,” explains Lars Banko. In XCA, an ensemble of neural networks forms the expert panel, so to speak, which submits a recommendation to the researchers. “This is accomplished without manual, human-labelled data and is robust to many sources of experimental complexity,” says Banko.
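    The voting idea can be sketched in a few lines of Python. In the toy below, each ensemble member outputs class probabilities over candidate phases, the panel averages the votes, and it reports “uncertain” rather than forcing a call when the members disagree. The phase labels, member outputs and agreement threshold are illustrative assumptions, not XCA’s actual architecture.

    ```python
    # Toy "expert panel": an ensemble votes on the crystal phase and
    # abstains when the experts disagree. All values are assumptions.
    import numpy as np

    phases = ["fcc", "bcc", "hcp", "amorphous"]   # assumed candidate phases

    def panel_decision(member_probs, agreement=0.7):
        """member_probs: (n_networks, n_phases) softmax outputs for one
        diffraction pattern. Returns a phase label or 'uncertain'."""
        member_probs = np.asarray(member_probs)
        mean = member_probs.mean(axis=0)            # average the votes
        best = int(np.argmax(mean))
        # Fraction of networks whose top vote matches the consensus
        support = np.mean(np.argmax(member_probs, axis=1) == best)
        if support < agreement or mean[best] < agreement:
            return "uncertain"                      # the panel reports doubt
        return phases[best]

    # Three confident, agreeing networks -> a definite call
    agree = [[0.90, 0.05, 0.03, 0.02]] * 3
    # Three networks split between two phases -> abstention
    split = [[0.50, 0.45, 0.03, 0.02],
             [0.40, 0.55, 0.03, 0.02],
             [0.48, 0.47, 0.03, 0.02]]
    print(panel_decision(agree))   # fcc
    print(panel_decision(split))   # uncertain
    ```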
    XCA can also be expanded to other forms of characterisation such as spectroscopy. “By complementing recent advances in automation and autonomous experimentation, this development constitutes an important step in accelerating the discovery of new materials,” concludes Alfred Ludwig.
    Story Source:
    Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

  • Is social media use a potentially addictive behavior? Maybe not

    Frequent use of social media may not amount to the same thing as addiction, according to research at the University of Strathclyde.
    The study invited 100 participants to locate specific social media apps on a simulated smartphone screen as quickly and accurately as possible, while ignoring other apps. The participants were varied in the extent and type of their social media use and engagement.
    The exercise aimed to assess whether social media users who reported the greatest level of use were more likely to have their attention drawn to the apps through a process known as ‘attentional bias,’ which is a recognised hallmark of addiction.
    It also assessed whether this bias was associated with scores on established measures of social media engagement and ‘addiction’.
    The findings did not indicate that users’ attention was drawn more to social media apps than to any others, such as a weather app; nor was attentional bias associated with self-reported or measurable levels of addiction severity. This contrasts with other studies, which have shown attentional bias in connection with addictions such as gambling and alcohol.
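    For illustration, the Python sketch below shows how an attentional bias score of this kind is typically computed: the difference in reaction time between trials with a social media distractor and trials with a neutral distractor, correlated against self-reported use. The data here are synthetic and the numbers are assumptions, not the study’s measurements.

    ```python
    # Sketch of an attentional-bias score from a visual search task.
    # Synthetic data; all numbers are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100                              # participants, as in the study

    # Synthetic mean reaction times (ms) per distractor condition
    rt_social = rng.normal(620, 40, n)   # social media app as distractor
    rt_neutral = rng.normal(620, 40, n)  # neutral app (e.g. weather)
    usage = rng.normal(3.0, 1.2, n)      # self-reported hours/day (assumed)

    bias = rt_social - rt_neutral        # positive => attention captured
    r = np.corrcoef(bias, usage)[0, 1]
    print(f"mean bias: {bias.mean():+.1f} ms, bias-usage correlation: {r:+.2f}")
    # With no built-in effect, both values hover near zero, matching the
    # pattern the Strathclyde study reports for its real participants.
    ```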
    The research has been published in the Journal of Behavioral Addictions.
    Dr David Robertson, a Lecturer in Psychology at Strathclyde and a partner in the research, said: “Social media use has become a ubiquitous part of society, with 3.8 billion users worldwide. While research has shown that there are positive aspects to social media engagement, such as feelings of social connectedness and wellbeing, much of the focus has been on the negative mental health outcomes which are associated with excessive use, such as higher levels of depression and anxiety.
    “The evidence to support such negative associations is mixed but there is also a growing debate as to whether excessive levels of social media use should become a clinically defined addictive behaviour.
    “We did not find evidence of attentional bias. People who frequently checked and posted their social media accounts were no more likely to have their attention drawn to the icon of a social media app than those who check and post less often.
    “Much more research is required into the effects of social media use, both positive and negative, before definitive conclusions can be reached about the psychological effects of engagement with these platforms. Our research indicates that frequent social media use may not, at present, necessarily fit into traditional addiction frameworks.”
    Story Source:
    Materials provided by University of Strathclyde. Note: Content may be edited for style and length.

  • Body mass index, age can affect your risk for neck pain

    With roughly 80% of jobs being sedentary, often requiring several hours of sitting stooped in front of a computer screen, neck pain is a growing occupational hazard. Smartphones and other devices have also caused people to bend their necks for prolonged periods. But is bad posture solely to blame?
    In a recent study, researchers at Texas A&M University have found that while poor neck and head postures are indeed the primary determinants of neck pain, body mass index, age and the time of the day also influence the neck’s ability to perform sustained or repeated movements.
    “Neck pain is one of the leading and fastest-growing causes of disability in the world,” said Xudong Zhang, professor in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering. “Our study has pointed to a combination of work and personal factors that strongly influence the strength and endurance of the neck over time. More importantly, since these factors have been identified, they can then be modified so that the neck is in better health and pain is avoided or deterred.”
    The results of the study are published online in Human Factors, a flagship journal in the field of human factors and ergonomics.
    According to the Global Burden of Disease Study by the Institute for Health Metrics and Evaluation, neck pain is ranked as the fourth leading cause of global disability. Neck pain has largely been attributed to lifestyle, particularly when people spend long durations of time with their necks bent forward. However, Zhang said a systematic, quantitative study has been lacking on how personal factors, such as sex, weight, age and work-related habits, can affect neck strength and endurance.
    For their experiments, Zhang and his team recruited 20 adult men and 20 adult women with no previous neck-related issues to perform controlled head-neck exertions in a laboratory setting. Instead of holding a specific neck posture for a long time, similar to what might happen at a workplace, the participants performed “sustained-till-exhaustion” head-neck exertions.

  • Researchers use AI to empower environmental regulators

    Monitoring environmental compliance is a particular challenge for governments in poor countries. A new machine learning approach that uses satellite imagery to pinpoint highly polluting brick kilns in Bangladesh could provide a low-cost solution.
    Like superheroes capable of seeing through obstacles, environmental regulators may soon wield the power of all-seeing eyes that can identify violators anywhere at any time, according to a new Stanford University-led study. The paper, published the week of April 19 in Proceedings of the National Academy of Sciences (PNAS), demonstrates how artificial intelligence combined with satellite imagery can provide a low-cost, scalable method for locating and monitoring otherwise hard-to-regulate industries.
    “Brick kilns have proliferated across Bangladesh to supply the growing economy with construction materials, which makes it really hard for regulators to keep up with new kilns that are constructed,” said co-lead author Nina Brooks, a postdoctoral associate at the University of Minnesota’s Institute for Social Research and Data Innovation who did the research while a PhD student at Stanford.
    While previous research has shown the potential to use machine learning and satellite observations for environmental regulation, most studies have focused on wealthy countries with dependable data on industrial locations and activities. To explore the feasibility in developing countries, the Stanford-led research focused on Bangladesh, where government regulators struggle to locate highly polluting informal brick kilns, let alone enforce rules.
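    The general recipe, classifying fixed-size satellite image tiles with a convolutional network, can be sketched as below in Python with PyTorch. The architecture, tile size and class setup are illustrative assumptions, not the Stanford team’s actual model.

    ```python
    # Minimal sketch of tile classification for kiln detection.
    # Architecture and sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class KilnClassifier(nn.Module):
        """Small CNN that labels satellite tiles as kiln / no kiln."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)    # logits: [no kiln, kiln]

        def forward(self, tiles):           # tiles: (batch, 3, H, W) RGB
            return self.head(self.features(tiles).flatten(1))

    model = KilnClassifier()
    tiles = torch.randn(8, 3, 64, 64)       # eight fake 64x64 RGB tiles
    probs = model(tiles).softmax(dim=1)
    print(probs[:, 1])                      # per-tile probability of a kiln
    ```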
    A growing threat
    Bricks are key to development across South Asia, especially in regions that lack other construction materials, and the kilns that make them employ millions of people. However, their highly inefficient coal burning presents major health and environmental risks. In Bangladesh, brick kilns are responsible for 17 percent of the country’s total annual carbon dioxide emissions and, in Dhaka, the country’s most populous city, up to half of the small particulate matter considered especially dangerous to human lungs. Kiln emissions are a significant contributor to the country’s overall air pollution, which is estimated to reduce Bangladeshis’ average life expectancy by almost two years.
    “Air pollution kills seven million people every year,” said study senior author Stephen Luby, a professor of infectious diseases at Stanford’s School of Medicine. “We need to identify the sources of this pollution, and reduce these emissions.”
    Bangladesh government regulators are attempting to manually map and verify the locations of brick kilns across the country, but the effort is incredibly time- and labor-intensive and is quickly outpaced by the rapid proliferation of new kilns. The work is also likely to suffer from inaccuracy and bias, as government data in low-income countries often does, according to the researchers.

  • New algorithm uses online learning for massive cell data sets

    The fact that the human body is made up of cells is a basic, well-understood concept. Yet amazingly, scientists are still trying to determine the various types of cells that make up our organs and contribute to our health.
    A relatively recent technique called single-cell sequencing is enabling researchers to recognize and categorize cell types by characteristics such as which genes they express. But this type of research generates enormous amounts of data, with datasets of hundreds of thousands to millions of cells.
    A new algorithm developed by Joshua Welch, Ph.D., of the Department of Computational Medicine and Bioinformatics, Ph.D. candidate Chao Gao and their team uses online learning, greatly speeding up this process and providing a way for researchers worldwide to analyze large data sets using the amount of memory found on a standard laptop computer. The findings are described in the journal Nature Biotechnology.
    “Our technique allows anyone with a computer to perform analyses at the scale of an entire organism,” says Welch. “That’s really what the field is moving towards.”
    The team demonstrated their proof of principle using data sets from the National Institutes of Health’s BRAIN Initiative, a project aimed at understanding the human brain by mapping every cell, with investigative teams throughout the country, including Welch’s lab.
    Typically, explains Welch, for projects like this one, each single-cell data set that is submitted must be re-analyzed with the previous data sets in the order they arrive. Their new approach allows new datasets to be added to existing ones without reprocessing the older datasets. It also enables researchers to break up datasets into so-called mini-batches to reduce the amount of memory needed to process them.
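    The mini-batch idea can be sketched with a generic online matrix factorization in Python: each arriving batch of cells is folded into running sufficient statistics, the shared factor matrix is updated, and the raw batch is discarded. This is a stand-in illustration, not Welch and Gao’s actual algorithm, and all sizes are made-up assumptions.

    ```python
    # Generic online mini-batch factorization of a cells x genes matrix
    # as W @ H. A sketch of the online-learning idea only; sizes assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    genes, k = 500, 10                       # features and latent factors
    H = np.abs(rng.normal(size=(k, genes)))  # shared "metagene" matrix
    A = np.zeros((k, k))                     # running statistics: sum W^T W
    B = np.zeros((k, genes))                 #                     sum W^T X

    def process_batch(X, n_iter=50, eps=1e-9):
        """Fold one mini-batch of cells (X: cells x genes) into H."""
        global H, A, B
        W = np.abs(rng.normal(size=(X.shape[0], k)))
        for _ in range(n_iter):              # multiplicative updates for W
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        A += W.T @ W                         # accumulate statistics, then
        B += W.T @ X                         # discard the raw batch
        H *= B / (A @ H + eps)               # update H from statistics only

    for batch in range(20):                  # e.g. 20 arriving datasets
        X = np.abs(rng.normal(size=(1000, genes)))   # 1,000 cells each
        process_batch(X)
    print("H learned from 20,000 cells without holding them all in memory")
    ```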
    “This is crucial for the data sets now being generated with millions of cells,” Welch says. “This year, there have been five to six papers with two million cells or more, and the amount of memory you need just to store the raw data is significantly more than anyone has on their computer.”
    Welch likens the online technique to the continuous data processing done by social media platforms like Facebook and Twitter, which must process continuously generated data from users and serve up relevant posts to people’s feeds. “Here, instead of people writing tweets, we have labs around the world performing experiments and releasing their data.”
    The finding has the potential to greatly improve efficiency for other ambitious projects like the Human Body Map and Human Cell Atlas. Says Welch, “Understanding the normal complement of cells in the body is the first step towards understanding how they go wrong in disease.”
    Story Source:
    Materials provided by Michigan Medicine – University of Michigan. Original written by Kelly Malcom. Note: Content may be edited for style and length.