More stories

  •

    What's the prevailing opinion on social media? Look at the flocks, says researcher

    A University at Buffalo communication researcher has developed a framework for measuring the slippery concept of social media public opinion.
    These collective views on a topic or issue expressed on social media, distinct from the conclusions of survey-based public opinion polling, have never been easy to measure. But the “murmuration” framework developed and tested by Yini Zhang, PhD, an assistant professor of communication in the UB College of Arts and Sciences, and her collaborators addresses challenges, like identifying online demographics and accounting for opinion manipulation, that are characteristic of these digital battlegrounds of public discourse.
    Murmuration identifies meaningful groups of social media actors based on the “who-follows-whom” relationship. The actors attract like-minded followers to form “flocks,” which serve as the units of analysis. As opinions form and shift in response to external events, the flocks’ unfolding opinions move like the fluid murmuration of airborne starlings.
    An analysis of social network structure and opinion expression across more than 193,000 Twitter accounts, which together followed more than 1.3 million other accounts, suggests that flock membership can predict opinion and that the murmuration framework reveals distinct patterns of opinion intensity. The researchers studied Twitter because of the ability to see who is following whom, information that is not publicly accessible on other platforms.
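    The article does not spell out the clustering procedure behind the flocks; as a rough, hypothetical sketch of the general idea, accounts can be grouped by the similarity of whom they follow, and each group’s expressed opinion can then be tracked over time (the data and model below are illustrative, not the study’s):

      # Hypothetical illustration of "flock" detection: group accounts by whom they
      # follow, then track each group's expressed opinion over time. This is NOT the
      # authors' exact pipeline, just a minimal sketch with synthetic data.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)

      n_accounts, n_followed = 1000, 200          # accounts and the elites they may follow
      follow_matrix = (rng.random((n_accounts, n_followed)) < 0.05).astype(float)

      # Accounts with similar following patterns end up in the same cluster ("flock").
      n_flocks = 5
      flocks = KMeans(n_clusters=n_flocks, n_init=10, random_state=0).fit_predict(follow_matrix)

      # Toy opinion scores per account per day (e.g., from the content of their tweets).
      opinions = rng.normal(size=(n_accounts, 30))

      # A flock's unfolding opinion is the daily average across its members.
      for f in range(n_flocks):
          trajectory = opinions[flocks == f].mean(axis=0)
          print(f"flock {f}: {len(np.flatnonzero(flocks == f))} accounts, "
                f"mean opinion day 0 = {trajectory[0]:+.2f}")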
    The results, published in the Journal of Computer-Mediated Communication, further support the echo chamber tendencies prevalent on social media, while adding important nuance to existing knowledge.
    “By identifying different flocks and examining the intensity, temporal pattern and content of their expression, we can gain deeper insights far beyond where liberals and conservatives stand on a certain issue,” says Zhang, an expert in social media and political communication. “These flocks are segments of the population, defined not by demographic variables of questionable salience, like white women aged 18-29, but by their online connections and response to events.
    “As such, we can observe opinion variations within an ideological camp and opinions of people that might not be typically assumed to have an opinion on certain issues. We see the flocks as naturally occurring, responding to things as they happen, in ways that take a conversational element into consideration.”
    Zhang says it’s important not to confuse public opinion, as measured by survey-based polling methods, and social media public opinion.
    “Arguably, social media public opinion is twice removed from the general public opinion measured by surveys,” says Zhang. “First, not everyone uses social media. Second, among those who do, only a subset of them actually express opinions on social media. They tend to be strongly opinionated and thus more willing to express their views publicly.”
    Murmuration offers insights that can complement information gathered through survey-based polling. It also moves away from mining social media for the text of specific tweets and instead takes full advantage of social media’s dynamic nature. When text is removed from its context, it becomes difficult to accurately answer questions about what led to a discussion, when it began, and how it evolved over time.
    “Murmuration can allow for research that makes better use of social media data to study public opinion as a form of social interaction and reveal underlying social dynamics,” says Zhang.
    Story Source:
    Materials provided by University at Buffalo. Original written by Bert Gambini. Note: Content may be edited for style and length.

  •

    Pivotal technique harnesses cutting-edge AI capabilities to model and map the natural environment

    Scientists have developed a pioneering new technique that harnesses the cutting-edge capabilities of AI to model and map the natural environment in intricate detail.
    A team of experts, including Charlie Kirkwood from the University of Exeter, has created a sophisticated new approach to modelling the Earth’s natural features with greater detail and accuracy.
    The new technique can recognise intricate features and aspects of the terrain far beyond the capabilities of more traditional methods and use these to generate enhanced-quality environmental maps.
    Crucially, the new system could also pave the way to new discoveries about relationships within the natural environment that may help tackle some of the major climate and environmental issues of the 21st century.
    The study is published in leading journal Mathematical Geosciences, as part of a special issue on geostatistics and machine learning.
    Modelling and mapping the environment is a time-consuming and expensive process. Cost limits the number of observations that can be obtained, which means that creating comprehensive, spatially continuous maps depends on filling in the gaps between those observations.
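    The article does not describe the authors’ AI model itself; as a generic illustration of the gap-filling problem it addresses, the sketch below interpolates sparse point observations onto a dense grid with a Gaussian process, a standard geostatistical baseline rather than the paper’s method:

      # Generic illustration of "filling in the gaps": interpolate sparse point
      # observations of an environmental variable onto a continuous map. This is a
      # standard Gaussian-process (kriging-like) baseline, not the authors' AI model.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(1)

      # Sparse observations: (x, y) coordinates and a measured value at each site.
      coords = rng.uniform(0, 10, size=(40, 2))
      values = np.sin(coords[:, 0]) + 0.1 * coords[:, 1] + rng.normal(0, 0.05, 40)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.01),
                                    normalize_y=True)
      gp.fit(coords, values)

      # Predict on a dense grid to produce a spatially continuous map, with uncertainty.
      gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      mean_map, std_map = gp.predict(grid, return_std=True)
      print(mean_map.shape, std_map.max())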

  •

    Tiny battery-free devices float in the wind like dandelion seeds

    Wireless sensors can monitor how temperature, humidity or other environmental conditions vary across large swaths of land, such as farms or forests.
    These tools could provide unique insights for a variety of applications, including digital agriculture and monitoring climate change. One problem, however, is that it is currently time-consuming and expensive to physically place hundreds of sensors across a large area.
    Inspired by how dandelions use the wind to distribute their seeds, a University of Washington team has developed a tiny sensor-carrying device that can be blown by the wind as it tumbles toward the ground. The system is about 30 times as heavy as a 1 milligram dandelion seed, but it can still travel up to 100 meters, about the length of a football field, in a moderate breeze from the point where a drone releases it. Once on the ground, the device, which can hold at least four sensors, uses solar panels to power its onboard electronics and can share sensor data up to 60 meters away.
    The team published these results March 16 in Nature.
    “We show that you can use off-the-shelf components to create tiny things. Our prototype suggests that you could use a drone to release thousands of these devices in a single drop. They’ll all be carried by the wind a little differently, and basically you can create a 1,000-device network with this one drop,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “This is amazing and transformational for the field of deploying sensors, because right now it could take months to manually deploy this many sensors.”
    Because the devices have electronics on board, it’s challenging to make the whole system as light as an actual dandelion seed. The first step was to develop a shape that would allow the system to take its time falling to the ground so that it could be tossed around by a breeze. The researchers tested 75 designs to determine what would lead to the smallest “terminal velocity,” or the maximum speed a device would have as it fell through the air.
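    For a rough sense of scale, terminal velocity follows from balancing gravity against drag, v_t = sqrt(2mg / (ρ C_d A)); the mass below comes from the article, while the drag coefficient and projected area are assumptions made purely for illustration:

      # Rough, back-of-the-envelope terminal velocity estimate for a lightweight
      # flier, v_t = sqrt(2*m*g / (rho * Cd * A)). The mass (~30 mg) comes from the
      # article; the drag coefficient and area below are assumed for illustration only.
      import math

      m = 30e-6        # mass in kg (~30 mg, about 30x a 1 mg dandelion seed)
      g = 9.81         # gravitational acceleration, m/s^2
      rho = 1.2        # air density, kg/m^3
      Cd = 1.5         # assumed drag coefficient for a porous, bristled disc
      A = 9e-4         # assumed projected area, m^2 (~3 cm x 3 cm)

      v_t = math.sqrt(2 * m * g / (rho * Cd * A))
      print(f"terminal velocity ≈ {v_t:.2f} m/s")   # a slower fall means more time to drift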

  •

    Toward a quantum computer that calculates molecular energy

    Quantum computers are getting bigger, but there are still few practical ways to take advantage of their extra computing power. To get over this hurdle, researchers are designing algorithms to ease the transition from classical to quantum computers. In a new study in Nature, researchers unveil an algorithm that reduces the statistical errors, or noise, produced by quantum bits, or qubits, in crunching chemistry equations.
    Developed by Columbia chemistry professor David Reichman and postdoc Joonho Lee with researchers at Google Quantum AI, the algorithm uses up to 16 qubits on Sycamore, Google’s 53-qubit computer, to calculate ground state energy, the lowest energy state of a molecule. “These are the largest quantum chemistry calculations that have ever been done on a real quantum device,” Reichman said.
    The ability to accurately calculate ground state energy will enable chemists to develop new materials, said Lee, who is also a visiting researcher at Google Quantum AI. The algorithm could be used to design materials to speed up nitrogen fixation for farming and hydrolysis for making clean energy, among other sustainability goals, he said.
    The algorithm uses quantum Monte Carlo, a family of methods for calculating probabilities when there are a large number of random, unknown variables at play, as in a game of roulette. Here, the researchers used their algorithm to determine the ground state energy of three systems: a four-atom hydrogen cluster (H4), using eight qubits for the calculation; molecular nitrogen (N2), using 12 qubits; and solid diamond, using 16 qubits.
    Ground state energy is influenced by variables such as the number of electrons in a molecule, the direction in which they spin, and the paths they take as they orbit a nucleus. This electronic energy is encoded in the Schrödinger equation. Solving the equation on a classical computer becomes exponentially harder as molecules get bigger, although methods for estimating the solution have made the process easier. How quantum computers might circumvent the exponential scaling problem has been an open question in the field.
    In principle, quantum computers should be able to handle exponentially larger and more complex calculations, like those needed to solve the Schrodinger equation, because the qubits that make them up take advantage of quantum states. Unlike binary digits, or bits, made up of ones and zeros, qubits can exist in two states simultaneously. Qubits, however, are fragile and error-prone: the more qubits used, the less accurate the final answer. Lee’s algorithm harnesses the combined power of classical and quantum computers to solve chemistry equations more efficiently while minimizing the quantum computer’s mistakes.
    “It’s the best of both worlds,” Lee said. “We leveraged tools that we already had as well as tools that are considered state-of-the-art in quantum information science to refine quantum computational chemistry.”
    A classical computer can handle most of Lee’s quantum Monte Carlo simulation. Sycamore jumps in for the last, most computationally complex step: the calculation of the overlap between a trial wave function — a guess at the mathematical description of the ground state energy that can be implemented by the quantum computer — and a sample wave function, which is part of the Monte Carlo’s statistical process. This overlap provides a set of constraints, known as the boundary condition, to the Monte Carlo sampling, which ensures the statistical efficiency of the calculation.
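    The article stays at a high level; the toy, fully classical sketch below illustrates why such overlaps matter, using the mixed estimator E = ⟨ψ_T|H|φ⟩ / ⟨ψ_T|φ⟩ on a small matrix stand-in for a molecular Hamiltonian (in the hybrid scheme, the quantum processor’s role is roughly to supply overlaps for trial states too complex to handle classically):

      # Toy, fully classical illustration of the overlap idea behind projector-style
      # quantum Monte Carlo: propagate a state toward the ground state in imaginary
      # time and estimate the energy with the mixed estimator
      #     E = <psi_T | H | phi> / <psi_T | phi>.
      # In the hybrid algorithm described above, a quantum computer would evaluate
      # overlaps like <psi_T | phi> for trial states too rich to handle classically.
      import numpy as np

      rng = np.random.default_rng(42)

      # A small random symmetric "Hamiltonian" (a stand-in for a molecular problem).
      n = 6
      A = rng.normal(size=(n, n))
      H = (A + A.T) / 2

      exact_ground = np.linalg.eigvalsh(H).min()

      psi_T = rng.normal(size=n)          # trial wave function (a rough guess)
      psi_T /= np.linalg.norm(psi_T)

      phi = psi_T.copy()                  # state refined by imaginary-time projection
      tau = 0.05
      for _ in range(2000):
          phi = phi - tau * (H @ phi)     # exp(-tau*H) ~ (I - tau*H) for small tau
          phi /= np.linalg.norm(phi)

      mixed_energy = (psi_T @ H @ phi) / (psi_T @ phi)
      print(f"mixed estimator: {mixed_energy:.4f}, exact ground state: {exact_ground:.4f}")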
    The prior record for solving ground state energy used 12 qubits and a method called the variational quantum eigensolver, or VQE. But VQE ignored the effects of interacting electrons, an important variable in calculating ground state energy that Lee’s quantum Monte Carlo algorithm now includes. Adding virtual correlation techniques from classical computers could help chemists tackle even larger molecules, Lee said.
    The hybrid classical-quantum calculations in this new work were found to be as accurate as some of the best classical methods. This suggests that problems could be solved more accurately and/or quickly with a quantum computer than without — a key milestone for quantum computing. Lee and his colleagues will continue to tweak their algorithm to make it more efficient, while engineers work to build better quantum hardware.
    “The feasibility of solving larger and more challenging chemical problems will only increase with time,” Lee said. “This gives us hope that quantum technologies that are being developed will be practically useful.”
    Story Source:
    Materials provided by Columbia University. Original written by Ellen Neff. Note: Content may be edited for style and length.

  •

    AI to predict antidepressant outcomes in youth

    Mayo Clinic researchers have taken the first step in using artificial intelligence (AI) to predict early outcomes with antidepressants in children and adolescents with major depressive disorder, in a study published in The Journal of Child Psychology and Psychiatry. This work resulted from a collaborative effort between the departments of Molecular Pharmacology and Experimental Therapeutics, and Psychiatry and Psychology, at Mayo Clinic, with support from Mayo Clinic’s Center for Individualized Medicine.
    “This preliminary work suggests that AI has promise for assisting clinical decisions by informing physicians on the selection, use and dosing of antidepressants for children and adolescents with major depressive disorder,” says Paul Croarkin, D.O., a Mayo Clinic psychiatrist and senior author of the study. “We saw improved predictions of treatment outcomes in samples of children and adolescents across two classes of antidepressants.”
    In the study, researchers identified variation in six depressive symptoms: difficulty having fun, social withdrawal, excessive fatigue, irritability, low self-esteem and depressed feelings.
    They assessed these symptoms with the Children’s Depression Rating Scale-Revised to predict the outcomes of 10 to 12 weeks of antidepressant pharmacotherapy. The six symptoms, measured at four to six weeks, predicted 10- to 12-week outcomes in fluoxetine testing datasets with an average accuracy of 73%, and in duloxetine testing datasets with an average accuracy of 76%. In placebo-treated patients, the accuracy of predicting response and remission was significantly lower, at 67%. These outcomes show the potential of AI and patient data to ensure children and adolescents receive treatment with the highest likelihood of delivering therapeutic benefit with minimal side effects, explains Arjun Athreya, Ph.D., a Mayo Clinic researcher and lead author of the study.
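    The study’s actual model is not described in detail here; as a loose, hypothetical illustration of the kind of supervised prediction involved, a simple classifier over six interim symptom scores might be set up as follows (synthetic data, not the Mayo Clinic algorithm):

      # Hypothetical sketch: predict 10-12 week treatment response from six
      # symptom scores measured at weeks 4-6. This is NOT the Mayo Clinic model,
      # just a minimal example of the kind of supervised learning described; the
      # data below are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)

      symptoms = ["difficulty having fun", "social withdrawal", "excessive fatigue",
                  "irritability", "low self-esteem", "depressed feelings"]

      n_patients = 300
      X = rng.integers(0, 8, size=(n_patients, len(symptoms)))        # CDRS-R-like item scores
      score = -0.4 * X.sum(axis=1) + 8 + rng.normal(0, 2, n_patients) # milder interim symptoms -> response
      y = (score > 0).astype(int)                                     # 1 = responder at weeks 10-12

      clf = LogisticRegression(max_iter=1000)
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))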
    “We designed the algorithm to mimic a clinician’s logic of treatment management at an interim time point based on their estimated guess of whether a patient will likely or not benefit from pharmacotherapy at the current dose,” says Dr. Athreya. “Hence, it was essential for me as a computer engineer to embed and observe the practice closely to not only understand the needs of the patient, but also how AI can be consumed and useful to the clinician to benefit the patient.”
    Next steps
    The research findings are a foundation for future work incorporating physiological information, brain-based measures and pharmacogenomic data for precision medicine approaches in treating youth with depression. This will improve the care of young patients with depression, and help clinicians initiate and dose antidepressants in patients who benefit most.
    “Technological advances are understudied tools that could enhance treatment approaches,” says Liewei Wang, M.D., Ph.D., the Bernard and Edith Waterman Director of the Pharmacogenomics Program and Director of the Center for Individualized Medicine at the Mayo Clinic. “Predicting outcomes in children and adolescents treated for depression is critical in managing what could become a lifelong disease burden.”
    Acknowledgments
    This work was supported by Mayo Clinic Foundation for Medical Education and Research; the National Science Foundation under award No. 2041339; and the National Institute of Mental Health under awards R01MH113700, R01MH124655 and R01AA027486. The content is solely the authors’ responsibility and does not necessarily represent the official views of the funding agencies. The authors have declared no competing or potential conflicts of interest.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Colette Gallagher. Note: Content may be edited for style and length.

  •

    Nuclear reactor power levels can be monitored using seismic and acoustic data

    Seismic and acoustic data recorded 50 meters away from a research nuclear reactor could predict whether the reactor was in an on or off state with 98% accuracy, according to a new study published in Seismological Research Letters.
    By applying several machine learning models to the data, researchers at Oak Ridge National Laboratory could also predict when the reactor was transitioning between on and off, and estimate its power levels, with about 66% accuracy.
    The findings provide another tool for the international community to cooperatively verify and monitor nuclear reactor operations in a minimally invasive way, said the study’s lead author Chengping Chai, a geophysicist at Oak Ridge. “Nuclear reactors can be used for both benign and nefarious activities. Therefore, verifying that a nuclear reactor is operating as declared is of interest to the nuclear nonproliferation community.”
    Although seismic and acoustic data have long been used to monitor earthquakes and the structural properties of infrastructure such as buildings and bridges, some researchers now use the data to take a closer look at the movements associated with industrial processes. In this case, Chai and colleagues deployed seismic and acoustic sensors around the High Flux Isotope Reactor at Oak Ridge, a research reactor used to produce neutrons for studies in physics, chemistry, biology, engineering and materials science.
    The reactor’s operation is a thermal process, with a cooling tower that dissipates excess heat. “We found that seismo-acoustic sensors can record the mechanical signatures of vibrating equipment such as fans and pumps at the cooling tower with enough accuracy to shed light on operational questions,” Chai said.
    The researchers then compared a number of machine learning algorithms to discover which were best at estimating the reactor’s power state from specific seismo-acoustic signals. The algorithms were trained on seismic-only, acoustic-only and combined data collected over a year. The combined data produced the best results, they found.
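    As a purely illustrative sketch of that comparison (with synthetic features and labels, not the study’s data or models), one might benchmark a classifier on seismic-only, acoustic-only, and combined inputs like this:

      # Illustrative comparison (synthetic data): train classifiers on seismic-only,
      # acoustic-only, and combined features to predict reactor on/off state. The
      # actual study's features, labels, and models are not reproduced here.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)

      n = 2000
      state = rng.integers(0, 2, n)                                  # 0 = off, 1 = on
      seismic = state[:, None] * 0.8 + rng.normal(0, 1.0, (n, 10))   # 10 seismic features
      acoustic = state[:, None] * 0.6 + rng.normal(0, 1.0, (n, 10))  # 10 acoustic features

      feature_sets = {
          "seismic only": seismic,
          "acoustic only": acoustic,
          "combined": np.hstack([seismic, acoustic]),
      }

      for name, X in feature_sets.items():
          acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                                X, state, cv=5).mean()
          print(f"{name:13s} accuracy: {acc:.2f}")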
    “The seismo-acoustic signals associated with different power levels show complicated patterns that are difficult to analyze with traditional techniques,” Chai explained. “The machine learning approaches are able to infer the complex relationship between different reactor systems and their seismo-acoustic fingerprint and use it to predict power levels.”
    Chai and colleagues detected some interesting signals during the course of their study, including the vibrations of a noisy pump in the reactor’s off state, which disappeared when the pump was replaced.
    Chai said it is a long-term and challenging goal to associate seismic and acoustic signatures with different industrial activities and equipment. For the High Flux Isotope Reactor, preliminary research shows that fans and pumps have different seismo-acoustic fingerprints, and that different fan speeds have their own unique signatures.
    “Some normal but less frequent activities such as yearly or incidental maintenance need to be distinguished in seismic and acoustic data,” Chai said. To better understand how these signatures relate to specific operations, “we need to study both the seismic and acoustic signatures of instruments and the background noise at various industrial facilities.”
    Story Source:
    Materials provided by Seismological Society of America. Note: Content may be edited for style and length.

  •

    Intensity control of projectors in parallel: A doorway to an augmented reality future

    A challenge to adopting augmented reality (AR) in wider applications is working with dynamic objects, owing to a delay between their movement and the projection of light onto their new position. But Tokyo Tech scientists may have a workaround. They have developed a method that uses multiple projectors while reducing delay time. Their method could open the door to a future driven by AR, helping us live increasingly technology-centered lives.
    Technological advancements continue to redesign the way we interact with digital media, the world around us, and each other. Augmented reality (AR), which uses technology to alter the perception of objects in the real world, is unlocking unprecedented landscapes in entertainment, advertising, education, and across many other industries. The use of multiple projectors, alongside a technique called projection mapping, plays an important role in expanding the use of AR. However, an obstacle to the widespread adoption of AR is applying this method to moving, or “dynamic,” targets without the loss of immersion in the AR space.
    This technique, known as dynamic projection mapping, relies on a blend of cameras and projectors that visually detect target surfaces and project onto them, respectively. A critical aspect is the need for high-speed information transfer and low “latency,” or delay between detection and projection. Any latency leads to a misalignment of the projected image, which affects our perception and reduces the effectiveness of the AR space.
    Other issues, like changes in shadowing and target overlap, are easily solved by using multiple projectors. However, each added projector correspondingly drives up the latency, because the intensity at every pixel must be calculated simultaneously for every frame of a moving scene. Simply put, more projectors lead to longer and more complex calculations. This latency is a massive hurdle to AR taking a true foothold in broader applications across society.
    Thankfully, a team of scientists at Tokyo Institute of Technology (Tokyo Tech), led by Associate Professor Yoshihiro Watanabe, might just have the necessary solution. They have developed a novel method to calculate the intensity of each pixel on a target in parallel, reducing the need for a single large optimization calculation. Their method relies on the principle that if pixels are small enough, they can be evaluated independently. While based on an approximation, their results, published in IEEE Transactions on Visualization and Computer Graphics, suggest that they could achieve the same quality of images as conventional, more computationally expensive methods, while drastically increasing the mapping speed and thereby reducing the latency.
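    The paper’s exact formulation is not given in the article; conceptually, though, treating each pixel independently turns one large global optimization into many tiny ones that can be solved in parallel, as in this hypothetical NumPy sketch:

      # Conceptual sketch of per-pixel intensity control for multiple overlapping
      # projectors (NOT the authors' algorithm). For each target pixel, K projector
      # contributions a[k] should combine to match a desired intensity t. Because
      # each pixel is solved independently, the work parallelizes trivially
      # (vectorized here with NumPy; in practice each projector's rendering computer
      # could handle its own share).
      import numpy as np

      rng = np.random.default_rng(0)

      n_pixels, n_projectors = 100_000, 3
      a = rng.uniform(0.2, 1.0, size=(n_pixels, n_projectors))  # per-pixel gain of each projector
      t = rng.uniform(0.0, 1.0, size=n_pixels)                  # desired intensity at each pixel

      # Per-pixel least squares for sum_k a[k] * x[k] = t (minimum-norm solution),
      # then clipped to the physically valid drive range [0, 1].
      x = (t / (a * a).sum(axis=1))[:, None] * a
      x = np.clip(x, 0.0, 1.0)

      achieved = (a * x).sum(axis=1)
      print("mean absolute intensity error:", np.abs(achieved - t).mean().round(4))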
    “Another advantage of our proposed method is that, as there is no longer a need for a single global calculation, it allows the use of multiple rendering computers connected through a network, each controlling only a single projector,” explains Dr. Watanabe. “Such a network system is easily customizable to incorporate more projectors, without major sacrifices to the latency.”
    This new method can allow large spaces with many projectors for efficient dynamic projection mapping, taking us a step closer to broader AR applications, as Dr. Watanabe describes: “The presented high-speed multi-projection is expected to be a major part of important base technologies that will advance spatial AR to derive more practical uses in our daily life.”
    Video: https://youtu.be/ltwSmsYnlK8
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  •

    Stackable 'holobricks' can make giant 3D images

    Researchers have developed a new method to display highly realistic holographic images using ‘holobricks’ that can be stacked together to generate large-scale holograms.
    The researchers, from the University of Cambridge and Disney Research, developed a holobrick proof-of-concept, which can tile holograms together to form a large seamless 3D image. This is the first time this technology has been demonstrated and opens the door for scalable holographic 3D displays. The results are reported in the journal Light: Science & Applications.
    As technology develops, people want high-quality visual experiences, from 2D high resolution TV to 3D holographic augmented or virtual reality, and large true 3D displays. These displays need to support a significant amount of data flow: for a 2D full HD display, the information data rate is about three gigabits per second (Gb/s), but a 3D display of the same resolution would require a rate of three terabits per second, which is not yet available.
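    Those figures are consistent with a back-of-the-envelope estimate, assuming 24-bit colour and a 60 Hz refresh rate (values the article does not state):

      # Back-of-the-envelope data-rate estimate for the display figures quoted above.
      # The 24-bit colour depth and 60 Hz refresh rate are assumptions for illustration.
      width, height = 1920, 1080        # full HD resolution
      bits_per_pixel = 24
      frames_per_second = 60

      rate_2d = width * height * bits_per_pixel * frames_per_second
      print(f"2D full HD: ~{rate_2d / 1e9:.1f} Gb/s")          # ~3 Gb/s

      # A true 3D display of the same resolution must encode many views and depths,
      # pushing the requirement roughly three orders of magnitude higher (~Tb/s).
      print(f"3D at ~1000x the information: ~{rate_2d * 1000 / 1e12:.1f} Tb/s")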
    Holographic displays can reconstruct high quality images for a real 3D visual perception. They are considered the ultimate display technology to connect the real and virtual worlds for immersive experiences.
    “Delivering an adequate 3D experience using the current technology is a huge challenge,” said Professor Daping Chu from Cambridge’s Department of Engineering, who led the research. “Over the past ten years, we’ve been working with our industrial partners to develop holographic displays which allow the simultaneous realisation of large size and large field-of-view, which needs to be matched with a hologram with a large optical information content.”
    However, the information content of such holograms is far greater than what current light engines, known as spatial light modulators, can display, owing to their limited space-bandwidth product.
    For 2D displays, it’s standard practice to tile small size displays together to form one large display. The approach being explored here is similar, but for 3D displays, which has not been done before. “Joining pieces of 3D images together is not trivial, because the final image must be seen as seamless from all angles and all depths,” said Chu, who is also Director of the Centre for Advanced Photonics and Electronics (CAPE). “Directly tiling 3D images in real space is just not possible.”
    To address this challenge, the researchers developed the holobrick unit, based on coarse integrated holographic displays for angularly tiled 3D images, a concept developed at CAPE with Disney Research about seven years ago.
    Each of the holobricks uses a high-information bandwidth spatial light modulator for information delivery in conjunction with coarse integrated optics, to form the angularly tiled 3D holograms with large viewing areas and fields of view.
    Careful optical design ensures that the holographic fringe pattern fills the entire face of the holobrick, so that multiple holobricks can be seamlessly stacked to form a scalable, spatially tiled holographic 3D display capable of both a wide field of view and a large size.
    The proof-of-concept developed by the researchers is made of two seamlessly tiled holobricks. Each full-colour brick is 1024×768 pixels, with a 40° field of view and 24 frames per second, to display tiled holograms for full 3D images.
    “There are still many challenges ahead to make ultra-large 3D displays with wide viewing angles, such as a holographic 3D wall,” said Chu. “We hope that this work can provide a promising way to tackle this issue based on the currently limited display capability of spatial light modulators.”
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length.