More stories

  • Do spoilers harm movie box-office revenue?

    Researchers from Western University and the University of Houston have a new paper in the Journal of Marketing that examines whether spoiler movie reviews harm box office revenue.
    The study, forthcoming in the journal, is titled “Do Spoilers Really Spoil? Using Topic Modeling to Measure the Effect of Spoiler Reviews on Box Office Revenue” and is authored by Jun Hyun (Joseph) Ryoo, Xin (Shane) Wang and Shijie Lu.
    “No spoilers!” say many directors. Their concern is that if publications or moviegoers reveal plotlines and surprises, the public won’t want to pay for the movie. But is that concern well-founded?
    To answer this question, the research team analyzed daily box office revenues for movies released in the United States between January 2013 and December 2017. These movies were then matched with their reviews collected from the Internet Movie Database (IMDb), the most popular movie review platform in the United States. The researchers also developed a measure of spoiler intensity, that is, the degree of plot uncertainty resolved by reading the spoiler reviews for a movie. The results indicate that spoiler intensity has a positive and significant relationship with box office revenue.
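    The paper builds its spoiler-intensity measure with topic modeling; the sketch below is only a rough illustration of that idea, not the authors' pipeline. It fits a small topic model to hypothetical review text, averages the dominant-topic weight of spoiler-flagged reviews per movie as a crude stand-in for spoiler intensity, and regresses made-up log revenue on it. All data, column names, and the intensity definition here are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical inputs: one row per review, with the movie it belongs to,
# its text, and whether the reviewer flagged it as containing spoilers.
reviews = pd.DataFrame({
    "movie":   ["A", "A", "B", "B", "C", "C"],
    "text":    ["the twist ending reveals the villain was the mentor",
                "great acting and a wonderful soundtrack",
                "the hero dies saving the city in the final act",
                "beautiful cinematography but a slow middle section",
                "the detective's partner turns out to be the killer",
                "fun popcorn movie, nothing more"],
    "spoiler": [True, False, True, False, True, False],
})

# Fit a small topic model over all review text.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reviews["text"])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(counts)          # per-review topic proportions

# Crude stand-in for "spoiler intensity": the average dominant-topic weight
# carried by the spoiler-flagged reviews of each movie.
reviews["plot_topic"] = topic_weights.max(axis=1)
intensity = (reviews[reviews["spoiler"]]
             .groupby("movie")["plot_topic"].mean()
             .rename("spoiler_intensity")
             .reset_index())

# Hypothetical revenue data, merged with the intensity measure and regressed
# (the real study uses daily revenue and many controls).
revenue = pd.DataFrame({"movie": ["A", "B", "C"], "log_revenue": [15.2, 13.8, 12.5]})
panel = revenue.merge(intensity, on="movie")
fit = sm.OLS(panel["log_revenue"], sm.add_constant(panel["spoiler_intensity"])).fit()
print(fit.params)
```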
    Ryoo explains, “We postulate that uncertainty reduction is the driving mechanism behind this positive spoiling effect. If potential moviegoers are unsure about the quality of a movie, they are likely to benefit from the plot-related content of spoiler reviews when making their purchase decisions.” Consistent with this, the research reveals an inverted-U relationship between average ratings and spoiler intensity, which suggests that the positive spoiling effect is stronger for movies that receive moderate or mixed ratings than for movies that receive either very high or very low ratings. The positive spoiling effect is also stronger for movies that receive less advertising. Advertising can serve an informative function for consumers and is seen as a credible signal of quality in the movie industry. Less advertising should therefore lead to greater uncertainty about movie quality for potential moviegoers. Wang adds, “The positive spoiling effect is also stronger for movies with limited release, which is a strategy often employed by independent and arthouse studios associated with greater uncertainty in terms of artistic quality. And the positive spoiling effect declines over time, likely because consumers have greater uncertainty in the earlier periods of a movie’s life cycle.”
    This leads to several implications for stakeholders in the movie industry. Foremost among these is that online review platforms can potentially increase consumer welfare by allowing spoiler reviews. “The uncertainty-reduction mechanism suggests a spoiler-friendly review platform can help consumers make appropriate purchase decisions. We recommend that review platforms keep the warning labels on spoiler reviews because of the benefit of allowing consumers to self-select into the exposure to spoilers,” says Lu.

    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  • Final dance of unequal black hole partners

    Solving the equations of general relativity for colliding black holes is no simple matter.
    Physicists began using supercomputers to obtain solutions to this famously hard problem back in the 1960s. In 2000, with no solutions in sight, Kip Thorne, 2017 Nobel Laureate and one of the designers of LIGO, famously bet that there would be an observation of gravitational waves before a numerical solution was reached.
    He lost that bet when, in 2005, Carlos Lousto, then at The University of Texas at Brownsville, and his team generated a solution using the Lonestar supercomputer at the Texas Advanced Computing Center. (Concurrently, groups at NASA and Caltech derived independent solutions.)
    In 2015, when the Laser Interferometer Gravitational-Wave Observatory (LIGO) first observed such waves, Lousto was in shock.
    “It took us two weeks to realize this was really from nature and not from inputting our simulation as a test,” said Lousto, now a professor of mathematics at Rochester Institute of Technology (RIT). “The comparison with our simulations was so obvious. You could see with your bare eyes that it was the merger of two black holes.”
    Lousto is back again with a new numerical relativity milestone, this time simulating merging black holes where the ratio of the mass of the larger black hole to the smaller one is 128 to 1 — a scientific problem at the very limit of what is computationally possible. His secret weapon: the Frontera supercomputer at TACC, the eighth most powerful supercomputer in the world and the fastest at any university.

    His research with collaborator James Healy, supported by the National Science Foundation (NSF), was published in Physical Review Letters this week. It may require decades to confirm the results experimentally, but nonetheless it serves as a computational achievement that will help drive the field of astrophysics forward.
    “Modeling pairs of black holes with very different masses is very computationally demanding because of the need to maintain accuracy in a wide range of grid resolutions,” said Pedro Marronetti, program director for gravitational physics at NSF. “The RIT group has performed the world’s most advanced simulations in this area, and each of them takes us closer to understanding observations that gravitational-wave detectors will provide in the near future.”
    LIGO is only able to detect gravitational waves caused by small and intermediate mass black holes of roughly equal size. It will take observatories 100 times more sensitive to detect the type of mergers Lousto and Healy have modeled. Their findings show not only what the gravitational waves caused by a 128:1 merger would look like to an observer on Earth, but also characteristics of the ultimate merged black hole including its final mass, spin, and recoil velocity. These led to some surprises.
    “These merged black holes can have speeds much larger than previously known,” Lousto said. “They can travel at 5,000 kilometers per second. They kick out from a galaxy and wander around the universe. That’s another interesting prediction.”
    The researchers also computed the gravitational waveforms — the signal that would be perceived near Earth — for such mergers, including their peak frequency, amplitude, and luminosity. Comparing those values with predictions from existing scientific models, their simulations were within 2 percent of the expected results.

    Previously, the largest mass ratio that had ever been solved with high precision was 16 to 1 — eight times less extreme than Lousto’s simulation. The challenge of simulating larger mass ratios is that it requires resolving the dynamics of the interacting systems at additional scales.
    Like modelers in many fields, Lousto uses a method called adaptive mesh refinement to get precise models of the dynamics of the interacting black holes. It involves putting the black holes, the space between them, and the distant observer (us) on a grid or mesh, and refining the areas of the mesh with greater detail where it is needed.
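    The sketch below is a toy one-dimensional version of that idea, not the group's production relativity code: cells where a field changes rapidly (standing in for the region near a black hole) are given extra grid points, while smooth regions are left coarse.

```python
import numpy as np

def refine_where_steep(x, field, threshold):
    """One pass of a toy adaptive-mesh-refinement step in 1-D.

    Cells where the field changes faster than `threshold` per unit length
    get an extra midpoint, halving the local grid spacing.
    """
    new_x = [x[0]]
    for i in range(len(x) - 1):
        gradient = abs(field[i + 1] - field[i]) / (x[i + 1] - x[i])
        if gradient > threshold:
            new_x.append(0.5 * (x[i] + x[i + 1]))   # refine: insert a midpoint
        new_x.append(x[i + 1])
    return np.array(new_x)

# Example: a sharply peaked field (a stand-in for the region near the smaller
# black hole) gets extra resolution; the smooth far field does not.
x = np.linspace(-10, 10, 41)
field = np.exp(-x**2)            # sharp feature near x = 0
refined = refine_where_steep(x, field, threshold=0.2)
print(len(x), "->", len(refined), "grid points")
```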
    Lousto’s team approached the problem with a methodology that he compares to Zeno’s first paradox. By repeatedly halving the mass of the smaller black hole relative to the larger one while adding internal grid refinement levels, they were able to go from 32:1 black hole mass ratios to 128:1 binary systems that undergo 13 orbits before merger. On Frontera, it required seven months of constant computation.
    “Frontera was the perfect tool for the job,” Lousto said. “Our problem requires high performance processors, communication, and memory, and Frontera has all three.”
    The simulation isn’t the end of the road. Black holes can have a variety of spins and configurations, which impact the amplitude and frequency of the gravitational waves their merger produces. Lousto would like to solve the equations 11 more times to get a good first range of possible “templates” to compare with future detections.
    The results will help the designers of future Earth- and space-based gravitational wave detectors plan their instruments. These include advanced, third-generation ground-based gravitational-wave detectors and the Laser Interferometer Space Antenna (LISA), which is targeted for launch in the mid-2030s.
    The research may also help answer fundamental mysteries about black holes, such as how some can grow so big — millions of times the mass of the Sun.
    “Supercomputers help us answer these questions,” Lousto said. “And the problems inspire new research and pass the torch to the next generation of students.”

  • Swirl power: How gentle body movement will charge your mobile phone

    Researchers have found a way to produce nylon fibres that are smart enough to generate electricity from simple body movement, paving the way for smart clothes that will monitor our health through miniaturised sensors and charge our devices without any external power source.
    This discovery — a collaboration between the University of Bath, the Max Planck Institute for Polymer Research in Germany and the University of Coimbra in Portugal — is based on breakthrough work on solution-processed piezoelectric nylons led by Professor Kamal Asadi from the Department of Physics at Bath and his former PhD student Saleem Anwar.
    Piezoelectricity describes the phenomenon where mechanical energy is transformed into electric energy. To put it simply, when you tap on or distort a piezoelectric material, it generates a charge. Add a circuit and the charge can be taken away, stored in a capacitor, for instance, and then put to use — for example, to power your mobile phone.
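    As a back-of-the-envelope illustration of that chain, the direct piezoelectric effect relates the generated charge to the applied force through a material coefficient (Q = d33 x F). The coefficient, force, and capacitor value below are rough, assumed numbers typical of piezoelectric polymers, not figures from the study.

```python
# Back-of-the-envelope direct piezoelectric effect: Q = d33 * F.
d33 = 3e-12        # C/N, illustrative polymer-scale coefficient (assumption)
force = 10.0       # N, roughly a firm tap or an arm swing pulling on a fibre
charge = d33 * force          # coulombs generated per tap
capacitance = 1e-9            # F, a hypothetical 1 nF storage capacitor
voltage = charge / capacitance
print(f"{charge*1e12:.1f} pC per tap -> {voltage*1e3:.2f} mV on a 1 nF capacitor")
```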
    While wearing piezoelectric clothing, such as a shirt, even a simple movement like swinging your arms would cause sufficient distortions in the shirt’s fibres to generate electricity.
    Professor Asadi said: “There’s growing demand for smart, electronic textiles, but finding cheap and readily available fibres of electronic materials that are suitable for modern-day garments is a challenge for the textile industry.
    “Piezoelectric materials make good candidates for energy harvesting from mechanical vibrations, such as body motion, but most of these materials are ceramic and contain lead, which is toxic and makes their integration in wearable electronics or clothes challenging.”
    Scientists have been aware of the piezoelectric properties of nylon since the 1980s, and the fact that this material is lead-free and non-toxic has made it particularly appealing. However, the silky, human-made fabric often associated with cheap T-shirts and women’s stockings is “a very difficult material to handle,” according to Professor Asadi.

    “The challenge is to prepare nylon fibres that retain their piezoelectric properties,” he said.
    In its raw polymer form, nylon is a white powder that can be blended with other materials (natural or human-made) and then moulded into myriad products, from clothes and toothbrush bristles to food packaging and car parts. It’s when nylon is reduced to a particular crystal form that it becomes piezoelectric. The established method for creating these nylon crystals is to melt, rapidly cool and then stretch the nylon. However, this process results in thick slabs (known as ‘films’) that are piezoelectric but not suited to clothing. The nylon would need to be stretched into a thread to be woven into garments, or into a thin film to be used in wearable electronics.
    The challenge of producing thin piezoelectric nylon films was thought to be insurmountable, and initial enthusiasm for creating piezoelectric nylon garments turned to apathy, resulting in research in this area virtually grinding to a halt in the 1990s.
    On a whim, Professor Asadi and Mr Anwar, a textile engineer, took a completely new approach to producing piezoelectric nylon thin films. They dissolved the nylon powder in an acid solvent rather than melting it. However, they found that the finished film contained solvent molecules that were locked inside the material, thereby preventing formation of the piezoelectric phase.
    “We needed to find a way to remove the acid to make the nylon useable,” said Professor Asadi, who started this research at the Max Planck Institute for Polymer Research in Germany before moving to Bath in September.

    By chance, the pair discovered that by mixing the acid solution with acetone (a chemical best known as a paint thinner or nail varnish remover), they were able to dissolve the nylon and then extract the acid efficiently, leaving the nylon film in a piezoelectric phase.
    “The acetone bonds very strongly to the acid molecules, so when the acetone is evaporated from nylon solution, it takes the acid with it. What you’re left with is nylon in its piezoelectric crystalline phase. The next step is to turn nylon into yarns and then integrate it into fabrics.”
    Developing piezoelectric fibres is a major step towards being able to produce electronic textiles with clear applications in the field of wearable electronics. The goal is to integrate electronic elements, such as sensors, in a fabric, and to generate power while we’re on the move. Most likely, the electricity harvested from the fibres of piezoelectric clothing would be stored in a battery nestled in a pocket. This battery would then connect to a device either via a cable or wirelessly.
    “In years to come, we could be using our T-shirts to power a device such as our mobile phone as we walk in the woods, or for monitoring our health,” said Professor Asadi.

  • phyloFlash: New software for fast and easy analysis of environmental microbes

    Researchers at the Max Planck Institute for Marine Microbiology in Bremen have developed a user-friendly method to reconstruct and analyze SSU rRNA from raw metagenome data.
    First, some background: microbiologists traditionally determine which organisms they are dealing with using the small subunit ribosomal RNA gene, or SSU rRNA gene for short. This marker gene makes it possible to identify almost any living creature, be it a bacterium or an animal, and thus assign it to its place in the tree of life. Once the position in the tree of life is known, specific DNA probes can be designed to make the organisms visible in an approach called FISH (fluorescence in situ hybridization). FISH has many applications, for example to sort cells, or to microscopically record their morphology or spatial position. This approach — which leads from DNA to gene to tree and probe to image — is called the “full-cycle rRNA approach.”
    To make the SSU rRNA measurable, it is usually amplified with the polymerase chain reaction (PCR). Today, however, PCR is increasingly being replaced by so-called metagenomics, which records the entirety of all genes in a habitat. Rapid methodological advances now allow the fast and efficient production of large amounts of such metagenomic data. The analysis is performed on much shorter DNA sequence segments — far shorter than the SSU gene — which are laboriously assembled and binned into so-called metagenome-assembled genomes (MAGs). These short gene snippets do not yield complete SSU rRNA sequences, and in many assemblies and MAGs this important marker gene is missing entirely. This makes it hard to molecularly identify organisms in metagenomes, to compare them to existing databases, or even to visualize them specifically with FISH.
    phyloFlash provides a remedy
    Researchers at the Max Planck Institute for Marine Microbiology in Bremen now present a method that closes this gap and makes it possible to reconstruct and analyze SSU rRNA from raw metagenome data. “This software called phyloFlash, which is freely available through GitHub, combines the full-cycle rRNA approach for identification and visualization of non-cultivated microorganisms with metagenomic analysis; both techniques are well established at the Max Planck Institute for Marine Microbiology in Bremen,” explains Harald Gruber-Vodicka, who chiefly developed the method. “phyloFlash comprises all necessary steps, from the preparation of the necessary genome database (in this case SILVA), data extraction and taxonomic classification, through assembly, to the linking of SSU rRNA sequences and MAGs.” In addition, the software is very user-friendly and both installation and application are largely automated.
    Especially suitable for simple communities
    Gruber-Vodicka and his colleague Brandon Seah — who are shared first authors of the publication now presenting phyloFlash in the journal mSystems — come from symbiosis research. The communities they are dealing with in this field of research are comparatively simple: Usually a host organism lives together with one or a handful of microbial symbionts. Such communities are particularly well suited for analysis with phyloFlash. “For example, we do a lot of research on the deep-sea mussel Bathymodiolus, which is home to several bacterial subtenants,” says Gruber-Vodicka. “With the help of this well-studied community, we were able to test whether and how reliably phyloFlash works.” And indeed, the new software reliably identified both the mussel and its various symbionts. Niko Leisch, also a symbiosis researcher at the Max Planck Institute for Marine Microbiology, tested phyloFlash on small marine roundworms. Analyses of various such nematodes showed that some of the species of these inconspicuous worms might be associated with symbionts. “These exciting glimpses underline the great potential of our simple and fast method,” Gruber-Vodicka points out.
    Open source and all-purpose
    phyloFlash is open-source software. Extensive documentation and a very active community ensure its continuous testing and further development. “phyloFlash is certainly not only interesting for microbiologists,” emphasizes Gruber-Vodicka. “Already now, numerous scientists from diverse fields of research make use of our software. The simple installation was certainly helpful in this respect, as it lowers the threshold for use.” This easy access and interactive character is also particularly important to Brandon Seah, who now works at the Max Planck Institute for Developmental Biology: “The most satisfying thing for me about this project is to see other people using our software to drive their own research forward,” says Seah. “From the beginning, we’ve added features and developed the software in response to user feedback. These users are not just colleagues down the hall, but also people from the other side of the world who have given it a try and gotten in touch with us online. It underlines how open source is more productive and beneficial both for software development and for science.”
    The software phyloFlash at GitHub: https://github.com/HRGV/phyloFlash
    phyloFlash manual available at https://hrgv.github.io/phyloFlash/

    Story Source:
    Materials provided by Max Planck Institute for Marine Microbiology. Note: Content may be edited for style and length.

  • A new candidate material for quantum spin liquids

    In 1973, physicist and later Nobel laureate Philip W. Anderson proposed a bizarre state of matter: the quantum spin liquid (QSL). Unlike the everyday liquids we know, the QSL actually has to do with magnetism — and magnetism has to do with spin.
    Disordered electron spins produce QSLs
    What makes a magnet? It was a long-lasting mystery, but today we finally know that magnetism arises from a peculiar property of sub-atomic particles, like electrons. That property is called “spin,” and the best — yet grossly insufficient — way to think of it is like a child’s spinning-top toy.
    What is important for magnetism is that spin turns every one of a material’s billions of electrons into a tiny magnet with its own magnetic “direction” (think north and south pole of a magnet). But the electron spins aren’t isolated; they interact with each other in different ways until they stabilize to form various magnetic states, thereby granting the material they belong to magnetic properties.
    In a conventional magnet, the interacting spins stabilize, and the magnetic directions of each electron align. This results in a stable formation.
    But in what is known as a “frustrated” magnet, the geometry prevents the electron spins from settling into any arrangement that satisfies all of their interactions. Instead, they constantly fluctuate like a liquid — hence the name “quantum spin liquid.”
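    The textbook picture of frustration is three spins on a triangle whose interactions all prefer anti-alignment: whatever two of them do, the third cannot satisfy both neighbours. The brute-force sketch below uses that standard toy model (it is not taken from the paper) to show the degenerate, unsettled ground state.

```python
from itertools import product

# Three Ising spins on a triangle with antiferromagnetic coupling J > 0.
# Energy = J * (s1*s2 + s2*s3 + s3*s1); each pair "wants" to be anti-aligned.
J = 1.0
energies = {}
for spins in product([+1, -1], repeat=3):
    s1, s2, s3 = spins
    energies[spins] = J * (s1 * s2 + s2 * s3 + s3 * s1)

ground = min(energies.values())
ground_states = [s for s, e in energies.items() if e == ground]
print("ground-state energy:", ground)                   # -1, not -3: one bond is always frustrated
print("number of ground states:", len(ground_states))   # 6 degenerate configurations
```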
    Quantum Spin Liquids in future technologies

    What is exciting about QSLs is that they can be used in a number of applications. Because they come in different varieties with different properties, QSLs can be used in quantum computing, telecommunications, superconductors, spintronics (a variation of electronics that uses electron spin instead of current), and a host of other quantum-based technologies.
    But before exploiting them, we first have to gain a solid understanding of QSL states. To do this, scientists have to find ways to produce QSLs on demand — a task that has proven difficult so far, with only a few materials on offer as QSL candidates.
    A complex material might solve a complex problem
    Publishing in PNAS, scientists led by Péter Szirmai and Bálint Náfrádi at László Forró’s lab at EPFL’s School of Basic Sciences have successfully produced and studied a QSL in a highly original material known as EDT-BCO. The system was designed and synthesized by the group of Patrick Batail at Université d’Angers (CNRS).
    The structure of EDT-BCO is what makes it possible to create a QSL. The electron spins in EDT-BCO form triangularly organized dimers, each of which carries a spin-1/2 magnetic moment, which means that the electron must fully rotate twice to return to its initial configuration. The layers of spin-1/2 dimers are separated by a sublattice of carboxylate anions centred by a chiral bicyclooctane. The anions are called “rotors” because they have conformational and rotational degrees of freedom.
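    That “rotate twice” statement is the usual double-valuedness of spin-1/2: a rotation by 2π multiplies the quantum state by -1, and only a 4π rotation brings it back. The snippet below simply verifies this textbook fact numerically; it has nothing specific to do with EDT-BCO.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 rotation about the z-axis by angle theta: U(theta) = exp(-i * theta * sigma_z / 2).
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta):
    return expm(-1j * theta * sigma_z / 2)

print(np.round(rotation(2 * np.pi), 10))   # minus the identity: one full turn flips the state's sign
print(np.round(rotation(4 * np.pi), 10))   # plus the identity: only two full turns restore it
```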
    The unique rotor component in a magnetic system makes the material special amongst QSL candidates, representing a new material family. “The subtle disorder provoked by the rotor components introduces a new handle upon the spin system,” says Szirmai.
    The scientists and their collaborators employed an arsenal of methods to explore the EDT-BCO as a QSL material candidate: density functional theory calculations, high-frequency electron spin resonance measurements (a trademark of Forró’s lab), nuclear magnetic resonance, and muon spin spectroscopy. All of these techniques explore the magnetic properties of EDT-BCO from different angles.
    All the techniques confirmed the absence of long-range magnetic order and the emergence of a QSL. In short, EDT-BCO officially joins the limited ranks of QSL materials and takes us a step further into the next generation of technologies. As Bálint Náfrádi puts it: “Beyond the superb demonstration of the QSL state, our work is highly relevant, because it provides a tool to obtain additional QSL materials via custom-designed functional rotor molecules.”

    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Sarah Perrin. Note: Content may be edited for style and length.

  • Blueprints for a cheaper single-molecule microscope

    A team of scientists and students from the University of Sheffield has designed and built a specialist microscope, and shared the build instructions to help make this equipment available to many labs across the world.
    The microscope, called the smfBox, is capable of single-molecule measurements, allowing scientists to look at one molecule at a time rather than generating an average result from bulk samples, and it works just as well as commercially available instruments.
    This single-molecule method is currently only available at a few specialist labs throughout the world due to the cost of commercially available microscopes.
    Today (6 November 2020), the team has published a paper in the journal Nature Communications which provides all the build instructions and software needed to run the microscope, to help make this single-molecule method accessible to labs across the world.
    The interdisciplinary team spanning the University of Sheffield’s Departments of Chemistry and Physics, and the Central Laser Facility at the Rutherford Appleton Laboratory, spent a relatively modest £40,000 to build a piece of kit that would normally cost around £400,000 to buy.
    The microscope was built with simplicity in mind so that researchers interested in biological problems can use it with little training. The lasers have been shielded in such a way that it can be used in normal lighting conditions and is no more dangerous than a CD player.
    Dr Tim Craggs, the lead academic on the project from the University of Sheffield, said: “We wanted to democratise single-molecule measurements to make this method available for many labs, not just a few labs throughout the world. This work takes what was a very expensive, specialist piece of kit, and gives every lab the blueprint and software to build it for themselves, at a fraction of the cost.
    “Many medical diagnostics are moving towards increased sensitivity, and there is nothing more sensitive than detecting single molecules. In fact, many new COVID tests currently under development work at this level. This instrument is a good starting point for further development towards new medical diagnostics.”
    The original smfBox was built by a team of academics and undergraduate students at the University of Sheffield.
    Ben Ambrose, the PhD lead on the project, said: “This project was an excellent opportunity to work with researchers at all levels, from undergraduates to scientists in national facilities. Between biophysicists and engineers, we have created a new and accessible platform to do some cutting edge science without breaking the bank. We are already starting to do some great work with this microscope ourselves, but I am excited to see what it will do in the hands of other labs who have already begun to build their own.”
    The Craggs Lab at the University of Sheffield has already used the smfBox in its research to investigate fundamental biological processes, such as DNA damage detection, where improved understanding in this field could lead to better therapies for diseases including cancer.

    Story Source:
    Materials provided by University of Sheffield. Note: Content may be edited for style and length.

  • Know when to unfold 'em: Applying particle physics methods to quantum computing

    Borrowing a page from high-energy physics and astronomy textbooks, a team of physicists and computer scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has successfully adapted and applied a common error-reduction technique to the field of quantum computing.
    In the world of subatomic particles and giant particle detectors, and distant galaxies and giant telescopes, scientists have learned to live, and to work, with uncertainty. They are often trying to tease out ultra-rare particle interactions from a massive tangle of other particle interactions and background “noise” that can complicate their hunt, or trying to filter out the effects of atmospheric distortions and interstellar dust to improve the resolution of astronomical imaging.
    Also, inherent problems with detectors, such as with their ability to record all particle interactions or to exactly measure particles’ energies, can result in data getting misread by the electronics they are connected to, so scientists need to design complex filters, in the form of computer algorithms, to reduce the margin of error and return the most accurate results.
    The problems of noise and physical defects, and the need for error-correction and error-mitigation algorithms, which reduce the frequency and severity of errors, are also common in the fledgling field of quantum computing, and a study published in the journal npj Quantum Information found that there appear to be some common solutions, too.
    Ben Nachman, a Berkeley Lab physicist who is involved with particle physics experiments at CERN as a member of Berkeley Lab’s ATLAS group, saw the quantum-computing connection while working on a particle physics calculation with Christian Bauer, a Berkeley Lab theoretical physicist who is a co-author of the study. ATLAS is one of the four giant particle detectors at CERN’s Large Hadron Collider, the largest and most powerful particle collider in the world.
    “At ATLAS, we often have to ‘unfold,’ or correct for detector effects,” said Nachman, the study’s lead author. “People have been developing this technique for years.”
    In experiments at the LHC, particles called protons collide at a rate of about 1 billion times per second. To cope with this incredibly busy, “noisy” environment and intrinsic problems related to the energy resolution and other factors associated with detectors, physicists use error-correcting “unfolding” techniques and other filters to winnow down this particle jumble to the most useful, accurate data.

    “We realized that current quantum computers are very noisy, too,” Nachman said, so finding a way to reduce this noise and minimize errors — error mitigation — is a key to advancing quantum computing. “One kind of error is related to the actual operations you do, and one relates to reading out the state of the quantum computer,” he noted — that first kind is known as a gate error, and the latter is called a readout error.
    The latest study focuses on a technique to reduce readout errors, called “iterative Bayesian unfolding” (IBU), which is familiar to the high-energy physics community. The study compares the effectiveness of this approach to other error-correction and mitigation techniques. The IBU method is based on Bayes’ theorem, which provides a mathematical way to find the probability of an event occurring when there are other conditions related to this event that are already known.
    Nachman noted that this technique can be applied to the quantum analog of classical computers, known as universal gate-based quantum computers.
    In quantum computing, which relies on quantum bits, or qubits, to carry information, the fragile state known as quantum superposition is difficult to maintain and can decay over time, causing a qubit to display a zero instead of a one — this is a common example of a readout error.
    Superposition means that a quantum bit can represent a zero, a one, or both quantities at the same time. This enables unique computing capabilities not possible in conventional computing, which relies on bits representing either a one or a zero, but not both at once. Another source of readout error in quantum computers is simply a faulty measurement of a qubit’s state due to the architecture of the computer.

    In the study, researchers simulated a quantum computer to compare the performance of three different error-correction (or error-mitigation or unfolding) techniques. They found that the IBU method is more robust in a very noisy, error-prone environment, and slightly outperformed the other two in the presence of more common noise patterns. Its performance was compared to an error-correction method called Ignis that is part of a collection of open-source quantum-computing software development tools developed for IBM’s quantum computers, and a very basic form of unfolding known as the matrix inversion method.
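    As a minimal, self-contained illustration (not the authors' code), the sketch below implements the d'Agostini-style iterative Bayesian unfolding update and compares it with plain matrix inversion on a made-up single-qubit readout-error matrix; the error rates are invented for the example.

```python
import numpy as np

def iterative_bayesian_unfolding(response, measured, n_iter=20):
    """d'Agostini-style iterative Bayesian unfolding.

    response[j, i] = probability of reading outcome j when the true outcome is i
    measured[j]    = observed counts for outcome j
    Returns an estimate of the true counts.
    """
    n_true = response.shape[1]
    truth = np.full(n_true, measured.sum() / n_true)    # flat starting prior
    for _ in range(n_iter):
        folded = response @ truth                        # expected measurements
        posterior = response * truth / folded[:, None]   # P(true i | measured j)
        truth = posterior.T @ measured                   # redistribute the counts
    return truth

# Made-up single-qubit readout-error matrix: 5% chance of reading 1 when the
# true state is 0, 10% chance of reading 0 when the true state is 1.
R = np.array([[0.95, 0.10],
              [0.05, 0.90]])
true_counts = np.array([700.0, 300.0])
measured = R @ true_counts                               # what the device would report

print(iterative_bayesian_unfolding(R, measured))         # ~[700, 300]
print(np.linalg.solve(R, measured))                      # matrix inversion, in one step
```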
    The researchers used the simulated quantum-computing environment to produce more than 1,000 pseudo-experiments, and they found that the results for the IBU method were the closest to predictions. The noise models used for this analysis were measured on a 20-qubit quantum computer called IBM Q Johannesburg.
    “We took a very common technique from high-energy physics, and applied it to quantum computing, and it worked really well — as it should,” Nachman said. There was a steep learning curve. “I had to learn all sorts of things about quantum computing to be sure I knew how to translate this and to implement it on a quantum computer.”
    He said he was also very fortunate to find collaborators for the study with expertise in quantum computing at Berkeley Lab, including Bert de Jong, who leads a DOE Office of Advanced Scientific Computing Research Quantum Algorithms Team and an Accelerated Research for Quantum Computing project in Berkeley Lab’s Computational Research Division.
    “It’s exciting to see how the plethora of knowledge the high-energy physics community has developed to get the most out of noisy experiments can be used to get more out of noisy quantum computers,” de Jong said.
    The simulated and real quantum computers used in the study varied from five qubits to 20 qubits, and the technique should be scalable to larger systems, Nachman said. But the error-correction and error-mitigation techniques that the researchers tested will require more computing resources as the size of quantum computers increases, so Nachman said the team is focused on how to make the methods more manageable for quantum computers with larger qubit arrays.
    Nachman, Bauer, and de Jong also participated in an earlier study that proposes a way to reduce gate errors, which is the other major source of quantum-computing errors. They believe that error correction and error mitigation in quantum computing may ultimately require a mix-and-match approach — using a combination of several techniques.
    “It’s an exciting time,” Nachman said, as the field of quantum computing is still young and there is plenty of room for innovation. “People have at least gotten the message about these types of approaches, and there is still room for progress.” He noted that quantum computing provided a “push to think about problems in a new way,” adding, “It has opened up new science potential.”

  • Nervous systems of insects inspire efficient future AI systems

    Zoologists at the University of Cologne studied the nervous systems of insects to investigate principles of biological brain computation and possible implications for machine learning and artificial intelligence. Specifically, they analysed how insects learn to associate sensory information in their environment with a food reward, and how they can recall this information later in order to solve complex tasks such as the search for food. The results suggest that the way the brain transforms sensory information into memories can inspire future machine learning and artificial intelligence applications for solving complex tasks. The study has been published in the journal PNAS.
    Living organisms show remarkable abilities in coping with problems posed by complex and dynamic environments. They are able to generalize their experiences in order to rapidly adapt their behaviour when the environment changes. The zoologists investigated how the nervous system of the fruit fly controls its behaviour when searching for food. Using a computer model, they simulated and analysed the computations in the fruit fly’s nervous system in response to scents emanating from the food source. ‘We initially trained our model of the fly brain in exactly the same way as insects are trained in experiments. We presented a specific scent in the simulation together with a reward and a second scent without a reward. The model rapidly learns a robust representation of the rewarded scent after just a few scent presentations and is then able to find the source of this scent in a spatially complex and temporally dynamic environment,’ said computer scientist Dr Hannes Rapp, who created the model as part of his doctoral thesis at the UoC’s Institute of Zoology.
    The model created is thus capable of generalizing from its memory and applying what it has learned previously in a completely new and complex odour molecule landscape, while learning required only a very small database of training samples. ‘For our model, we exploit the special properties of biological information processing in nervous systems,’ explained Professor Dr Martin Nawrot, senior author of the study. ‘These are in particular a fast and parallel processing of sensory stimuli by means of brief nerve impulses as well as the formation of a distributed memory through the simultaneous modification of many synaptic contacts during the learning process.’ The theoretical principles underlying this model can also be used for artificial intelligence and autonomous systems. They enable an artificial agent to learn much more efficiently and to apply what it has learned in a changing environment.
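    The published model is a spiking network of the fly brain; the rate-based caricature below is only meant to illustrate the two ingredients highlighted here, sparse expansion coding and a reward-modulated update applied to many synapses at once. Network sizes, connectivity, and the learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_kc = 50, 2000    # "antennal lobe" inputs and a Kenyon-cell-like expansion layer
proj = (rng.random((n_kc, n_inputs)) < 0.1).astype(float)   # sparse random connectivity (assumed)

def kenyon_response(odor, sparsity=0.05):
    """Sparse expansion coding: only the most strongly driven cells fire."""
    drive = proj @ odor
    return (drive >= np.quantile(drive, 1 - sparsity)).astype(float)

def train(w, odor, reward, lr=0.1):
    """Reward-modulated Hebbian update, applied only to the cells active for this odour."""
    return w + lr * reward * kenyon_response(odor)

# Weights from the expansion layer onto a single "approach" output neuron.
w = np.zeros(n_kc)
odor_plus, odor_minus = rng.random(n_inputs), rng.random(n_inputs)
for _ in range(5):                       # a handful of paired presentations, as in the experiments
    w = train(w, odor_plus, reward=+1.0)
    w = train(w, odor_minus, reward=-1.0)

# Generalization: a noisy variant of the rewarded odour typically still scores higher
# than the unrewarded one, even though it was never seen during training.
noisy = odor_plus + 0.1 * rng.standard_normal(n_inputs)
print(w @ kenyon_response(noisy), w @ kenyon_response(odor_minus))
```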

    Story Source:
    Materials provided by University of Cologne. Note: Content may be edited for style and length.