More stories

  •

    Underwater robots to autonomously dock mid-mission to recharge and transfer data

    Robots can be amazing tools for search-and-rescue missions and environmental studies, but eventually they must return to a base to recharge their batteries and upload their data. That can be a challenge if your robot is an autonomous underwater vehicle (AUV) exploring deep ocean waters.
    Now, a Purdue University team has created a mobile docking system for AUVs, enabling them to perform longer tasks without the need for human intervention.
    The team also has published papers on ways to adapt this docking system for AUVs that will explore extraterrestrial lakes, such as those on the moons of Jupiter and Saturn.
    “My research focuses on persistent operation of robots in challenging environments,” said Nina Mahmoudian, an associate professor of mechanical engineering. “And there’s no more challenging environment than underwater.”
    Once a marine robot submerges in water, it loses the ability to transmit and receive radio signals, including GPS data. Some may use acoustic communication, but this method can be difficult and unreliable, especially for long-range transmissions. Because of this, underwater robots currently have a limited range of operation.
    “Typically these robots perform a pre-planned itinerary underwater,” Mahmoudian said. “Then they come to the surface and send out a signal to be retrieved. Humans have to go out, retrieve the robot, get the data, recharge the battery and then send it back out. That’s very expensive, and it limits the amount of time these robots can be performing their tasks.”
    Mahmoudian’s solution is to create a mobile docking station that underwater robots could return to on their own.


    “And what if we had multiple docks, which were also mobile and autonomous?” she said. “The robots and the docks could coordinate with each other, so that they could recharge and upload their data, and then go back out to continue exploring, without the need for human intervention. We’ve developed the algorithms to maximize these trajectories, so we get the optimum use of these robots.”
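    The coordination problem Mahmoudian describes can be sketched, in highly simplified form, as assigning each low-battery robot to a dock. The sketch below is purely illustrative: the names, positions and the greedy nearest-dock heuristic are assumptions for the example, not the published mission-planning algorithm.

```python
import math

# Toy sketch of dock assignment: each AUV heads for its nearest mobile dock.
# Positions and names are made up; the real planner optimizes trajectories,
# not just straight-line distance.

def nearest_dock(auv_pos, dock_positions):
    """Index of the dock closest to an AUV's (x, y) position."""
    return min(range(len(dock_positions)),
               key=lambda i: math.dist(auv_pos, dock_positions[i]))

auvs = {"alpha": (0.0, 0.0), "bravo": (9.0, 9.0)}  # AUV positions (km)
docks = [(1.0, 1.0), (8.0, 8.0)]                   # mobile dock positions (km)

plan = {name: nearest_dock(pos, docks) for name, pos in auvs.items()}
print(plan)  # {'alpha': 0, 'bravo': 1}
```

    A real planner would also weigh battery margins, dock availability and ocean currents; the greedy rule above only conveys the flavor of the coordination task.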
    A paper on the mission planning system that Mahmoudian and her team developed has been published in IEEE Robotics and Automation Letters. The researchers validated the method by testing the system on a short mission in Lake Superior.
    “What’s key is that the docking station is portable,” Mahmoudian said. “It can be deployed in a stationary location, but it can also be deployed on autonomous surface vehicles or even on other autonomous underwater vehicles. And it’s designed to be platform-agnostic, so it can be utilized with any AUV. The hardware and software work hand-in-hand.”
    Mahmoudian points out that systems like this already exist in your living room. “An autonomous vacuum, like a Roomba, does its vacuum cleaning, and when it runs out of battery, it autonomously returns to its dock to get recharged,” she said. “That’s exactly what we are doing here, but the environment is much more challenging.”
    If her system can successfully function in a challenging underwater environment, then Mahmoudian sees even greater horizons for this technology.
    “This system can be used anywhere,” she said. “Robots on land, air or sea will be able to operate indefinitely. Search-and-rescue robots will be able to explore much wider areas. They will go into the Arctic and explore the effects of climate change. They will even go into space.”
    Video: https://www.youtube.com/watch?v=_kS0_-qc_r0
    A patent on this mobile underwater docking station design has been issued. The patent was filed through the Secretary of the U.S. Navy. This work is funded by the National Science Foundation (grant 19078610) and the Office of Naval Research (grant N00014-20-1-2085).

    Story Source:
    Materials provided by Purdue University. Original written by Jared Pike. Note: Content may be edited for style and length.

  •

    Could megatesla magnetic fields be realized on Earth?

    Magnetic fields are used in various areas of modern physics and engineering, with practical applications ranging from doorbells to maglev trains. Since Nikola Tesla’s discoveries in the 19th century, researchers have strived to realize strong magnetic fields in laboratories for fundamental studies and diverse applications, but the strength of familiar examples is relatively weak: the geomagnetic field is 0.3-0.5 gauss (G), and the magnetic resonance imaging (MRI) scanners used in hospitals operate at about 1 tesla (T = 10^4 G). By contrast, future magnetic fusion devices and maglev trains will require magnetic fields on the kilotesla (kT = 10^7 G) order. To date, the highest magnetic fields experimentally observed are on the kT order.
    Recently, scientists at Osaka University discovered a novel mechanism called a “microtube implosion,” and demonstrated the generation of megatesla (MT = 10^10 G) order magnetic fields via particle simulations using a supercomputer. Astonishingly, this is three orders of magnitude higher than what has ever been achieved in a laboratory. Such high magnetic fields are expected only in celestial bodies like neutron stars and black holes.
    Irradiating a tiny plastic microtube, one-tenth the thickness of a human hair, with ultraintense laser pulses produces hot electrons with temperatures of tens of billions of degrees. These hot electrons, along with cold ions, expand into the microtube cavity at velocities approaching the speed of light. Pre-seeding the cavity with a kT-order magnetic field causes the imploding charged particles to be infinitesimally twisted by the Lorentz force. This unique cylindrical flow collectively produces unprecedentedly high spin currents of about 10^15 A/cm^2 on the target axis and, consequently, generates ultrahigh magnetic fields on the MT order.
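    The quoted current density makes the megatesla scale plausible with a back-of-the-envelope estimate. In the sketch below, the 10^15 A/cm^2 figure comes from the article, while the 1-micrometre current-sheet thickness is an assumed, illustrative value; this is a rough solenoid-limit check, not the authors' simulation.

```python
import math

# Order-of-magnitude check: the on-axis field of a long, thin cylindrical
# current sheet is B ~ mu0 * K, with K = j * t the surface current density.
# j is the article's figure; the sheet thickness t is an assumption.

MU0 = 4e-7 * math.pi   # vacuum permeability (T*m/A)

j = 1e15 * 1e4         # 10^15 A/cm^2 converted to A/m^2
t = 1e-6               # assumed sheet thickness: 1 micrometre
K = j * t              # surface current density (A/m)

B = MU0 * K
print(f"B ~ {B:.1e} T")   # ~1e7 T, i.e. megatesla order
```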
    The study conducted by Masakatsu Murakami and colleagues has confirmed that current laser technology can realize MT-order magnetic fields based on the concept. The present concept for generating MT-order magnetic fields will lead to pioneering fundamental research in numerous areas, including materials science, quantum electrodynamics (QED), and astrophysics, as well as other cutting-edge practical applications.

    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  •

    How mobile apps grab our attention

    As part of an international collaboration, Aalto University researchers have shown that our common understanding of what attracts visual attention to screens, in fact, does not transfer to mobile applications. Despite the widespread use of mobile phones and tablets in our everyday lives, this is the first study to empirically test how users’ eyes follow commonly used mobile app elements.
    Previous work on what attracts visual attention, or visual saliency, has centered on desktop and web interfaces.
    ‘Apps appear differently on a phone than on a desktop computer or browser: they’re on a smaller screen which simply fits fewer elements and, instead of a horizontal view, mobile devices typically use a vertical layout. Until now it was unclear how these factors would affect how apps actually attract our eyes,’ explains Aalto University Professor Antti Oulasvirta.
    In the study, the research team used a large set of representative mobile interfaces and eye tracking to see how users look at screenshots of mobile apps, for both Android and Apple iOS devices.
    According to previous thinking, our eyes should not only jump to bigger or brighter elements, but also stay there longer. Previous studies have also concluded that when we look at certain kinds of images, our attention is drawn to the centre of screens and also spread horizontally across the screen, rather than vertically. The researchers found these principles to have little effect on mobile interfaces.
    ‘It actually came as a surprise that bright colours didn’t affect how people fixate on app details. One possible reason is that the mobile interface itself is full of glossy and colourful elements, so everything on the screen can potentially catch your attention — it’s just how they’re designed. It seems that when everything is made to stand out, nothing pops out in the end,’ says lead author and Post-doctoral Researcher Luis Leiva.
    The study also confirms that some other design principles hold true for mobile apps. Gaze, for example, drifts to the top-left corner, an indication of exploration or scanning. Text plays an important part, likely due to its role in relaying information; on first use, users thus tend to focus on a mobile app’s text elements, such as those in icons, labels and logos.
    Image elements drew visual attention more frequently than expected for the area they cover, though the average length of time users spent looking at images was similar to other app elements. Faces, too, attracted concentrated attention, though when accompanied by text, eyes wander much closer to the location of text.
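    The area-normalized comparison reported here, where an element type's share of fixations is judged against the screen area it covers, can be illustrated in a few lines. The counts and areas below are invented example numbers, not data from the study.

```python
# Attention per unit area: a ratio above 1 means an element type draws more
# fixations than its screen footprint alone would predict. All numbers here
# are illustrative.

elements = {
    # type: (fixation_count, fraction_of_screen_area)
    "text":  (120, 0.30),
    "image": (80,  0.10),
    "icon":  (40,  0.20),
}

total = sum(count for count, _ in elements.values())
salience = {k: (count / total) / area
            for k, (count, area) in elements.items()}

for kind, ratio in sorted(salience.items(), key=lambda kv: -kv[1]):
    print(f"{kind:6s} attention/area ratio: {ratio:.2f}")
```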
    ‘Various factors influence where our visual attention goes. For photos, these factors include colour, edges, texture and motion. But when it comes to generated visual content, such as graphical user interfaces, design composition is a critical factor to consider,’ says Dr Hamed Tavakoli, who was also part of the Aalto University research team.
    The study was completed with international collaborators including IIT Goa (India), Yildiz Technical University (Turkey) and Huawei Technologies (China). The team will present the findings on 6 October 2020 at MobileHCI’20, the flagship conference on Human-Computer Interaction with mobile devices and services.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  •

    Scientists find evidence of exotic state of matter in candidate material for quantum computers

    Using a novel technique, scientists working at the Florida State University-headquartered National High Magnetic Field Laboratory have found evidence for a quantum spin liquid, a state of matter that is promising as a building block for the quantum computers of tomorrow.
    Researchers discovered the exciting behavior while studying electron spins in the compound ruthenium trichloride. Their findings, published today in the journal Nature Physics, show that electron spins interact across the material, effectively lowering the overall energy. This type of behavior — consistent with a quantum spin liquid — was detected in ruthenium trichloride at high temperatures and in high magnetic fields.
    Spin liquids, first theorized in 1973, remain something of a mystery. Despite some materials showing promising signs for this state of matter, it is extremely challenging to definitively confirm its existence. However, there is great interest in them because scientists believe they could be used for the design of smarter materials in a variety of applications, such as quantum computing.
    This study provides strong support that ruthenium trichloride is a spin liquid, said physicist Kim Modic, a former graduate student who worked at the MagLab’s pulsed field facility and is now an assistant professor at the Institute of Science and Technology Austria.
    “I think this paper provides a fresh perspective on ruthenium trichloride and demonstrates a new way to look for signatures of spin liquids,” said Modic, the paper’s lead author.
    For decades, physicists have extensively studied the charge of an electron, which carries electricity, paving the way for advances in electronics, energy and other areas. But electrons also have a property called spin. Scientists want to leverage the spin of electrons for technology as well, but the universal behavior of spins is not yet fully understood.


    In simple terms, electrons can be thought of as spinning on an axis, like a top, oriented in some direction. In magnetic materials, these spins align with one another, either in the same or opposite directions. Called magnetic ordering, this behavior can be induced or suppressed by temperature or magnetic field. Once the magnetic order is suppressed, more exotic states of matter could emerge, such as quantum spin liquids.
    In the search for a spin liquid, the research team homed in on ruthenium trichloride. Its honeycomb-like structure, featuring a spin at each site, is like a magnetic version of graphene — another hot topic in condensed matter physics.
    “Ruthenium is much heavier than carbon, which results in strong interactions among the spins,” said MagLab physicist Arkady Shekhter, a co-author on the paper.
    The team expected those interactions would enhance magnetic frustration in the material. That’s a kind of “three’s company” scenario in which two spins pair up, leaving the third in a magnetic limbo, which thwarts magnetic ordering. That frustration, the team hypothesized, could lead to a spin liquid state. Their data ended up confirming their suspicions.
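    The “three’s company” picture has a standard textbook illustration: three Ising spins on a triangle with antiferromagnetic coupling can never satisfy all three bonds at once. The toy model below demonstrates that frustration by brute force; it is not the actual model of ruthenium trichloride.

```python
from itertools import product

# Three Ising spins on a triangle, antiferromagnetic coupling J > 0.
# Energy E = J * (s0*s1 + s1*s2 + s2*s0); each satisfied bond contributes -J.

J = 1.0
bonds = [(0, 1), (1, 2), (2, 0)]

def energy(spins):
    return J * sum(spins[i] * spins[j] for i, j in bonds)

states = list(product([-1, +1], repeat=3))
e_min = min(energy(s) for s in states)
ground = [s for s in states if energy(s) == e_min]

# The best any state can do is satisfy two bonds and frustrate one (E = -J),
# and six of the eight states tie for that minimum: a degenerate ground state.
print(e_min, len(ground))  # -1.0 6
```

    That degeneracy is what thwarts magnetic ordering and, in the quantum case, can leave the spins fluctuating in a liquid-like state.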
    “It seems like, at low temperatures and under an applied magnetic field, ruthenium trichloride shows signs of the behavior that we’re looking for,” Modic said. “The spins don’t simply orient themselves depending on the alignment of neighboring spins, but rather are dynamic — like swirling water molecules — while maintaining some correlation between them.”
    The findings were enabled by a new technique that the team developed called resonant torsion magnetometry, which precisely measures the behavior of electron spins in high magnetic fields and could lead to many other new insights about magnetic materials, Modic said.


    “We don’t really have the workhorse techniques or the analytical machinery for studying the excitations of electron spins, like we do for charge systems,” Modic said. “The methods that do exist typically require large sample sizes, which may not be available. Our technique is highly sensitive and works on tiny, delicate samples. This could be a game-changer for this area of research.”
    Modic developed the technique as a postdoctoral researcher and then worked with MagLab physicists Shekhter and Ross McDonald, another co-author on the paper, to measure ruthenium trichloride in high magnetic fields.
    Their technique involved mounting ruthenium trichloride samples onto a cantilever the size of a strand of hair. They repurposed a quartz tuning fork — similar to that in a quartz crystal watch — to vibrate the cantilever in a magnetic field. Instead of using it to tell time precisely, they measured the frequency of vibration to study the interaction between the spins in ruthenium trichloride and the applied magnetic field. They performed their measurements in two powerful magnets at the National MagLab.
    “The beauty of our approach is that it’s a relatively simple setup, which allowed us to carry out our measurements in both a 35-tesla resistive magnet and a 65-tesla pulsed field magnet,” Modic said.
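    The measurement principle, a resonance frequency that shifts when magnetic torque contributes effective stiffness, can be sketched with a simple harmonic-oscillator model. The stiffness and mass values below are invented for illustration and are not the MagLab's numbers.

```python
import math

# A resonator with stiffness k and effective mass m oscillates at
# f = sqrt(k/m) / (2*pi). A small magnetic contribution dk shifts the
# frequency by df/f ~ dk/(2k), so tiny magnetic responses show up as
# measurable frequency shifts. All values are illustrative.

k = 1.0e3    # mechanical stiffness (N/m), assumed
m = 1.0e-6   # effective mass (kg), assumed

def freq(stiffness):
    return math.sqrt(stiffness / m) / (2 * math.pi)

f0 = freq(k)
dk = 0.5                     # small magnetic stiffness contribution (N/m)
df = freq(k + dk) - f0

print(f"f0 = {f0:.0f} Hz, relative shift = {df / f0:.2e}")
```

    This first-order relation between stiffness change and frequency shift is why resonant techniques can be so sensitive on tiny samples.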
    The next step in the research will be to study this system in the MagLab’s world-record 100-tesla pulsed magnet.
    “That high of a magnetic field should allow us to directly observe the suppression of the spin liquid state, which will help us learn even more about this compound’s inner workings,” Shekhter said.

  •

    New algorithm could unleash the power of quantum computers

    A new algorithm that fast forwards simulations could bring greater usability to current and near-term quantum computers, opening the way for applications to run past strict time limits that hamper many quantum calculations.
    “Quantum computers have a limited time to perform calculations before their useful quantum nature, which we call coherence, breaks down,” said Andrew Sornborger of the Computer, Computational, and Statistical Sciences division at Los Alamos National Laboratory, and senior author on a paper announcing the research. “With a new algorithm we have developed and tested, we will be able to fast forward quantum simulations to solve problems that were previously out of reach.”
    Computers built of quantum components, known as qubits, can potentially solve extremely difficult problems that exceed the capabilities of even the most powerful modern supercomputers. Applications include faster analysis of large data sets, drug development, and unraveling the mysteries of superconductivity, to name a few of the possibilities that could lead to major technological and scientific breakthroughs in the near future.
    Recent experiments have demonstrated the potential for quantum computers to solve problems in seconds that would take the best conventional computer millennia to complete. The challenge remains, however, to ensure a quantum computer can run meaningful simulations before quantum coherence breaks down.
    “We use machine learning to create a quantum circuit that can approximate a large number of quantum simulation operations all at once,” said Sornborger. “The result is a quantum simulator that replaces a sequence of calculations with a single, rapid operation that can complete before quantum coherence breaks down.”
    The Variational Fast Forwarding (VFF) algorithm that the Los Alamos researchers developed is a hybrid combining aspects of classical and quantum computing. Although well-established theorems exclude the potential of general fast forwarding with absolute fidelity for arbitrary quantum simulations, the researchers get around the problem by tolerating small calculation errors for intermediate times in order to provide useful, if slightly imperfect, predictions.
    In principle, the approach allows scientists to quantum-mechanically simulate a system for as long as they like. Practically speaking, the errors that build up as simulation times increase limit potential calculations. Still, the algorithm allows simulations far beyond the time scales that quantum computers can achieve without the VFF algorithm.
    One quirk of the process is that it takes twice as many qubits to fast forward a calculation as would make up the quantum computer being fast forwarded. In the newly published paper, for example, the research group confirmed their approach by implementing a VFF algorithm on a two-qubit computer to fast forward the calculations that would be performed in a one-qubit quantum simulation.
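    A classical analogue conveys why fast forwarding pays off: if the single-step evolution operator is diagonalized once, an arbitrary number of steps collapses into a single operation. The numpy sketch below illustrates that idea only; it is not the VFF quantum circuit itself.

```python
import numpy as np

# If the one-step unitary U = W D W^dagger is diagonalised once, then n steps
# cost a single matrix product instead of n, because U^n = W D^n W^dagger.

rng = np.random.default_rng(0)

# Random single-qubit Hamiltonian and its one-step unitary U = exp(-i H dt).
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2
dt = 0.1
evals, W = np.linalg.eigh(H)
U = W @ np.diag(np.exp(-1j * evals * dt)) @ W.conj().T

n = 1000
U_fast = W @ np.diag(np.exp(-1j * evals * dt * n)) @ W.conj().T  # one shot
U_slow = np.linalg.matrix_power(U, n)                            # n steps

print(np.allclose(U_fast, U_slow))  # True: both give the same evolution
```

    In VFF the diagonalization is itself learned variationally as a quantum circuit, which is where the small, tolerated approximation errors enter.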
    In future work, the Los Alamos researchers plan to explore the limits of the VFF algorithm by increasing the number of qubits they fast forward, and checking the extent to which they can fast forward systems. The research was published September 18, 2020 in the journal npj Quantum Information.

    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  •

    New shortcut enables faster creation of spin pattern in magnet

    Physicists have discovered a much faster approach to create a pattern of spins in a magnet. This ‘shortcut’ opens a new chapter in topology research. Interestingly, this discovery also offers an additional method to achieve more efficient magnetic data storage. The research will be published on 5 October in Nature Materials.
    Physicists previously demonstrated that laser light can create a pattern of magnetic spins. Now they have discovered a new route that enables this to be done much more quickly, in less than 300 picoseconds (a picosecond is one millionth of a millionth of a second). This is much faster than was previously thought possible.
    Useful for data storage: skyrmions
    Magnets consist of many small magnets, which are called spins. Normally, all the spins point in the same direction, which determines the north and south poles of the magnet. But the directions of the spins together sometimes form vortex-like configurations known as skyrmions.
    “These skyrmions in magnets could be used as a new type of data storage,” explains Johan Mentink, physicist at Radboud University. For a number of years, Radboud scientists have been looking for optimal ways to control magnetism with laser light and ultimately use it for more efficient data storage. In this technique, very short pulses of light are fired at a magnetic material. This reverses the magnetic spins in the material, which changes a bit from a 0 to a 1.
    “Once the magnetic spins take the vortex-like shape of a skyrmion, this configuration is hard to erase,” says Mentink. “Moreover, these skyrmions are only a few nanometers (one billionth of a meter) in size, so you can store a lot of data on a very small piece of material.”
    Shortcut
    The phase transition between these two states in a magnet — from all the spins pointing in one direction to a skyrmion — is comparable to a road over a high mountain. The researchers have discovered that you can take a ‘shortcut’ through the mountain by heating the material very quickly with a laser pulse. As a result, the threshold for the phase transition is lowered for a very short time.
    A remarkable aspect of this new approach is that the material is first brought into a very chaotic state, in which the topology — which can be seen as the number of skyrmions in the material — fluctuates strongly. The researchers discovered this approach by combining X-rays generated by the European free electron laser in Hamburg with extremely advanced electron microscopy and spin dynamics simulations. “This research therefore involved an enormous team effort,” Mentink emphasises.
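    The “number of skyrmions” mentioned above is measured by a topological charge Q, which counts how many times the spin directions wrap around the sphere. The sketch below evaluates Q numerically for a textbook Néel-skyrmion profile; the profile and its parameters are illustrative, not the experimental data.

```python
import numpy as np

# Topological charge of a 2D spin texture:
# Q = (1/4pi) * integral of m . (dm/dx x dm/dy) dx dy.
# The analytic profile below is a standard Neel skyrmion, for illustration.

N = 256
xs = np.linspace(-5.0, 5.0, N)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
r = np.hypot(X, Y)
phi = np.arctan2(Y, X)

# Spins point down at the centre, up far away; wall radius 1.5, width 0.5.
theta = 2 * np.arctan(np.exp((1.5 - r) / 0.5))
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)

dm_dx = np.gradient(m, dx, axis=0)
dm_dy = np.gradient(m, dx, axis=1)

q_density = np.einsum("ijk,ijk->ij", m, np.cross(dm_dx, dm_dy))
Q = q_density.sum() * dx**2 / (4 * np.pi)
print(f"|Q| = {abs(Q):.3f}")  # close to 1 for a single skyrmion
```

    Because Q takes (near-)integer values, it is robust against small perturbations, which is exactly why skyrmions are “hard to erase” and attractive for data storage.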
    New possibilities
    This fundamental discovery has opened a new chapter in topology research. Mentink expects that many more scientists will now start to look for similar ways to ‘take a shortcut through the mountain’ in other materials.
    This discovery also enables new approaches to create faster and more efficient data storage. There is an increasing need for this, for example due to the gigantic, energy-guzzling data centres that are required for massive data storage in the cloud. Magnetic skyrmions can provide a solution to this problem. Because they are very small and can be created very quickly with light, a lot of information can potentially be stored very quickly and efficiently on a small area.

    Story Source:
    Materials provided by Radboud University Nijmegen. Note: Content may be edited for style and length.

  •

    Deep learning gives drug design a boost

    When you take a medication, you want to know precisely what it does. Pharmaceutical companies go through extensive testing to ensure that you do.
    With a new deep learning-based technique created at Rice University’s Brown School of Engineering, they may soon get a better handle on how drugs in development will perform in the human body.
    The Rice lab of computer scientist Lydia Kavraki has introduced Metabolite Translator, a computational tool that predicts metabolites, the products of interactions between small molecules like drugs and enzymes.
    The Rice researchers take advantage of deep-learning methods and the availability of massive reaction datasets to give developers a broad picture of what a drug will do. The method is unconstrained by rules that companies use to determine metabolic reactions, opening a path to novel discoveries.
    “When you’re trying to determine if a compound is a potential drug, you have to check for toxicity,” Kavraki said. “You want to confirm that it does what it should, but you also want to know what else might happen.”
    The research by Kavraki, lead author and graduate student Eleni Litsa and Rice alumna Payel Das of IBM’s Thomas J. Watson Research Center, is detailed in the Royal Society of Chemistry journal Chemical Science.
    The researchers trained Metabolite Translator to predict metabolites through any enzyme, but measured its success against the existing rules-based methods that are focused on the enzymes in the liver. These enzymes are responsible for detoxifying and eliminating xenobiotics, like drugs, pesticides and pollutants. However, metabolites can be formed through other enzymes as well.


    “Our bodies are networks of chemical reactions,” Litsa said. “They have enzymes that act upon chemicals and may break or form bonds that change their structures into something that could be toxic, or cause other complications. Existing methodologies focus on the liver because most xenobiotic compounds are metabolized there. With our work, we’re trying to capture human metabolism in general.
    “The safety of a drug does not depend only on the drug itself but also on the metabolites that can be formed when the drug is processed in the body,” Litsa said.
    The rise of machine learning architectures that operate on structured data, such as chemical molecules, makes the work possible, she said. The Transformer was introduced in 2017 as a sequence translation method and has found wide use in language translation.
    Metabolite Translator is based on SMILES (for “simplified molecular-input line-entry system”), a notation method that uses plain text rather than diagrams to represent chemical molecules.
    “What we’re doing is exactly the same as translating a language, like English to German,” Litsa said.
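    The language analogy works because SMILES strings can be split into tokens much like sentences are split into words. The regex below is a simplified version of a common community tokenizer recipe, shown for illustration; the paper's exact preprocessing may differ.

```python
import re

# Minimal SMILES tokenizer: multi-character atoms (Cl, Br), bracketed atoms,
# common organic-subset atoms, aromatic atoms, and bond/ring/branch symbols.
# Simplified for illustration; real tokenizers cover more of the grammar.

TOKEN = re.compile(r"Cl|Br|\[[^\]]+\]|[BCNOPSFI]|[cnops]|[=#()\-+\\/@0-9%]")

def tokenize(smiles):
    return TOKEN.findall(smiles)

print(tokenize("CCl"))                     # ['C', 'Cl']
print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin, token by token
```

    Sequences of such tokens for a substrate molecule are what the sequence-to-sequence model translates into the token sequence of a predicted metabolite.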


    Due to the lack of experimental data, the lab used transfer learning to develop Metabolite Translator. They first pre-trained a Transformer model on 900,000 known chemical reactions and then fine-tuned it with data on human metabolic transformations.
    The researchers compared Metabolite Translator results with those from several other predictive techniques by analyzing known SMILES sequences of 65 drugs and 179 metabolizing enzymes. Though Metabolite Translator was trained on a general dataset not specific to drugs, it performed as well as commonly used rule-based methods that have been specifically developed for drugs. But it also identified enzymes that are not commonly involved in drug metabolism and were not found by existing methods.
    “We have a system that can predict equally well with rule-based systems, and we didn’t put any rules in our system that require manual work and expert knowledge,” Kavraki said. “Using a machine learning-based method, we are training a system to understand human metabolism without the need for explicitly encoding this knowledge in the form of rules. This work would not have been possible two years ago.”
    Kavraki is the Noah Harding Professor of Computer Science, a professor of bioengineering, mechanical engineering and electrical and computer engineering and director of Rice’s Ken Kennedy Institute. Rice University and the Cancer Prevention and Research Institute of Texas supported the research.

  •

    Efficient pollen identification

    From pollen forecasting and honey analysis to climate-related changes in plant-pollinator interactions, pollen analysis plays an important role in many areas of research. Microscopy is still the gold standard, but it is very time consuming and requires considerable expertise. In cooperation with Technische Universität (TU) Ilmenau, scientists from the Helmholtz Centre for Environmental Research (UFZ) and the German Centre for Integrative Biodiversity Research (iDiv) have now developed a method that allows them to efficiently automate the process of pollen analysis. Their study has been published in the specialist journal New Phytologist.
    Pollen is produced in a flower’s stamens and consists of a multitude of minute pollen grains, which contain the plant’s male genetic material necessary for its reproduction. The pollen grains get caught in the tiny hairs of nectar-feeding insects as they brush past and are thus transported from flower to flower. Once there, in the ideal scenario, a pollen grain will cling to the sticky stigma of the same plant species, which may then result in fertilisation. “Although pollinating insects perform this pollen delivery service entirely incidentally, its value is immeasurably high, both ecologically and economically,” says Dr. Susanne Dunker, head of the working group on imaging flow cytometry at the Department for Physiological Diversity at UFZ and iDiv. “Against the background of climate change and the accelerating loss of species, it is particularly important for us to gain a better understanding of these interactions between plants and pollinators.” Pollen analysis is a critical tool in this regard.
    Each species of plant has pollen grains of a characteristic shape, surface structure and size. When it comes to identifying and counting pollen grains — measuring between 10 and 180 micrometres — in a sample, microscopy has long been considered the gold standard. However, working with a microscope requires a great deal of expertise and is very time-consuming. “Although various approaches have already been proposed for the automation of pollen analysis, these methods are either unable to differentiate between closely related species or do not deliver quantitative findings about the number of pollen grains contained in a sample,” continues UFZ biologist Dr. Dunker. Yet it is precisely this information that is critical to many research subjects, such as the interaction between plants and pollinators.
    In their latest study, Susanne Dunker and her team of researchers have developed a novel method for the automation of pollen analysis. To this end they combined the high throughput of imaging flow cytometry — a technique used for particle analysis — with a form of artificial intelligence (AI) known as deep learning to design a highly efficient analysis tool, which makes it possible to both accurately identify the species and quantify the pollen grains contained in a sample.
    Imaging flow cytometry is a process that is primarily used in the medical field to analyse blood cells but is now also being repurposed for pollen analysis. “A pollen sample for examination is first added to a carrier liquid, which then flows through a channel that becomes increasingly narrow,” says Susanne Dunker, explaining the procedure. “The narrowing of the channel causes the pollen grains to separate and line up as if they are on a string of pearls, so that each one passes through the built-in microscope element on its own and images of up to 2,000 individual pollen grains can be captured per second.” Two normal microscopic images are taken plus ten fluorescence microscopic images per grain of pollen. When excited with light radiated at certain wavelengths by a laser, the pollen grains themselves emit light. “The area of the colour spectrum in which the pollen fluoresces — and at which precise location — is sometimes very specific. This information provides us with additional traits that can help identify the individual plant species,” reports Susanne Dunker. In the deep learning process, an algorithm works in successive steps to abstract the original pixels of an image to a greater and greater degree in order to finally extract the species-specific characteristics. “Microscopic images, fluorescence characteristics and high throughput have never been used in combination for pollen analysis before — this really is an absolute first.” Where the analysis of a relatively straightforward sample takes, for example, four hours under the microscope, the new process takes just 20 minutes. UFZ has therefore applied for a patent for the novel high-throughput analysis method, with its inventor, Susanne Dunker, receiving the UFZ Technology Transfer Award in 2019.
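    The throughput figures quoted above imply a substantial data rate, which is easy to check with the article's own numbers:

```python
# Arithmetic from the figures quoted above: up to 2,000 pollen grains are
# imaged per second, each yielding 2 brightfield + 10 fluorescence images.

grains_per_second = 2000
images_per_grain = 2 + 10

images_per_second = grains_per_second * images_per_grain
print(images_per_second)   # 24000 images/s at peak

# Time per sample: ~4 h under the microscope vs ~20 min automated.
speedup = (4 * 60) / 20
print(speedup)             # 12.0, i.e. a twelvefold speedup
```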
    The pollen samples examined in the study came from 35 species of meadow plants, including yarrow, sage, thyme and various species of clover such as white, mountain and red clover. In total, the researchers prepared around 430,000 images, which formed the basis for a data set. In cooperation with TU Ilmenau, this data set was then transferred using deep learning into a highly efficient tool for pollen identification. In subsequent analyses, the researchers tested the accuracy of their new method, comparing unknown pollen samples from the 35 plant species against the data set. “The result was more than satisfactory — the level of accuracy was 96 per cent,” says Susanne Dunker. Even species that are difficult to distinguish from one another, and indeed present experts with a challenge under the microscope, could be reliably identified. The new method is therefore not only extremely fast but also highly precise.
    In the future, the new process for automated pollen analysis will play a key role in answering critical research questions about interactions between plants and pollinators. How important are certain pollinators like bees, flies and bumblebees for particular plant species? What would be the consequences of losing a species of pollinating insect or a plant? “We are now able to evaluate pollen samples on a large scale, both qualitatively and, at the same time, quantitatively. We are constantly expanding our pollen data set of insect-pollinated plants for that purpose,” comments Susanne Dunker. She aims to expand the data set to include at least those 500 plant species whose pollen is significant as a food source for honeybees.