More stories

  • How scientists are accelerating chemistry discoveries with automation

    A new automated workflow developed by scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) has the potential to allow researchers to analyze the products of their reaction experiments in real time, a key capability needed for future automated chemical processes.
    The developed workflow — which applies statistical analysis to process data from nuclear magnetic resonance (NMR) spectroscopy — could help speed the discovery of new pharmaceutical drugs, and accelerate the development of new chemical reactions.
    The Berkeley Lab scientists who developed the groundbreaking technique say that the workflow can quickly identify the molecular structure of products formed by chemical reactions that have never been studied before. They recently reported their findings in the Journal of Chemical Information and Modeling.
    In addition to drug discovery and chemical reaction development, the workflow could also help researchers who are developing new catalysts. Catalysts are substances that facilitate a chemical reaction in the production of useful new products like renewable fuels or biodegradable plastics.
    “What excites people the most about this technique is its potential for real-time reaction analysis, which is an integral part of automated chemistry,” said first author Maxwell C. Venetos, a former researcher in Berkeley Lab’s Materials Sciences Division and former graduate student researcher in materials sciences at UC Berkeley. He completed his doctoral studies last year. “Our workflow really allows you to start pursuing the unknown. You are no longer constrained by things that you already know the answer to.”
    The new workflow can also identify isomers, which are molecules with the same chemical formula but different atomic arrangements. This could greatly accelerate synthetic chemistry processes in pharmaceutical research, for example. “This workflow is the first of its kind where users can generate their own library, and tune it to the quality of that library, without relying on an external database,” Venetos said.
    Advancing new applications
    In the pharmaceutical industry, drug developers currently use machine-learning algorithms to virtually screen hundreds of chemical compounds to identify potential new drug candidates that are more likely to be effective against specific cancers and other diseases. These screening methods comb through online libraries or databases of known compounds (or reaction products) and match them with likely drug “targets” in cell walls.

    But if a drug researcher is experimenting with molecules so new that their chemical structures don’t yet exist in a database, they must typically spend days in the lab to sort out the mixture’s molecular makeup: first by running the reaction products through a purification machine, and then by using one of the most useful characterization tools in a synthetic chemist’s arsenal, an NMR spectrometer, to identify and measure the molecules in the mixture one at a time.
    “But with our new workflow, you could feasibly do all of that work within a couple of hours,” Venetos said. The time-savings come from the workflow’s ability to rapidly and accurately analyze the NMR spectra of unpurified reaction mixtures that contain multiple compounds, a task that is impossible through conventional NMR spectral analysis methods.
    “I’m very excited about this work as it applies novel data-driven methods to the age-old problem of accelerating synthesis and characterization,” said senior author Kristin Persson, a faculty senior scientist in Berkeley Lab’s Materials Sciences Division and UC Berkeley professor of materials science and engineering who also leads the Materials Project.
    Experimental results
    In addition to being much faster than benchtop purification methods, the new workflow has the potential to be just as accurate. NMR simulation experiments performed using the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab with support from the Materials Project showed that the new workflow can correctly identify compound molecules in reaction mixtures that produce isomers, and also predict the relative concentrations of those compounds.
    To ensure high statistical accuracy, the research team analyzed the NMR spectra using Hamiltonian Monte Carlo (HMC), a sophisticated Markov chain Monte Carlo sampling algorithm. They also performed advanced theoretical calculations based on a method called density-functional theory.
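    For readers curious what that fitting step looks like in practice, here is a minimal, self-contained sketch of Hamiltonian Monte Carlo applied to a toy version of the problem: inferring the relative concentrations of a few candidate compounds from a simulated one-dimensional spectrum. The peak positions, noise level and priors are invented for illustration; this is not the Berkeley Lab workflow's code.

```python
# Toy sketch: infer relative concentrations of candidate compounds from a
# simulated 1-D NMR spectrum with Hamiltonian Monte Carlo (HMC).
# All peak positions, widths, and priors are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
shifts = np.linspace(0.0, 10.0, 500)               # chemical-shift axis (ppm)

def lorentzian(center, width=0.05):
    return 1.0 / (1.0 + ((shifts - center) / width) ** 2)

# Candidate library: simulated spectra of three hypothetical compounds.
library = np.stack([
    lorentzian(2.1) + lorentzian(3.4),
    lorentzian(2.1) + lorentzian(7.8),
    lorentzian(5.5),
])

true_conc = np.array([0.6, 0.3, 0.1])
observed = true_conc @ library + 0.01 * rng.standard_normal(shifts.size)

sigma, tau = 0.01, 2.0                             # noise level, prior scale

def neg_log_post(theta):
    c = np.exp(theta)                              # positivity via log-parameterization
    resid = observed - c @ library
    return 0.5 * np.sum(resid**2) / sigma**2 + 0.5 * np.sum(theta**2) / tau**2

def grad_neg_log_post(theta):
    c = np.exp(theta)
    resid = observed - c @ library
    return -(library @ resid) / sigma**2 * c + theta / tau**2

def hmc_step(theta, step=1e-3, n_leapfrog=20):
    p = rng.standard_normal(theta.size)            # sample momentum
    theta_new, p_new = theta.copy(), p.copy()
    p_new -= 0.5 * step * grad_neg_log_post(theta_new)   # leapfrog integration
    for _ in range(n_leapfrog - 1):
        theta_new += step * p_new
        p_new -= step * grad_neg_log_post(theta_new)
    theta_new += step * p_new
    p_new -= 0.5 * step * grad_neg_log_post(theta_new)
    # Metropolis accept/reject on the total "energy".
    h_old = neg_log_post(theta) + 0.5 * p @ p
    h_new = neg_log_post(theta_new) + 0.5 * p_new @ p_new
    return theta_new if np.log(rng.uniform()) < h_old - h_new else theta

theta = np.zeros(3)
samples = []
for i in range(5000):
    theta = hmc_step(theta)
    if i > 1000:                                   # discard burn-in
        samples.append(np.exp(theta))

conc = np.array(samples)
print("posterior mean concentrations:", conc.mean(axis=0))
print("true concentrations:          ", true_conc)
```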

    Venetos designed the automated workflow as open source so that users can run it on an ordinary desktop computer. That convenience will come in handy for anyone from industry or academia.
    The technique sprouted from conversations between the Persson group and experimental collaborators Masha Elkin and Connor Delaney, former postdoctoral researchers in the John Hartwig group at UC Berkeley. Elkin is now a professor of chemistry at the Massachusetts Institute of Technology, and Delaney a professor of chemistry at the University of Texas at Dallas.
    “In chemistry reaction development, we are constantly spending time to figure out what a reaction made and in what ratio,” said John Hartwig, a senior faculty scientist in Berkeley Lab’s Chemical Sciences Division and UC Berkeley professor of chemistry. “Certain NMR spectrometry methods are precise, but if one is deciphering the contents of a crude reaction mixture containing a bunch of unknown potential products, those methods are far too slow to have as part of a high-throughput experimental or automated workflow. And that’s where this new capability to predict the NMR spectrum could help,” he said.
    Now that they’ve demonstrated the automated workflow’s potential, Persson and team hope to incorporate it into an automated laboratory that analyzes the NMR data of thousands or even millions of new chemical reactions at a time.
    Other authors on the paper include Masha Elkin, Connor Delaney, and John Hartwig at UC Berkeley.
    NERSC is a DOE Office of Science user facility at Berkeley Lab.
    The work was supported by the U.S. Department of Energy’s Office of Science, the U.S. National Science Foundation, and the National Institutes of Health.

  • Scientists release state-of-the-art spike-sorting software Kilosort4

    How do researchers make sense of the mountains of data collected from recording the simultaneous activity of hundreds of neurons? Neuroscientists all over the world rely on Kilosort, software that enables them to tease apart spikes from individual neurons to understand how the brain’s cells and circuits work together to process information.
    Now, researchers at HHMI’s Janelia Research Campus, led by Group Leader Marius Pachitariu, have released Kilosort4, an updated version of the popular spike-sorting software that has improved processing, requires less manual work, and is more accurate and easier to use than previous versions.
    “Over the past eight years, I’ve been refining the algorithm to make it more and more human-independent so people can use it out of the box,” Pachitariu says.
    Kilosort has become indispensable for many neuroscientists, but it may never have been developed if Pachitariu hadn’t decided he wanted to try something new.
    Pachitariu’s PhD work was in computational neuroscience and machine learning, but he yearned to work on more real-world applications, and he almost left academia for industry after he completed his PhD. Instead, Pachitariu opted for a postdoc in the joint lab of Kenneth Harris and Matteo Carandini at University College London where he could do more experimental neuroscience.
    The lab was then part of a consortium testing a probe called Neuropixels, developed at HHMI’s Janelia Research Campus. Pachitariu had no idea how to use the probes, which record activity from hundreds of neurons simultaneously, but he knew how to develop algorithms to keep up with the enormous amount of data his labmates were generating.
    In the first year of his postdoc, Pachitariu developed the initial version of Kilosort. The software, which was 50 times faster than previous approaches, allowed researchers to process the millions of data points generated by the Neuropixels probes. Eight years later, the probes and the software are staples in neuroscience labs worldwide, allowing researchers to identify and classify the spikes of individual neurons.
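    For context, spike sorting in general works roughly like this: detect candidate spikes as threshold crossings, cut out a short waveform around each one, compress the waveforms into a few features, and cluster them into putative neurons. The sketch below illustrates that generic pipeline on synthetic data; it is not Kilosort's algorithm, which relies on GPU-accelerated template matching and drift correction, and every number in it is made up for illustration.

```python
# Generic spike-sorting illustration on synthetic data (NOT Kilosort's method):
# threshold detection -> waveform extraction -> PCA features -> clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
fs = 30_000                                   # sampling rate (Hz)
trace = 5.0 * rng.standard_normal(fs * 2)     # 2 s of noisy "recording"

# Two hypothetical neurons with different waveform shapes.
t = np.arange(40)
wave_a = -60 * np.exp(-((t - 10) / 3.0) ** 2) + 20 * np.exp(-((t - 20) / 6.0) ** 2)
wave_b = -35 * np.exp(-((t - 12) / 5.0) ** 2)
spike_times = {0: rng.choice(fs * 2 - 100, 80), 1: rng.choice(fs * 2 - 100, 120)}
for unit, wave in zip(spike_times, (wave_a, wave_b)):
    for s in spike_times[unit]:
        trace[s:s + 40] += wave

# 1) Detect spikes as negative threshold crossings.
threshold = -4.5 * np.median(np.abs(trace)) / 0.6745   # robust noise estimate
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
crossings = crossings[np.insert(np.diff(crossings) > 30, 0, True)]  # de-duplicate

# 2) Extract a 40-sample waveform around each detected spike.
waveforms = np.array([trace[c - 10:c + 30] for c in crossings
                      if 10 <= c <= trace.size - 30])

# 3) Reduce to a few principal components and cluster into putative units.
features = PCA(n_components=3).fit_transform(waveforms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("detected spikes:", len(waveforms), "| spikes per cluster:", np.bincount(labels))
```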

    In 2017, Pachitariu became a group leader at Janelia, where he and his team seek to understand how thousands of neurons work together to enable animals to think, decide, and act. These days, Pachitariu spends most of his time doing experiments and analyzing data, but he still finds time to work on improving Kilosort. The newly released Kilosort4 is the best in its class, outperforming other algorithms and correctly identifying even hard-to-detect neurons, according to the researchers.
    Pachitariu says it is much easier to squeeze in work on projects like Kilosort at Janelia than at other institutions where he would have to spend time writing grants and teaching.
    “Every now and then, I can put a few months into spearheading a new version and writing new code,” he says.
    Pachitariu says he also enjoys refining Kilosort, which allows him to use the core set of skills he developed during his PhD work.

  • Proof-of-principle demonstration of 3-D magnetic recording

    Research groups from NIMS, Seagate Technology, and Tohoku University have made a breakthrough in the field of hard disk drives (HDD) by demonstrating the feasibility of multi-level recording using a three-dimensional magnetic recording medium to store digital information. The research groups have shown that this technology can be used to increase the storage capacity of HDDs, which could lead to more efficient and cost-effective data storage solutions in the future.
    Data centers are increasingly storing vast amounts of data on hard disk drives (HDDs) that use perpendicular magnetic recording (PMR) to store information at areal densities of around 1.5 Tbit/in². Transitioning to higher areal densities requires a high-anisotropy magnetic recording medium consisting of FePt grains combined with heat-assisted laser writing. This method, known as heat-assisted magnetic recording (HAMR), is capable of sustaining areal recording densities of up to 10 Tbit/in². Densities beyond 10 Tbit/in² become possible under a newly demonstrated principle: storing three or four recording levels per grain instead of the binary recording used in current HDD technology.
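    As a rough capacity argument (an illustration of the arithmetic, not a figure from the study), a grain that can hold L distinguishable levels stores log₂ L bits, so at a fixed grain density

    $$ \text{bits per grain} = \log_2 L, \qquad \frac{\log_2 4}{\log_2 2} = 2, $$

    meaning a four-level medium roughly doubles the areal density achievable with binary recording at the same grain size, which is how multi-level recording can push past the roughly 10 Tbit/in² HAMR figure quoted above.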
    In this study, we succeeded in arranging the FePt recording layers three-dimensionally by fabricating lattice-matched FePt/Ru/FePt multilayer films with Ru as a spacer layer. Measurements of the magnetization show that the two FePt layers have different Curie temperatures. This means that three-dimensional recording becomes possible by adjusting the laser power during writing. In addition, we have demonstrated the principle of 3D recording through recording simulations, using a media model that mimics the microstructure and magnetic properties of the fabricated media.
    The three-dimensional magnetic recording method can increase recording capacity by stacking recording layers in three dimensions. This means that more digital information can be stored with fewer HDDs, leading to energy savings for data centers. In the future, we plan to develop processes to reduce the size of FePt grains, to improve the orientation and magnetic anisotropy, and to stack more FePt layers to realize a media structure suitable for practical use as a high-density HDD.

  • Innovative sensing platform unlocks ultrahigh sensitivity in conventional sensors

    Optical sensors serve as the backbone of numerous scientific and technological endeavors, from detecting gravitational waves to imaging biological tissues for medical diagnostics. These sensors use light to detect changes in properties of the environment they’re monitoring, including chemical biomarkers and physical properties like temperature. A persistent challenge in optical sensing has been enhancing sensitivity to detect faint signals amid noise.
    New research from Lan Yang, the Edwin H. & Florence G. Skinner Professor in the Preston M. Green Department of Electrical & Systems Engineering in the McKelvey School of Engineering at Washington University in St. Louis, unlocks the power of exceptional points (EPs) for advanced optical sensing. In a study published April 5 in Science Advances, Yang and first author Wenbo Mao, a doctoral student in Yang’s lab, showed that these unique EPs — specific conditions in systems where extraordinary optical phenomena can occur — can be deployed on conventional sensors to achieve a striking sensitivity to environmental perturbations.
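    For background, the usual two-mode argument for why EPs boost sensitivity (a textbook picture, not the specific configuration analyzed in the paper) goes as follows. Near an exceptional point of a non-Hermitian two-mode system,

    $$
    H(\epsilon)=\begin{pmatrix}\omega_0+i\gamma+\epsilon & \kappa\\ \kappa & \omega_0-i\gamma\end{pmatrix},
    \qquad
    \Delta\omega = 2\sqrt{\left(i\gamma+\tfrac{\epsilon}{2}\right)^2+\kappa^2}\;\xrightarrow{\;\kappa=\gamma\;}\;2\sqrt{i\gamma\epsilon+\tfrac{\epsilon^2}{4}},
    $$

    so for a small perturbation ε the frequency splitting scales as √ε rather than ε, and the response to a faint signal is disproportionately large compared with a conventional (non-degenerate) sensor.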
    Yang and Mao developed an EP-enhanced sensing platform that overcomes the limitations of previous approaches. Unlike traditional methods that require modifications to the sensor itself, their system features an EP control unit that can plug into physically separated external sensors. This configuration allows EPs to be tuned solely through adjustments to the control unit, delivering ultrahigh sensitivity without the need for complex modifications to the sensor.
    “We’ve implemented a novel platform that can impart EP enhancement to conventional optical sensors,” Yang said. “This system represents a revolutionary extension of EP-enhanced sensing, significantly expanding its applicability and universality. Any phase-sensitive sensor can acquire improved sensitivity and reduced detection limit by connecting to this configuration. Simply by tuning the control unit, this EP configuration can adapt to various sensing scenarios, such as environmental detection, health monitoring and biomedical imaging.”
    By decoupling the sensing and control functions, Yang and Mao have effectively skirted the stringent physical requirements for operating sensors at EPs that have so far hindered their widespread adoption. This clears the way for EP enhancement to be applied to a wide range of conventional sensors — including ring resonators, thermal and magnetic sensors, and sensors that pick up vibrations or detect perturbations in biomarkers — vastly improving the detection limit of sensors scientists are already using. With the control unit set to an EP, the sensor can operate differently — not at an EP — and still reap the benefits of EP enhancement.
    As a proof-of-concept, Yang’s team tested a system’s detection limit, or ability to detect weak perturbations over system noise. They demonstrated a six-fold reduction in the detection limit of a sensor using their EP-enhanced configuration compared to the conventional sensor.
    “With this work, we’ve shown that we can significantly enhance our ability to detect perturbations that have weak signals,” Mao said. “We’re now focused on bringing that theory to broad applications. I’m specifically focused on medical applications, especially working to enhance magnetic sensing, which could be used to improve MRI technology. Currently, MRIs require a whole room with careful temperature control. Our EP platform could be used to enhance magnetic sensing to enable portable, bedside MRI.”

  • Can language models read the genome? This one decoded mRNA to make better vaccines

    The same class of artificial intelligence that made headlines coding software and passing the bar exam has learned to read a different kind of text — the genetic code.
    That code contains instructions for all of life’s functions and follows rules not unlike those that govern human languages. Each sequence in a genome adheres to an intricate grammar and syntax, the structures that give rise to meaning. Just as changing a few words can radically alter the impact of a sentence, small variations in a biological sequence can make a huge difference in the forms that sequence encodes.
    Now Princeton University researchers led by machine learning expert Mengdi Wang are using language models to home in on partial genome sequences and optimize those sequences to study biology and improve medicine. That work is already underway.
    In a paper published April 5 in the journal Nature Machine Intelligence, the authors detail a language model that used its powers of semantic representation to design more effective mRNA vaccines, such as those used to protect against COVID-19.
    Found in Translation
    Scientists have a simple way to summarize the flow of genetic information. They call it the central dogma of biology. Information moves from DNA to RNA to proteins. Proteins create the structures and functions of living cells.
    Messenger RNA, or mRNA, converts the information into proteins in that final step, called translation. But mRNA is interesting. Only part of it holds the code for the protein. The rest is not translated but controls vital aspects of the translation process.

    That untranslated region governs the efficiency of protein production, which is a key mechanism behind how mRNA vaccines work. The researchers focused their language model on this region to see how they could optimize efficiency and improve vaccines.
    After training the model on a small variety of species, the researchers generated hundreds of new optimized sequences and validated those results through lab experiments. The best sequences outperformed several leading benchmarks for vaccine development, with a 33% increase in the overall efficiency of protein production.
    Increasing protein production efficiency by even a small amount provides a major boost for emerging therapeutics, according to the researchers. Beyond COVID-19, mRNA vaccines promise to protect against many infectious diseases and cancers.
    Wang, a professor of electrical and computer engineering and the principal investigator in this study, said the model’s success also pointed to a more fundamental possibility. Trained on mRNA from a handful of species, it was able to decode nucleotide sequences and reveal something new about gene regulation. Scientists believe gene regulation, one of life’s most basic functions, holds the key to unlocking the origins of disease and disorder. Language models like this one could provide a new way to probe it.
    Wang’s collaborators include researchers from the biotech firm RVAC Medicines as well as the Stanford University School of Medicine.
    The Language of Disease
    The new model differs in degree, not kind, from the large language models that power today’s AI chat bots. Instead of being trained on billions of pages of text from the internet, their model was trained on a few hundred thousand sequences. The model also was trained to incorporate additional knowledge about the production of proteins, including structural and energy-related information.
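    As a rough sketch of what such a model can look like (this is not the authors' architecture; the vocabulary, layer sizes, sequences and labels below are invented), one can tokenize the nucleotides of an untranslated region, run them through a small transformer encoder, and attach a regression head that predicts a measured translation-efficiency value:

```python
# Minimal sketch (not the authors' model): a small transformer encoder over
# nucleotide tokens of a 5' UTR sequence, with a regression head that predicts
# a translation-efficiency label. Vocabulary, sizes, and data are invented.
import torch
import torch.nn as nn

VOCAB = {"pad": 0, "A": 1, "C": 2, "G": 3, "U": 4}

class UTRRegressor(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), d_model, padding_idx=0)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)      # predicts translation efficiency

    def forward(self, tokens):                  # tokens: (batch, seq_len) ints
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        x = self.encoder(x, src_key_padding_mask=(tokens == 0))
        return self.head(x.mean(dim=1)).squeeze(-1)   # pooled sequence -> scalar

def encode(seq, max_len=128):
    ids = [VOCAB[base] for base in seq[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

# Toy training step on two made-up UTR sequences with made-up efficiency labels.
model = UTRRegressor()
batch = torch.stack([encode("GGGACAUUUGCUUCUGACACAAC"), encode("AUGCGGCCGCUUA")])
labels = torch.tensor([1.3, 0.4])              # hypothetical measured efficiencies
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(batch), labels)
loss.backward()
optimizer.step()
print("toy training loss:", float(loss))
```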

    The research team used the trained model to create a library of 211 new sequences. Each was optimized for a desired function, primarily an increase in the efficiency of translation. The proteins those sequences encode, like the spike protein targeted by COVID-19 vaccines, drive the immune response to infectious disease.
    Previous studies have created language models to decode various biological sequences, including proteins and DNA, but this was the first language model to focus on the untranslated region of mRNA. In addition to a boost in overall efficiency, it was also able to predict how well a sequence would perform at a variety of related tasks.
    Wang said the real challenge in creating this language model was in understanding the full context of the available data. Training a model requires not only the raw data with all its features but also the downstream consequences of those features. If a program is designed to filter spam from email, each email it trains on would be labeled “spam” or “not spam.” Along the way, the model develops semantic representations that allow it to determine what sequences of words indicate a “spam” label. Therein lies the meaning.
    Wang said looking at one narrow dataset and developing a model around it was not enough to be useful for life scientists. She needed to do something new. Because this model was working at the leading edge of biological understanding, the data she found was all over the place.
    “Part of my dataset comes from a study where there are measures for efficiency,” Wang said. “Another part of my dataset comes from another study [that] measured expression levels. We also collected unannotated data from multiple resources.” Organizing those parts into one coherent and robust whole — a multifaceted dataset that she could use to train a sophisticated language model — was a massive challenge.
    “Training a model is not only about putting together all those sequences, but also putting together sequences with the labels that have been collected so far. This had never been done before.”

  • Chemical reactions can scramble quantum information as well as black holes

    If you were to throw a message in a bottle into a black hole, all of the information in it, down to the quantum level, would become completely scrambled. In black holes, this scrambling happens as quickly and thoroughly as quantum mechanics allows, which is why they are generally considered nature’s ultimate information scramblers.
    New research from Rice University theorist Peter Wolynes and collaborators at the University of Illinois Urbana-Champaign, however, has shown that molecules can be as formidable at scrambling quantum information as black holes. Combining mathematical tools from black hole physics and chemical physics, they have shown that quantum information scrambling takes place in chemical reactions and can nearly reach the same quantum mechanical limit as it does in black holes. The work is published online in the Proceedings of the National Academy of Sciences.
    “This study addresses a long-standing problem in chemical physics, which has to do with the question of how fast quantum information gets scrambled in molecules,” Wolynes said. “When people think about a reaction where two molecules come together, they think the atoms only perform a single motion where a bond is made or a bond is broken.
    “But from the quantum mechanical point of view, even a very small molecule is a very complicated system. Much like the orbits in the solar system, a molecule has a huge number of possible styles of motion — things we call quantum states. When a chemical reaction takes place, quantum information about the quantum states of the reactants becomes scrambled, and we want to know how information scrambling affects the reaction rate.”
    To better understand how quantum information is scrambled in chemical reactions, the scientists borrowed a mathematical tool typically used in black hole physics known as out-of-time-order correlators, or OTOCs.
    “OTOCs were actually invented in a very different context about 55 years ago, when they were used to look at how electrons in superconductors are affected by disturbances from an impurity,” Wolynes said. “They’re a very specialized object that is used in the theory of superconductivity. They were next used by physicists in the 1990s studying black holes and string theory.”
    OTOCs measure how much tweaking one part of a quantum system at some instant in time will affect the motions of the other parts — providing insight into how quickly and effectively information can spread throughout the molecule. They are the quantum analog of Lyapunov exponents, which measure unpredictability in classical chaotic systems.
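    To make that definition concrete, here is a minimal numerical illustration of an infinite-temperature OTOC, C(t) = ⟨[W(t), V]†[W(t), V]⟩, for a toy three-spin chain. The model and parameters are arbitrary and have nothing to do with the molecular systems in the paper; the point is only that C(0) = 0 when W and V act on different sites and grows as information about the local "kick" V spreads to the site probed by W.

```python
# Toy OTOC for a 3-spin chain at infinite temperature (illustration only).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, n=3):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    mats = [single if i == site else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# A small transverse-field Ising chain (couplings chosen arbitrarily).
H = sum(op(sz, i) @ op(sz, i + 1) for i in range(2)) + 0.9 * sum(op(sx, i) for i in range(3))

W0 = op(sz, 0)        # operator we watch
V = op(sz, 2)         # local perturbation on the far site
d = 2 ** 3            # Hilbert-space dimension

for t in np.linspace(0.0, 3.0, 7):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W0 @ U                         # Heisenberg-picture W(t)
    comm = Wt @ V - V @ Wt
    otoc = np.trace(comm.conj().T @ comm).real / d   # infinite-temperature average
    print(f"t = {t:4.1f}   C(t) = {otoc:8.5f}")
```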

    “How quickly an OTOC increases with time tells you how quickly information is being scrambled in the quantum system, meaning how many more random looking states are getting accessed,” said Martin Gruebele, a chemist at Illinois Urbana-Champaign and co-author on the study who is a part of the joint Rice-Illinois Center for Adapting Flaws as Features funded by the National Science Foundation. “Chemists are very conflicted about scrambling in chemical reactions, because scrambling is necessary to get to the reaction goal, but it also messes up your control over the reaction.
    “Understanding under what circumstances molecules scramble information and under what circumstances they don’t potentially gives us a handle on actually being able to control the reactions better. Knowing OTOCs basically allows us to set limits on when this information is really disappearing out of our control and conversely when we could still harness it to have controlled outcomes.”
    In classical mechanics, a particle must have enough energy to overcome an energy barrier for a reaction to occur. However, in quantum mechanics, there’s the possibility that particles can “tunnel” through this barrier even if they don’t possess sufficient energy. The calculation of OTOCs showed that chemical reactions with a low activation energy at low temperatures where tunneling dominates can scramble information at nearly the quantum limit, like a black hole.
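    For context, the "quantum limit" on scrambling is usually taken to be the chaos bound of Maldacena, Shenker and Stanford, which caps the exponential growth rate λ of the OTOC (the quantum analog of a Lyapunov exponent) at

    $$ \lambda \le \frac{2\pi k_B T}{\hbar}, $$

    a bound that black holes are conjectured to saturate.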
    Nancy Makri, also a chemist at Illinois Urbana-Champaign, used path integral methods she has developed to study what happens when the simple chemical reaction model is embedded in a larger system, such as a large molecule’s own vibrations or a solvent, which tends to suppress chaotic motion.
    “In a separate study, we found that large environments tend to make things more regular and suppress the effects that we’re talking about,” Makri said. “So we calculated the OTOC for a tunneling system interacting with a large environment, and what we saw was that the scrambling was quenched — a big change in the behavior.”
    One area of practical application for the research findings is to place limits on how tunneling systems can be used to build qubits for quantum computers. One needs to minimize information scrambling between interacting tunneling systems to improve the reliability of quantum computers. The research could also be relevant for light-driven reactions and advanced materials design.
    “There’s potential for extending these ideas to processes where you wouldn’t just be tunneling in one particular reaction, but where you’d have multiple tunneling steps, because that’s what’s involved in, for example, electron conduction in a lot of the new soft quantum materials like perovskites that are being used to make solar cells and things like that,” Gruebele said.
    Wolynes is Rice’s D.R. Bullard-Welch Foundation Professor of Science, a professor of chemistry, of biochemistry and cell biology, of physics and astronomy, and of materials science and nanoengineering, and co-director of its Center for Theoretical Biological Physics, which is funded by the National Science Foundation. Co-author Gruebele is the James R. Eiszner Endowed Chair in Chemistry; Makri is the Edward William and Jane Marr Gutgsell Professor and professor of chemistry and physics; Chenghao Zhang was a graduate student in physics at Illinois Urbana-Champaign and is now a postdoc at Pacific Northwest National Lab; and Sohang Kundu recently received his Ph.D. in chemistry from the University of Illinois and is currently a postdoc at Columbia University.
    The research was supported by the National Science Foundation (1548562, 2019745, 1955302) and the Bullard-Welch Chair at Rice (C-0016).

  • Progress in quantum physics: Researchers tame superconductors

    Superconductors are materials that can conduct electricity without electrical resistance — making them the ideal base material for electronic components in MRI machines, magnetic levitation trains and even particle accelerators. However, conventional superconductors are easily disturbed by magnetism. An international group of researchers has now succeeded in building a hybrid device consisting of a stable proximitized superconductor that is enhanced by magnetism and whose function can be specifically controlled.
    They combined the superconductor with a special semiconductor material known as a topological insulator. “Topological insulators are materials that conduct electricity on their surface but not inside. This is due to their unique topological structure, i.e. the special arrangement of the electrons,” explains Professor Charles Gould, a physicist at the Institute for Topological Insulators at the University of Würzburg (JMU). “The exciting thing is that we can equip topological insulators with magnetic atoms so that they can be controlled by a magnet.”
    The superconductors and topological insulators were coupled to form a so-called Josephson junction, a connection between two superconductors separated by a thin layer of non-superconducting material. “This allowed us to combine the properties of superconductivity and semiconductors,” says Gould. “So we combine the advantages of a superconductor with the controllability of the topological insulator. Using an external magnetic field, we can now precisely control the superconducting properties. This is a true breakthrough in quantum physics!”
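    For background (these are the standard textbook relations, not results specific to this device), a Josephson junction carries a dissipationless supercurrent set by the phase difference φ between the two superconductors, and a voltage across it makes that phase wind in time:

    $$ I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar}, $$

    where I_c is the critical current. In the hybrid device described here, the external magnetic field acting on the magnetically doped topological insulator provides an additional knob on this behavior.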
    Superconductivity Meets Magnetism
    The special combination creates an exotic state in which superconductivity and magnetism are combined — normally these are opposite phenomena that rarely coexist. This is known as the proximity-induced Fulde-Ferrell-Larkin-Ovchinnikov (p-FFLO) state. The new “superconductor with a control function” could be important for practical applications, such as the development of quantum computers. Unlike conventional computers, quantum computers are based not on bits but on quantum bits (qubits), which can assume not just two but several states simultaneously.
    “The problem is that quantum bits are currently very unstable because they are extremely sensitive to external influences, such as electric or magnetic fields,” says physicist Gould. “Our discovery could help stabilise quantum bits so that they can be used in quantum computers in the future.”

  • New privacy-preserving robotic cameras obscure images beyond human recognition

    From robotic vacuum cleaners and smart fridges to baby monitors and delivery drones, the smart devices being increasingly welcomed into our homes and workplaces use vision to take in their surroundings, taking videos and images of our lives in the process.
    In a bid to restore privacy, researchers at the Australian Centre for Robotics at the University of Sydney and the Centre for Robotics (QCR) at Queensland University of Technology have created a new approach to designing cameras that process and scramble visual information before it is digitised so that it becomes obscured to the point of anonymity.
    Known as sighted systems, devices like smart vacuum cleaners form part of the “internet-of-things” — smart systems that connect to the internet. They can be hacked by bad actors or compromised through human error, and their images and videos are at risk of being stolen by third parties, sometimes with malicious intent.
    Acting as a “fingerprint,” the distorted images can still be used by robots to complete their tasks but do not provide a comprehensive visual representation that compromises privacy.
    “Smart devices are changing the way we work and live our lives, but they shouldn’t compromise our privacy and become surveillance tools,” said Adam Taras, who completed the research as part of his Honours thesis.
    “When we think of ‘vision’ we think of it like a photograph, whereas many of these devices don’t require the same type of visual access to a scene as humans do. They have a very narrow scope in terms of what they need to measure to complete a task, using other visual signals, such as colour and pattern recognition,” he said.
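    A purely hypothetical software illustration of that point (the actual Sydney/QUT design does its scrambling in the camera's optics and analogue electronics, before any image is digitised): a navigation-style decision can often be made from a handful of coarse, non-invertible measurements rather than a full image.

```python
# Hypothetical illustration only: not the Sydney/QUT camera design, which works
# in optics and analogue electronics before the image is ever digitised.
# The idea sketched here: a simple task ("which half of the scene is more open?")
# needs only a few coarse measurements, from which the image cannot be recovered.
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(120, 160)).astype(float)   # stand-in scene
image[:, 100:] += 60.0                                        # brighter region on the right

# "Sensor" output: four region sums instead of 19,200 pixels.
h, w = image.shape
regions = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
           image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
measurements = np.array([r.sum() for r in regions])

# The robot's task still works on the 4 numbers...
steer = "right" if measurements[1] + measurements[3] > measurements[0] + measurements[2] else "left"
print("steer toward:", steer)

# ...but the measurements are hopelessly under-determined for reconstruction.
print("pixels per measurement:", image.size // measurements.size)
```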
    The researchers have been able to segment the processing that normally happens inside a computer within the optics and analogue electronics of the camera, which exists beyond the reach of attackers.

    “This is the key distinguishing point from prior work which obfuscated the images inside the camera’s computer — leaving the images open to attack,” said Dr Don Dansereau, Taras’ supervisor at the Australian Centre for Robotics. “We go one level beyond to the electronics themselves, enabling a greater level of protection.”
    The researchers tried to hack their approach but were unable to reconstruct the images in any recognisable format. They have opened this task to the research community at large, challenging others to hack their method.
    “If these images were to be accessed by a third party, they would not be able to make much of them, and privacy would be preserved,” said Taras.
    Dr Dansereau said privacy was increasingly becoming a concern as more devices today come with built-in cameras, and with the possible increase in new technologies in the near future like parcel drones, which travel into residential areas to make deliveries.
    “You wouldn’t want images taken inside your home by your robot vacuum cleaner leaked on the dark web, nor would you want a delivery drone to map out your backyard. It is too risky to allow services linked to the web to capture and hold onto this information,” said Dr Dansereau.
    The approach could also be used to make devices that work in places where privacy and security are a concern, such as warehouses, hospitals, factories, schools and airports.
    The researchers hope to next build physical camera prototypes to demonstrate the approach in practice.
    “Current robotic vision technology tends to ignore the legitimate privacy concerns of end-users. This is a short-sighted strategy that slows down or even prevents the adoption of robotics in many applications of societal and economic importance. Our new sensor design takes privacy very seriously, and I hope to see it taken up by industry and used in many applications,” said Professor Niko Suenderhauf, Deputy Director of the QCR, who advised on the project.
    Professor Peter Corke, Distinguished Professor Emeritus and Adjunct Professor at the QCR, who also advised on the project, said: “Cameras are the robot equivalent of a person’s eyes, invaluable for understanding the world, knowing what is what and where it is. What we don’t want is the pictures from those cameras to leave the robot’s body, to inadvertently reveal private or intimate details about people or things in the robot’s environment.”