More stories

  • New programmable smart fabric responds to temperature and electricity

    A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.
    The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision.
    Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.
    “As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”
    The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure.
    Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.
    The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environment sensors.
    “The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.
    “Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”
    The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.

  • Better superconductors with palladium

    It is one of the most exciting races in modern physics: How can we produce the best superconductors that remain superconducting even at the highest possible temperatures and ambient pressure? In recent years, a new era of superconductivity has begun with the discovery of nickelates. These superconductors are based on nickel, which is why many scientists speak of the “nickel age of superconductivity research.” In many respects, nickelates are similar to cuprates, which are based on copper and were discovered in the 1980s.
    But now a new class of materials is coming into play: In a cooperation between TU Wien and universities in Japan, it was possible to simulate the behaviour of various materials more precisely on the computer than before. There is a “Goldilocks zone” in which superconductivity works particularly well. And this zone is reached neither with nickel nor with copper, but with palladium. This could usher in a new “age of palladates” in superconductivity research. The results have now been published in the scientific journal Physical Review Letters.
    The search for higher transition temperatures
    At high temperatures, superconductors behave very similarly to other conducting materials. But when they are cooled below a certain threshold, they change dramatically: their electrical resistance disappears completely and they can suddenly conduct electricity without any loss. This threshold, at which a material switches between the superconducting and the normally conducting state, is called the “critical temperature.”
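    The step change described here can be illustrated with a toy resistance model (a sketch only; the 9.2 K default is niobium’s well-known critical temperature, while the other constants are made up for illustration):

```python
# Toy model of the superconducting transition: above the critical
# temperature, resistance behaves like an ordinary metal's; below it,
# resistance vanishes entirely.
def resistance(temp_k: float, tc_k: float = 9.2, r0: float = 0.05,
               alpha: float = 1e-4) -> float:
    """Resistance in ohms at temperature temp_k (kelvin)."""
    if temp_k < tc_k:
        return 0.0                       # superconducting: zero resistance
    return r0 + alpha * (temp_k - tc_k)  # normal metal: grows with T

assert resistance(4.2) == 0.0    # liquid-helium temperature: superconducting
assert resistance(300.0) > 0.0   # room temperature: ordinary conductor
```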
    “We have now been able to calculate this critical temperature for a whole range of materials. With our modelling on high-performance computers, we were able to predict the phase diagram of nickelate superconductivity with a high degree of accuracy, as experiments later confirmed,” says Prof. Karsten Held from the Institute of Solid State Physics at TU Wien.
    Many materials become superconducting only just above absolute zero (-273.15°C), while others retain their superconducting properties even at much higher temperatures. A superconductor that still remains superconducting at normal room temperature and normal atmospheric pressure would fundamentally revolutionise the way we generate, transport and use electricity. However, such a material has not yet been discovered. Nevertheless, high-temperature superconductors, including those from the cuprate class, play an important role in technology — for example, in the transmission of large currents or in the production of extremely strong magnetic fields.
    Copper? Nickel? Or Palladium?
    The search for the best possible superconducting materials is difficult: many different chemical elements come into question. They can be put together in different structures, and tiny traces of other elements can be added to optimise superconductivity. “To find suitable candidates, you have to understand on a quantum-physical level how the electrons interact with each other in the material,” says Prof. Karsten Held.
    The calculations showed that there is an optimum for the interaction strength of the electrons: the interaction must be strong, but not too strong. There is a “golden zone” in between that makes it possible to achieve the highest transition temperatures.
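    As a toy picture of that trade-off, the transition temperature can be sketched as a dome over interaction strength; the function and all numbers below are illustrative, not values from the paper:

```python
# Dome-shaped toy curve: Tc rises with electron-electron interaction
# strength u, peaks in the "golden zone", and falls off when the
# interaction becomes too strong.
def toy_tc(u: float, u_opt: float = 1.0, tc_max: float = 80.0) -> float:
    """Illustrative Tc(u) in kelvin, maximal at u = u_opt."""
    return max(0.0, tc_max * (1.0 - (u - u_opt) ** 2))

couplings = [0.2 * i for i in range(11)]        # scan u from 0.0 to 2.0
best_u = max(couplings, key=toy_tc)
assert abs(best_u - 1.0) < 1e-9                 # the optimum sits mid-dome
assert toy_tc(0.2) < toy_tc(1.0) > toy_tc(1.8)  # too weak and too strong both lose
```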
    Palladates as the optimal solution
    This golden zone of medium interaction can be reached neither with cuprates nor with nickelates — but one can hit the bull’s eye with a new type of material: so-called palladates. “Palladium is directly one line below nickel in the periodic table. The properties are similar, but the electrons there are on average somewhat further away from the atomic nucleus and each other, so the electronic interaction is weaker,” says Karsten Held.
    The model calculations show how to achieve optimal transition temperatures for palladates. “The computational results are very promising,” says Karsten Held. “We hope that we can now use them to initiate experimental research. If we have a whole new, additional class of materials available with palladates to better understand superconductivity and to create even better superconductors, this could bring the entire research field forward.”

  • Cheaper method for making woven displays and smart fabrics — of any size or shape

    Researchers have developed next-generation smart textiles — incorporating LEDs, sensors, energy harvesting, and storage — that can be produced inexpensively, in any shape or size, using the same machines used to make the clothing we wear every day.
    The international team, led by the University of Cambridge, have previously demonstrated that woven displays can be made at large sizes, but these earlier examples were made using specialised manual laboratory equipment. Other smart textiles can be manufactured in specialised microelectronic fabrication facilities, but these are highly expensive and produce large volumes of waste.
    However, the team found that flexible displays and smart fabrics can be made much more cheaply, and more sustainably, by weaving electronic, optoelectronic, sensing and energy fibre components on the same industrial looms used to make conventional textiles. Their results, reported in the journal Science Advances, demonstrate how smart textiles could be an alternative to larger electronics in sectors including automotive, electronics, fashion and construction.
    Despite recent progress in the development of smart textiles, their functionality, dimensions and shapes have been limited by current manufacturing processes.
    “We could make these textiles in specialised microelectronics facilities, but these require billions of pounds of investment,” said Dr Sanghyo Lee from Cambridge’s Department of Engineering, the paper’s first author. “In addition, manufacturing smart textiles in this way is highly limited, since everything has to be made on the same rigid wafers used to make integrated circuits, so the maximum size we can get is about 30 centimetres in diameter.”
    “Smart textiles have also been limited by their lack of practicality,” said Dr Luigi Occhipinti, also from the Department of Engineering, who co-led the research. “You think of the sort of bending, stretching and folding that normal fabrics have to withstand, and it’s been a challenge to incorporate that same durability into smart textiles.”
    Last year, some of the same researchers showed that if the fibres used in smart textiles were coated with materials that can withstand stretching, they could be compatible with conventional weaving processes. Using this technique, they produced a 46-inch woven demonstrator display.

    Now, the researchers have shown that smart textiles can be made using automated processes, with no limits on their size or shape. Multiple types of fibre devices, including energy storage devices, light-emitting diodes, and transistors were fabricated, encapsulated, and mixed with conventional fibres, either synthetic or natural, to build smart textiles by automated weaving. The fibre devices were interconnected by an automated laser welding method with electrically conductive adhesive.
    The processes were all optimised to minimise damage to the electronic components, which in turn made the smart textiles durable enough to withstand the stretching of an industrial weaving machine. The encapsulation method was designed around the functionality of each fibre device, and the mechanical forces and thermal energy involved were investigated systematically to enable automated weaving and laser-based interconnection, respectively.
    The research team, working in partnership with textile manufacturers, were able to produce test patches of smart textiles of roughly 50×50 centimetres, although this can be scaled up to larger dimensions and produced in large volumes.
    “These companies have well-established manufacturing lines with high throughput fibre extruders and large weaving machines that can weave a metre square of textiles automatically,” said Lee. “So when we introduce the smart fibres to the process, the result is basically an electronic system that is manufactured exactly the same way other textiles are manufactured.”
    The researchers say it could be possible for large, flexible displays and monitors to be made on industrial looms, rather than in specialised electronics manufacturing facilities, which would make them far cheaper to produce. Further optimisation of the process is needed, however.
    “The flexibility of these textiles is absolutely amazing,” said Occhipinti. “Not just in terms of their mechanical flexibility, but the flexibility of the approach, and to deploy sustainable and eco-friendly electronics manufacturing platforms that contribute to the reduction of carbon emissions and enable real applications of smart textiles in buildings, car interiors and clothing. Our approach is quite unique in that way.”
    The research was supported in part by the European Union and UK Research and Innovation.

  • Nanowire networks learn and remember like a human brain

    An international team led by scientists at the University of Sydney has demonstrated nanowire networks can exhibit both short- and long-term memory like the human brain.
    The research, published today in the journal Science Advances, was led by Dr Alon Loeffler, who received his PhD in the School of Physics, with collaborators in Japan.
    “In this research we found higher-order cognitive function, which we normally associate with the human brain, can be emulated in non-biological hardware,” Dr Loeffler said.
    “This work builds on our previous research in which we showed how nanotechnology could be used to build a brain-inspired electrical device with neural network-like circuitry and synapse-like signalling.
    “Our current work paves the way towards replicating brain-like learning and memory in non-biological hardware systems and suggests that the underlying nature of brain-like intelligence may be physical.”
    Nanowire networks are a type of nanotechnology typically made from tiny, highly conductive silver wires that are invisible to the naked eye. The wires, coated in a plastic material, are scattered across each other like a mesh, mimicking aspects of the networked physical structure of a human brain.

    Advances in nanowire networks could herald many real-world applications, such as improving robotics or sensor devices that need to make quick decisions in unpredictable environments.
    “This nanowire network is like a synthetic neural network because the nanowires act like neurons, and the places where they connect with each other are analogous to synapses,” senior author Professor Zdenka Kuncic, from the School of Physics, said.
    “Instead of implementing some kind of machine learning task, in this study Dr Loeffler has actually taken it one step further and tried to demonstrate that nanowire networks exhibit some kind of cognitive function.”
    To test the capabilities of the nanowire network, the researchers gave it a test similar to a common memory task used in human psychology experiments, called the N-Back task.
    For a person, the N-Back task might involve remembering a specific picture of a cat from a series of feline images presented in a sequence. An N-Back score of 7, the average for people, indicates the person can recognise the same image that appeared seven steps back.
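    The scoring rule behind an N-back task is simple to state in code (a sketch; the image labels are placeholders):

```python
# N-back scoring: position i is a "match" when the item shown there
# repeats the item shown n steps earlier.
def n_back_matches(sequence, n):
    """Return the indices where sequence[i] == sequence[i - n]."""
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]

# 7-back example: the cat image at step 7 repeats the one from step 0.
stimuli = ["cat_a", "cat_b", "cat_c", "cat_d",
           "cat_e", "cat_f", "cat_g", "cat_a"]
assert n_back_matches(stimuli, 7) == [7]
assert n_back_matches(stimuli, 1) == []  # no immediate repeats
```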

    When applied to the nanowire network, the researchers found it could ‘remember’ a desired endpoint in an electric circuit seven steps back, meaning a score of 7 in an N-Back test.
    “What we did here is manipulate the voltages of the end electrodes to force the pathways to change, rather than letting the network just do its own thing. We forced the pathways to go where we wanted them to go,” Dr Loeffler said.
    “When we implemented that, its memory had much higher accuracy and didn’t really decrease over time, suggesting that we’ve found a way to strengthen the pathways to push them towards where we want them, and then the network remembers it.
    “Neuroscientists think this is how the brain works, certain synaptic connections strengthen while others weaken, and that’s thought to be how we preferentially remember some things, how we learn and so on.”
    The researchers said that when the nanowire network is constantly reinforced, it reaches a point where reinforcement is no longer needed because the information is consolidated into memory.
    “It’s kind of like the difference between long-term memory and short-term memory in our brains,” Professor Kuncic said.
    “If we want to remember something for a long period of time, we really need to keep training our brains to consolidate that, otherwise it just kind of fades away over time.
    “One task showed that the nanowire network can store up to seven items in memory at substantially higher than chance levels without reinforcement training and near-perfect accuracy with reinforcement training.”
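    The reinforce-then-consolidate dynamic the researchers describe can be sketched as a toy simulation; every parameter below is illustrative rather than a measured value from the study:

```python
# A pathway's strength decays each step (short-term memory) unless it
# is reinforced; once it crosses a threshold it is "consolidated"
# (long-term memory) and stops fading.
def simulate(steps, reinforce_until, decay=0.9, boost=0.3, threshold=2.0):
    strength, consolidated = 1.0, False
    for t in range(steps):
        if t < reinforce_until:
            strength += boost      # training reinforces the pathway
        if strength >= threshold:
            consolidated = True    # consolidated: no longer decays
        if not consolidated:
            strength *= decay      # unconsolidated memory fades
    return strength, consolidated

weak, kept_weak = simulate(steps=50, reinforce_until=2)
strong, kept_strong = simulate(steps=50, reinforce_until=20)
assert not kept_weak and weak < 0.1   # briefly trained: fades away
assert kept_strong and strong >= 2.0  # well trained: persists
```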

  • ChatGPT is still no match for humans when it comes to accounting

    Last month, OpenAI launched its newest AI chatbot product, GPT-4. According to the folks at OpenAI, the bot, which uses machine learning to generate natural language text, passed the bar exam with a score in the 90th percentile, passed 13 of 15 AP exams and got a nearly perfect score on the GRE Verbal test.
    Inquiring minds at BYU and 186 other universities wanted to know how OpenAI’s tech would fare on accounting exams. So, they put the original version, ChatGPT, to the test. The researchers say that while it still has work to do in the realm of accounting, it’s a game changer that will change the way everyone teaches and learns — for the better.
    “When this technology first came out, everyone was worried that students could now use it to cheat,” said lead study author David Wood, a BYU professor of accounting. “But opportunities to cheat have always existed. So for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening.”
    Since its debut in November 2022, ChatGPT has become the fastest growing technology platform ever, reaching 100 million users in under two months. In response to intense debate about how models like ChatGPT should factor into education, Wood decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.
    His co-author recruiting pitch on social media exploded: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions. They also recruited undergrad BYU students (including Wood’s daughter, Jessica) to feed another 2,268 textbook test bank questions to ChatGPT. The questions covered accounting information systems (AIS), auditing, financial accounting, managerial accounting and tax, and varied in difficulty and type (true/false, multiple choice, short answer, etc.).
    Although ChatGPT’s performance was impressive, the students performed better. Students scored an overall average of 76.7%, compared to ChatGPT’s score of 47.4%. On 11.3% of questions, ChatGPT scored higher than the student average, doing particularly well on AIS and auditing. But the AI bot did worse on tax, financial, and managerial assessments, possibly because ChatGPT struggled with the mathematical processes those question types require.
    When it came to question type, ChatGPT did better on true/false questions (68.7% correct) and multiple-choice questions (59.5%), but struggled with short-answer questions (between 28.7% and 39.1%). In general, higher-order questions were harder for ChatGPT to answer. In fact, sometimes ChatGPT would provide authoritative written descriptions for incorrect answers, or answer the same question different ways.
    “It’s not perfect; you’re not going to be using it for everything,” said Jessica Wood, currently a freshman at BYU. “Trying to learn solely by using ChatGPT is a fool’s errand.”
    The researchers also uncovered some other fascinating trends through the study, including:
    • ChatGPT doesn’t always recognize when it is doing math and makes nonsensical errors such as adding two numbers in a subtraction problem, or dividing numbers incorrectly.
    • ChatGPT often provides explanations for its answers, even if they are incorrect. Other times, ChatGPT’s descriptions are accurate, but it will then proceed to select the wrong multiple-choice answer.
    • ChatGPT sometimes makes up facts. For example, when providing a reference, it generates a real-looking reference that is completely fabricated. The work and sometimes the authors do not even exist.
    That said, the authors fully expect GPT-4 to improve exponentially on the accounting questions posed in their study, and on the issues mentioned above. What they find most promising is how the chatbot can help improve teaching and learning, including the ability to design and test assignments, or perhaps be used for drafting portions of a project.
    “It’s an opportunity to reflect on whether we are teaching value-added information or not,” said study coauthor and fellow BYU accounting professor Melissa Larson. “This is a disruption, and we need to assess where we go from here. Of course, I’m still going to have TAs, but this is going to force us to use them in different ways.”

  • Reinforcement learning: From board games to protein design

    Scientists have successfully applied reinforcement learning to a challenge in molecular biology.
    The team of researchers developed powerful new protein design software adapted from a strategy proven adept at board games like Chess and Go. In one experiment, proteins made with the new approach were found to be more effective at generating useful antibodies in mice.
    The findings, reported April 21 in Science, suggest that this breakthrough may soon lead to more potent vaccines. More broadly, the approach could lead to a new era in protein design.
    “Our results show that reinforcement learning can do more than master board games. When trained to solve long-standing puzzles in protein science, the software excelled at creating useful molecules,” said senior author David Baker, professor of biochemistry at the UW School of Medicine in Seattle and a recipient of the 2021 Breakthrough Prize in Life Sciences.
    “If this method is applied to the right research problems,” he said, “it could accelerate progress in a variety of scientific fields.”
    The research is a milestone in tapping artificial intelligence to conduct protein science research. The potential applications are vast, from developing more effective cancer treatments to creating new biodegradable textiles.

    Reinforcement learning is a type of machine learning in which a computer program learns to make decisions by trying different actions and receiving feedback. Such an algorithm can learn to play chess, for example, by testing millions of different moves that lead to victory or defeat on the board. The program is designed to learn from these experiences and become better at making decisions over time.
    To make a reinforcement learning program for protein design, the scientists gave the computer millions of simple starting molecules. The software then made ten thousand attempts at randomly improving each toward a predefined goal. The computer lengthened the proteins or bent them in specific ways until it learned how to contort them into desired shapes.
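    That trial-and-feedback loop can be sketched in miniature. The one-dimensional candidate and distance-based feedback below are stand-ins for the protein geometries and design goals the actual software works with:

```python
import random

# Hill-climbing sketch of the loop described above: try a random change,
# keep it only if it moves the candidate closer to the goal.
def improve(candidate: float, goal: float, attempts: int = 10_000,
            step: float = 0.5, seed: int = 0) -> float:
    rng = random.Random(seed)
    for _ in range(attempts):
        trial = candidate + rng.uniform(-step, step)   # random "move"
        if abs(trial - goal) < abs(candidate - goal):  # feedback: closer?
            candidate = trial                          # keep the improvement
    return candidate

result = improve(candidate=0.0, goal=7.0)
assert abs(result - 7.0) < 0.05  # thousands of attempts home in on the goal
```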
    Isaac D. Lutz, Shunzhi Wang, and Christoffer Norn, all members of the Baker Lab, led the research. Their team’s Science manuscript is titled “Top-down design of protein architectures with reinforcement learning.”
    “Our approach is unique because we use reinforcement learning to solve the problem of creating protein shapes that fit together like pieces of a puzzle,” explained co-lead author Lutz, a doctoral student at the UW Medicine Institute for Protein Design. “This simply was not possible using prior approaches and has the potential to transform the types of molecules we can build.”
    As part of this study, the scientists manufactured hundreds of AI-designed proteins in the lab. Using electron microscopes and other instruments, they confirmed that many of the protein shapes created by the computer were indeed realized in the lab.

    “This approach proved not only accurate but also highly customizable. For example, we asked the software to make spherical structures with no holes, small holes, or large holes. Its potential to make all kinds of architectures has yet to be fully explored,” said co-lead author Shunzhi Wang, a postdoctoral scholar at the UW Medicine Institute for Protein Design.
    The team concentrated on designing new nano-scale structures composed of many protein molecules. This required designing both the protein components themselves and the chemical interfaces that allow the nano-structures to self-assemble.
    Electron microscopy confirmed that numerous AI-designed nano-structures were able to form in the lab. As a measure of how accurate the design software had become, the scientists observed many unique nano-structures in which every atom was found to be in the intended place. In other words, the deviation between the intended and realized nano-structure was on average less than the width of a single atom. This is called atomically accurate design.
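    “Atomically accurate” here boils down to an average deviation between intended and realised atom positions that is smaller than an atom; a minimal sketch with made-up coordinates (in ångströms):

```python
import math

# Mean Euclidean distance between paired intended and realised
# 3D atom coordinates.
def mean_deviation(designed, realised):
    return sum(math.dist(a, b) for a, b in zip(designed, realised)) / len(designed)

designed = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
realised = [(0.1, 0.0, 0.0), (1.4, 0.0, 0.0), (3.0, 0.1, 0.0)]
ATOM_WIDTH = 1.0  # rough width of a single atom, in ångströms
assert mean_deviation(designed, realised) < ATOM_WIDTH
```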
    The authors foresee a future in which this approach could enable them and others to create therapeutic proteins, vaccines, and other molecules that could not have been made using prior methods.
    Researchers from the UW Medicine Institute for Stem Cell and Regenerative Medicine used primary cell models of blood vessel cells to show that the designed protein scaffolds outperformed previous versions of the technology. For example, because the receptors that help cells receive and interpret signals were clustered more densely on the more compact scaffolds, they were more effective at promoting blood vessel stability.
    Hannele Ruohola-Baker, a UW School of Medicine professor of biochemistry and one of the study’s authors, spoke to the implications of the investigation for regenerative medicine: “The more accurate the technology becomes, the more it opens up potential applications, including vascular treatments for diabetes, brain injuries, strokes, and other cases where blood vessels are at risk. We can also imagine more precise delivery of factors that we use to differentiate stem cells into various cell types, giving us new ways to regulate the processes of cell development and aging.”
    This work was funded by the National Institutes of Health (P30 GM124169, S10OD018483, 1U19AG065156-01, T90 DE021984, 1P01AI167966); Open Philanthropy Project and The Audacious Project at the Institute for Protein Design; Novo Nordisk Foundation (NNF170C0030446); Microsoft; and Amgen. Research was in part conducted at the Advanced Light Source, a national user facility operated by Lawrence Berkeley National Laboratory on behalf of the Department of Energy.
    News release written by Ian Haydon, UW Medicine Institute for Protein Design.

  • AI system can generate novel proteins that meet structural design targets

    MIT researchers are using artificial intelligence to design new proteins that go beyond those found in nature.
    They developed machine-learning algorithms that can generate proteins with specific structural features, which could be used to make materials that have certain mechanical properties, like stiffness or elasticity. Such biologically inspired materials could potentially replace materials made from petroleum or ceramics, but with a much smaller carbon footprint.
    The researchers from MIT, the MIT-IBM Watson AI Lab, and Tufts University employed a generative model, which is the same type of machine-learning model architecture used in AI systems like DALL-E 2. But instead of using it to generate realistic images from natural language prompts, like DALL-E 2 does, they adapted the model architecture so it could predict amino acid sequences of proteins that achieve specific structural objectives.
    In a paper to be published in Chem, the researchers demonstrate how these models can generate realistic, yet novel, proteins. The models, which learn biochemical relationships that control how proteins form, can produce new proteins that could enable unique applications, says senior author Markus Buehler, the Jerry McAfee Professor in Engineering and professor of civil and environmental engineering and of mechanical engineering.
    For instance, this tool could be used to develop protein-inspired food coatings, which could keep produce fresh longer while being safe for humans to eat. And the models can generate millions of proteins in a few days, quickly giving scientists a portfolio of new ideas to explore, he adds.
    “When you think about designing proteins nature has not discovered yet, it is such a huge design space that you can’t just sort it out with a pencil and paper. You have to figure out the language of life, the way amino acids are encoded by DNA and then come together to form protein structures. Before we had deep learning, we really couldn’t do this,” says Buehler, who is also a member of the MIT-IBM Watson AI Lab.

    Joining Buehler on the paper are lead author Bo Ni, a postdoc in Buehler’s Laboratory for Atomistic and Molecular Mechanics; and David Kaplan, the Stern Family Professor of Engineering and professor of bioengineering at Tufts.
    Adapting new tools for the task
    Proteins are formed by chains of amino acids, folded together in 3D patterns. The sequence of amino acids determines the mechanical properties of the protein. While scientists have identified thousands of proteins created through evolution, they estimate that an enormous number of amino acid sequences remain undiscovered.
    To streamline protein discovery, researchers have recently developed deep learning models that can predict the 3D structure of a protein for a set of amino acid sequences. But the inverse problem — predicting a sequence of amino acid structures that meet design targets — has proven even more challenging.
    A new advent in machine learning enabled Buehler and his colleagues to tackle this thorny challenge: attention-based diffusion models.

    Attention-based models can learn very long-range relationships, which is key to developing proteins because one mutation in a long amino acid sequence can make or break the entire design, Buehler says. A diffusion model learns to generate new data through a process that involves adding noise to training data, then learning to recover the data by removing the noise. Diffusion models are often more effective than other models at generating high-quality, realistic data that can be conditioned to meet a set of design targets.
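    The add-noise/remove-noise idea can be sketched without any neural network; the exact noise value below stands in for what a trained model only learns to approximate:

```python
import random

rng = random.Random(0)

def add_noise(x: float, noise_scale: float):
    """Forward process: return the noised sample and the noise added."""
    noise = rng.gauss(0.0, noise_scale)
    return x + noise, noise

def denoise(noisy_x: float, predicted_noise: float) -> float:
    """Reverse process: subtract the model's noise estimate."""
    return noisy_x - predicted_noise

clean = 2.5
noisy, true_noise = add_noise(clean, noise_scale=1.0)
# A perfectly trained model would predict true_noise exactly:
assert abs(denoise(noisy, true_noise) - clean) < 1e-12
```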
    The researchers used this architecture to build two machine-learning models that can predict a variety of new amino acid sequences which form proteins that meet structural design targets.
    “In the biomedical industry, you might not want a protein that is completely unknown because then you don’t know its properties. But in some applications, you might want a brand-new protein that is similar to one found in nature, but does something different. We can generate a spectrum with these models, which we control by tuning certain knobs,” Buehler says.
    Common folding patterns of amino acids, known as secondary structures, produce different mechanical properties. For instance, proteins with alpha helix structures yield stretchy materials while those with beta sheet structures yield rigid materials. Combining alpha helices and beta sheets can create materials that are stretchy and strong, like silks.
    The researchers developed two models, one that operates on overall structural properties of the protein and one that operates at the amino acid level. Both models work by combining these amino acid structures to generate proteins. For the model that operates on the overall structural properties, a user inputs a desired percentage of different structures (40 percent alpha-helix and 60 percent beta sheet, for instance). Then the model generates sequences that meet those targets. For the second model, the scientist also specifies the order of amino acid structures, which gives much finer-grained control.
    The models are connected to an algorithm that predicts protein folding, which the researchers use to determine the protein’s 3D structure. Then they calculate its resulting properties and check those against the design specifications.
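    That generate-predict-check pipeline can be sketched with toy stand-ins for the generative model, the folding predictor, and the property check; the “H”/“E” sequences and the 40 percent helix target below are illustrative, not from the paper:

```python
import random

# Keep only generated candidates whose predicted structure meets the spec.
def design_loop(generate, predict_fold, properties, meets_spec, n=100):
    accepted = []
    for _ in range(n):
        sequence = generate()               # generative model proposes
        structure = predict_fold(sequence)  # folding predictor
        if meets_spec(properties(structure)):
            accepted.append(sequence)
    return accepted

rng = random.Random(1)
gen = lambda: "".join(rng.choice("HE") for _ in range(10))  # toy sequences
fold = lambda seq: seq                   # identity stand-in for a fold predictor
props = lambda s: s.count("H") / len(s)  # helix fraction as the "property"
ok = lambda helix_frac: helix_frac >= 0.4

designs = design_loop(gen, fold, props, ok, n=50)
assert all(d.count("H") >= 4 for d in designs)  # every survivor meets the spec
```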
    Realistic yet novel designs
    They tested their models by comparing the new proteins to known proteins that have similar structural properties. Many had some overlap with existing amino acid sequences, about 50 to 60 percent in most cases, but also some entirely new sequences. The level of similarity suggests that many of the generated proteins are synthesizable, Buehler adds.
    To ensure the predicted proteins are reasonable, the researchers tried to trick the models by inputting physically impossible design targets. They were impressed to see that, instead of producing improbable proteins, the models generated the closest synthesizable solution.
    “The learning algorithm can pick up the hidden relationships in nature. This gives us confidence to say that whatever comes out of our model is very likely to be realistic,” Ni says.
    Next, the researchers plan to experimentally validate some of the new protein designs by making them in a lab. They also want to continue augmenting and refining the models so they can develop amino acid sequences that meet more criteria, such as biological functions.
    “For the applications we are interested in, like sustainability, medicine, food, health, and materials design, we are going to need to go beyond what nature has done. Here is a new design tool that we can use to create potential solutions that might help us solve some of the really pressing societal issues we are facing,” Buehler says.
    This research was supported, in part, by the MIT-IBM Watson AI Lab, the U.S. Department of Agriculture, the U.S. Department of Energy, the Army Research Office, the National Institutes of Health, and the Office of Naval Research.


    Quantum entanglement could make accelerometers and dark matter sensors more accurate

    The “spooky action at a distance” that once unnerved Einstein may be on its way to being as pedestrian as the accelerometers and gyroscopes that currently track motion in smartphones.
    Quantum entanglement significantly improves the precision of sensors that can be used to navigate without GPS, according to a new study in Nature Photonics.
    “By exploiting entanglement, we improve both measurement sensitivity and how quickly we can make the measurement,” said Zheshen Zhang, associate professor of electrical and computer engineering at the University of Michigan and co-corresponding author of the study. The experiments were done at the University of Arizona, where Zhang was working at the time.
    Optomechanical sensors detect forces by letting them disturb a mechanical sensing device, whose resulting motion is then measured with light waves. In this experiment, the sensors were membranes, which act like drum heads that vibrate after experiencing a push. Optomechanical sensors can function as accelerometers for inertial navigation on a planet that doesn’t have GPS satellites, or within a building as a person navigates different floors.
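As a back-of-the-envelope illustration of how such a sensor reads acceleration, one can treat the membrane as a mass on a spring: a steady acceleration a displaces the proof mass by a/ω₀², and that displacement is what the light measures. The numbers below are illustrative, not the paper's.

```python
import math

# Mass-on-a-spring approximation of an optomechanical accelerometer:
# a steady acceleration a displaces the proof mass by x = a / omega0^2,
# where omega0 is the angular resonance frequency of the membrane.
def static_displacement(acceleration, resonance_hz):
    """Proof-mass displacement (meters) under a steady acceleration (m/s^2)."""
    omega0 = 2 * math.pi * resonance_hz  # angular resonance frequency (rad/s)
    return acceleration / omega0**2
```

Softer (lower-frequency) membranes move farther for the same force, which is one reason thin, floppy membranes make sensitive force probes.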
    Quantum entanglement could make optomechanical sensors more accurate than inertial sensors currently in use. It could also enable optomechanical sensors to look for very subtle forces, such as identifying the presence of dark matter. Dark matter is invisible matter believed to account for five times more of the mass in the universe than what we can sense with light. It would tug on the sensor with gravitational force.
    Here’s how entanglement improves optomechanical sensors:
    Optomechanical sensors rely on two synchronized laser beams. One of them is reflected from a sensor, and any movement in the sensor changes the distance that the light travels on its way to the detector. That difference in distance traveled shows up when the second wave overlaps with the first. If the sensor is still, the two waves are perfectly aligned. But if the sensor is moving, they create an interference pattern as the peaks and troughs of their waves cancel each other out in places. That pattern reveals the size and speed of vibrations in the sensor.
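That relationship can be sketched with textbook two-beam interference, a simplification of the actual setup; the wavelength here is an assumed value, not taken from the paper.

```python
import math

# Textbook two-beam interference: a membrane displacement x adds a phase
# of 4*pi*x/wavelength to the sensing beam (the factor of 2 accounts for
# the round trip to the membrane and back), shifting the detected intensity.
def detected_intensity(x, wavelength=1550e-9):
    """Normalized detector intensity for membrane displacement x (meters)."""
    phase = 4 * math.pi * x / wavelength  # round-trip phase shift
    return 0.5 * (1 + math.cos(phase))    # two equal-amplitude beams interfering

# A still membrane (x = 0) gives full constructive interference; a
# quarter-wavelength displacement gives complete cancellation.
```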

    Usually in interferometry systems, the farther the light travels, the more accurate the system becomes. The most sensitive interferometry system on the planet, the Laser Interferometer Gravitational-Wave Observatory, sends light on an 8-kilometer round trip along its 4-kilometer arms. But that’s not going to fit in a smartphone.
    To enable high accuracy in miniaturized optomechanical sensors, Zhang’s team explored quantum entanglement. Rather than splitting the light once so that it bounced off a sensor and a mirror, they split each beam a second time so that the light bounced off two sensors and two mirrors. Dalziel Wilson, an assistant professor of optical sciences at the University of Arizona, along with his doctoral students Aman Agrawal and Christian Pluchar, built the membrane devices. These membranes, just 100 nanometers — or 0.0001 millimeters — thick, move in response to very small forces.
    Doubling the sensors improves the accuracy, as the membranes should be vibrating in sync with each other, but the entanglement adds an extra level of coordination. Zhang’s group created the entanglement by “squeezing” the laser light. In quantum mechanical objects, such as the photons that make up light, there is a fundamental limit on how well the position and momentum of a particle can be known. Because photons are also waves, this translates to the phase of the wave (where it is in its oscillation) and its amplitude (how much energy it carries).
    “Squeezing redistributes the uncertainty, so that the squeezed component is known more precisely, and the anti-squeezed component carries more of the uncertainty. We squeezed the phase because that is what we needed to know for our measurement,” said Yi Xia, a recent Ph.D. graduate from Zhang’s lab at the University of Arizona and co-corresponding author of the paper.
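In standard quantum-optics notation (one common convention, in which the vacuum quadrature variance is 1/4), the tradeoff Xia describes can be written for the amplitude quadrature X₁ and phase quadrature X₂ of a light mode:

```latex
% Heisenberg-type uncertainty relation for the field quadratures
% (amplitude X_1, phase X_2):
\Delta X_1 \, \Delta X_2 \ge \tfrac{1}{4}
% Vacuum state: \Delta X_1 = \Delta X_2 = \tfrac{1}{2}.
% Phase-squeezed state with squeezing parameter r > 0:
\Delta X_2 = \tfrac{1}{2}\, e^{-r}, \qquad \Delta X_1 = \tfrac{1}{2}\, e^{+r}
```

Squeezing the phase quadrature (shrinking ΔX₂) necessarily inflates the amplitude quadrature, so the product still respects the bound.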
    In squeezed light, the photons are more closely related to one another. Zhang contrasted what happens when the photons go through a beam splitter with cars coming to a fork in the freeway.

    “You have three cars going one way and three cars going the other way. But in quantum superposition, each car goes both ways. Now the cars on the left are entangled with the cars on the right,” he said.
    Because the fluctuations in the two entangled beams are linked, the uncertainties in their phase measurements are correlated. As a result, with some mathematical wizardry, the team was able to get measurements that are 40% more precise than with two unentangled beams, and they can do it 60% faster. What’s more, the precision and speed are expected to rise in proportion to the number of sensors.
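The idealized textbook version of that scaling contrasts the standard quantum limit for independent sensors with the Heisenberg limit for entangled ones. The sketch below shows those limiting cases, not the experiment's actual 40 percent and 60 percent gains.

```python
import math

# Idealized textbook scaling (not the paper's measured numbers):
# N independent sensors average phase noise down as 1/sqrt(N) (the
# standard quantum limit), while maximally entangled sensors can in
# principle approach 1/N (the Heisenberg limit).
def standard_quantum_limit(n_sensors, single_sensor_noise=1.0):
    """Phase uncertainty for N independent (unentangled) sensors."""
    return single_sensor_noise / math.sqrt(n_sensors)

def heisenberg_limit(n_sensors, single_sensor_noise=1.0):
    """Best-case phase uncertainty for N maximally entangled sensors."""
    return single_sensor_noise / n_sensors
```

The gap between the two limits widens with the array size, which is why an array of entanglement-enhanced sensors could outperform existing technology by a growing margin.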
    “It is envisioned that an array of entanglement-enhanced sensors will offer orders-of-magnitude performance gain over existing sensing technology to enable the detection of particles beyond the present physical model, opening the door to a new world that is yet to be observed,” said Zhang.
    The team’s next steps are to miniaturize the system. Already, they can put a squeezed-light source on a chip that is just half a centimeter to a side. They expect to have a prototype chip with the squeezed-light source, beam splitters, waveguides and inertial sensors within a year or two.
    The study was funded by the Office of Naval Research, National Science Foundation, Department of Energy and Defense Advanced Research Projects Agency.