More stories

  • NeuWS camera answers ‘holy grail problem’ in optical imaging

    Engineers from Rice University and the University of Maryland have created full-motion video technology that could potentially be used to make cameras that peer through fog, smoke, driving rain, murky water, skin, bone and other media that reflect scattered light and obscure objects from view.
    “Imaging through scattering media is the ‘holy grail problem’ in optical imaging at this point,” said Rice’s Ashok Veeraraghavan, co-corresponding author of an open-access study published today in Science Advances. “Scattering is what makes light — which has lower wavelength, and therefore gives much better spatial resolution — unusable in many, many scenarios. If you can undo the effects of scattering, then imaging just goes so much further.”
    Veeraraghavan’s lab collaborated with the research group of Maryland co-corresponding author Christopher Metzler to create a technology they named NeuWS, which is an acronym for “neural wavefront shaping,” the technology’s core technique.
    “If you ask people who are working on autonomous driving vehicles about the biggest challenges they face, they’ll say, ‘Bad weather. We can’t do good imaging in bad weather,’” Veeraraghavan said. “They are saying ‘bad weather,’ but what they mean, in technical terms, is light scattering. If you ask biologists about the biggest challenges in microscopy, they’ll say, ‘We can’t image deep tissue in vivo.’ They’re saying ‘deep tissue’ and ‘in vivo,’ but what they actually mean is that the skin and other layers of tissue they want to see through are scattering light. If you ask underwater photographers about their biggest challenge, they’ll say, ‘I can only image things that are close to me.’ What they actually mean is that light scatters in water, and therefore doesn’t go deep enough for them to focus on things that are far away.
    “In all of these circumstances, and others, the real technical problem is scattering,” Veeraraghavan said.
    He said NeuWS could potentially be used to overcome scattering in those scenarios and others.

    “This is a big step forward for us, in terms of solving this in a way that’s potentially practical,” he said. “There’s a lot of work to be done before we can actually build prototypes in each of those application domains, but the approach we have demonstrated could traverse them.”
    Conceptually, NeuWS is based on the principle that light waves are complex mathematical quantities with two key properties that can be computed for any given location. The first, magnitude, is the amount of energy the wave carries at the location, and the second is phase, which is the wave’s state of oscillation at the location. Metzler and Veeraraghavan said measuring phase is critical for overcoming scattering, but it is impractical to measure directly because of the high frequency of optical light.
    So they instead measure incoming light as “wavefronts” — single measurements that contain both phase and intensity information — and use backend processing to rapidly decipher phase information from several hundred wavefront measurements per second.
    “The technical challenge is finding a way to rapidly measure phase information,” said Metzler, an assistant professor of computer science at Maryland and “triple Owl” Rice alum who earned his Ph.D., master’s and bachelor’s degrees in electrical and computer engineering from Rice in 2019, 2014 and 2013, respectively. Metzler was at Rice University during the development of an earlier iteration of wavefront-processing technology called WISH that Veeraraghavan and colleagues published in 2020.
    “WISH tackled the same problem, but it worked under the assumption that everything was static and nice,” Veeraraghavan said. “In the real world, of course, things change all of the time.”
    With NeuWS, he said, the idea is to not only undo the effects of scattering, but to undo them fast enough so the scattering media itself doesn’t change during the measurement.

    “Instead of measuring the state of oscillation itself, you measure its correlation with known wavefronts,” Veeraraghavan said. “You take a known wavefront, you interfere that with the unknown wavefront and you measure the interference pattern produced by the two. That is the correlation between those two wavefronts.”
    Metzler used the analogy of looking at the North Star at night through a haze of clouds. “If I know what the North Star is supposed to look like, and I can tell it is blurred in a particular way, then that tells me how everything else will be blurred.”
    Veeraraghavan said, “It’s not a comparison, it’s a correlation, and if you measure at least three such correlations, you can uniquely recover the unknown wavefront.”
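The recovery Veeraraghavan describes can be sketched numerically. The following is a minimal phase-shifting simulation, not the NeuWS code: an unknown wavefront with amplitude `a` and phase `phi` is interfered with a unit-amplitude reference at three known offsets, and the three intensity readings are enough to recover `phi`. All names and values here are illustrative.

```python
import math

def interfere(a, phi, theta):
    """Intensity of (unknown wavefront) + (unit reference shifted by theta):
    |a*e^{i*phi} + e^{i*theta}|^2 = a^2 + 1 + 2*a*cos(phi - theta)."""
    return a ** 2 + 1 + 2 * a * math.cos(phi - theta)

def recover_phase(i0, i90, i180):
    """Three-step phase retrieval with reference offsets 0, pi/2, pi:
    (i0 - i180) / 2 = 2*a*cos(phi);  i90 - (i0 + i180) / 2 = 2*a*sin(phi)."""
    return math.atan2(i90 - (i0 + i180) / 2, (i0 - i180) / 2)

# Simulate one pixel: the "unknown" wavefront, then three correlations.
a, phi = 0.7, 1.234  # ground truth, unknown to the measurement in practice
i0, i90, i180 = (interfere(a, phi, t) for t in (0.0, math.pi / 2, math.pi))
print(recover_phase(i0, i90, i180))  # 1.234, up to floating-point error
```

Two offsets alone would leave the sign of the sine component ambiguous; the third measurement resolves it, which is why at least three correlations are needed.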
    State-of-the-art spatial light modulators can make several hundred such measurements per minute, and Veeraraghavan, Metzler and colleagues showed they could use a modulator and their computational method to capture video of moving objects that were obscured from view by intervening scattering media.
    “This is the first step, the proof-of-principle that this technology can correct for light scattering in real time,” said Rice’s Haiyun Guo, one of the study’s lead authors and a Ph.D. student in Veeraraghavan’s research group.
    In one set of experiments, for example, a microscope slide containing a printed image of an owl or a turtle was spun on a spindle and filmed by an overhead camera. Light-scattering media were placed between the camera and target slide, and the researchers measured NeuWS’ ability to correct for light scattering. Examples of scattering media included onion skin, slides coated with nail polish, slices of chicken breast tissue and light-diffusing films. For each of these, the experiments showed NeuWS could correct for light scattering and produce clear video of the spinning figures.
    “We developed algorithms that allow us to continuously estimate both the scattering and the scene,” Metzler said. “That’s what allows us to do this, and we do it with mathematical machinery called neural representation that allows it to be both efficient and fast.”
    NeuWS rapidly modulates light from incoming wavefronts to create several slightly altered phase measurements. The altered phases are then fed directly into a 16,000-parameter neural network that quickly computes the necessary correlations to recover the wavefront’s original phase information.
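For scale, a 16,000-parameter network is minuscule by deep-learning standards, which is part of what makes real-time operation plausible. As a rough illustration (the layer sizes below are hypothetical, not taken from the paper), a fully connected network mapping a 2-D coordinate to a single phase value, with two hidden layers of 120 units, lands in that ballpark:

```python
def mlp_param_count(layer_sizes):
    """Weights plus biases for a plain fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A hypothetical coordinate network: (x, y) in, one phase value out,
# two hidden layers of 120 units each.
print(mlp_param_count([2, 120, 120, 1]))  # 15001, on the order of 16,000
```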
    “The neural networks allow it to be faster by allowing us to design algorithms that require fewer measurements,” Veeraraghavan said.
    Metzler said, “That’s actually the biggest selling point. Fewer measurements, basically, means we need much less capture time. It’s what allows us to capture video rather than still frames.”
    The research was supported by the Air Force Office of Scientific Research (FA9550-22-1-0208), the National Science Foundation (1652633, 1730574, 1648451) and the National Institutes of Health (DE032051), and partial funding for open access was provided by the University of Maryland Libraries’ Open Access Publishing Fund.

  • GPT-3 informs and disinforms us better

    A recent study conducted by researchers at the University of Zurich delved into the capabilities of AI models, specifically focusing on OpenAI’s GPT-3, to determine their potential risks and benefits in generating and disseminating (dis)information. Led by postdoctoral researchers Giovanni Spitale and Federico Germani, alongside Nikola Biller-Andorno, director of the Institute of Biomedical Ethics and History of Medicine (IBME) at the University of Zurich, the study, which involved 697 participants, sought to evaluate whether individuals could differentiate between disinformation and accurate information presented in the form of tweets. The researchers also aimed to determine whether participants could discern if a tweet was written by a genuine Twitter user or generated by GPT-3, an advanced AI language model. The topics covered included climate change, vaccine safety, the COVID-19 pandemic, flat earth theory, and homeopathic treatments for cancer.
    AI-powered systems could generate large-scale disinformation campaigns
    On the one hand, GPT-3 demonstrated the ability to generate accurate and, compared to tweets from real Twitter users, more easily comprehensible information. However, the researchers also discovered that the AI language model had an unsettling knack for producing highly persuasive disinformation. In a concerning twist, participants were unable to reliably differentiate between tweets created by GPT-3 and those written by real Twitter users. “Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems,” says Federico Germani.
    These findings suggest that information campaigns created with GPT-3, based on well-structured prompts and evaluated by trained humans, could prove more effective, for instance, in a public health crisis that requires fast and clear communication with the public. The findings also raise significant concerns regarding the threat of AI perpetuating disinformation, particularly in the context of the rapid and widespread dissemination of misinformation and disinformation during a crisis or public health event. The study reveals that AI-powered systems could be exploited to generate large-scale disinformation campaigns on potentially any topic, jeopardizing not only public health but also the integrity of information ecosystems vital for functioning democracies.
    Proactive regulation highly recommended
    As the impact of AI on information creation and evaluation becomes increasingly pronounced, the researchers call on policymakers to respond with stringent, evidence-based and ethically informed regulations to address the potential threats posed by these disruptive technologies and ensure the responsible use of AI in shaping our collective knowledge and well-being. “The findings underscore the critical importance of proactive regulation to mitigate the potential harm caused by AI-driven disinformation campaigns,” says Nikola Biller-Andorno. “Recognizing the risks associated with AI-generated disinformation is crucial for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age.”
    Transparent research using open science best practice
    The study adhered to open science best practices throughout the entire pipeline, from pre-registration to dissemination. Giovanni Spitale, who is also an UZH Open Science Ambassador, states: “Open science is vital for fostering transparency and accountability in research, allowing for scrutiny and replication. In the context of our study, it becomes even more crucial as it enables stakeholders to access and evaluate the data, code, and intermediate materials, enhancing the credibility of our findings and facilitating informed discussions on the risks and implications of AI-generated disinformation.” Interested parties can access these resources through the OSF repository: https://osf.io/9ntgf/.

  • Geologists are using artificial intelligence to predict landslides

    A new technique developed by UCLA geologists that uses artificial intelligence to better predict where and why landslides may occur could bolster efforts to protect lives and property in some of the world’s most disaster-prone areas.
    The new method, described in a paper published in the journal Communications Earth & Environment, improves the accuracy and interpretability of AI-based machine-learning techniques, requires far less computing power and is more broadly applicable than traditional predictive models.
    The approach would be particularly valuable in places like California, the researchers say, where drought, wildfires and earthquakes create the perfect recipe for landslide disasters and where the situation is expected to get worse as climate change brings stronger and wetter storms.
    Many factors influence where a landslide will occur, including the shape of the terrain, its slope and drainage areas, the material properties of soil and bedrock, and environmental conditions like climate, rainfall, hydrology and ground motion resulting from earthquakes. With so many variables, predicting when and where a chunk of earth is likely to lose its grip is as much an art as a science.
    Geologists have traditionally estimated an area’s landslide risk by incorporating these factors into physical and statistical models. With enough data, such models can achieve reasonably accurate predictions, but physical models are time- and resource-intensive and can’t be applied over broad areas, while statistical models give little insight into how they assess various risk factors to arrive at their predictions.
    Using artificial intelligence to predict landslides
    In recent years, researchers have trained AI machine-learning models known as deep neural networks, or DNNs, to predict landslides. When fed reams of landslide-related variables and historical landslide information, these large, interconnected networks of algorithms can very quickly process and “learn” from this data to make highly accurate predictions.

    Yet despite their advantages in processing time and learning power, as with statistical models, DNNs do not “show their work,” making it difficult for researchers to interpret their predictions and to know which causative factors to target in attempting to prevent possible landslides in the future.
    “DNNs will deliver a percentage likelihood of a landslide that may be accurate, but we are unable to figure out why and which specific variables were most important in causing the landslide,” said Kevin Shao, a doctoral student in Earth, planetary and space sciences and co-first author of the journal paper.
    The problem, said co-first author Khalid Youssef, a former student of biomedical engineering and postdoctoral researcher at UCLA, is that the various network layers of DNNs constantly feed into one another during the learning process, and untangling their analysis is impossible. The UCLA researchers’ new method aimed to address that.
    “We sought to enable a clear separation of the results from the different data inputs, which would make the analysis far more useful in determining which factors are the most important contributors to natural disasters,” he said.
    Youssef and Shao teamed with co-corresponding authors Seulgi Moon, a UCLA associate professor of Earth, planetary and space sciences, and Louis Bouchard, a UCLA professor of chemistry and bioengineering, to develop an approach that could decouple the analytic power of DNNs from their complex adaptive nature in order to deliver more actionable results.

    Their method uses a type of AI called a superposable neural network, or SNN, in which the different layers of the network run alongside each other — retaining the ability to assess the complex relationships between data inputs and output results — but only converging at the very end to yield the prediction.
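The architecture described above can be caricatured as an additive model: one small subnetwork per input variable, each running independently, with the branches summed only at the output so that every variable’s contribution can be read off directly. The sketch below is illustrative (toy weights and branch sizes, not the authors’ code):

```python
import math

def subnet(x, w1, w2):
    """One per-variable branch: scalar in, one tanh unit, scalar out."""
    return w2 * math.tanh(w1 * x)

def snn_predict(inputs, params):
    """Branches run side by side and converge only at the final sum, so
    contributions[i] is exactly variable i's share of the prediction."""
    contributions = [subnet(x, w1, w2) for x, (w1, w2) in zip(inputs, params)]
    return sum(contributions), contributions

# Three made-up variables (say slope, rainfall, ground motion) with toy weights.
params = [(0.8, 1.5), (1.2, 2.0), (0.3, 0.5)]
total, parts = snn_predict([0.9, 0.4, 0.7], params)
print(total, parts)  # each part is attributable to a single input
```

In a densely connected DNN the layers mix all inputs at every step, so no such per-variable decomposition exists; here the attribution falls out of the architecture itself.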
    The researchers fed the SNN data about 15 geospatial and climatic variables relevant to the eastern Himalaya mountains. The region was selected because the majority of human losses due to landslides occur in Asia, with a substantial portion in the Himalayas. The SNN model was able to predict landslide susceptibility for Himalayan areas with an accuracy rivaling that of DNNs, but most importantly, the researchers could tease apart the variables to see which ones played bigger roles in producing the results.
    “Similar to how autopsies are required to determine the cause of death, identifying the exact trigger for a landslide will always require field measurements and historical records of soil, hydrologic and climate conditions, such as rainfall amount and intensity, which can be hard to obtain in remote places like the Himalayas,” Moon said. “Nonetheless, our new AI prediction model can identify key variables and quantify their contributions to landslide susceptibility.”
    The researchers’ new AI program also requires far fewer computer resources than DNNs and can run effectively with relatively little computing power.
    “The SNN is so small it can run on an Apple Watch, as opposed to DNNs, which require powerful computer servers to train,” Bouchard said.
    The team plans to extend their work to other landslide-prone regions of the world. In California, for example, where landslide risk is exacerbated by frequent wildfires and earthquakes, and in similar areas, the new system may help develop early warning systems that account for a multitude of signals and predict a range of other surface hazards, including floods.

  • Fiber optic smart pants offer a low-cost way to monitor movements

    With an aging global population comes a need for new sensor technologies that can help clinicians and caregivers remotely monitor a person’s health. New smart pants based on fiber optic sensors could help by offering a nonintrusive way to track a person’s movements and issue alerts if there are signs of distress.
    “Our polymer optical fiber smart pants can be used to detect activities such as sitting, squatting, walking or kicking without inhibiting natural movements,” said research team leader Arnaldo Leal-Junior from the Federal University of Espirito Santo in Brazil. “This approach avoids the privacy issues that come with image-based systems and could be useful for monitoring aging patients at home or measuring parameters such as range of motion in rehabilitation clinics.”
    In the Optica Publishing Group journal Biomedical Optics Express, the researchers describe the new smart pants, which feature transparent optical fibers directly integrated into the textile. They also developed a portable signal acquisition unit that can be placed inside the pants pocket.
    “This research shows that it is possible to develop low-cost wearable sensing systems using optical devices,” said Leal-Junior. “We also demonstrate that new machine learning algorithms can be used to extend the sensing capabilities of smart textiles and possibly enable the measurement of new parameters.”
    Creating fiber optic pants
    The research is part of a larger project focused on the development of photonic textiles for low-cost wearable sensors. Although devices like smartwatches can tell if a person is moving, the researchers wanted to develop a technology that could detect specific types of activity without hindering movement in any way. They did this by incorporating intensity variation polymer optical fiber sensors directly into fabric that was then used to create pants.

    The sensors were based on polymethyl methacrylate optical fibers 1 mm in diameter. The researchers created sensitive areas in the fibers by removing small sections of the outer cladding to expose the fiber core. When the fiber bends due to movement, the optical power traveling through it changes, and that change can be used to identify what type of physical modification was applied to the sensitive area of the fiber.
    By creating these sensitive fiber areas in various locations, the researchers created a multiplexed sensor system with 30 measurement points on each leg. They also developed a new machine learning algorithm to classify different types of activities and to classify gait parameters based on the sensor data.
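As a rough illustration of the classification step (the authors’ actual algorithm is not reproduced here), even a nearest-centroid classifier over the 30-channel intensity readings captures the idea: each activity leaves a characteristic pattern of optical power loss across the sensitive zones. Activity names and signal patterns below are synthetic.

```python
import random

N_CHANNELS = 30  # measurement points per leg, as in the paper

def centroid(samples):
    """Channel-wise mean of a list of readings."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(reading, centroids):
    """Assign the reading to the activity with the closest mean pattern."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(reading, centroids[label]))

# Synthetic data: each activity leaves a distinct power pattern plus noise.
random.seed(0)
base = {"walking": [1.0] * N_CHANNELS,
        "squatting": [0.2] * N_CHANNELS,
        "kicking": [0.6] * N_CHANNELS}
def noisy(pattern):
    return [v + random.gauss(0, 0.05) for v in pattern]
train = {act: [noisy(p) for _ in range(20)] for act, p in base.items()}
centroids = {act: centroid(s) for act, s in train.items()}
print(classify(noisy(base["squatting"]), centroids))  # squatting
```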
    Classifying activities
    To test their prototype, the researchers had volunteers wear the smart pants and perform specific activities: slow walking, fast walking, squatting, sitting on a chair, sitting on the floor, front kicking and back kicking. The sensing approach achieved 100% accuracy in classifying these activities.
    “Fiber optic sensors have several advantages, including the fact that they are immune to electric or magnetic interference and can be easily integrated into different clothing accessories due to their compactness and flexibility,” said Leal-Junior. “Basing the device on a multiplexed optical power variation sensor also makes the sensing approach low-cost and highly reliable.”
    The researchers are now working to connect the signal acquisition unit to the cloud, which would enable the data to be accessed remotely. They also plan to test the smart textile in home settings.
    This work was performed at LabSensors/LabTel within the scope of an assistive technologies framework funded by the Brazilian agencies FINEP and CNPq.

  • ‘Electronic skin’ from bio-friendly materials can track human vital signs with ultrahigh precision

    Researchers from Queen Mary University of London and the University of Sussex have used materials inspired by molecular gastronomy to create smart wearables that surpass similar devices in strain sensitivity. They integrated graphene into seaweed to create nanocomposite microcapsules for highly tunable and sustainable epidermal electronics. When assembled into networks, the tiny capsules can record muscular, breathing, pulse and blood pressure measurements in real time with ultrahigh precision.
    Currently much of the research on nanocomposite-based sensors is related to non-sustainable materials. This means that these devices contribute to plastic waste when they are no longer in use. A new study, published on 28 June in Advanced Functional Materials, shows for the first time that it is possible to combine molecular gastronomy concepts with biodegradable materials to create such devices that are not only environmentally friendly, but also have the potential to outperform the non-sustainable ones.
    Scientists used seaweed and salt, two very commonly used materials in the restaurant industry, to create graphene capsules made up of a solid seaweed/graphene gel layer surrounding a liquid graphene ink core. This technique is similar to how Michelin star restaurants serve capsules with a solid seaweed/raspberry jam layer surrounding a liquid jam core.
    Unlike the molecular gastronomy capsules though, the graphene capsules are very sensitive to pressure; so, when squeezed or compressed, their electrical properties change dramatically. This means that they can be utilised as highly efficient strain sensors and can facilitate the creation of smart wearable skin-on devices for high precision, real-time biomechanical and vital signs measurements.
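Strain sensing of this kind is commonly quantified by a gauge factor: the fractional change in an electrical property per unit strain. A back-of-envelope sketch of the idea (the gauge-factor value is invented for illustration and is not taken from the study):

```python
def resistance(r0, gauge_factor, strain):
    """Piezoresistive response: R = R0 * (1 + GF * strain)."""
    return r0 * (1 + gauge_factor * strain)

R0, GF = 100.0, 50.0               # ohms; the gauge factor is hypothetical
for strain in (0.0, 0.001, 0.01):  # 0%, 0.1% and 1% deformation
    print(strain, resistance(R0, GF, strain))
```

A large gauge factor means a small squeeze produces a large, easily read-out electrical change, which is what the paper means by the capsules being “very sensitive to pressure.”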
    Dr Dimitrios Papageorgiou, Lecturer in Materials Science at Queen Mary University of London, said: “By introducing a ground-breaking fusion of culinary artistry and cutting-edge nanotechnology, we harnessed the extraordinary properties of newly-created seaweed-graphene microcapsules that redefine the possibilities of wearable electronics. Our discoveries offer a powerful framework for scientists to reinvent nanocomposite wearable technologies for high precision health diagnostics, while our commitment to recyclable and biodegradable materials is fully aligned with environmentally conscious innovation.”
    This research can now be used as a blueprint by other labs to understand and manipulate the strain sensing properties of similar materials, pushing the concept of nano-based wearable technologies to new heights.
    The environmental impact of plastic waste has had a profound effect on our livelihoods and there is a need for future plastic-based epidermal electronics to trend towards more sustainable approaches. The fact that these capsules are made using recyclable and biodegradable materials could impact the way we think about wearable sensing devices and the effect of their presence.
    Dr Papageorgiou said: “We are also very proud of the collaborative effort between Dr Conor Boland’s group from University of Sussex and my group from Queen Mary University of London that fuelled this ground-breaking research. This partnership exemplifies the power of scientific collaboration, bringing together diverse expertise to push the boundaries of innovation.”

  • Research breakthrough could be significant for quantum computing future

    Scientists using one of the world’s most powerful quantum microscopes have made a discovery that could have significant consequences for the future of computing.
    Researchers at the Macroscopic Quantum Matter Group laboratory at University College Cork (UCC) have discovered a spatially modulating superconducting state in a new and unusual superconductor, uranium ditelluride (UTe2). This new superconductor may provide a solution to one of quantum computing’s greatest challenges.
    Their finding has been published in the journal Nature.
    Lead author Joe Carroll, a PhD researcher working with UCC Prof. of Quantum Physics Séamus Davis, explains the subject of the paper.
    “Superconductors are amazing materials which have many strange and unusual properties. Most famously, they allow electricity to flow with zero resistance. That is, if you pass a current through them they don’t start to heat up; in fact, they don’t dissipate any energy despite carrying a huge current. They can do this because instead of individual electrons moving through the metal we have pairs of electrons which bind together. These pairs of electrons together form a macroscopic quantum mechanical fluid.”
    “What our team found was that some of the electron pairs form a new crystal structure embedded in this background fluid. These types of states were first discovered by our group in 2016 and are now called Electron Pair-Density Waves. These Pair Density Waves are a new form of superconducting matter the properties of which we are still discovering.”
    “What is particularly exciting for us and the wider community is that UTe2 appears to be a new type of superconductor. Physicists have been searching for a material like it for nearly 40 years. The pairs of electrons appear to have intrinsic angular momentum. If this is true, then what we have detected is the first Pair-Density Wave composed of these exotic pairs of electrons.”

    When asked about the practical implications of this work Mr. Carroll explained:
    “There are indications that UTe2 is a special type of superconductor that could have huge consequences for quantum computing.”
    “Typical, classical, computers use bits to store and manipulate information. Quantum computers rely on quantum bits or qubits to do the same. The problem facing existing quantum computers is that each qubit must be in a superposition with two different energies — just as Schrödinger’s cat could be called both ‘dead’ and ‘alive’. This quantum state is very easily destroyed by collapsing into the lowest energy state — ‘dead’ — thereby cutting off any useful computation.
    “This places huge limits on the application of quantum computers. However, since its discovery five years ago there has been a huge amount of research on UTe2, with evidence pointing to it being a superconductor which may be used as a basis for topological quantum computing. In such materials there is no limit on the lifetime of the qubit during computation, opening up many new ways for more stable and useful quantum computers. In fact, Microsoft have already invested billions of dollars into topological quantum computing, so this is a well-established theoretical science already,” he said.
    “What the community has been searching for is a relevant topological superconductor; UTe2 appears to be that.”
    “What we’ve discovered then provides another piece to the puzzle of UTe2. To make applications using materials like this we must understand their fundamental superconducting properties. All of modern science moves step by step. We are delighted to have contributed to the understanding of a material which could bring us closer to much more practical quantum computers.”
    Congratulating the research team at the Macroscopic Quantum Matter Group Laboratory in University College Cork, Professor John F. Cryan, Vice President for Research and Innovation, said:
    “This important discovery will have significant consequences for the future of quantum computing. In the coming weeks, the University will launch UCC Futures — Future Quantum and Photonics and research led by Professor Seamus Davis and the Macroscopic Quantum Matter Group, with the use of one of the world’s most powerful microscopes, will play a crucial role in this exciting initiative.”

  • Quantum computers could break the internet. Here’s how to save it

    Keeping secrets is hard. Kids know it. Celebrities know it. National security experts know it, too.

    And it’s about to get even harder.

    There’s always someone who wants to get at the juicy details we’d rather keep hidden. Yet at every moment, untold volumes of private information are zipping along internet cables and optical fibers. That information’s privacy relies on encryption, a way to mathematically scramble data to prevent any snoops from deciphering it — even with the help of powerful computers.

    But the mathematical basis of these techniques is under threat from a foe that has, until recently, seemed hypothetical: quantum computers.

    In the 1990s, scientists realized that these computers could exploit the weird physics of the minuscule realm of atoms and electrons to perform certain types of calculations out of reach for standard computers. That means that once the quantum machines are powerful enough, they could crack the mathematical padlocks on encrypted data, laying bare the world’s secrets.

    Today’s quantum computers are far too puny to defeat current security measures. But with more powerful quantum machines being regularly rolled out by the likes of IBM and Google, scientists, governments and others are beginning to take action. Experts are spreading the word that it’s time to prepare for a milestone some are calling Y2Q. That’s the year that quantum computers will gain the ability to crack the encoding schemes that keep electronic communications secure.

    “If that encryption is ever broken,” says mathematician Michele Mosca, “it would be a systemic catastrophe.”

    Y2Q is coming. What does it mean?

    Encryption pervades digital life — safeguarding emails, financial and medical data, online shopping transactions and more. Encryption is also woven into a plethora of physical devices that transmit information, from cars to robot vacuums to baby monitors. Encryption even secures infrastructure such as power grids. The tools Y2Q threatens are everywhere. “The stakes are just astronomically high,” says Mosca, of the University of Waterloo in Canada, who is also CEO of the cybersecurity company evolutionQ.

    The name Y2Q alludes to the infamous Y2K bug, which threatened to create computer havoc in the year 2000 because software typically used only two digits to mark the year (SN: 1/2/99, p. 4). Y2Q is a similarly systemic issue, but in many ways, it’s not a fair comparison. The fix for Y2Q is much more complex than changing how dates are represented, and computers are now even more inextricably entwined into society than two decades ago. Plus, no one knows when Y2Q will arrive.

    Confronted with the Y2Q threat, cryptography — the study and the practice of techniques used to encode information — is facing an overhaul. Scientists and mathematicians are now working urgently to prepare for that unknown date by devising new ways of encrypting data that won’t be susceptible to quantum decoding. An effort headed by the U.S. National Institute of Standards and Technology, or NIST, aims to release new standards for such post-quantum cryptography algorithms next year.

    Meanwhile, a longer-term effort takes a can’t-beat-’em-join-’em approach: using quantum technology to build a more secure, quantum internet. Scientists around the world are building networks that shuttle quantum information back and forth between cities, chasing the dream of communication that theoretically could be immune to hacking.

    How public-key cryptography works

    If you want to share a secret message with someone, you can encrypt it, garbling the information in such a way that only someone holding the right key can decode it later.

    Schoolkids might do this with a simple cipher: For example, replace the letter A with the number 1, B with 2 and so on. Anyone who knows this secret key used to encrypt the message can later decode the message and read it — whether it’s the intended recipient or another sneaky classmate.

    It’s a simplified example of what’s called symmetric-key cryptography: The same key is used to encode and decode a message. In a more serious communication, the key would be much more complex — essentially impossible for anyone to guess. But in both cases, the same secret key is used to encode and decode.
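The schoolkid cipher above fits in a few lines of Python. The letter-to-number substitution is the shared secret key, and, as the article notes, anyone who holds it can decode the message:

```python
# Toy symmetric cipher: A -> 1, B -> 2, ... The mapping itself is the shared
# key. Real symmetric schemes (e.g. AES) use far more complex keys, but the
# principle is the same: one secret both encodes and decodes.
def encode(message):
    return [ord(c) - ord("A") + 1 for c in message]

def decode(numbers):
    return "".join(chr(n + ord("A") - 1) for n in numbers)

secret = encode("HELLO")   # [8, 5, 12, 12, 15]
print(decode(secret))      # anyone holding the key recovers "HELLO"
```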

    This strategy was used in cryptography for millennia, says computer scientist Peter Schwabe of the Max Planck Institute for Security and Privacy in Bochum, Germany. “It was either used in a military context or it was used between lovers that were not supposed to love each other.”

    But in the globally connected modern world, symmetric-key cryptography has a problem. How do you get the secret key to someone on the other side of the planet, someone you’ve never met, without anyone else getting their hands on it?

    To solve this quandary, in the 1970s cryptographers devised public-key cryptography, which uses special mathematical tricks to solve the symmetric-key conundrum. It uses two different, mathematically related keys. A public key is used to encrypt messages, and a mathematically related private key decodes them. Say Alice wants to send a message to Bob. She looks up his public key and uses it to scramble her communication. Only Bob, with his private key, can decode it. To any snoops that intercept the message, it’s meaningless.

    Public-key techniques are also used to create digital signatures. These signatures verify that someone online really is who they say they are, so you know you’re really downloading that new app from Apple, not some nefarious impersonator. Only the owner of a private key can sign the message, but anyone can use the public key to verify its authenticity.

    The public-key cryptography that permeates the internet is directly vulnerable to full-scale quantum computers. What’s more, symmetric-key cryptography often relies on public-key cryptography to share the secret key needed to communicate. That puts the majority of internet security under threat.

    Why quantum computers will threaten public-key cryptography

    If public-key encryption keeps your data hidden away under the floorboards, then to read that information, you need to build a way in. You have to be able to access the data with your private key. “There’s got to be a secret door somewhere in there, where if I knock the right way, it opens up,” Mosca says.

    Constructing such a trapdoor demands special mathematical tactics, based on operations that are easy to perform in one direction but hard in the opposite direction. Multiplying two prime numbers together is quick work for a computer, even if the numbers are very large. But it’s much more time-consuming for a computer to calculate the primes from their product. For large enough numbers, it’s impossible to do in a practical amount of time with a standard computer.

    The challenge of finding the prime factors of a large number is behind one of the main types of public-key encryption used today, known as RSA. A hacker using a classical computer wouldn’t be able to deduce the private key from the public key. Another math problem, known as the discrete logarithm problem, is a similar one-way street.

    These two mathematical problems underlie nearly all of the public-key cryptography in use today. But a sufficiently powerful quantum computer would blow their trapdoors wide open. “All of those public-key algorithms are vulnerable to an attack that can only be carried out by a quantum computer,” says mathematician Angela Robinson of NIST, in Gaithersburg, Md. “Our whole digital world is relying on quantum-vulnerable algorithms.”

    This vulnerability came to light in 1994, when mathematician Peter Shor, now at MIT, came up with an algorithm that would allow quantum computers to solve both of these math problems. In quantum machines, the bits, called qubits, can take on values of 0 and 1 simultaneously, a state known as a superposition. And qubits can be linked with one another through the quantum connection called entanglement, enabling new tactics like Shor’s (SN: 7/8/17 & 7/22/17, p. 34).

    “Back then, that was an interesting theoretical paper. Quantum computers were a distant dream,” says mathematician Dustin Moody of NIST, “but it wasn’t a practical threat.” Since then, there’s been a quantum computing boom (SN: 7/8/17 & 7/22/17, p. 28).

    The machines are being built using qubits made from various materials — from individual atoms to flecks of silicon to superconductors (which conduct electricity without resistance) — but all calculate according to quantum rules. IBM’s superconducting quantum computer Osprey, for example, has 433 qubits. That’s up from the five qubits of the computer IBM unveiled in 2016. The company plans to roll out one with more than a thousand qubits this year.

    That’s still far from the Y2Q threshold: To break RSA encryption, a quantum computer would need 20 million qubits, researchers reported in 2021 in Quantum.

    Mosca estimates that in the next 15 years, there’s about a 50 percent chance of a quantum computer powerful enough to break standard public-key encryption. That may seem like a long time, but experts estimate that previous major cryptography overhauls have taken around 15 years. “This is not a Tuesday patch,” Mosca says.

    The threat is even more pressing because the data we send today could be vulnerable to quantum computers that don’t exist yet. Hackers could harvest encrypted information now, and later decode it once a powerful quantum computer becomes available, Mosca says. “It’s just bad news if we don’t get ahead of this.”

    New algorithms could safeguard our security

    Getting ahead of the problem is the aim of Moody, Robinson and others who are part of NIST’s effort to select and standardize post-quantum encryption and digital signatures. Such techniques would have to thwart hackers using quantum machines, while still protecting from classical hacks.

    After NIST put out a call for post-quantum algorithms in 2016, the team received dozens of proposed schemes. The researchers sorted through the candidates, weighing considerations including the level of security provided and the computational resources needed for each. Finally, in July 2022, NIST announced four schemes that had risen to the top. Once the final standards for those algorithms are ready in 2024, organizations can begin making the post-quantum leap. Meanwhile, NIST continues to consider additional candidates.

    In parallel with NIST’s efforts, others are endorsing the post-quantum endeavor. In May 2022, the White House put out a memo setting 2035 as the goal for U.S. government agencies to go post-quantum. In November, Google announced it is already using post-quantum cryptography in internal communications.

    Several of the algorithms selected by NIST share a mathematical basis — a technique called lattice-based cryptography. It relies on a problem involving describing a lattice, or a grid of points, using a set of arrows, or vectors.

    In math, a lattice is described by a set of vectors used to produce it. Consider Manhattan. Even if you’d never seen a map of the city, you could roughly reproduce its grid using two arrows, one the length and direction of an avenue block and the other matching a street block. Discounting the city’s quirks, such as variations in block lengths, you’d just place arrows end-to-end until you’ve mapped out the whole grid.

    But there are more complicated sets of vectors that can reproduce the city’s grid. Picture two arrows starting, for example, at Washington Square Park in lower Manhattan, with one pointing to Times Square in Midtown and the other to a neighboring landmark, the Empire State Building. Properly chosen, two such vectors could also be used — with more difficulty — to map out the city’s grid.

    A math problem called the shortest vector problem asks: Given a set of long vectors that generate a lattice, what is the shortest vector that can be used as part of a set to produce the grid? If all you knew about the city was the location of those three landmarks, it’d be quite a task to back out the shortest vector corresponding to the city’s blocks.

    Now, picture doing that not for a 2-D map, but in hundreds of dimensions. That’s a problem thought to be so difficult that no computer, quantum or classical, could do it in a reasonable amount of time.

    The difficulty of that problem is what underlies the strength of several post-quantum cryptography algorithms. In lattice-based cryptography, a short vector is used to create the private key, and the long vectors produce the public key.

    Other post-quantum schemes NIST considered are based on different math problems. To choose among the options, NIST mathematicians’ chief consideration was the strength of each algorithm’s security. But none of these algorithms are definitively proved to be secure against quantum computers, or even classical ones. One algorithm originally considered by NIST, called SIKE, was later broken. It took just 10 minutes to crack on a standard computer, researchers reported in April in Advances in Cryptology – EUROCRYPT 2023.

    Although it might seem like a failure, the SIKE breakdown can be considered progress. The faith in the security of cryptographic algorithms comes from a trial by fire. “The more [that] smart people try to break something and fail, the more confidence we can get that it’s actually hard to break it,” Schwabe says. Some algorithms must perish in the process.

    A quantum internet could bolster security

    Quantum physics taketh away, but also, it gives. A different quantum technique can allow communication with mathematically proved security. That means a future quantum internet could, theoretically at least, be fully safe from both quantum and classical hacks.

    By transmitting photons — particles of light — and measuring their properties upon arrival, it’s possible to generate a shared private key that is verifiably safe from eavesdroppers.

    This quantum key distribution, or QKD, relies on a principle of quantum physics called the no-cloning theorem. Essentially, it’s impossible to copy quantum information. Any attempt to do so will alter the original information, revealing that someone was snooping. “Someone who was trying to learn that information would basically leave a fingerprint behind,” says quantum engineer Nolan Bitner of Argonne National Laboratory in Lemont, Ill.

    This quirk of quantum physics allows two people to share a secret key and, by comparing notes, determine whether the key has been intercepted along the way. If those comparisons don’t match as expected, someone was eavesdropping. The communicators discard their key and start over. If there is no sign of foul play, they can safely use their shared secret key to encrypt their communication and send it over the standard internet, certain of its security. It’s a quantum solution to the quandary of how two parties can share secret keys without ever meeting. There’s no need for a mathematical trapdoor that might be vulnerable to an undiscovered tactic.

    But QKD can’t be done over normal channels. It requires quantum networks, in which photons are created, sent zipping along optical fibers and manipulated at the other end.

    Such networks already snake through select cities in the world. One threads through Chicago suburbs from the University of Chicago to Argonne lab and Fermilab in Batavia, for a total of 200 kilometers. In China, an extensive network connects cities along a more than 2,000-kilometer backbone that wends from Beijing to Shanghai, along with two quantum satellites that beam photons through the air. A quantum network crisscrosses South Korea, and another links several U.K. cities. There are networks in Tokyo and the Netherlands — the list goes on, with more to come.

    A quantum network in China extends more than 2,000 kilometers from Beijing to Shanghai and includes a quantum satellite that beams photons to ground stations in Xinglong and Nanshan. Other quantum networks are being built and tested around the world. Y.-A. CHEN ET AL/NATURE 2021, ADAPTED BY C. CHANG

    Many of these networks are test-beds used by researchers to study the technology outside of a lab. But some are getting real-world use. Banks use China’s network, and South Korea’s links government agencies. Companies such as ID Quantique, based in Switzerland, offer commercial QKD devices.

    QKD’s security is mathematically proven, but quantum networks can fall short of that guarantee in practice. The difficulty of creating, transmitting, detecting and storing quantum particles can open loopholes. Devices and networks must be painstakingly designed and tested to ensure a hacker can’t game the system.

    And one missing component in particular is holding quantum networks back. “The number one device is quantum memory,” says quantum physicist Xiongfeng Ma of Tsinghua University in Beijing. When sending quantum information over long distances through fibers, particles can easily get lost along the way. For distances greater than about 100 kilometers, that makes quantum communication impractical without the use of way stations that amplify the signal. Such way stations temporarily convert data into classical, rather than quantum, information. That classical step means hackers could target these “trusted nodes” undetected, marring QKD’s pristine security. And it limits what quantum maneuvers the networks can do.

    It’s not possible to create pairs of particles that are entangled over long distances in a network like this. But special stations sprinkled throughout the network, called quantum repeaters, could solve the problem by storing information in a quantum memory. To create far-flung entangled particles, scientists could first entangle sets of particles over short distances, storing them in quantum memories at each quantum repeater. Performing certain operations on the entangled particles could leapfrog that entanglement to other particles farther apart. By repeating this process, particles could be entangled across extended distances.

    But, thanks in part to quantum particles’ tendency to be easily perturbed by outside influences, scientists have yet to develop a practical quantum repeater. “When that does appear, it’s likely to catalyze global quantum networks,” says David Awschalom, a physicist at the University of Chicago. Not only will such technologies allow longer distances and better security for QKD, but they will also enable more complicated tasks, like entangling distant quantum computers to allow them to work together.

    A European effort called the Quantum Internet Alliance aims to build a network with quantum repeaters by the end of 2029, creating a backbone stretching over 500 kilometers, in addition to two metropolitan-scale networks. The effort is “super challenging,” says physicist and computer scientist Stephanie Wehner of Delft University of Technology in the Netherlands. “We are on a moon shot mission.” Eventually, scientists envision a global quantum internet.

    Awschalom imagines the networks becoming accessible to all. “Wouldn’t it be great to be able to go to a public library and be able to get onto a quantum network?”

    A link between a ground station (red and green lasers shown in this time-lapse image) and the quantum satellite Micius shows the potential for long-distance secure communications. The satellite beams photons to the ground station in Xinglong, China. JIN LIWANG/XINHUA/ALAMY LIVE NEWS

    What does the future of cryptography look like?

    QKD and post-quantum cryptography are complementary. “In order to overcome the threat of the quantum computers we need both,” says physicist Nicolas Gisin of the University of Geneva and cofounder of ID Quantique. When people are exchanging information that doesn’t require the utmost security — say, using a mobile phone to post cat memes on Reddit — post-quantum cryptography will be more practical, as it doesn’t demand a to-and-fro of individual quantum particles. But “there are really situations where we want to make sure that the security is going to last … for several decades, and post-quantum cryptography cannot guarantee that,” Gisin says.

    Eventually, quantum techniques could allow for even more advanced types of security, such as blind quantum computing. In that scheme, a user could compute something on a remote quantum computer without anyone being able to determine what they’re computing. A technique called covert quantum communication would allow users to communicate securely while hiding that they were exchanging messages at all. And device-independent QKD would ensure security even if the devices used to communicate are potentially flawed (SN: 8/27/22, p. 10).

    The appeal of such extreme secrecy, of course, depends upon whether you’re the secret-keeper or the snoop. In the United States, government agencies like the FBI, CIA and the National Security Agency have argued that encryption makes it difficult to eavesdrop on criminals or terrorists. The agencies have a history of advocating for back doors that would let them in on encrypted communications — or building in secret back doors.

    But quantum techniques, done properly, can prevent anyone from intercepting secrets, even powerful government agencies.

    “It’s interesting to think about a world where, in principle, one might imagine perfect security,” Awschalom says. “Is that a good thing or is that a bad thing?”

    Boy fly meets girl fly meets AI: Training an AI to recognize fly mating identifies a gene for mating positions

    A research group at the Graduate School of Science, Nagoya University in Japan has used artificial intelligence to determine that Piezo, a channel that receives mechanical stimuli, plays a role in controlling the mating posture of male fruit flies (Drosophila melanogaster). Inhibition of Piezo led the flies to adopt an ineffective mating posture that decreased their reproductive performance. Their findings were reported in iScience.
    Most previous studies of animal mating have been limited to behavioral observations, which constrains our understanding of this essential process. Since many animals adopt a fixed posture during copulation, maintaining an effective mating position is vital for reproductive success. In fruit flies, the male mounts the female and holds this posture at least until he transfers sufficient sperm to fertilize her, which occurs about 8 minutes after copulation begins. The Nagoya University research group suspected that some factor was involved in maintaining this copulation posture.
    A likely contender is Piezo, a family of mechanically activated transmembrane ion channels found in bristle cells, the sensory cells on the male genitalia. Piezo channels open when a mechanical force is applied to a cell membrane, allowing ions to flow through and generate an electrical signal. This signal triggers cellular responses, including the release of neurotransmitters in neurons and the contraction of muscle cells. Such feedback helps a fly maintain his mating position.
    After identifying that the piezo gene is involved in the mating of fruit flies, Professor Azusa Kamikouchi (she/her), Assistant Professor Ryoya Tanaka (he/him), and student Hayato M. Yamanouchi (he/him) used optogenetics to further explore the neural mechanism of this phenomenon. This technique combines genetic engineering and optics to create genetically modified neurons that can be inactivated with light of specific wavelengths. When the light was turned on during mating, these neurons were silenced, allowing the researchers to manipulate the activity of piezo-expressing neurons.
    “This step proved to be a big challenge for us,” Kamikouchi said. “Using optogenetics, specific neurons are silenced only when exposed to photostimulation. However, our interest was silencing neural activity during copulation. Therefore, we had to make sure that the light was only turned on during mating. However, if the experimenter manually turned the photostimulation on in response to the animal’s copulation, they needed to observe the animal throughout the experiment. Waiting around for fruit flies to mate is incredibly time-consuming.”
    The observation problem led the group to build a deep learning system that could recognize copulation. By training the AI to recognize when copulation was occurring, they could automatically control the photostimulation. This allowed them to discover that when piezo-expressing neurons were inhibited, males adopted a wonky, largely ineffective mating posture. As one might expect, the males that had difficulty adopting an appropriate sexual position had fewer offspring. The researchers concluded that a key role of the piezo gene is helping the male shift his axis in response to the female for maximum mating success.
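The closed-loop idea — an AI watches the video feed and gates the light — can be sketched in a few lines. The names here are hypothetical stand-ins: `classify` for the trained deep-learning model and `set_light` for the photostimulation hardware; the real system analyzed live video in real time.

```python
# Closed-loop optogenetics sketch: photostimulate only while the
# (hypothetical) classifier says copulation is happening.
def control_loop(frames, classify, set_light):
    history = []
    for frame in frames:
        mating = classify(frame)     # AI decides: is copulation happening?
        set_light(mating)            # light on only during copulation
        history.append(mating)
    return history

# Dry run with a fake classifier standing in for the trained network.
light_log = []
states = control_loop([0, 1, 1, 0], classify=lambda f: bool(f),
                      set_light=light_log.append)
assert states == [False, True, True, False]
assert light_log == states
```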
    “Piezo proteins have been implicated in a variety of physiological processes, including touch sensation, hearing, blood pressure regulation, and bladder function,” said Kamikouchi. “Now our findings suggest that reproduction can be added to the list. Since mating is an important behavior for reproduction that is widely conserved in animals, understanding its control mechanism will lead to a greater understanding of the reproductive system of animals in general.”
    Kamikouchi is enthusiastic about the use of AI in such research. “With the recent development of informatics, experimental systems and analysis methods have advanced dramatically,” she concludes. “In this research, we have succeeded in creating a device that automatically detects mating using machine learning-based real-time analysis and controls photostimulation necessary for optogenetics. To investigate the neural mechanisms that control animal behavior, it is important to conduct experiments in which neural activity is manipulated only when an individual exhibits a specific behavior. The method established in this study can be applied not only to the study of mating in fruit flies but also to various behaviors in other animals. It should make a significant contribution to the promotion of neurobiological research.”