More stories

  • The quantum internet just went live on Verizon’s network

    In a first-of-its-kind experiment, engineers at the University of Pennsylvania brought quantum networking out of the lab and onto commercial fiber-optic cables using the same Internet Protocol (IP) that powers today’s web. Reported in Science, the work shows that fragile quantum signals can run on the same infrastructure that carries everyday online traffic. The team tested their approach on Verizon’s campus fiber-optic network.
    The Penn team’s tiny “Q-chip” coordinates quantum and classical data and, crucially, speaks the same language as the modern web. That approach could pave the way for a future “quantum internet,” which scientists believe may one day be as transformative as the dawn of the online era.
    Quantum signals rely on pairs of “entangled” particles, so closely linked that changing one instantly affects the other. Harnessing that property could allow quantum computers to link up and pool their processing power, enabling advances like faster, more energy-efficient AI or designing new drugs and materials beyond the reach of today’s supercomputers.
    Penn’s work shows, for the first time on live commercial fiber, that a chip can not only send quantum signals but also automatically correct for noise, bundle quantum and classical data into standard internet-style packets, and route them using the same addressing system and management tools that connect everyday devices online.
    “By showing an integrated chip can manage quantum signals on a live commercial network like Verizon’s, and do so using the same protocols that run the classical internet, we’ve taken a key step toward larger-scale experiments and a practical quantum internet,” says Liang Feng, Professor in Materials Science and Engineering (MSE) and in Electrical and Systems Engineering (ESE), and the Science paper’s senior author.
    The Challenges of Scaling the Quantum Internet
    Erwin Schrödinger, who coined the term “quantum entanglement,” famously illustrated the concept with a cat hidden in a box. If the lid is closed, and the box also contains radioactive material, the cat could be alive or dead. One way to interpret the situation is that the cat is both alive and dead. Only opening the box confirms the cat’s state.

    That paradox is roughly analogous to the unique nature of quantum particles. Once measured, they lose their unusual properties, which makes scaling a quantum network extremely difficult.
    “Normal networks measure data to guide it towards the ultimate destination,” says Robert Broberg, a doctoral student in ESE and coauthor of the paper. “With purely quantum networks, you can’t do that, because measuring the particles destroys the quantum state.”
    Coordinating Classical and Quantum Signals
    To get around this obstacle, the team developed the “Q-Chip” (short for “Quantum-Classical Hybrid Internet by Photonics”) to coordinate “classical” signals, made of regular streams of light, and quantum particles. “The classical signal travels just ahead of the quantum signal,” says Yichi Zhang, a doctoral student in MSE and the paper’s first author. “That allows us to measure the classical signal for routing, while leaving the quantum signal intact.”
    In essence, the new system works like a railway, pairing regular light locomotives with quantum cargo. “The classical ‘header’ acts like the train’s engine, while the quantum information rides behind in sealed containers,” says Zhang. “You can’t open the containers without destroying what’s inside, but the engine ensures the whole train gets where it needs to go.”
    Because the classical header can be measured, the entire system can follow the same “IP” or “Internet Protocol” that governs today’s internet traffic. “By embedding quantum information in the familiar IP framework, we showed that a quantum internet could literally speak the same language as the classical one,” says Zhang. “That compatibility is key to scaling using existing infrastructure.”
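    The division of labor described above, in which a measurable classical header steers a payload that must never be measured, can be sketched in a few lines of Python. The class, field names and routing table below are purely illustrative stand-ins, not the actual Q-chip interface:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class HybridPacket:
    """Illustrative hybrid packet: an IP-style classical header that
    travels just ahead of an untouched quantum payload."""
    dest_ip: str             # classical header field: safe to measure
    quantum_payload: object  # entangled photon state: must never be measured

# Hypothetical routing table mapping destination subnets to output ports.
ROUTING_TABLE = {
    "10.0.1.0/24": "port-A",
    "10.0.2.0/24": "port-B",
}

def route(packet: HybridPacket) -> str:
    # Routing reads ONLY the classical header; the quantum payload is
    # forwarded along the same fiber without ever being measured.
    addr = ipaddress.ip_address(packet.dest_ip)
    for prefix, port in ROUTING_TABLE.items():
        if addr in ipaddress.ip_network(prefix):
            return port
    return "drop"

pkt = HybridPacket(dest_ip="10.0.2.17", quantum_payload="|psi> (opaque)")
print(route(pkt))  # -> port-B
```

    Because only the header is ever inspected, standard IP tooling can make every forwarding decision while the quantum state rides through untouched.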
    Adapting Quantum Technology to the Real World

    One of the greatest challenges to transmitting quantum particles on commercial infrastructure is the variability of real-world transmission lines. Unlike laboratory environments, which can maintain ideal conditions, commercial networks frequently encounter changes in temperature, thanks to weather, as well as vibrations from human activities like construction and transportation, not to mention seismic activity.
    To counteract this, the researchers developed an error-correction method that takes advantage of the fact that interference to the classical header will affect the quantum signal in a similar fashion. “Because we can measure the classical signal without damaging the quantum one,” says Feng, “we can infer what corrections need to be made to the quantum signal without ever measuring it, preserving the quantum state.”
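    As a stylized illustration of that inference, one can model the fiber noise as a common phase drift picked up by both signals. The single-qubit phase model and the numbers below are invented for illustration; the real photonic hardware is far more involved:

```python
import numpy as np

def estimate_drift(header_sent: complex, header_received: complex) -> float:
    """Measure the phase drift the fiber imposed on the classical header.
    Only the classical light is ever measured."""
    return float(np.angle(header_received / header_sent))

def correct_quantum(state: np.ndarray, drift: float) -> np.ndarray:
    """Undo the inferred drift with a unitary phase rotation applied to
    the quantum state: a correction, not a measurement."""
    correction = np.array([[1, 0], [0, np.exp(-1j * drift)]])
    return correction @ state

# Toy run: the same fiber noise hits the header and the payload alike.
fiber_drift = 0.42                                 # unknown phase added by the fiber
header_sent, header_received = 1 + 0j, np.exp(1j * fiber_drift)
qubit = np.array([1, 1]) / np.sqrt(2)              # |+> superposition state
noisy_qubit = np.array([[1, 0], [0, np.exp(1j * fiber_drift)]]) @ qubit

restored = correct_quantum(noisy_qubit, estimate_drift(header_sent, header_received))
print(np.allclose(restored, qubit))  # -> True
```

    The key point mirrors the quote above: the drift is estimated entirely from the classical signal, so the quantum state is corrected without ever collapsing it.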
    In testing, the system maintained transmission fidelities above 97%, showing that it could overcome the noise and instability that usually destroy quantum signals outside the lab. And because the chip is made of silicon and fabricated using established techniques, it could be mass produced, making the new approach easy to scale.
    “Our network has just one server and one node, connecting two buildings, with about a kilometer of fiber-optic cable installed by Verizon between them,” says Feng. “But all you need to do to expand the network is fabricate more chips and connect them to Philadelphia’s existing fiber-optic cables.”
    The Future of the Quantum Internet
    The main barrier to scaling quantum networks beyond a metro area is that quantum signals cannot yet be amplified without destroying their entanglement.
    While some teams have shown that “quantum keys,” special codes for ultra-secure communication, can travel long distances over ordinary fiber, those systems use weak coherent light to generate random numbers that cannot be copied, a technique that is highly effective for security applications but not sufficient to link actual quantum processors.
    Overcoming this challenge will require new devices, but the Penn study provides an important early step: showing how a chip can run quantum signals over existing commercial fiber using internet-style packet routing, dynamic switching and on-chip error mitigation that work with the same protocols that manage today’s networks.
    “This feels like the early days of the classical internet in the 1990s, when universities first connected their networks,” says Broberg. “That opened the door to transformations no one could have predicted. A quantum internet has the same potential.”
    This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and was supported by the Gordon and Betty Moore Foundation (GBMF12960 and DOI 10.37807), Office of Naval Research (N00014-23-1-2882), National Science Foundation (DMR-2323468), Olga and Alberico Pompa endowed professorship, and PSC-CUNY award (ENHC-54-93).
    Additional co-authors include Alan Zhu, Gushi Li and Jonathan Smith of the University of Pennsylvania, and Li Ge of the City University of New York.

  • Scientists unveil breakthrough pixel that could put holograms on your smartphone

    New research from the University of St Andrews paves the way for holographic technology, with the potential to transform smart devices, communication, gaming and entertainment.
    In a study published recently in Light: Science & Applications, researchers from the School of Physics and Astronomy created a new optoelectronic device from the combined use of holographic metasurfaces (HMs) and organic light-emitting diodes (OLEDs).
    Until now, holograms have been created using lasers. However, the researchers found that combining OLEDs and HMs offers a simpler and more compact approach that is potentially cheaper and easier to apply, overcoming the main barriers to holographic technology being used more widely.
    Organic light-emitting diodes are thin film devices widely used to make the colored pixels in mobile phone displays and some TVs. As a flat and surface-emitting light source, OLEDs are also used in emerging applications such as optical wireless communications, biophotonics and sensing, where the ability to integrate with other technologies makes them good candidates to realize miniaturized light-based platforms.
    A holographic metasurface is a thin, flat array of tiny structures called meta-atoms – roughly a thousandth of the width of a strand of hair – designed to manipulate light’s properties. Metasurfaces can create holograms, and their uses span diverse fields such as data storage, anti-counterfeiting, optical displays, high-numerical-aperture lenses (for example, in optical microscopy) and sensing.
    This, however, is the first time both have been used together to produce the basic building block of a holographic display.
    The researchers found that when each meta-atom is carefully shaped to control the properties of the beam of light passing through it, it behaves as a pixel of the HM. When light travels through the HM, its properties are slightly modified at each pixel.

    Thanks to these modifications, it is possible to create a pre-designed image on the other side, exploiting the principle of light interference, whereby light waves create complicated patterns when they interact with each other.
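    That interference principle is standard Fourier optics: in the far field, the projected image is the Fourier transform of the field leaving the metasurface. The sketch below is a textbook toy model, not the St Andrews device; a simple alternating phase grating steers the light into a single diffraction order:

```python
import numpy as np

# Each metasurface pixel shifts the phase of light passing through it;
# in the far field, the projected image is the interference of all
# pixels, i.e. the 2D Fourier transform of the field after the mask.
N = 64
phase_mask = np.zeros((N, N))
phase_mask[::2, :] = np.pi                  # toy pattern: alternating-row phase grating

field_after_mask = np.exp(1j * phase_mask)  # light just after the metasurface
far_field = np.fft.fftshift(np.fft.fft2(field_after_mask))
image = np.abs(far_field) ** 2              # intensity pattern seen downstream

# This grating concentrates all the light into one diffraction order.
bright_spots = np.argwhere(image > 0.5 * image.max())
print(len(bright_spots))  # -> 1
```

    Designing a real hologram means solving the inverse problem: choosing the per-pixel phases so that this interference pattern reproduces a target image.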
    Professor Ifor Samuel, from the School of Physics and Astronomy, said: “We are excited to demonstrate this new direction for OLEDs. By combining OLEDs with metasurfaces, we also open a new way of generating holograms and shaping light.”
    Andrea Di Falco, professor in nano-photonics at the School of Physics and Astronomy, said: “Holographic metasurfaces are one of the most versatile material platforms to control light. With this work, we have removed one of the technological barriers that prevent the adoption of metamaterials in everyday applications. This breakthrough will enable a step change in the architecture of holographic displays for emerging applications, for example, in virtual and augmented reality.”
    Professor Graham Turnbull, from the School of Physics and Astronomy, said: “OLED displays normally need thousands of pixels to create a simple picture. This new approach allows a complete image to be projected from a single OLED pixel!”
    Until now, researchers could only make very simple shapes with OLEDs, which limited their usability in some applications. However, this breakthrough provides a path toward a miniaturized and highly integrated metasurface display.

  • Scientists brew “quantum ink” to power next-gen night vision

    Manufacturers of infrared cameras face a growing problem: the toxic heavy metals in today’s infrared detectors are increasingly banned under environmental regulations, forcing companies to choose between performance and compliance.
    This regulatory pressure is slowing the broader adoption of infrared detectors across civilian applications, just as demand in fields like autonomous vehicles, medical imaging and national security is accelerating.
    In a paper published in ACS Applied Materials & Interfaces, researchers at NYU Tandon School of Engineering reveal a potential solution that uses environmentally friendly quantum dots to detect infrared light without relying on mercury, lead, or other restricted materials.
    The researchers use colloidal quantum dots, an approach that upends the age-old, expensive, and tedious processing of infrared detectors. Traditional devices are fabricated through slow, ultra-precise methods that place atoms almost one by one across the pixels of a detector — much like assembling a puzzle piece by piece under a microscope.
    Colloidal quantum dots are instead synthesized entirely in solution, more like brewing ink, and can be deposited using scalable coating techniques similar to those used in roll-to-roll manufacturing for packaging or newspapers. This shift from painstaking assembly to solution-based processing dramatically reduces manufacturing costs and opens the door to widespread commercial applications.
    “The industry is facing a perfect storm where environmental regulations are tightening just as demand for infrared imaging is exploding,” said Ayaskanta Sahu, associate professor in the Department of Chemical and Biomolecular Engineering (CBE) at NYU Tandon and the study’s senior author. “This creates real bottlenecks for companies trying to scale up production of thermal imaging systems.”
    Another challenge the researchers addressed was making the quantum dot ink conductive enough to relay signals from incoming light. They achieved this using a technique called solution-phase ligand exchange, which tailors the quantum dot surface chemistry to enhance performance in electronic devices. Unlike traditional fabrication methods that often leave cracked or uneven films, this solution-based process yields smooth, uniform coatings in a single step — ideal for scalable manufacturing.

    The resulting devices show remarkable performance: they respond to infrared light on the microsecond timescale — orders of magnitude faster than the blink of a human eye — and they can detect signals as faint as a nanowatt of light.
    “What excites me is that we can take a material long considered too difficult for real devices and engineer it to be more competitive,” said graduate researcher Shlok J. Paul, lead author on the study. “With more time this material has the potential to shine deeper in the infrared spectrum where few materials exist for such tasks.”
    This work adds to earlier research from the same lead researchers that developed new transparent electrodes using silver nanowires. Those electrodes remain highly transparent to infrared light while efficiently collecting electrical signals, addressing one component of the infrared camera system.
    Combined with their earlier transparent electrode work, these developments address both major components of infrared imaging systems. The quantum dots provide environmentally compliant sensing capability, while the transparent electrodes handle signal collection and processing.
    This combination addresses challenges in large-area infrared imaging arrays, which require high-performance detection across wide areas and signal readout from millions of individual detector pixels. The transparent electrodes allow light to reach the quantum dot detectors while providing electrical pathways for signal extraction.
    “Every infrared camera in a Tesla or smartphone needs detectors that meet environmental standards while remaining cost-effective,” Sahu said. “Our approach could help make these technologies much more accessible.”
    The performance still falls short of the best heavy-metal-based detectors in some measurements. However, the researchers expect continued advances in quantum dot synthesis and device engineering could reduce this gap.
    In addition to Sahu and Paul, the paper’s authors are Letian Li, Zheng Li, Thomas Kywe, and Ana Vataj, all from NYU Tandon CBE. The work was supported by the Office of Naval Research and the Defense Advanced Research Projects Agency.

  • Caltech’s massive 6,100-qubit array brings the quantum future closer

    Quantum computers will need large numbers of qubits to tackle challenging problems in physics, chemistry, and beyond. Unlike classical bits, qubits can exist in two states at once — a phenomenon called superposition. This quirk of quantum physics gives quantum computers the potential to perform certain complex calculations better than their classical counterparts, but it also means the qubits are fragile. To compensate, researchers are building quantum computers with extra, redundant qubits to correct any errors. That is why robust quantum computers will require hundreds of thousands of qubits.
    Now, in a step toward this vision, Caltech physicists have created the largest qubit array ever assembled: 6,100 neutral-atom qubits trapped in a grid by lasers. Previous arrays of this kind contained only hundreds of qubits.
    This milestone comes amid a rapidly growing race to scale up quantum computers. There are several approaches in development, including those based on superconducting circuits, trapped ions, and neutral atoms, as used in the new study.
    “This is an exciting moment for neutral-atom quantum computing,” says Manuel Endres, professor of physics at Caltech. “We can now see a pathway to large error-corrected quantum computers. The building blocks are in place.” Endres is the principal investigator of the research published on September 24 in Nature. Three Caltech graduate students led the study: Hannah Manetsch, Gyohei Nomura, and Elie Bataille.
    The team used optical tweezers — highly focused laser beams — to trap thousands of individual cesium atoms in a grid. To build the array of atoms, the researchers split a laser beam into 12,000 tweezers, which together held 6,100 atoms in a vacuum chamber. “On the screen, we can actually see each qubit as a pinpoint of light,” Manetsch says. “It’s a striking image of quantum hardware at a large scale.”
    A key achievement was showing that this larger scale did not come at the expense of quality. Even with more than 6,000 qubits in a single array, the team kept them in superposition for about 13 seconds — nearly 10 times longer than what was possible in previous similar arrays — while manipulating individual qubits with 99.98 percent accuracy. “Large scale, with more atoms, is often thought to come at the expense of accuracy, but our results show that we can do both,” Nomura says. “Qubits aren’t useful without quality. Now we have quantity and quality.”
    The team also demonstrated that they could move the atoms hundreds of micrometers across the array while maintaining superposition. The ability to shuttle qubits is a key feature of neutral-atom quantum computers that enables more efficient error correction compared with traditional, hard-wired platforms like superconducting qubits.

    Manetsch compares the task of moving the individual atoms while keeping them in a state of superposition to balancing a glass of water while running. “Trying to hold an atom while moving is like trying to not let the glass of water tip over. Trying to also keep the atom in a state of superposition is like being careful to not run so fast that water splashes over,” she says.
    The next big milestone for the field is implementing quantum error correction at the scale of thousands of physical qubits, and this work shows that neutral atoms are a strong candidate to get there. “Quantum computers will have to encode information in a way that’s tolerant to errors, so we can actually do calculations of value,” Bataille says. “Unlike in classical computers, qubits can’t simply be copied due to the so-called no-cloning theorem, so error correction has to rely on more subtle strategies.”
    Looking ahead, the researchers plan to link the qubits in their array together in a state of entanglement, where particles become correlated and behave as one. Entanglement is a necessary step for quantum computers to move beyond simply storing information in superposition; entanglement will allow them to begin carrying out full quantum computations. It is also what gives quantum computers their ultimate power — the ability to simulate nature itself, where entanglement shapes the behavior of matter at every scale. The goal is clear: to harness entanglement to unlock new scientific discoveries, from revealing new phases of matter to guiding the design of novel materials and modeling the quantum fields that govern space-time.
    “It’s exciting that we are creating machines to help us learn about the universe in ways that only quantum mechanics can teach us,” Manetsch says.
    The new study, “A tweezer array with 6100 highly coherent atomic qubits,” was funded by the Gordon and Betty Moore Foundation, the Weston Havens Foundation, the National Science Foundation via its Graduate Research Fellowship Program and the Institute for Quantum Information and Matter (IQIM) at Caltech, the Army Research Office, the U.S. Department of Energy including its Quantum Systems Accelerator, the Defense Advanced Research Projects Agency, the Air Force Office for Scientific Research, the Heising-Simons Foundation, and the AWS Quantum Postdoctoral Fellowship. Other authors include Caltech’s Kon H. Leung, the AWS Quantum senior postdoctoral scholar research associate in physics, as well as former Caltech postdoctoral scholar Xudong Lv, now at the Chinese Academy of Sciences.

  • AI-powered smart bandage heals wounds 25% faster

    As a wound heals, it goes through several stages: clotting to stop bleeding, immune system response, scabbing, and scarring.
    A wearable device called “a-Heal,” designed by engineers at the University of California, Santa Cruz, aims to optimize each stage of the process. The system uses a tiny camera and AI to detect the stage of healing and deliver a treatment in the form of medication or an electric field. The system responds to the unique healing process of the patient, offering personalized treatment.
    The portable, wireless device could make wound therapy more accessible to patients in remote areas or with limited mobility. Initial preclinical results, published in the journal npj Biomedical Innovations, show the device successfully speeds up the healing process.
    Designing a-Heal
    A team of UC Santa Cruz and UC Davis researchers, sponsored by the DARPA-BETR program and led by UC Santa Cruz Baskin Engineering Endowed Chair and Professor of Electrical and Computer Engineering (ECE) Marco Rolandi, designed a device that combines a camera, bioelectronics, and AI for faster wound healing. The integration in one device makes it a “closed-loop system” — one of the first of its kind for wound healing, as far as the researchers are aware.
    “Our system takes all the cues from the body, and with external interventions, it optimizes the healing progress,” Rolandi said.
    The device uses an onboard camera, developed by fellow Associate Professor of ECE Mircea Teodorescu and described in a Communications Biology study, to take photos of the wound every two hours. The photos are fed into a machine learning (ML) model, developed by Associate Professor of Applied Mathematics Marcella Gomez, which the researchers call the “AI physician” running on a nearby computer.

    “It’s essentially a microscope in a bandage,” Teodorescu said. “Individual images say little, but over time, continuous imaging lets the AI spot trends, identify wound healing stages, flag issues, and suggest treatments.”
    The AI physician uses the image to diagnose the wound stage and compares that to where the wound should be along a timeline of optimal wound healing. If the image reveals a lag, the ML model applies a treatment: either medicine, delivered via bioelectronics; or an electric field, which can enhance cell migration toward wound closure.
    The treatment topically delivered through the device is fluoxetine, a selective serotonin reuptake inhibitor that controls serotonin levels in the wound and improves healing by decreasing inflammation and increasing wound tissue closure. The dose, determined in preclinical studies by the Isseroff group at UC Davis to optimize healing, is administered by bioelectronic actuators on the device, developed by Rolandi. An electric field, optimized to improve healing in prior work by UC Davis’ Min Zhao and Roslyn Rivkah Isseroff, is also delivered through the device.
    The AI physician determines the optimal dosage of medication to deliver and the magnitude of the applied electric field. After the therapy has been applied for a certain period of time, the camera takes another image, and the process starts again.
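    That diagnose, compare, and treat cycle can be caricatured in a few lines of Python. The stage names, timeline, and treatment policy below are invented for illustration and are not the actual AI physician:

```python
HEALING_STAGES = ["clotting", "inflammation", "proliferation", "remodeling"]

# Hypothetical timeline: the day each stage should begin under ideal healing.
OPTIMAL_TIMELINE = {0: "clotting", 2: "inflammation", 6: "proliferation", 12: "remodeling"}

def expected_stage(day: float) -> str:
    """Where the wound should be on the idealized healing timeline."""
    stage = HEALING_STAGES[0]
    for start_day, name in sorted(OPTIMAL_TIMELINE.items()):
        if day >= start_day:
            stage = name
    return stage

def choose_treatment(observed: str, expected: str) -> str:
    """Illustrative policy: if healing lags the timeline, intervene."""
    if HEALING_STAGES.index(observed) < HEALING_STAGES.index(expected):
        return "deliver_treatment"  # a fluoxetine dose or an electric field
    return "no_action"

# One imaging cycle (every two hours in the real device):
observed = "inflammation"   # stand-in for the ML model's diagnosis from the photo
print(choose_treatment(observed, expected_stage(day=6.0)))  # -> deliver_treatment
```

    The real system closes this loop continuously: each new image updates the diagnosis, and each intervention is re-evaluated at the next imaging cycle.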
    While in use, the device transmits images and data such as healing rate to a secure web interface, so a human physician can intervene manually and fine-tune treatment as needed. The device attaches directly to a commercially available bandage for convenient and secure use.
    To assess the potential for clinical use, the UC Davis team tested the device in preclinical wound models. In these studies, wounds treated with a-Heal followed a healing trajectory about 25% faster than standard of care. These findings highlight the promise of the technology not only for accelerating closure of acute wounds, but also for jump-starting stalled healing in chronic wounds.

    AI reinforcement
    The AI model for this system, developed under the leadership of Marcella Gomez, takes a reinforcement learning approach, described in a study in the journal Bioengineering, that mimics the diagnostic process used by physicians.
    Reinforcement learning is a technique in which a model is designed to fulfill a specific end goal, learning through trial and error how to best achieve that goal. In this context, the model is given a goal of minimizing time to wound closure, and is rewarded for making progress toward that goal. It continually learns from the patient and adapts its treatment approach.
    The reinforcement learning model is guided by an algorithm that Gomez and her students created called Deep Mapper, described in a preprint study, which processes wound images to quantify the stage of healing in comparison to normal progression, mapping it along the trajectory of healing. As time passes with the device on a wound, it learns a linear dynamic model of the past healing and uses that to forecast how the healing will continue to progress.
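    A minimal sketch of that idea: fit a linear dynamic model to the healing trajectory observed so far, then roll it forward to forecast progress. The trajectory values and the simple scalar model below are illustrative, not Deep Mapper itself:

```python
import numpy as np

def fit_linear_dynamics(x: np.ndarray):
    """Least-squares fit of the linear model x[t+1] = a * x[t] + b
    to an observed trajectory."""
    A = np.vstack([x[:-1], np.ones(len(x) - 1)]).T
    a, b = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    return a, b

def forecast(x_last: float, a: float, b: float, steps: int) -> list:
    """Roll the fitted dynamics forward to predict future healing."""
    out = []
    for _ in range(steps):
        x_last = a * x_last + b
        out.append(x_last)
    return out

# Toy trajectory: healing progress (0 to 1) at successive imaging intervals.
observed = np.array([0.10, 0.19, 0.27, 0.34, 0.41])
a, b = fit_linear_dynamics(observed)
print([round(v, 2) for v in forecast(observed[-1], a, b, steps=3)])
```

    Comparing the forecast against the optimal trajectory is what tells the controller whether the current treatment is working or needs adjusting.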
    “It’s not enough to just have the image, you need to process that and put it into context. Then, you can apply the feedback control,” Gomez said.
    This technique makes it possible for the algorithm to learn in real-time the impact of the drug or electric field on healing, and guides the reinforcement learning model’s iterative decision making on how to adjust the drug concentration or electric-field strength.
    Now, the research team is exploring the potential for this device to improve healing of chronic and infected wounds.
    Additional publications related to this work can be found linked here.
    This research was supported by the Defense Advanced Research Projects Agency and the Advanced Research Projects Agency for Health.

  • AI breakthrough finds life-saving insights in everyday bloodwork

    Routine blood samples, such as those taken daily at any hospital and tracked over time, could help predict the severity of an injury and even provide insights into mortality after spinal cord damage, according to a recent University of Waterloo study.
    The research team utilized advanced analytics and machine learning, a type of artificial intelligence, to assess whether routine blood tests could serve as early warning signs for spinal cord injury patient outcomes.
    More than 20 million people worldwide were affected by spinal cord injury in 2019, with 930,000 new cases each year, according to the World Health Organization. Traumatic spinal cord injury often requires intensive care and is characterized by variable clinical presentations and recovery trajectories, complicating diagnosis and prognosis, especially in emergency departments and intensive care units.
    “Routine blood tests could offer doctors important and affordable information to help predict risk of death, the presence of an injury and how severe it might be,” said Dr. Abel Torres Espín, a professor in Waterloo’s School of Public Health Sciences.
    The researchers sampled hospital data from more than 2,600 patients in the U.S. They used machine learning to analyze millions of data points and discover hidden patterns in common blood measurements, such as electrolytes and immune cells, taken during the first three weeks after a spinal cord injury.
    They found that these patterns could help forecast recovery and injury severity, even without early neurological exams, which are not always reliable as they depend on a patient’s responsiveness.
    “While a single biomarker measured at a single time point can have predictive power, the broader story lies in multiple biomarkers and the changes they show over time,” said Dr. Marzieh Mussavi Rizi, a postdoctoral scholar in Torres Espín’s lab at Waterloo.
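    The point about trajectories versus single time points can be illustrated with a toy feature extraction. The marker series, units, and features below are invented, and the study’s actual models are far richer:

```python
import numpy as np

def trajectory_features(values: np.ndarray) -> dict:
    """Summarize a repeated routine lab value as trajectory features
    (level, trend, variability) rather than a single measurement."""
    days = np.arange(len(values))
    slope = np.polyfit(days, values, 1)[0]   # per-day trend
    return {"mean": float(values.mean()),
            "slope": float(slope),
            "std": float(values.std())}

# Invented daily values over the first week after injury.
sodium = np.array([140, 139, 137, 136, 134, 133, 131])  # steadily falling
wbc    = np.array([9.1, 9.0, 9.2, 9.1, 9.0, 9.1, 9.2])  # essentially flat

for name, series in [("sodium", sodium), ("wbc", wbc)]:
    print(name, round(trajectory_features(series)["slope"], 2))
```

    A single day's sodium value here looks unremarkable, but the downward slope across the week is exactly the kind of temporal pattern a model can learn to associate with outcomes.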

    The models, which do not rely on early neurological assessment, accurately predicted mortality and injury severity as early as one to three days after hospital admission, compared with standard non-specific severity measures that are typically taken during the first day in intensive care.
    The research also found that accuracy increased over time as more blood tests became available. Although other measures, such as MRI and fluid omics-based biomarkers, can also provide objective data, they are not always readily accessible across medical settings. Routine blood tests, on the other hand, are economical, easy to obtain, and available in every hospital.
    “Prediction of injury severity in the first days is clinically relevant for decision-making, yet it is a challenging task through neurological assessment alone,” Torres Espín said. “We show the potential to predict whether an injury is motor complete or incomplete with routine blood data early after injury, and an increase in prediction performance as time progresses.
    “This foundational work can open new possibilities in clinical practice, allowing for better-informed decisions about treatment priorities and resource allocation in critical care settings for many physical injuries.”
    The study, “Modeling trajectories of routine blood tests as dynamic biomarkers for outcome in spinal cord injury,” was published in npj Digital Medicine.

  • Can meditation apps really reduce stress, anxiety, and insomnia?

    Do you have a meditation app on your smartphone, computer or wearable device? Well, you’re not alone.
    There are now thousands of meditation apps available worldwide, the top 10 of which have been collectively downloaded more than 300 million times. What’s more, early work on these digital meditation platforms shows that even relatively brief usage can lead to benefits, from reduced depression, anxiety, and stress to improved insomnia symptoms.
    “Meditation apps, such as Calm and Headspace, have been enormously popular in the commercial market,” said J. David Creswell, a health psychologist at Carnegie Mellon University and lead author of a review paper on meditation apps, published today in the journal American Psychologist. “What they’re doing now is not only engaging millions of users every day, but they’re also creating new scientific opportunities and challenges.”
    One huge boon provided by meditation apps for users is access.
    “You can imagine a farmer in rural Nebraska not having many available opportunities to go to traditional group-based meditation programs, and now they have an app in their pocket which is available 24/7,” said Creswell, who is the William S. Dietrich II Professor in Psychology and Neuroscience.
    Meditation apps also provide scientists with opportunities to scale up their research.
    “Historically, I might bring 300 irritable bowel syndrome patients into my lab and study the impacts of meditation on pain management,” said Creswell. “But now I’m thinking, how do we harness the capacity of meditation apps and wearable health sensors to study 30,000 irritable bowel syndrome patients across the world?”
    Combined with products that measure heart rate and sleep patterns, such as Fitbit and the Apple Watch, meditation apps now also have the capacity to incorporate biometrics into meditation practices like never before.

    The biggest takeaway, though, is that meditation apps are fundamentally changing the way these practices are distributed to the general public. Scientific studies of use patterns show that meditation apps account for 96 percent of users in the mental health app marketplace.
    “Meditation apps dominate the mental health app market,” said Creswell. “And this paper is really the first to lay out the new normal and challenge researchers and tech developers to think in new ways about the disruptive nature of these apps and their reach.”
    Meditation apps challenge users to train their minds in small initial doses
    As with in-person meditation training, meditation apps start by meeting users where they are. Introductory courses may focus on breathing or mindfulness, but they tend to do so in small doses, the merits of which are still being debated.
    According to the data, just 10 to 21 minutes of meditation app exercises done three times a week is enough to see measurable results.
    “Of course, that looks really different from the daily meditation practice you might get within an in-person group-based meditation program, which might be 30 to 45 minutes a day,” said Creswell.

    The à la carte nature of meditation through a smartphone app may appeal to those pressed for time or without the budget for in-person coaching sessions. Users may also find it comforting to know that they have access to guided meditation on-demand, rather than at scheduled places, days, and times.
    “Maybe you’re waiting in line at Starbucks, and you’ve got three minutes to do a brief check-in mindfulness training practice,” said Creswell.
    Finally, as meditation apps continue to evolve, Creswell believes integration of AI, such as meditation-guiding chatbots, will only become more common, offering even more personalization. This could mark an important development for meditation adoption at large, as offerings go from one-size-fits-all group classes to training sessions tailored to the individual.
    “People use meditation for different things, and there’s a big difference between someone looking to optimize their free-throw shooting performance and someone trying to alleviate chronic pain,” said Creswell, who has trained Olympic athletes in the past.
    The elephant in the room
    Of course, with new technology comes new challenges, and for meditation apps, continued engagement remains a huge problem.
    “The engagement problem is not specific to meditation apps,” said Creswell. “But the numbers are really sobering. Ninety-five percent of participants who download a meditation app aren’t using it after 30 days.”
    If the meditation app industry is going to succeed, it will need to find ways to keep its users engaged, as apps like Duolingo have. But overall, Creswell said the market demand is clearly there.
    “People are suffering right now. There are just unbelievably high levels of stress and loneliness in the world, and these tools have tremendous potential to help,” he said.
    “I don’t think there is ever going to be a complete replacement for a good, in-person meditation group or teacher,” said Creswell. “But I think meditation apps are a great first step for anyone who wants to dip their toes in and start training up their mindfulness skills. The initial studies show that these meditation apps help with symptom relief and even reduce stress biomarkers.”


    Tiny new lenses, smaller than a hair, could transform phone and drone cameras

    A new approach to manufacturing multicolor lenses could inspire a new generation of tiny, cheap, and powerful optics for portable devices such as phones and drones.
    The design uses layers of metamaterials to simultaneously focus a range of wavelengths from an unpolarized source over a large diameter, overcoming a major limitation of metalenses, said the paper’s first author, Mr Joshua Jordaan, of the Research School of Physics at the Australian National University and the ARC Centre of Excellence for Transformative Meta-Optical Systems (TMOS).
    “Our design has a lot of nice features that make it applicable to practical devices.”
    “It’s easy to manufacture because it has a low aspect ratio, and each layer can be fabricated individually and then packaged together. It’s also polarisation-insensitive, and potentially scalable through mature semiconductor nanofabrication platforms,” Mr Jordaan said.
    The project was led by researchers from the Friedrich Schiller University Jena in Germany as part of the International Research Training Group Meta-ACTIVE. The paper reporting their design is published in Optics Express.
    Metalenses are mere fractions of a hair’s width thick, orders of magnitude thinner than conventional lenses. They can be designed to have properties, such as focal lengths, that would be impossibly short for conventional optics.
    Initially the team attempted to focus multiple wavelengths with a single layer, but they ran up against some fundamental constraints, Mr Jordaan said.

    “It turns out the maximum group-delay attainable in a single-layer metasurface has physical limitations, and these in turn set upper bounds on the product of the numerical aperture, physical diameter and operating bandwidth.”
    “To work at the wavelength range we needed, a single layer would either have to have a very small diameter, which would defeat the purpose of the design, or basically have such a low numerical aperture that it’s hardly focusing the light at all,” he said.
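    That trade-off can be made concrete: for a flat lens to stay achromatic, elements at the edge must delay light relative to the centre by a group-delay spread that grows with both diameter and numerical aperture. Here is a minimal sketch using the standard path-length argument; the function name and the ~100 fs single-layer cap are illustrative assumptions, not figures from the paper:

    ```python
    import math

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def required_group_delay_spread(diameter, numerical_aperture):
        """Group-delay spread (seconds) between centre and edge that an
        achromatic flat lens must supply: (sqrt(f**2 + r**2) - f) / c,
        where r = diameter / 2 and f = r * sqrt(1/NA**2 - 1)."""
        r = diameter / 2
        f = r * math.sqrt(1 / numerical_aperture**2 - 1)
        return (math.hypot(f, r) - f) / C

    # A single-layer metasurface that tops out near ~100 fs of group delay
    # cannot cover a 1 mm, NA = 0.5 lens, which needs roughly 450 fs:
    spread = required_group_delay_spread(1e-3, 0.5)
    ```

    Shrinking the diameter or the numerical aperture (or narrowing the bandwidth over which the delay must hold) brings the requirement back under the cap, which is exactly the squeeze Mr Jordaan describes and what pushed the team toward multiple layers.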
    “We realized we needed a more complex structure, which then led to a multi-layer approach.”
    With the design shifted to incorporate several metalens layers, the team approached the problem with an inverse-design algorithm based on shape optimization, using a parameterization that allowed many degrees of freedom.
    They guided the software to search for metasurface shapes that, for a single wavelength, created simple resonances in both the electric and magnetic dipole, known as Huygens resonances. By employing these resonances, the team were able to improve on previous designs by other groups, developing metalenses that were polarization-independent and had greater manufacturing tolerances, which is crucial in the quest to scale fabrication to industrial quantities.
    The optimization routine came up with a library of metamaterial elements in a surprising range of shapes, such as rounded squares, four-leaf clovers and propellers.

    These tiny shapes, around 300 nm tall and 1000 nm wide, spanned the full range of phase shifts, from zero to two pi, enabling the team to create a phase gradient map to achieve any arbitrary focusing pattern – although they were initially just aiming for a simple ring structure of a conventional lens.
    “We could, for example, focus different wavelengths into different locations to create a colour router,” Mr Jordaan said.
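    The phase-gradient map described above follows the textbook flat-lens recipe: at each radius, pick the library element whose phase response is closest to the target profile. A minimal sketch, with illustrative function names and parameter values rather than the paper’s own:

    ```python
    import math

    def lens_phase(r, wavelength, focal_length):
        """Target phase (rad, folded into [0, 2*pi)) at radius r from the
        lens centre: choosing phi(r) = -(2*pi/lambda)*(sqrt(r**2 + f**2) - f)
        makes every ray arrive at the focus in phase."""
        phi = -(2 * math.pi / wavelength) * (math.hypot(r, focal_length) - focal_length)
        return phi % (2 * math.pi)

    # Sample the profile across a 50-micron-diameter lens at 700 nm with a
    # 100-micron focal length; each sampled phase would then be matched to
    # the library element (rounded square, clover, propeller, ...) nearest
    # in phase.
    profile = [lens_phase(i * 1e-6, 700e-9, 100e-6) for i in range(26)]
    ```

    Routing different wavelengths to different spots, as in the colour router Mr Jordaan mentions, amounts to evaluating this profile with a different wavelength and focal point per colour.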
    However, the multilayer approach is limited to a maximum of around five different wavelengths, Mr Jordaan said.
    “The problem is you need structures large enough to be resonant at the longest wavelength, without getting diffraction from the shorter wavelengths,” he said.
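    The ceiling comes from a squeeze between two length scales: the lattice period must stay below the shortest operating wavelength (in the densest surrounding medium) so no diffraction orders appear, yet each element must remain large enough to resonate near the longest wavelength. A minimal sketch of the diffraction-free condition, with an illustrative refractive index and wavelength:

    ```python
    def max_period_no_diffraction(shortest_wavelength, max_refractive_index):
        """Largest lattice period (m) for which only the zeroth order
        propagates: period < lambda_min / n_max (grating equation at
        normal incidence)."""
        return shortest_wavelength / max_refractive_index

    # A glass substrate (n ~ 1.45) and a 500 nm shortest operating
    # wavelength cap the period near 345 nm, which in turn caps how large,
    # and hence how red-resonant, each meta-atom can be.
    period_cap = max_period_no_diffraction(500e-9, 1.45)
    ```

    Adding more target wavelengths stretches this window until no element size satisfies both ends, which is why the approach saturates at roughly five wavelengths.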
    Within these constraints, Mr Jordaan said, the ability to make metalenses that collect a lot of light will be a boon for future portable imaging systems.
    “The metalenses we have designed would be ideal for drones or earth-observation satellites, as we’ve tried to make them as small and light as possible,” he said.