More stories

  • Entangled photons tailor-made

    To use a quantum computer effectively, a large number of specially prepared — in technical terms: entangled — basic building blocks are needed to carry out computational operations. A team of physicists at the Max Planck Institute of Quantum Optics in Garching has now demonstrated this for the first time with photons emitted by a single atom. Using a novel technique, the researchers generated up to 14 entangled photons in an optical resonator and prepared them into specific quantum states in a targeted and very efficient manner. The new method could facilitate the construction of powerful and robust quantum computers and, in the future, enable the secure transmission of data.
    The phenomena of the quantum world, which often seem bizarre from the perspective of everyday experience, have long since found their way into technology. One example is entanglement: a quantum-physical connection between particles that links them in a strange way over arbitrarily long distances. It can be exploited, for instance, in a quantum computer — a computing machine that, unlike a conventional computer, can perform numerous mathematical operations simultaneously. To use a quantum computer profitably, however, a large number of entangled particles must work together. These so-called qubits are the basic elements for calculations.
    “Photons, the particles of light, are particularly well suited for this because they are robust by nature and easy to manipulate,” says Philip Thomas, a doctoral student at the Max Planck Institute of Quantum Optics (MPQ) in Garching near Munich. Together with colleagues from the Quantum Dynamics Division led by Prof. Gerhard Rempe, he has now succeeded in taking an important step towards making photons usable for technological applications such as quantum computing: For the first time, the team generated up to 14 entangled photons in a defined way and with high efficiency.
    One atom as a photon source
    “The trick to this experiment was that we used a single atom to emit the photons and interweave them in a very specific way,” says Thomas. To do this, the Max Planck researchers placed a rubidium atom at the center of an optical cavity — a kind of echo chamber for electromagnetic waves. With laser light of a certain frequency, the state of the atom could be precisely addressed. Using an additional control pulse, the researchers also specifically triggered the emission of a photon that is entangled with the quantum state of the atom.
    “We repeated this process several times and in a previously determined manner,” Thomas reports. In between, the atom was manipulated in a certain way — in technical jargon: rotated. In this way, it was possible to create a chain of up to 14 light particles that were entangled with each other by the atomic rotations and brought into a desired state. “To the best of our knowledge, the 14 interconnected light particles are the largest number of entangled photons that have been generated in the laboratory so far,” Thomas emphasises.
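    The sequence — emit a photon entangled with the atom, rotate the atom, repeat — can be pictured with a minimal, idealized simulation. The sketch below assumes a lossless protocol and uses generic stand-in gates (a CNOT-like emission step and a Hadamard-like atom rotation); it illustrates how such a chain becomes entangled, not the exact pulses used in the experiment.

```python
import numpy as np

# Idealized model of the single-atom photon-chain protocol:
# one atom qubit plus N photon qubits, all starting in |0>.
N_PHOTONS = 4                       # kept small here; the experiment reached 14
n = N_PHOTONS + 1                   # qubit 0 = atom, qubits 1..N = photons
state = np.zeros(2 ** n)
state[0] = 1.0

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # stand-in for the atom "rotation"
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])

def embed(ops):
    """Tensor a list of single-qubit operators into one n-qubit operator."""
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full

def apply_1q(gate, qubit, psi):
    ops = [np.eye(2)] * n
    ops[qubit] = gate
    return embed(ops) @ psi

def apply_cnot(control, target, psi):
    """CNOT models photon 'emission' conditioned on the atom's state."""
    ops0 = [np.eye(2)] * n; ops0[control] = P0
    ops1 = [np.eye(2)] * n; ops1[control] = P1; ops1[target] = X
    return (embed(ops0) + embed(ops1)) @ psi

state = apply_1q(H, 0, state)            # put the atom into a superposition
for k in range(1, N_PHOTONS + 1):
    state = apply_cnot(0, k, state)      # emit photon k, entangled with the atom
    state = apply_1q(H, 0, state)        # rotate the atom between emissions

# The result is a linear cluster state shared by the atom and the photons.
print("nonzero amplitudes:", np.count_nonzero(np.round(state, 12)))
```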
    Deterministic generation process
    But it is not only the quantity of entangled photons that marks a major step towards the development of powerful quantum computers — the way they are generated is also very different from conventional methods. “Because the chain of photons emerged from a single atom, it could be produced in a deterministic way,” Thomas explains. This means: in principle, each control pulse actually delivers a photon with the desired properties. Until now, the entanglement of photons usually took place in special, non-linear crystals. The shortcoming: there, the light particles are essentially created randomly and in a way that cannot be controlled. This also limits the number of particles that can be bundled into a collective state.
    The method used by the Garching team, on the other hand, allows in principle any number of entangled photons to be generated. It is also particularly efficient — another important measure for possible future technical applications: “By measuring the photon chain produced, we were able to prove an efficiency of almost 50 percent,” says Philip Thomas. This means that almost every second “push of a button” on the rubidium atom delivered a usable light particle — far more than has been achieved in previous experiments. “All in all, our work removes a long-standing obstacle on the path to scalable, measurement-based quantum computing,” says department director Gerhard Rempe, summarising the results.
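    Why per-photon efficiency matters so much for long chains can be seen from a quick back-of-the-envelope calculation; the probabilities below are illustrative assumptions, not measured values.

```python
# Probability of obtaining a complete n-photon chain when each trigger
# independently delivers a usable photon with probability p (illustrative).
for p in (0.5, 0.05):
    for n in (6, 10, 14):
        print(f"p = {p:<4}: P(full {n:>2}-photon chain) ~ {p ** n:.1e}")
```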
    More space for quantum communication
    The scientists at the MPQ now want to remove yet another hurdle. Complex computing operations, for instance, would require at least two atoms as photon sources in the resonator — quantum physicists speak of a two-dimensional cluster state. “We are already working on tackling this task,” reveals Philip Thomas. The Max Planck researcher also emphasises that possible technical applications extend far beyond quantum computing: “Another application example is quantum communication” — the tap-proof transmission of information, for example by light in an optical fibre. There, the light suffers unavoidable losses during propagation due to optical effects such as scattering and absorption, which limits the distance over which data can be transported. Using the method developed in Garching, quantum information could be packaged into entangled photons that survive a certain amount of light loss — enabling secure communication over greater distances.

  • Researchers use computer modeling to understand how self-renewal processes impact skin cell evolution

    All normal human tissues acquire mutations over time. Some of these mutations may be driver mutations that promote the development of cancer through increased proliferation and survival, while other mutations may be neutral passenger mutations that have no impact on cancer development. Currently, it is unclear how the normal self-renewal process of the skin called homeostasis impacts the development and evolution of gene mutations in cells. In a new study published in the Proceedings of the National Academy of Sciences (PNAS), Moffitt Cancer Center used mathematical and computer modeling to demonstrate the impact of skin homeostasis on driver and passenger mutations.
    Skin cells undergo a normal life and death cycle of homeostasis. Cells in the lower basal layer proliferate, grow and move into the upper layers of the skin while undergoing cell differentiation and maturation. Eventually, the skin cells migrate into the uppermost layer of the skin where they form a protective barrier, die and are sloughed off.
    Homeostasis is typically maintained in the skin. Its thickness and growth do not significantly change over time, despite the accumulation of mutations. This is different from other tissue types that undergo increased growth and proliferation due to mutations. However, scientists are not sure how cell mutations in the skin evolve and form subclones, or groups of cells derived from a single parent cell, without impacting normal skin homeostasis.
    Moffitt researchers developed a computer simulation model to address these uncertainties and improve their understanding of the impact of skin homeostasis on gene mutations and subclone evolution. Computer modeling can capture complex biological relationships among cells that cannot be studied in typical laboratory settings. The researchers built their model around the normal structure of the skin, including a constant cell number maintained by self-renewal, a constant tissue height and a constant number of immature stem cells. They incorporated patient mutation data into the model using GATTACA, a tool for introducing and tracking mutations, to assess how mutations and UV exposure impact skin homeostasis and clonal populations. They also investigated the impact of two genes that are commonly mutated in nonmelanoma skin cancer, NOTCH1 and TP53.
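    A drastically simplified sketch of the homeostasis constraint at the centre of such models is shown below: a fixed pool of basal stem cells in which every division is balanced by a random loss, with neutral mutation labels accumulating along the way. It illustrates only the constant-cell-number idea, not the spatial agent-based model or the GATTACA tool used in the study.

```python
import random
from collections import Counter

# Minimal Moran-style sketch of homeostatic self-renewal: a fixed pool of
# basal stem cells in which every division is balanced by a random loss,
# and each division can introduce a new (neutral) mutation label.
random.seed(1)
N_CELLS = 500
MUTATION_RATE = 0.1                 # illustrative per-division probability
cells = [frozenset()] * N_CELLS     # each cell carries a set of mutation ids
next_id = 0

for step in range(50_000):
    parent = random.randrange(N_CELLS)
    victim = random.randrange(N_CELLS)      # random death keeps the pool constant
    daughter = set(cells[parent])
    if random.random() < MUTATION_RATE:
        daughter.add(next_id)
        next_id += 1
    cells[victim] = frozenset(daughter)

# Subclone size = number of cells sharing a given mutation label.
sizes = Counter(m for c in cells for m in c)
print("largest subclones:", sizes.most_common(5))
```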
    “This study prompted the creation of several new tools, such as GATTACA, which allows you to induce and track base pair resolution mutations in any agent-based modeling framework with temporal, spatial and genomic positional information,” said the study’s lead author Ryan Schenck, Ph.D., a mathematical oncology programmer in Moffitt’s Department of Integrated Mathematical Oncology. “Along with my lab colleague Dr. Chandler Gatenbee, we also developed EvoFreq to help visualize evolutionary dynamics, now being used in many of our publications.”
    The researchers demonstrated that both passenger and driver mutations exist in subclones within the skin at similar sizes and frequencies. Most mutations that occur in immature stem cells are lost or persist only in small subclones because of random stem cell death and replacement, while larger subclones mostly reflect persistence and older age. Large NOTCH1 and TP53 subclones are rarely observed because they would disrupt the homeostasis of the skin; the large subclones that do exist most likely arose at an early age.
    The researchers used their model to determine when subclones with NOTCH1 and TP53 mutations have a selective fitness advantage over neighboring cells without mutations. Their model showed that subclones with NOTCH1 mutations may prevent neighboring cells from dividing into their positions, while subclones with TP53 mutations may resist cell death from UV exposure. The researchers hope that their model can be used to study other processes affected by homeostasis that cannot be studied with typical laboratory approaches.
    “This work broadens our current understanding of selection and fitness acting in a homeostatic, normal tissue, where subclone size more reflects persistence rather than selective sweeps, with larger subclones being predominately older subclones,” said Alexander Anderson, Ph.D., chair of Moffitt’s Department of Integrated Mathematical Oncology. “This model strives to provide a means to explore mechanisms of increased fitness in normal, homeostatic tissue and provides a simple framework for future researchers to model their hypothesized mechanisms within squamous tissue.”
    This study was supported by grants received from the National Cancer Institute (U54CA193489, U54CA217376, P01 CA196569, U01CA23238), the Wellcome Trust (108861/Z/15/Z, 206314/Z/17/Z), the Wellcome Centre for Human Genetics (203141/Z/16/Z) and Moffitt’s Center of Excellence for Evolutionary Therapy.

  • Successful labor outcomes in expectant mothers using AI

    Mayo Clinic researchers have found that using artificial intelligence (AI) algorithms to analyze patterns of changes in women who are in labor can help identify whether a successful vaginal delivery will occur with good outcomes for mom and baby. The findings were published in PLOS ONE.
    “This is the first step to using algorithms in providing powerful guidance to physicians and midwives as they make critical decisions during the labor process,” says Abimbola Famuyide, M.D., a Mayo Clinic OB-GYN and senior author of the study. “Once validated with further research, we believe the algorithm will work in real time, meaning every input of new data during an expectant woman’s labor automatically recalculates the risk of an adverse outcome. This may help reduce the rate of cesarean delivery, and maternal and neonatal complications.”
    Women in labor understand the importance of periodic cervical examinations to gauge the progress of labor. This is an essential step, as it helps obstetricians predict the likelihood of a vaginal delivery in a specified period of time. The problem is that cervical dilation in labor varies from person to person, and many important factors can determine the course of labor.
    In the study, researchers used data from the Eunice Kennedy Shriver National Institute of Child Health and Human Development’s multicenter Consortium on Safe Labor database to create the prediction model. They examined more than 700 clinical and obstetric factors in 66,586 deliveries from the time of admission and during labor progression.
    The risk-prediction model used data known at the time of admission in labor, including patient baseline characteristics, as well as the patient’s most recent clinical assessment and cumulative labor progress since admission. The researchers explain that such models may provide an alternative to conventional labor charts and promote individualization of clinical decisions using the baseline and labor characteristics of each patient.
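    As a rough sketch of how such a continuously updated estimate could work, the toy model below combines baseline admission features with time-updated labor features and recomputes the predicted risk whenever a new assessment is entered. All features, data and coefficients are synthetic and purely illustrative; this is not the Mayo Clinic model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch only: a risk model combining baseline admission features
# with time-updated labor features, fit here on synthetic data with
# hypothetical variables (not those of the Mayo study).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(30, 5, n),        # maternal age at admission
    rng.normal(3, 1.5, n),       # cervical dilation (cm) at latest exam
    rng.normal(6, 3, n),         # hours since admission
])
# Synthetic outcome: slower progress over more hours -> higher risk.
logit = -2 + 0.3 * X[:, 2] - 0.5 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

def updated_risk(age, dilation_cm, hours_in_labor):
    """Recompute the risk estimate each time a new assessment is entered."""
    return model.predict_proba([[age, dilation_cm, hours_in_labor]])[0, 1]

print(round(updated_risk(28, 4.0, 5.0), 3))
print(round(updated_risk(28, 4.0, 12.0), 3))   # same exam findings, later in labor
```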
    “It is very individualized to the person in labor,” says Dr. Famuyide. He adds that this will be a powerful tool for midwives and physicians caring for patients remotely, as it will allow time for patients to be transferred from rural or remote settings to the appropriate level of care.
    “The AI algorithm’s ability to predict individualized risks during the labor process will not only help reduce adverse birth outcomes but it can also reduce healthcare costs associated with maternal morbidity in the U.S., which has been estimated to be over $30 billion,” adds Bijan Borah, Ph.D., Robert D. and Patricia E. Kern Scientific Director for Health Services and Outcomes Research.
    Validation studies are ongoing to assess the outcomes of these models after they were implemented in labor units.
    This study was conducted in collaboration with scientists from the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery. The authors have declared no competing or potential conflicts of interest.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Kelley Luckstein. Note: Content may be edited for style and length.

  • Researchers use infrared light to wirelessly transmit power over 30 meters

    Imagine walking into an airport or grocery store and your smartphone automatically starts charging. This could be a reality one day, thanks to a new wireless laser charging system that overcomes some of the challenges that have hindered previous attempts to develop safe and convenient on-the-go charging systems.
    “The ability to power devices wirelessly could eliminate the need to carry around power cables for our phones or tablets,” said research team leader Jinyong Ha from Sejong University in South Korea. “It could also power various sensors such as those in Internet of Things (IoT) devices and sensors used for monitoring processes in manufacturing plants.”
    In the Optica Publishing Group journal Optics Express, the researchers describe their new system, which uses infrared light to safely transfer high levels of power. Laboratory tests showed that it could transfer 400 mW of light power over distances of up to 30 meters. This power is sufficient for charging sensors, and with further development, it could be increased to the levels necessary to charge mobile devices.
    Several techniques have been studied for long-range wireless power transfer. However, it has been difficult to safely send enough power over meter-level distances. To overcome this challenge, the researchers optimized a method called distributed laser charging, which has recently gained more attention for this application because it provides safe high-power illumination with less light loss.
    “While most other approaches require the receiving device to be in a special charging cradle or to be stationary, distributed laser charging enables self-alignment without tracking processes as long as the transmitter and receiver are in the line of sight of each other,” said Ha. “It also automatically shifts to a safe low power delivery mode if an object or a person blocks the line of sight.”
    Going the distance
    Distributed laser charging works somewhat like a traditional laser, but instead of the optical components of the laser cavity being integrated into one device, they are separated into a transmitter and a receiver. When the transmitter and receiver are within a line of sight, a laser cavity forms between them over the air — or free space — which allows the system to deliver light-based power. If an obstacle cuts the transmitter-receiver line of sight, the system automatically switches to a power-safe mode, achieving hazard-free power delivery in the air.
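    The safety behaviour can be pictured as a simple control loop, sketched below with illustrative thresholds and power levels rather than the authors' implementation: full power flows only while the free-space cavity is closed, and any interruption immediately drops the output to a low, eye-safe level.

```python
from dataclasses import dataclass

# Toy control loop for the safety behaviour described above. The power levels
# and threshold are illustrative assumptions, not the system's actual values.
FULL_POWER_MW = 400.0
SAFE_POWER_MW = 1.0
LOS_THRESHOLD = 0.8     # fraction of the expected cavity feedback signal

@dataclass
class Transmitter:
    output_mw: float = SAFE_POWER_MW

    def update(self, feedback_fraction: float) -> float:
        """Feedback ~1.0 means the cavity is closed; ~0 means the beam is blocked."""
        if feedback_fraction >= LOS_THRESHOLD:
            self.output_mw = FULL_POWER_MW      # lasing resumes, deliver power
        else:
            self.output_mw = SAFE_POWER_MW      # obstacle detected, go eye-safe
        return self.output_mw

tx = Transmitter()
for reading in (0.95, 0.97, 0.2, 0.1, 0.9):     # a person briefly crosses the beam
    print(tx.update(reading))
```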

  • ROBE Array could let small companies access popular form of AI

    A breakthrough low-memory technique by Rice University computer scientists could put one of the most resource-intensive forms of artificial intelligence — deep-learning recommendation models (DLRM) — within reach of small companies.
    DLRM recommendation systems are a popular form of AI that learns to make suggestions users will find relevant. But with top-of-the-line training models requiring more than a hundred terabytes of memory and supercomputer-scale processing, they’ve only been available to a short list of technology giants with deep pockets.
    Rice’s “random offset block embedding array,” or ROBE Array, could change that. It’s an algorithmic approach for slashing the size of DLRM memory structures called embedding tables, and it will be presented this week at the Conference on Machine Learning and Systems (MLSys 2022) in Santa Clara, California, where it earned Outstanding Paper honors.
    “Using just 100 megabytes of memory and a single GPU, we showed we could match the training times and double the inference efficiency of state-of-the-art DLRM training methods that require 100 gigabytes of memory and multiple processors,” said Anshumali Shrivastava, an associate professor of computer science at Rice who’s presenting the research at MLSys 2022 with ROBE Array co-creators Aditya Desai, a Rice graduate student in Shrivastava’s research group, and Li Chou, a former postdoctoral researcher at Rice who is now at West Texas A&M University.
    “ROBE Array sets a new baseline for DLRM compression,” Shrivastava said. “And it brings DLRM within reach of average users who do not have access to the high-end hardware or the engineering expertise one needs to train models that are hundreds of terabytes in size.”
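    The core idea — replacing enormous per-category embedding tables with lookups into one small shared weight array — can be sketched as follows. The hash function, array size and embedding dimension are illustrative assumptions, not the parameters or exact scheme of the ROBE Array paper.

```python
import numpy as np

# Hedged sketch of hashed-embedding compression: instead of a separate row per
# categorical id, every id is hashed to an offset in one small shared weight
# array, and a contiguous block starting there serves as its embedding.
ARRAY_SIZE = 100_000                 # tiny shared memory pool (illustrative)
EMB_DIM = 16
rng = np.random.default_rng(0)
shared = rng.normal(0, 0.01, ARRAY_SIZE).astype(np.float32)

def hashed_embedding(feature_id: int) -> np.ndarray:
    """Return a block of EMB_DIM weights addressed by a simple hash."""
    offset = (feature_id * 2654435761) % ARRAY_SIZE   # stand-in universal hash
    idx = (offset + np.arange(EMB_DIM)) % ARRAY_SIZE  # wrap around circularly
    return shared[idx]

# Ids drawn from a vocabulary far larger than the shared array still get vectors.
print(hashed_embedding(7).shape, hashed_embedding(12_345_678_901).shape)
```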
    DLRM systems are machine learning algorithms that learn from data. For example, a recommendation system that suggests products for shoppers would be trained with data from past transactions, including the search terms users provided, which products they were offered and which, if any, they purchased. One way to improve the accuracy of recommendations is to sort training data into more categories. For example, rather than putting all shampoos in a single category, a company could create categories for men’s, women’s and children’s shampoos.

  • Underwater messaging app for smartphones

    For millions of people who participate in activities such as snorkeling and scuba diving each year, hand signals are the only option for communicating safety and directional information underwater. While recreational divers may employ around 20 signals, professional divers’ vocabulary can exceed 200 signals on topics ranging from oxygen level, to the proximity of aquatic species, to the performance of cooperative tasks.
    The visual nature of these hand signals limits their effectiveness at distance and in low visibility. Two-way text messaging is a potential alternative, but one that requires expensive custom hardware that is not widely available.
    Researchers at the University of Washington show how to achieve underwater messaging on billions of existing smartphones and smartwatches using only software. The team developed AquaApp, the first mobile app for acoustic-based communication and networking underwater that can be used with existing devices such as smartphones and smartwatches.
    The researchers presented their paper describing AquaApp Aug. 25 at SIGCOMM 2022.
    “Smartphones rely on radio signals like WiFi and Bluetooth for wireless communication. Those don’t propagate well underwater, but acoustic signals do,” said co-lead author Tuochao Chen, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With AquaApp, we demonstrate underwater messaging using the speaker and microphone widely available on smartphones and watches. Other than downloading an app to their phone, the only thing people will need is a waterproof phone case rated for the depth of their dive.”
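    As a hedged illustration of the underlying idea — signalling acoustically with nothing more than a phone's speaker — the snippet below encodes the index of a pre-set message as a short sequence of audio tones using simple frequency-shift keying. The frequencies, symbol length and encoding are assumptions made for illustration, not AquaApp's actual signalling scheme.

```python
import numpy as np

# Illustrative acoustic encoder: turn a message index into an audio waveform
# that a phone speaker could play. Carrier frequencies and symbol duration are
# assumed values, not AquaApp's protocol parameters.
SAMPLE_RATE = 44_100
SYMBOL_SEC = 0.1
F0, F1 = 2_000.0, 3_000.0        # tone frequencies for bit 0 and bit 1 (Hz)

def encode_message_index(index: int, n_bits: int = 8) -> np.ndarray:
    """Return an audio waveform encoding `index` as n_bits FSK symbols."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SEC)) / SAMPLE_RATE
    bits = [(index >> b) & 1 for b in reversed(range(n_bits))]
    tones = [np.sin(2 * np.pi * (F1 if bit else F0) * t) for bit in bits]
    return np.concatenate(tones).astype(np.float32)

waveform = encode_message_index(42)          # e.g. message #42 of the presets
print(waveform.shape, waveform.dtype)        # samples ready to play via the speaker
```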
    The AquaApp interface enables users to select from a list of 240 pre-set messages that correspond to hand signals employed by professional divers, with the 20 most common signals prominently displayed for easy access. Users can also filter messages according to eight categories, including directional indicators, environmental factors and equipment status.

  • Artificial Intelligence Improves Treatment in Women with Heart Attacks

    Heart attacks are one of the leading causes of death worldwide, and women who suffer a heart attack have a higher mortality rate than men. This has been a matter of concern to cardiologists for decades and has led to controversy in the medical field about the causes and effects of possible gaps in treatment. The problem starts with the symptoms: unlike men, who usually experience chest pain with radiation to the left arm, a heart attack in women often manifests as abdominal pain radiating to the back or as nausea and vomiting. These symptoms are unfortunately often misinterpreted by the patients and healthcare personnel — with disastrous consequences.
    Risk profile and clinical picture differ in women
    An international research team led by Thomas F. Lüscher, professor at the Center for Molecular Cardiology at the University of Zurich (UZH), has now investigated the role of biological sex in heart attacks in more detail. “Indeed, there are notable differences in the disease phenotype observed in females and males. Our study shows that women and men differ significantly in their risk factor profile at hospital admission,” says Lüscher. When age differences at admission and existing risk factors such as hypertension and diabetes are disregarded, female heart-attack patients have higher mortality than male patients. “However, when these differences are taken into account statistically, women and men have similar mortality,” the cardiologist adds.
    Current risk models favor under-treatment of female patients
    In their study, published in the journal The Lancet, researchers from Switzerland and the United Kingdom analyzed data from 420,781 patients across Europe who had suffered the most common type of heart attack. “The study shows that established risk models which guide current patient management are less accurate in females and favor the undertreatment of female patients,” says first author Florian A. Wenzl of the Center for Molecular Medicine at UZH. “Using a machine learning algorithm and the largest datasets in Europe, we were able to develop a novel artificial-intelligence-based risk score which accounts for sex-related differences in the baseline risk profile and improves the prediction of mortality in both sexes,” Wenzl says.
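    The general shape of a sex-aware risk score can be sketched in a few lines, as below: include sex alongside baseline risk factors, train a model, and check its discrimination in women and men separately. The data, features and model here are entirely synthetic and hypothetical; they do not reproduce the algorithm or variables of the Lancet study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Schematic sketch on synthetic data with hypothetical features: a mortality
# model that includes sex and baseline risk factors, evaluated per sex.
rng = np.random.default_rng(0)
n = 20_000
sex = rng.integers(0, 2, n)                  # 0 = male, 1 = female
age = rng.normal(63, 12, n) + 4 * sex        # assume women present older on average
sbp = rng.normal(135, 20, n)                 # systolic blood pressure
logit = -6 + 0.07 * age + 0.01 * sbp         # synthetic mortality risk
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([sex, age, sbp])
model = GradientBoostingClassifier().fit(X, died)
pred = model.predict_proba(X)[:, 1]

for label, mask in (("female", sex == 1), ("male", sex == 0)):
    print(label, "AUC:", round(roc_auc_score(died[mask], pred[mask]), 3))
```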
    AI-based risk profiling improves individualized care
    Many researchers and biotech companies agree that artificial intelligence and Big Data analytics are the next step on the road to personalized patient care. “Our study heralds the era of artificial intelligence in the treatment of heart attacks,” says Wenzl. Modern computer algorithms can learn from large data sets to make accurate predictions about the prognosis of individual patients — the key to individualized treatments.
    Thomas F. Lüscher and his team see huge potential in the application of artificial intelligence for the management of heart disease both in male and female patients. “I hope the implementation of this novel score in treatment algorithms will refine current treatment strategies, reduce sex inequalities, and eventually improve the survival of patients with heart attacks — both male and female,” says Lüscher.
    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.

  • From bits to p-bits: One step closer to probabilistic computing

    Tohoku University scientists in Japan have developed a mathematical description of what happens within tiny magnets as they fluctuate between states when an electric current and magnetic field are applied. Their findings, published in the journal Nature Communications, could act as the foundation for engineering more advanced computers that can quantify uncertainty while interpreting complex data.
    Classical computers have gotten us this far, but there are some problems that they cannot address efficiently. Scientists have been working on addressing this by engineering computers that can utilize the laws of quantum physics to recognize patterns in complex problems. But these so-called quantum computers are still in their early stages of development and are extremely sensitive to their surroundings, requiring extremely low temperatures to function.
    Now, scientists are looking at something different: a concept called probabilistic computing. This type of computer, which could function at room temperature, would be able to infer potential answers from complex input. A simplistic example of this type of problem would be to infer information about a person by looking at their purchasing behaviour. Instead of the computer providing a single, discrete result, it picks out patterns and delivers a good guess of what the result might be.
    There could be several ways to build such a computer, but some scientists are investigating the use of devices called magnetic tunnel junctions, which are made from two layers of magnetic metal separated by an ultrathin insulator. When these nanomagnetic devices are thermally activated under an electric current and magnetic field, electrons tunnel through the insulating layer and, depending on their spin, can cause changes, or fluctuations, within the magnets. These fluctuating states, called p-bits — the probabilistic counterpart to the on/off or 0/1 bits of classical computers — could form the basis of probabilistic computing. But to engineer probabilistic computers, scientists need to be able to describe the physics that happens within magnetic tunnel junctions.
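    A commonly used behavioural picture of a p-bit (separate from the switching-exponent physics clarified in this paper) is a binary unit whose state fluctuates randomly, with an input bias merely tilting the odds of each state. The sketch below samples such a unit; the tanh bias law is a standard textbook form used here purely for illustration.

```python
import numpy as np

# Behavioural p-bit model: the bit fluctuates between -1 and +1, and a bias
# input I only tilts the probability of each state via a sigmoidal law.
rng = np.random.default_rng(0)

def p_bit(I: float, n_samples: int = 10_000) -> np.ndarray:
    """Sample a stream of +/-1 states whose average follows tanh(I)."""
    return np.sign(np.tanh(I) - rng.uniform(-1.0, 1.0, n_samples))

for I in (-2.0, 0.0, 2.0):
    samples = p_bit(I)
    print(f"I = {I:+.1f}  mean state = {samples.mean():+.2f}  (tanh(I) = {np.tanh(I):+.2f})")
```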
    This is precisely what Shun Kanai, professor at Tohoku University’s Research Institute of Electrical Communication, and his colleagues have achieved.
    “We have experimentally clarified the ‘switching exponent’ that governs fluctuation under the perturbations caused by magnetic field and spin-transfer torque in magnetic tunnel junctions,” says Kanai. “This gives us the mathematical foundation to implement magnetic tunnel junctions into the p-bit in order to sophisticatedly design probabilistic computers. Our work has also shown that these devices can be used to investigate unexplored physics related to thermally activated phenomena.”
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.