More stories

  • New photonic chip for isolating light may be key to miniaturizing quantum devices

    Light offers an irreplaceable way to interact with our universe. It can travel across galactic distances and collide with our atmosphere, creating a shower of particles that tell a story of past astronomical events. Here on Earth, controlling light lets us send data from one side of the planet to the other.
    Given its broad utility, it’s no surprise that light plays a critical role in enabling 21st century quantum information applications. For example, scientists use laser light to precisely control atoms, turning them into ultra-sensitive measures of time, acceleration, and even gravity. Currently, such early quantum technology is limited by size — state-of-the-art systems would not fit on a dining room table, let alone a chip. For practical use, scientists and engineers need to miniaturize quantum devices, which requires re-thinking certain components for harnessing light.
    Now IQUIST member Gaurav Bahl and his research group have designed a simple, compact photonic circuit that uses sound waves to rein in light. The new study, published in the October 21 issue of the journal Nature Photonics, demonstrates a powerful way to isolate light, that is, to control its directionality. The team’s measurements show that their approach to isolation currently outperforms all previous on-chip alternatives and is optimized for compatibility with atom-based sensors.
    “Atoms are the perfect references anywhere in nature and provide a basis for many quantum applications,” said Bahl, a professor in Mechanical Science and Engineering (MechSe) at the University of Illinois at Urbana-Champaign. “The lasers that we use to control atoms need isolators that block undesirable reflections. But so far the isolators that work well in large-scale experiments have proved tough to miniaturize.”
    Even in the best of circumstances, light is difficult to control — it will reflect, absorb, and refract when encountering a surface. A mirror sends light back where it came from, a shard of glass bends light while letting it through, and dark rocks absorb light and convert it to heat. Essentially, light will gladly scatter every which way off anything in its path. This unruly behavior is also why even a smidgen of light helps us see in the dark.
    Controlling light within large quantum devices is normally an arduous task that involves a vast sea of mirrors, lenses, fibers, and more. Miniaturization requires a different approach to many of these components. In the last several years, scientists and engineers have made significant advances in designing various light-controlling elements on microchips. They can fabricate waveguides, which are channels for transporting light, and can even change its color using certain materials. But forcing light, which is made from tiny blips called photons, to move in one direction while suppressing undesirable backwards reflections is tricky.

  • Two beams are better than one

    Han and Leia. George and Amal. Kermit and Miss Piggy. Gomez and Morticia. History’s greatest couples rely on communication to make them so strong their power cannot be denied.
    But that’s not just true for people (or Muppets), it’s also true for lasers.
    According to new research from the USC Viterbi School of Engineering, recently published in Nature Photonics, pairing two lasers together as a sort of optical “it couple” promises to make wireless communications faster and more secure than ever before. But first, a little background. Most laser-based communication — think fiber optics, commonly used for things like high-speed internet — is transmitted in the form of a laser (optical) beam traveling through a cable. Optical communication is exceptionally fast but is limited by the fact that it must travel through physical cables. Bringing the high-capacity capabilities of lasers to untethered and roving applications — such as airplanes, drones, submarines, and satellites — would be truly exciting and potentially game-changing.
    The USC Viterbi researchers have gotten us one step closer to that goal by focusing on something called Free Space Optical Communication (FSOC). This is no small feat, and it is a challenge researchers have been working on for some time. One major roadblock has been something called “atmospheric turbulence.”
    As a single optical laser beam carrying information travels through the air, it experiences natural turbulence, much like a plane does. Wind and temperature changes in the atmosphere around it cause the beam to become less stable. Our inability to control that turbulence is what has kept FSOC from reaching performance comparable to radio and fiber-optic systems, leaving us stuck with slower radio waves for most wireless communication.
    “While FSOC has been around a while, it has been a fundamental challenge to efficiently recover information from an optical beam that has been affected by atmospheric turbulence,” said Runzhou Zhang, the lead author and a Ph.D. student at USC Viterbi’s Optical Communications Laboratory in the Ming Hsieh Department of Electrical and Computer Engineering.
    The researchers made an advance to solving this problem by sending a second laser beam (called a “pilot” beam) traveling along with the first to act as a partner. Traveling as a couple, the two beams are sent through the same air, experience the same turbulence, and have the same distortion. If only one beam is sent, the receiver must calculate all the distortion the beam experienced along the way before it can decode the data. This severely limits the system’s performance.
    But, when the pilot beam travels alongside the original beam, the distortion is automatically removed. Like Kermit duetting “Rainbow Connection” with Miss Piggy, the information in that beam arrives at its destination clear, crisp and easy to understand. From an engineering perspective, this accomplishment is no small feat. “The problem with radio waves, our current best bet for most wireless communication, is that it is much slower in data rate and much less secure than optical communications,” said Alan Willner, team lead on the paper and USC Viterbi professor of electrical and computer engineering. “With our new approach, we are one step closer to mitigating turbulence in high-capacity optical links.”
    Perhaps most impressively, the researchers did not solve this problem with a new device or material. They simply looked at the physics and changed their perspective. “We used the underlying physics of a well-known device called a photo detector, usually used for detecting intensity of light, and realized it could be used in a new way to make an advance towards solving the turbulence problem for laser communication systems,” said Zhang.
    Think about it this way: When Kermit and Miss Piggy sing their song, both their voices get distorted through the air in a similar way. That makes sense; they’re standing right next to each other, and their sound is traveling through the same atmosphere. What this photo detector does is turn the distortion of Kermit’s voice into the opposite of the distortion for Miss Piggy’s voice. Now, when they are mixed back together, the distortion is automatically canceled in both voices and we hear the song clearly and crisply.
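    To make the pilot-beam idea concrete, here is a minimal numerical sketch of the principle (a toy model, not the team’s actual optical setup): a data beam and a co-propagating pilot beam pick up the same random turbulence phase, and square-law mixing at a photodetector produces a cross term in which that shared phase cancels, leaving only the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of pilot-beam turbulence cancellation (illustrative only).
n = 1000
data_phase = rng.choice([0.0, np.pi], size=n)     # binary phase-encoded data
turbulence = rng.normal(0.0, 2.0, size=n)         # atmospheric phase noise, shared by both beams

E_data = np.exp(1j * (data_phase + turbulence))   # distorted data beam
E_pilot = np.exp(1j * turbulence)                 # distorted pilot beam

# A square-law photodetector measures |E_data + E_pilot|^2; its cross term
# E_data * conj(E_pilot) carries the data phase with the turbulence removed.
cross_term = E_data * np.conj(E_pilot)
recovered_phase = np.abs(np.angle(cross_term))    # 0 or pi, turbulence-free

bits_sent = data_phase > 1.0
bits_recovered = recovered_phase > np.pi / 2
print("bit errors:", int(np.sum(bits_sent != bits_recovered)))   # prints 0
```

    Without the pilot term, recovering the data phase would require estimating the turbulence on its own, which is exactly the hard step the pilot beam removes.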
    With this newly realized application of physics, the team plans to continue exploring how to make the performance even better. “We hope that our approach will one day enable higher-performance and secure wireless links,” said Willner. Such links may be used for anything from high-resolution imaging to high-performance computing.
    Story Source:
    Materials provided by University of Southern California. Original written by Ben Paul.

  • Machine learning can be fair and accurate

    Carnegie Mellon University researchers are challenging a long-held assumption that there is a trade-off between accuracy and fairness when using machine learning to make public policy decisions.
    As the use of machine learning has increased in areas such as criminal justice, hiring, health care delivery and social service interventions, concerns have grown over whether such applications introduce new or amplify existing inequities, especially among racial minorities and people with economic disadvantages. To guard against this bias, adjustments are made to the data, labels, model training, scoring systems and other aspects of the machine learning system. The underlying theoretical assumption is that these adjustments make the system less accurate.
    A CMU team aims to dispel that assumption in a new study, recently published in Nature Machine Intelligence. Rayid Ghani, a professor in the School of Computer Science’s Machine Learning Department (MLD) and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in MLD; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and found the trade-off was negligible in practice across a range of policy domains.
    “You actually can get both. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”
    Ghani and Rodolfa focused on situations where in-demand resources are limited, and machine learning systems are used to help allocate those resources. The researchers looked at systems in four areas: prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce reincarceration; predicting serious safety violations to better deploy a city’s limited housing inspectors; modeling the risk of students not graduating from high school in time to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.
    In each context, the researchers found that models optimized for accuracy — standard practice for machine learning — could effectively predict the outcomes of interest but exhibited considerable disparities in recommendations for interventions. However, when the researchers applied adjustments to the outputs of the models that targeted improving their fairness, they discovered that disparities based on race, age or income — depending on the situation — could be removed without a loss of accuracy.
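    As a rough illustration of what such an output adjustment can look like (a generic sketch on synthetic data, not the CMU team’s actual models or procedure), the snippet below scores a population with a risk model, then compares a single global cutoff against group-specific cutoffs that equalize how often each group is recommended for the intervention, and checks that precision barely changes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: a risk score from some model, group membership,
# and true outcomes correlated with the score. Purely illustrative.
n = 10_000
group = rng.integers(0, 2, size=n)                 # two demographic groups
risk = np.clip(rng.beta(2, 5, size=n) + 0.05 * group, 0, 1)
label = rng.random(n) < risk                       # true outcome of interest

k = 0.20                                           # intervention budget: top 20% of scores

# Single global threshold: groups end up selected at different rates.
sel_global = risk >= np.quantile(risk, 1 - k)

# Post-hoc adjustment: one threshold per group, same overall budget,
# so both groups are selected for the intervention at the same rate.
sel_fair = np.zeros(n, dtype=bool)
for g in (0, 1):
    idx = group == g
    sel_fair[idx] = risk[idx] >= np.quantile(risk[idx], 1 - k)

for name, sel in (("global threshold", sel_global), ("per-group thresholds", sel_fair)):
    rates = [sel[group == g].mean() for g in (0, 1)]
    print(f"{name:22s} selection rates {rates[0]:.2f}/{rates[1]:.2f}  "
          f"precision {label[sel].mean():.3f}")
```

    In this toy setting the per-group thresholds remove the selection-rate gap while precision stays essentially flat, mirroring the paper’s qualitative finding that fairness adjustments need not cost accuracy.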
    Ghani and Rodolfa hope this research will start to change the minds of fellow researchers and policymakers as they consider the use of machine learning in decision making.
    “We want the artificial intelligence, computer science and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” Rodolfa said. “We hope policymakers will embrace machine learning as a tool in their decision making to help them achieve equitable outcomes.”
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • Quantum material to boost terahertz frequencies

    They are regarded as one of the most interesting materials for future electronics: Topological insulators conduct electricity in a special way and hold the promise of novel circuits and faster mobile communications. Under the leadership of the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), a research team from Germany, Spain and Russia has now unravelled a fundamental property of this new class of materials: How exactly do the electrons in the material respond when they are “startled” by short pulses of so-called terahertz radiation? The results are not just significant for our basic understanding of this novel quantum material, but could herald faster mobile data communication or high-sensitivity detector systems for exploring distant worlds in years to come, the team reports in NPJ Quantum Materials.
    Topological insulators are a very recent class of materials which have a special quantum property: on their surface they can conduct electricity almost loss-free while their interior functions as an insulator — no current can flow there. Looking to the future, this opens up interesting prospects: Topological insulators could form the basis for high efficiency electronic components, which makes them an interesting research field for physicists.
    But a number of fundamental questions are still unanswered. What happens, for example, when you give the electrons in the material a “nudge” using specific electromagnetic waves — so-called terahertz radiation — thus generating an excited state? One thing is clear: the electrons want to rid themselves of the energy boost forced upon them as quickly as possible, such as by heating up the crystal lattice surrounding them. In the case of topological insulators, however, it was previously unclear whether getting rid of this energy happened faster in the conducting surface than in the insulating core. “So far, we simply didn’t have the appropriate experiments to find out,” explains study leader Dr. Sergey Kovalev from the Institute of Radiation Physics at HZDR. “Up to now, at room temperature, it was extremely difficult to differentiate the surface reaction from that in the interior of the material.”
    In order to overcome this hurdle, he and his international team developed an ingenious test set-up: intense terahertz pulses hit a sample and excite the electrons. Immediately afterwards, laser flashes illuminate the material and register how the sample responds to the terahertz stimulation. In a second test series, special detectors measure to what extent the sample exhibits an unusual non-linear effect and multiplies the frequency of the terahertz pulses applied. Kovalev and his colleagues conducted these experiments using the TELBE terahertz light source at HZDR’s ELBE Center for High-Power Radiation Sources. Researchers from the Catalan Institute of Nanoscience and Nanotechnology in Barcelona, Bielefeld University, the German Aerospace Center (DLR), the Technical University of Berlin, and Lomonosov University and the Kotelnikov Institute of Radio Engineering and Electronics in Moscow were involved.
    Rapid energy transfer
    The decisive factor was that the international team did not investigate just a single material. Instead, the Russian project partners produced three different topological insulators with different, precisely determined properties: in one case, only the electrons on the surface could directly absorb the terahertz pulses. In the others, the electrons were mainly excited in the interior of the sample. “By comparing these three experiments we were able to differentiate precisely between the behavior of the surface and the interior of the material,” Kovalev explains. “And it emerged that the electrons in the surface got rid of their energy significantly faster than those in the interior of the material.” Apparently, they were able to transfer their energy to the crystal lattice immediately.
    Put into figures: while the surface electrons reverted to their original energetic state in a few hundred femtoseconds, the “inner” electrons took approximately ten times as long, that is, a few picoseconds. “Topological insulators are highly-complex systems. The theory is anything but easy to understand,” emphasizes Michael Gensch, former head of the TELBE facility at HZDR and now head of department in the Institute of Optical Sensor Systems at the German Aerospace Center (DLR) and professor at TU Berlin. “Our results can help decide which of the theoretical ideas hold true.”
    Highly effective multiplication
    But the experiment also augurs well for interesting developments in digital communication such as WLAN and mobile networks. Today, technologies such as 5G operate in the gigahertz range. If we could harness higher frequencies in the terahertz range, significantly more data could be transmitted over a single radio channel. Frequency multipliers could play an important role here: they are able to translate relatively low radio frequencies into significantly higher ones.
    Some time ago, the research team had already realized that, under certain conditions, graphene — a two-dimensional, atomically thin form of carbon — can act as an efficient frequency multiplier. It is able to convert 300 gigahertz radiation into frequencies of a few terahertz. The problem is that when the applied radiation is extremely intense, the efficiency of graphene drops significantly. Topological insulators, on the other hand, keep working even under the most intense stimulation, the new study discovered. “This might mean it’s possible to multiply frequencies from a few terahertz to several dozen terahertz,” surmises HZDR physicist Jan-Christoph Deinert, who heads the TELBE team together with Sergey Kovalev. “At the moment, there is no end in sight when it comes to topological insulators.”
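    The arithmetic behind such a multiplier is simple harmonic generation: a strongly nonlinear response driven at one frequency radiates at odd multiples of it. The sketch below is a generic illustration of that principle with a saturating toy nonlinearity, not a model of the actual surface-electron dynamics measured at TELBE.

```python
import numpy as np

# Generic illustration of frequency multiplication by a nonlinear response.
f0 = 0.3e12                                    # 0.3 THz (300 GHz) drive
t = np.linspace(0, 100e-12, 2**14, endpoint=False)
field = np.cos(2 * np.pi * f0 * t)             # incoming terahertz field

current = np.tanh(3 * field)                   # saturating (odd) nonlinear response

spectrum = np.abs(np.fft.rfft(current))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

for harmonic in (1, 3, 5, 7):
    i = np.argmin(np.abs(freqs - harmonic * f0))
    print(f"{harmonic * f0 / 1e12:.1f} THz component: {spectrum[i]:.0f}")
```

    The odd harmonics at 0.9, 1.5 and 2.1 THz appear because the toy response is an odd, saturating function of the field; the question the study addresses is how efficient that conversion stays as the drive becomes very intense.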
    If such a development comes about, the new quantum materials could be used in a much wider frequency range than with graphene. “At DLR, we are very interested in using quantum materials of this kind in high-performance heterodyne receivers for astronomy, especially in space telescopes,” Gensch explains.

  • Unmasking the magic of superconductivity in twisted graphene

    The discovery in 2018 of superconductivity in two single-atom-thick layers of graphene stacked at a precise angle of 1.1 degrees (called ‘magic’-angle twisted bilayer graphene) came as a big surprise to the scientific community. Since the discovery, physicists have asked whether magic graphene’s superconductivity can be understood using existing theory, or whether fundamentally new approaches are required — such as those being marshalled to understand the mysterious ceramic compounds that superconduct at high temperatures. Now, as reported in the journal Nature, Princeton researchers have settled this debate by showing an uncanny resemblance between the superconductivity of magic graphene and that of high temperature superconductors. Magic graphene may hold the key to unlocking new mechanisms of superconductivity, including high temperature superconductivity.
    Ali Yazdani, the Class of 1909 Professor of Physics and Director of the Center for Complex Materials at Princeton University led the research. He and his team have studied many different types of superconductors over the years and have recently turned their attention to magic bilayer graphene.
    “Some have argued that magic bilayer graphene is actually an ordinary superconductor disguised in an extraordinary material,” said Yazdani, “but when we examined it microscopically it has many of the characteristics of high temperature cuprate superconductors. It is a déjà vu moment.”
    Superconductivity is one of nature’s most intriguing phenomena. It is a state in which electrons flow freely without any resistance. Electrons are subatomic particles that carry negative electric charges; they are vital to our way of life because they power our everyday electronics. In normal circumstances, electrons behave erratically, jumping and jostling against each other in a manner that is ultimately inefficient and wastes energy.
    But under superconductivity, electrons suddenly pair up and start to flow in unison, like a wave. In this state the electrons not only do not lose energy, but they also display many novel quantum properties. These properties have allowed for a number of practical applications, including magnets for MRI machines and particle accelerators, as well as the quantum bits being used to build quantum computers. Superconductivity was first discovered at extremely low temperatures in elements such as aluminum and niobium. In recent years, it has been found close to room temperature under extraordinarily high pressure, and also at temperatures just above the boiling point of liquid nitrogen (77 kelvin) in ceramic compounds.
    But not all superconductors are created equal.

  • AI helping to quantify enzyme activity

    Without enzymes, an organism would not be able to survive. It is these biocatalysts that facilitate a whole range of chemical reactions, producing the building blocks of the cells. Enzymes are also used widely in biotechnology and in our households, where they are used in detergents, for example.
    To describe metabolic processes facilitated by enzymes, scientists refer to what is known as the Michaelis-Menten equation. The equation describes the rate of an enzymatic reaction depending on the concentration of the substrate — which is transformed into the end products during the reaction. A central factor in this equation is the ‘Michaelis constant’, which characterises the enzyme’s affinity for its substrate.
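    For reference, the Michaelis-Menten rate law takes the standard textbook form (shown here for context; the notation is not specific to this study):

```latex
v = \frac{v_{\max}\,[S]}{K_M + [S]}
```

    Here v is the reaction rate, v_max the maximal rate, [S] the substrate concentration and K_M the Michaelis constant: the substrate concentration at which the reaction runs at half its maximal speed, so a small K_M corresponds to a high affinity of the enzyme for its substrate.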
    It takes a great deal of time and effort to measure this constant in a lab. As a result, experimental estimates of these constants exist for only a minority of enzymes. A team of researchers from the HHU Institute of Computational Cell Biology and Chalmers University of Technology in Gothenburg has now chosen a different approach: predicting the Michaelis constants from the structures of the substrates and enzymes using AI.
    They applied their approach, based on deep learning methods, to 47 model organisms ranging from bacteria to plants and humans. Because this approach requires training data, the researchers used known data from almost 10,000 enzyme-substrate combinations. They tested the results using Michaelis constants that had not been used for the learning process.
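    In outline, the workflow resembles a standard supervised-learning pipeline: represent each enzyme-substrate pair numerically, fit a regression model to the known constants (usually on a log scale), and score it on pairs held out of training. The sketch below uses random placeholder features and an off-the-shelf regressor purely to illustrate that pipeline; it does not reproduce the study’s actual feature representations or model architecture.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Placeholder features standing in for real substrate fingerprints and
# enzyme representations; everything here is synthetic and illustrative.
n_pairs = 2_000
substrate = rng.integers(0, 2, size=(n_pairs, 128)).astype(float)   # e.g. molecular fingerprints
enzyme = rng.normal(size=(n_pairs, 64))                             # e.g. learned protein embeddings
X = np.hstack([substrate, enzyme])

# Synthetic target: log10 of the Michaelis constant, weakly tied to the features.
weights = rng.normal(size=X.shape[1]) * 0.05
log_km = X @ weights + rng.normal(0.0, 0.3, size=n_pairs)

# Train on most pairs, evaluate on enzyme-substrate pairs never seen in training.
X_tr, X_te, y_tr, y_te = train_test_split(X, log_km, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 2))
```

    The held-out comparison plays the same role as in the study: predictions are judged against Michaelis constants that were never part of the training data.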
    Prof. Lercher had this to say about the quality of the results: “Using the independent test data, we were able to demonstrate that the process can predict Michaelis constants with an accuracy similar to the differences between experimental values from different laboratories. It is now possible for computers to estimate a new Michaelis constant in just a few seconds without the need for an experiment.”
    The sudden availability of Michaelis constants for all enzymes of model organisms opens up new paths for metabolic computer modelling, as highlighted by the journal PLOS Biology in an accompanying article.
    Story Source:
    Materials provided by Heinrich-Heine University Duesseldorf. Original written by Arne Claussen.

  • New way to find cancer at the nanometer scale

    Diagnosing and treating cancer can be a race against time. By the time the disease is diagnosed in a patient, all too often it is advanced and able to spread throughout the body, decreasing chances of survival. Early diagnosis is key to stopping it.
    In a new Concordia-led paper published in the journal Biosensors and Bioelectronics, researchers describe a new liquid biopsy method using lab-on-a-chip technology that they believe can detect cancer before a tumour is even formed.
    Using magnetic particles coated in a specially designed bonding agent, the liquid biopsy chip attracts and captures particles containing cancer-causing biomarkers. A close analysis can identify the type of cancer they are carrying. This, the researchers say, can significantly improve cancer diagnosis and treatment.
    Trapping the messenger
    The chip targets extracellular vesicles (EVs), a type of particle that is released by most kinds of organic cells. EVs — sometimes called exosomes — are extremely small, usually measuring between 40 and 200 nanometres. But they contain a cargo of proteins, nucleic acids such as RNA, metabolites and other molecules from the parent cell, and they are taken up by other cells. If EVs contain biomarkers associated with cancer and other diseases, they will spread their toxic cargo from cell to cell.
    To capture the cancer-carrying exosomes exclusively, the researchers developed a small microfluidic chip containing magnetic or gold nanoparticles coated with a synthetic polypeptide that acts as a molecular bonding agent. When a droplet of bodily fluid, be it blood, saliva, urine or any other, is run through the chip, the exosomes attach themselves to the treated nanoparticles. After the exosomes are trapped, the researchers separate them from the nanoparticles and carry out proteomic and genomic analysis to determine the specific cancer type.
    “This technique can provide a very early diagnosis of cancer that would help find therapeutic solutions and improve the lives of patients,” says the paper’s senior author Muthukumaran Packirisamy, a professor in the Department of Mechanical, Industrial and Aerospace Engineering and director of Concordia’s Optical Bio-Microsystems Laboratory.
    Alternatives to conventional chemo and exploratory surgeries
    “Liquid biopsies avoid the trauma of invasive biopsies, which involve exploratory surgery,” he adds. “We can get all the cancer markers and cancer prognoses just by examining any bodily fluid.”
    Having detailed knowledge of a particular form of cancer’s genetic makeup will expose its weaknesses to treatment, notes Anirban Ghosh, a co-author and affiliate professor at Packirisamy’s laboratory. “Conventional chemotherapy targets all kinds of cells and results in significant and unpleasant side effects,” he says. “With the precision diagnostics afforded to us here, we can devise a treatment that only targets cancer cells.”
    The paper’s lead author is PhD student Srinivas Bathini, whose academic background is in electrical engineering. He says the interdisciplinary approach to his current area of study has been challenging and rewarding, and he notes that the technology could revolutionize medical diagnostics. The researchers used breast cancer cells in this study but are looking at ways to expand their capabilities to include a wide range of disease testing.
    “Perhaps one day this product could be as readily available as other point-of-care devices, such as home pregnancy tests,” he speculates.
    Story Source:
    Materials provided by Concordia University. Original written by Patrick Lejtenyi.

  • COVID-19 vaccination strategies: When is one dose better than two?

    In many parts of the world, the supply of COVID-19 vaccines continues to lag behind the demand. While most vaccines are designed as a two-dose regimen, some countries, like Canada, have prioritized vaccinating as many people as possible with a single dose before giving out an additional dose.
    In the journal Chaos, published by AIP Publishing, researchers from the Frankfurt School of Finance and Management and the University of California, Los Angeles illustrate the conditions under which a “prime first” vaccine campaign is most effective at stopping the spread of COVID-19.
    The prime first campaign does not suggest people should receive only one dose of the vaccine. Instead, it emphasizes vaccinating large numbers of people as quickly as possible, then doubling back to give out second doses. In comparison, the “prime boost” vaccine campaign prioritizes fully vaccinating fewer people.
    Immunologically speaking, the prime boost scenario is always superior. However, under supply constraints, the advantages of vaccinating twice as many people may outweigh the advantages of a double dose.
    The scientists simulated the transmission of COVID-19 with a susceptible-exposed-infected-recovered-deceased (SEIRD) model. Each of these disease states is associated with a compartment containing individual people. Transitions between compartments depend on disease parameters such as virus transmissibility.
    Each compartment is further divided to account for unvaccinated, partially vaccinated, and fully vaccinated individuals. The researchers measured how each vaccine group compared to the others under different conditions.
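    To make the setup concrete, here is a deliberately simplified toy version of such a vaccination-stratified compartmental model (illustrative parameter values, no waning, and not the parameterization used in the Chaos study): each day a fixed number of doses is handed out either as first doses to the unvaccinated, switching to second doses once everyone is primed (“prime first”), or split between first and second doses (“prime boost”), and the epidemic then runs its course.

```python
import numpy as np

# Toy vaccination-stratified SEIRD model; parameter values are illustrative only.
def simulate(strategy, days=300, N=1_000_000, doses_per_day=3_000,
             beta=0.25, sigma=1 / 4, gamma=1 / 7, mu=0.01,
             eff_one=0.6, eff_two=0.9):
    S = np.array([N - 100.0, 0.0, 0.0])      # susceptibles with 0, 1 or 2 doses
    E, I, R, D = 100.0, 0.0, 0.0, 0.0
    eff = np.array([0.0, eff_one, eff_two])  # per-dose reduction in infection risk
    for _ in range(days):
        # hand out today's doses
        if strategy == "prime boost":
            second = min(doses_per_day / 2, S[1]); S[1] -= second; S[2] += second
            first = min(doses_per_day / 2, S[0]); S[0] -= first; S[1] += first
        elif S[0] > 0:                        # prime first: first doses while anyone is unvaccinated
            moved = min(doses_per_day, S[0]); S[0] -= moved; S[1] += moved
        else:                                 # ... then double back for second doses
            moved = min(doses_per_day, S[1]); S[1] -= moved; S[2] += moved
        # one day of SEIRD dynamics with vaccine-dependent susceptibility
        new_E = beta * I / N * S * (1 - eff)
        S -= new_E
        new_I, resolved = sigma * E, gamma * I
        E += new_E.sum() - new_I
        I += new_I - resolved
        R += (1 - mu) * resolved
        D += mu * resolved
    return D

for strategy in ("prime first", "prime boost"):
    print(f"{strategy:12s} deaths: {simulate(strategy):,.0f}")
```

    Which strategy comes out ahead in this toy model depends entirely on the chosen efficacies and dose supply; adding waning of single-dose protection, as the authors do, is what makes the comparison genuinely parameter-dependent.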
    “We have this giant degree of uncertainty about the parameters of COVID-19,” said author Jan Nagler. “We acknowledge that we don’t know these precise values, so we sample over the entire parameter space. We give a nice idea of when prime first campaigns are better with respect to saving lives than prime boost vaccination.”
    The team found the vaccine waning rate to be a critically important factor in the decision. If the waning, or decrease in vaccine effectiveness, is too strong after a single dose, the double dose vaccination strategy is often the better option.
    However, the preferred strategy flips if the waning rate after a single dose is comparable to the waning rate after a double dose.
    “Our results suggest that better estimates of immunity waning rates are important to decide if prime first protocols are more effective than prime boost vaccination,” said author Lucas Böttcher.
    As the scientific community gathers more data on COVID-19 vaccinations, the scientists hope this model will become more informative for public health experts and politicians who must decide for or against a certain vaccination protocol.
    Story Source:
    Materials provided by American Institute of Physics.