More stories

  • Quantum breakthrough when light makes materials magnetic

    The potential of quantum technology is huge but is today largely limited to the extremely cold environments of laboratories. Now, researchers at Stockholm University, at the Nordic Institute for Theoretical Physics and at the Ca’ Foscari University of Venice have succeeded in demonstrating for the very first time how laser light can induce quantum behavior at room temperature — and make non-magnetic materials magnetic. The breakthrough is expected to pave the way for faster and more energy-efficient computers, information transfer and data storage.
    Within a few decades, advances in quantum technology are expected to revolutionize several of society’s most important areas and pave the way for completely new technological possibilities in communication and energy. Of particular interest to researchers in the field are the peculiar and bizarre properties of quantum particles — which deviate completely from the laws of classical physics and can make materials magnetic or superconducting. By building a better understanding of exactly how and why these quantum states arise, researchers aim to control and manipulate materials so that they acquire quantum mechanical properties.
    So far, researchers have only been able to induce quantum behaviors, such as magnetism and superconductivity, at extremely cold temperatures. Therefore, the potential of quantum research is still limited to laboratory environments.
    Now, a research team from Stockholm University and the Nordic Institute for Theoretical Physics (NORDITA) in Sweden, the University of Connecticut and the SLAC National Accelerator Laboratory in the USA, the National Institute for Materials Science in Tsukuba, Japan, and the Elettra-Sincrotrone Trieste, the ‘Sapienza’ University of Rome and the Ca’ Foscari University of Venice in Italy, is the first in the world to demonstrate in an experiment how laser light can induce magnetism in a non-magnetic material at room temperature. In the study, published in Nature, the researchers subjected the quantum material strontium titanate to short but intense laser pulses of a particular wavelength and polarization to induce magnetism.
    “The innovation in this method lies in the concept of letting light move atoms and electrons in this material in circular motion, so as to generate currents that make it as magnetic as a refrigerator magnet. We have been able to do so by developing a new light source in the far-infrared with a polarization that has a “corkscrew” shape. This is the first time we have been able to induce and clearly see how the material becomes magnetic at room temperature in an experiment. Furthermore, our approach allows us to make magnetic materials out of many insulators, whereas magnets are typically made of metals. In the long run, this opens up completely new applications in society,” says the research leader Stefano Bonetti of Stockholm University and the Ca’ Foscari University of Venice.
    The method is based on the theory of “dynamic multiferroicity,” which predicts that when titanium atoms are “stirred up” with circularly polarized light in an oxide based on titanium and strontium, a magnetic field will be formed. But it is only now that the theory can be confirmed in practice. The breakthrough is expected to have broad applications in several information technologies.
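    For readers who want the idea in symbols, dynamical multiferroicity is usually summarized by a single relation (a generic form from the literature, not notation taken from the paper): a rotating electric polarization behaves like a circulating current and therefore carries a magnetization along its rotation axis.

    ```latex
    % Dynamical multiferroicity (generic form): a time-dependent, rotating electric
    % polarization P(t) induces a magnetization M(t) along the rotation axis.
    \[
      \mathbf{M}(t) \;\propto\; \mathbf{P}(t) \times \partial_t \mathbf{P}(t)
    \]
    % For circular driving, P(t) = P_0 (\cos\Omega t,\ \sin\Omega t,\ 0), the cross
    % product is constant and nonzero: M \propto P_0^2 \, \Omega \, \hat{z}.
    % Linear driving gives P parallel to \partial_t P, hence no magnetization.
    ```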
    “This opens up for ultra-fast magnetic switches that can be used for faster information transfer and considerably better data storage, and for computers that are significantly faster and more energy-efficient,” says Alexander Balatsky, professor of physics at NORDITA.
    In fact, the team’s results have already been reproduced in several other labs, and a publication in the same issue of Nature demonstrates that this approach can be used to write, and hence store, magnetic information. A new chapter in designing materials using light has been opened.

  • AI makes retinal imaging 100 times faster, compared to manual method

    Researchers at the National Institutes of Health applied artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye. They report that with AI, imaging is 100 times faster and improves image contrast 3.5-fold. The advance, they say, will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.
    “Artificial intelligence helps overcome a key limitation of imaging cells in the retina, which is time,” said Johnny Tam, Ph.D., who leads the Clinical and Translational Imaging Section at NIH’s National Eye Institute.
    Tam is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics.
    Imaging retinal pigment epithelium (RPE) cells with AO-OCT comes with new challenges, including a phenomenon called speckle. Speckle interferes with AO-OCT the way clouds interfere with aerial photography: at any given moment, parts of the image may be obscured. Managing speckle is somewhat similar to managing cloud cover. Researchers repeatedly image the cells over a long period of time; as time passes, the speckle shifts, allowing different parts of the cells to become visible. The scientists then undertake the laborious and time-consuming task of piecing together many images to create a speckle-free image of the RPE cells.
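    As a rough illustration of that traditional averaging step (a toy sketch with synthetic data and a simple multiplicative speckle model, not the actual AO-OCT processing pipeline):

    ```python
    # Toy illustration of speckle reduction by frame averaging: many registered,
    # speckled acquisitions of the same cell mosaic are averaged so that the
    # shifting speckle cancels out and the underlying structure emerges.
    import numpy as np

    rng = np.random.default_rng(0)
    true_cells = rng.random((64, 64))                     # stand-in for the RPE cell mosaic
    frames = [true_cells * rng.gamma(shape=4, scale=0.25, size=(64, 64))
              for _ in range(120)]                        # 120 speckled acquisitions, as in the article

    despeckled = np.mean(frames, axis=0)                  # speckle averages toward 1, revealing the cells
    print(np.corrcoef(despeckled.ravel(), true_cells.ravel())[0, 1])
    ```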
    Tam and his team developed a novel AI-based method called parallel discriminator generative adversarial network (P-GAN) — a deep learning algorithm. By feeding the P-GAN network nearly 6,000 manually analyzed AO-OCT images of human RPE, each paired with its corresponding speckled original, the team trained the network to identify and recover speckle-obscured cellular features.
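    The published P-GAN is more elaborate than this (its name refers to parallel discriminators), so the following is only a minimal sketch of the general idea: a conditional GAN trained on paired speckled/clean images, with network sizes, loss weights, and tensor shapes that are illustrative assumptions rather than details from the study.

    ```python
    # Minimal conditional-GAN sketch for despeckling: a generator maps speckled
    # images to clean estimates, a discriminator judges (speckled, clean) pairs.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a speckled image to a despeckled estimate."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Scores whether a (speckled, candidate-clean) pair looks like real training data."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 4, stride=2, padding=1),
            )
        def forward(self, speckled, clean):
            return self.net(torch.cat([speckled, clean], dim=1))

    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Random tensors stand in for the ~6,000 paired AO-OCT images mentioned above.
    speckled = torch.rand(8, 1, 64, 64)
    clean = torch.rand(8, 1, 64, 64)

    for step in range(5):  # a real run would iterate over the full paired dataset
        # Discriminator: real pairs vs. generated pairs
        fake = gen(speckled).detach()
        d_loss = (bce(disc(speckled, clean), torch.ones(8, 1, 8, 8))
                  + bce(disc(speckled, fake), torch.zeros(8, 1, 8, 8)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator: fool the discriminator while staying close to the clean target
        fake = gen(speckled)
        g_loss = bce(disc(speckled, fake), torch.ones(8, 1, 8, 8)) + 100.0 * l1(fake, clean)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
    ```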
    When tested on new images, P-GAN successfully de-speckled the RPE images, recovering cellular details. With a single image capture, it generated results comparable to the manual method, which required the acquisition and averaging of 120 images. On a variety of objective performance metrics that assess features such as cell shape and structure, P-GAN outperformed other AI techniques. Vineeta Das, Ph.D., a postdoctoral fellow in the Clinical and Translational Imaging Section at NEI, estimates that P-GAN reduced imaging acquisition and processing time by about 100-fold. P-GAN also yielded about 3.5 times greater image contrast.
    “Adaptive optics takes OCT-based imaging to the next level,” said Tam. “It’s like moving from a balcony seat to a front row seat to image the retina. With AO, we can reveal 3D retinal structures at cellular-scale resolution, enabling us to zoom in on very early signs of disease.”
    While adding AO to OCT provides a much better view of cells, processing AO-OCT images after they’ve been captured takes much longer than OCT without AO.

    Tam’s latest work targets the RPE, a layer of tissue behind the light-sensing retina that supports the metabolically active retinal neurons, including the photoreceptors. The retina lines the back of the eye and captures, processes, and converts the light that enters the front of the eye into signals that it then transmits through the optic nerve to the brain. Scientists are interested in the RPE because many diseases of the retina occur when the RPE breaks down.
    By integrating AI with AO-OCT, Tam believes that a major obstacle for routine clinical imaging using AO-OCT has been overcome, especially for diseases that affect the RPE, which has traditionally been difficult to image.
    “Our results suggest that AI can fundamentally change how images are captured,” said Tam. “Our P-GAN artificial intelligence will make AO imaging more accessible for routine clinical applications and for studies aimed at understanding the structure, function, and pathophysiology of blinding retinal diseases. Thinking about AI as a part of the overall imaging system, as opposed to a tool that is only applied after images have been captured, is a paradigm shift for the field of AI.”

  • New method of measuring qubits promises ease of scalability in a microscopic package

    Chasing ever-higher qubit counts in near-term quantum computers constantly demands new feats of engineering.
    Among the troublesome hurdles of this scaling-up race is refining how qubits are measured. Devices called parametric amplifiers are traditionally used to do these measurements. But as the name suggests, the device amplifies weak signals picked up from the qubits to conduct the readout, which causes unwanted noise and can lead to decoherence of the qubits if not protected by additional large components. More importantly, the bulky size of the amplification chain becomes technically challenging to work around as qubit counts increase in size-limited refrigerators.
    Cue the Aalto University research group Quantum Computing and Devices (QCD). They have a hefty track record of showing how thermal bolometers can be used as ultrasensitive detectors, and they just demonstrated in an April 10 Nature Electronics paper that bolometer measurements can be accurate enough for single-shot qubit readout.
    A new method of measuring
    To the chagrin of many physicists, the Heisenberg uncertainty principle dictates that one cannot simultaneously know a signal’s position and momentum, or its voltage and current, with arbitrary accuracy. So it goes with qubit measurements conducted with parametric voltage-current amplifiers. But bolometric energy sensing is a fundamentally different kind of measurement — one that evades Heisenberg’s infamous rule. Because a bolometer measures power, or photon number, it is not bound to add the quantum noise that the Heisenberg uncertainty principle imposes on parametric amplifiers.
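    To make the distinction concrete (these are textbook quantum-measurement relations, not results from the Nature Electronics paper): the two quadratures of a microwave signal are conjugate variables, so an amplifier that tracks both must add noise, while a measurement of power alone is not subject to the same bound.

    ```latex
    % Conjugate quadratures of a microwave mode (hbar = 1):
    \[
      \hat X_1 = \tfrac{1}{2}\bigl(\hat a + \hat a^\dagger\bigr), \qquad
      \hat X_2 = \tfrac{1}{2i}\bigl(\hat a - \hat a^\dagger\bigr), \qquad
      \Delta X_1\,\Delta X_2 \;\ge\; \tfrac{1}{4}.
    \]
    % Consequence (Caves bound): a phase-insensitive amplifier with large gain must add
    % noise equivalent to at least half a photon referred to its input. A bolometer
    % instead measures power, i.e. photon number \hat n = \hat a^\dagger \hat a,
    % and is not required to add that half-photon of noise.
    ```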
    Unlike amplifiers, bolometers very subtly sense microwave photons emitted from the qubit via a minimally invasive detection interface. This form factor is roughly 100 times smaller than its amplifier counterpart, making it extremely attractive as a measurement device.
    ‘When thinking of a quantum-supreme future, it is easy to imagine high qubit counts in the thousands or even millions could be commonplace. A careful evaluation of the footprint of each component is absolutely necessary for this massive scale-up. We have shown in the Nature Electronics paper that our nanobolometers could seriously be considered as an alternative to conventional amplifiers. In our very first experiments, we found these bolometers accurate enough for single-shot readout, free of added quantum noise, and they consume 10,000 times less power than the typical amplifiers — all in a tiny bolometer, the temperature-sensitive part of which can fit inside of a single bacterium,’ says Aalto University Professor Mikko Möttönen, who heads the QCD research group.

    Single-shot fidelity is an important metric physicists use to determine how accurately a device can detect a qubit’s state in just one measurement as opposed to an average of multiple measurements. In the case of the QCD group’s experiments, they were able to obtain a single-shot fidelity of 61.8% with a readout duration of roughly 14 microseconds. When correcting for the qubit’s energy relaxation time, the fidelity jumps up to 92.7%.
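    For context, single-shot readout fidelity is commonly defined from the two assignment-error probabilities; the sketch below applies that common definition to synthetic readout data (the paper’s exact convention and its relaxation-time correction may differ).

    ```python
    # Single-shot readout fidelity from thresholded detector outcomes,
    # using the common definition F = 1 - [P(e|g) + P(g|e)] / 2.
    import numpy as np

    def single_shot_fidelity(signals_g, signals_e, threshold):
        """One measurement per preparation, thresholded into 'ground' or 'excited'."""
        p_e_given_g = np.mean(signals_g > threshold)   # prepared |g>, assigned |e>
        p_g_given_e = np.mean(signals_e <= threshold)  # prepared |e>, assigned |g>
        return 1.0 - 0.5 * (p_e_given_g + p_g_given_e)

    rng = np.random.default_rng(1)
    ground = rng.normal(0.0, 1.0, 10_000)    # synthetic detector readings, qubit prepared in |g>
    excited = rng.normal(2.0, 1.0, 10_000)   # synthetic detector readings, qubit prepared in |e>
    print(single_shot_fidelity(ground, excited, threshold=1.0))  # about 0.84 for this toy separation
    ```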
    ‘With minor modifications, we could expect to see bolometers approaching the desired 99.9% single-shot fidelity in 200 nanoseconds. For example, we can swap the bolometer material from metal to graphene, which has a lower heat capacity and can detect very small changes in its energy quickly. And by removing other unnecessary components between the bolometer and the chip itself, we can not only make even greater improvements on the readout fidelity, but we can achieve a smaller and simpler measurement device that makes scaling-up to higher qubit counts more feasible,’ says András Gunyhó, the first author on the paper and a doctoral researcher in the QCD group.
    Prior to demonstrating the high single-shot readout fidelity of bolometers in their most recent paper, the QCD research group first showed that bolometers can be used for ultrasensitive, real-time microwave measurements in 2019. They then published in 2020 a paper in Nature showing how bolometers made of graphene can shorten readout times to well below a microsecond.
    The work was carried out in the Research Council of Finland Centre of Excellence for Quantum Technology (QTF) using OtaNano research infrastructure in collaboration with VTT Technical Research Centre of Finland and IQM Quantum Computers. It was primarily funded by the European Research Council Advanced Grant ConceptQ and the Future Makers Program of the Jane and Aatos Erkko Foundation and the Technology Industries of Finland Centennial Foundation.

  • Breakthrough for next-generation digital displays

    Researchers at Linköping University, Sweden, have developed a digital display screen where the LEDs themselves react to touch, light, fingerprints and the user’s pulse, among other things. Their results, published in Nature Electronics, could be the start of a whole new generation of displays for phones, computers and tablets.
    “We’ve now shown that our design principle works. Our results show that there is great potential for a new generation of digital displays where new advanced features can be created. From now on, it’s about improving the technology into a commercially viable product,” says Feng Gao, professor in optoelectronics at Linköping University (LiU).
    Digital displays have become a cornerstone of almost all personal electronics. However, even the most modern LCD and OLED screens on the market can only display information. Turning them into multi-function displays that detect touch, fingerprints or changing lighting conditions requires a variety of additional sensors layered on top of or around the display.
    Researchers at Linköping University have now developed a completely new type of display in which all of these sensor functions are built into the display’s LEDs, without the need for any additional sensors.
    The LEDs are made of a crystalline material called perovskite. Its excellent ability to both absorb and emit light is the key that enables the newly developed screen.
    In addition to the screen reacting to touch, light, fingerprints and the user’s pulse, the device can also be charged through the screen thanks to the perovskites’ ability to also act as solar cells.
    “Here’s an example — your smartwatch screen is off most of the time. During the off-time of the screen, instead of displaying information, it can harvest light to charge your watch, significantly extending how long you can go between charges,” says Chunxiong Bao, associate professor at Nanjing University, previously a postdoc researcher at LiU and the lead author of the paper.
    For a screen to display all colours, it needs LEDs in three colours — red, green and blue — that glow with different intensities and can thus produce thousands of different colours. The researchers at Linköping University have developed screens with perovskite LEDs in all three colours, paving the way for a screen that can display every colour within the visible light spectrum.
    But there are still many challenges to be solved before the screen is in everyone’s pocket. Zhongcheng Yuan, researcher at the University of Oxford, previously postdoc at LiU and the other lead author of the paper, believes that many of the problems will be solved within ten years:
    “For instance, the service life of perovskite LEDs needs to be improved. At present, the screen only works for a few hours before the material becomes unstable, and the LEDs go out,” he says.

  • Waterproof ‘e-glove’ could help scuba divers communicate

    When scuba divers need to say “I’m okay” or “Shark!” to their dive partners, they use hand signals to communicate visually. But sometimes these movements are difficult to see. Now, researchers reporting in ACS Nano have constructed a waterproof “e-glove” that wirelessly transmits hand gestures made underwater to a computer that translates them into messages. The new technology could someday help divers communicate better with each other and with boat crews on the surface.
    E-gloves — gloves fitted with electronic sensors that translate hand motions into information — are already in development, including designs that allow the wearer to interact with virtual reality environments or help people recovering from a stroke regain fine motor skills. However, rendering the electronic sensors waterproof for use in a swimming pool or the ocean, while also keeping the glove flexible and comfortable to wear, is a challenge. So Fuxing Chen, Lijun Qu, Mingwei Tian and colleagues wanted to create an e-glove capable of sensing hand motions when submerged underwater.
    The researchers began by fabricating waterproof sensors that rely on flexible microscopic pillars inspired by the tube-like feet of a starfish. Using laser writing tools, they created an array of these micropillars on a thin film of polydimethylsiloxane (PDMS), a waterproof plastic commonly used in contact lenses. After coating the PDMS array with a conductive layer of silver, the researchers sandwiched two of the films together with the pillars facing inward to create a waterproof sensor. The sensor — roughly the size of a USB-C port — responds when flexed and can detect a range of pressures, from the light touch of a dollar bill up to the impact of water streaming from a garden hose. The researchers packaged 10 of these waterproof sensors within self-adhesive bandages and sewed them over the knuckles and first finger joints of their e-glove prototype.
    To create a hand-gesture vocabulary for the researchers’ demonstration, a participant wearing the e-glove made 16 gestures, including “OK” and “Exit.” The researchers recorded the specific electronic signals generated by the e-glove sensors for each corresponding gesture. They applied a machine learning technique for translating sign language into words to create a computer program that could translate the e-glove gestures into messages. When tested, the program translated hand gestures made on land and underwater with 99.8% accuracy. In the future, the team says a version of this e-glove could help scuba divers communicate with visual hand signals even when they cannot clearly see their dive partners.
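    The article does not detail the exact model, so the following is only a hedged sketch of the final step it describes: learning to map readings from the 10 glove sensors to one of 16 gestures from labeled examples. The data here are entirely synthetic, and an off-the-shelf classifier stands in for the authors’ sign-language-inspired technique.

    ```python
    # Toy gesture recognition: each gesture is assumed to produce a characteristic
    # pattern across the 10 pressure sensors; a classifier learns the mapping.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_gestures, n_sensors, samples_per_gesture = 16, 10, 200

    prototypes = rng.random((n_gestures, n_sensors))       # hypothetical per-gesture sensor patterns
    X = np.vstack([p + 0.05 * rng.standard_normal((samples_per_gesture, n_sensors))
                   for p in prototypes])                    # noisy repetitions of each gesture
    y = np.repeat(np.arange(n_gestures), samples_per_gesture)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
    ```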
    The authors acknowledge funding from the Shiyanjia Lab, National Key Research and Development Program, Taishan Scholar Program of Shandong Province in China, Shandong Province Key Research and Development Plan, Shandong Provincial Universities Youth Innovation Technology Plan Team, National Natural Science Foundation of China, Natural Science Foundation of Shandong Province of China, Shandong Province Science and Technology Small and Medium sized Enterprise Innovation Ability Enhancement Project, Natural Science Foundation of Qingdao, Qingdao Key Technology Research and Industrialization Demonstration Projects, Qingdao Shinan District Science and Technology Plan Project, and Suqian Key Research and Development Plan.

  • AI-assisted breast-cancer screening may reduce unnecessary testing

    Using artificial intelligence (AI) to supplement radiologists’ evaluations of mammograms may improve breast-cancer screening by reducing false positives without missing cases of cancer, according to a study by researchers at Washington University School of Medicine in St. Louis and Whiterabbit.ai, a Silicon Valley-based technology startup.
    The researchers developed an algorithm that identified normal mammograms with very high sensitivity. They then ran a simulation on patient data to see what would have happened if all of the very low-risk mammograms had been taken off radiologists’ plates, freeing the doctors to concentrate on the more questionable scans. The simulation revealed that fewer people would have been called back for additional testing but that the same number of cancer cases would have been detected.
    “False positives are when you call a patient back for additional testing, and it turns out to be benign,” explained senior author Richard L. Wahl, MD, a professor of radiology at Washington University’s Mallinckrodt Institute of Radiology (MIR) and a professor of radiation oncology. “That causes a lot of unnecessary anxiety for patients and consumes medical resources. This simulation study showed that very low-risk mammograms can be reliably identified by AI to reduce false positives and improve workflows.”
    The study is published April 10 in the journal Radiology: Artificial Intelligence.
    Wahl previously collaborated with Whiterabbit.ai on an algorithm to help radiologists judge breast density on mammograms to identify people who could benefit from additional or alternative screening. That algorithm received clearance from the Food and Drug Administration (FDA) in 2020 and is now marketed by Whiterabbit.ai as WRDensity.
    In this study, Wahl and colleagues at Whiterabbit.ai worked together to develop a way to rule out cancer using AI to evaluate mammograms. They trained the AI model on 123,248 2D digital mammograms (6,161 of which showed cancer) that were largely collected and read by Washington University radiologists. Then, they validated and tested the AI model on three independent sets of mammograms, two from institutions in the U.S. and one from the United Kingdom.
    First, the researchers figured out what the doctors did: how many patients were called back for secondary screening and biopsies; the results of those tests; and the final determination in each case. Then, they applied AI to the datasets to see what would have been different if AI had been used to remove negative mammograms in the initial assessments and physicians had followed standard diagnostic procedures to evaluate the rest.
    For example, consider the largest dataset, which contained 11,592 mammograms. When scaled to 10,000 mammograms (to make the math simpler for the purposes of the simulation), AI identified 34.9% as negative. If those 3,485 negative mammograms had been removed from the workload, radiologists would have made 897 callbacks for diagnostic exams, a reduction of 23.7% from the 1,159 they made in reality. At the next step, 190 people would have been called in a second time for biopsies, a reduction of 6.9% from the 200 in reality. At the end of the process, both the AI rule-out and real-world standard-of-care approaches identified the same 55 cancers. In other words, this study of AI suggests that out of 10,000 people who underwent initial mammograms, 262 could have avoided diagnostic exams, and 10 could have avoided biopsies, without any cancer cases being missed.
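    A toy version of that rule-out logic, with synthetic AI scores and a hypothetical threshold chosen so that no cancer falls below it (an illustration of the simulated workflow, not the authors’ model or data), might look like this:

    ```python
    # Toy rule-out simulation: mammograms the AI scores below a very conservative
    # threshold are removed from the radiologists' worklist; the rest proceed as usual.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    has_cancer = rng.random(n) < 0.0055              # roughly 55 cancers per 10,000, as in the example above
    ai_score = np.where(has_cancer,
                        rng.beta(5, 2, n),           # cancers tend to score higher
                        rng.beta(2, 5, n))           # negatives tend to score lower

    threshold = ai_score[has_cancer].min()           # conservative: keep every cancer above the cutoff
    ruled_out = ai_score < threshold                 # removed from the radiologists' worklist
    print(f"ruled out: {ruled_out.sum()} of {n} ({100 * ruled_out.mean():.1f}%), "
          f"cancers missed: {int(has_cancer[ruled_out].sum())}")
    ```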
    “At the end of the day, we believe in a world where the doctor is the superhero who finds cancer and helps patients navigate their journey ahead,” said co-author Jason Su, co-founder and chief technology officer at Whiterabbit.ai. “The way AI systems can help is by being in a supporting role. By accurately assessing the negatives, it can help remove the hay from the haystack so doctors can find the needle more easily. This study demonstrates that AI can potentially be highly accurate in identifying negative exams. More importantly, the results showed that automating the detection of negatives may also lead to a tremendous benefit in the reduction of false positives without changing the cancer detection rate.”

  • Can the bias in algorithms help us see our own?

    Algorithms were supposed to make our lives easier and fairer: help us find the best job applicants, help judges impartially assess the risks of bail and bond decisions, and ensure that healthcare is delivered to the patients with the greatest need. By now, though, we know that algorithms can be just as biased as the human decision-makers they inform and replace.
    What if that weren’t a bad thing?
    New research by Carey Morewedge, a Boston University Questrom School of Business professor of marketing and Everett W. Lord Distinguished Faculty Scholar, found that people recognize more of their biases in algorithms’ decisions than they do in their own — even when those decisions are the same. The research, published in the Proceedings of the National Academy of Sciences, suggests ways that this awareness might help human decision-makers recognize and correct for their biases.
    “A social problem is that algorithms learn and, at scale, roll out biases in the human decisions on which they were trained,” says Morewedge, who also chairs Questrom’s marketing department. For example: In 2015, Amazon tested (and soon scrapped) an algorithm to help its hiring managers filter through job applicants. They found that the program boosted résumés it perceived to come from male applicants, and downgraded those from female applicants, a clear case of gender bias.
    But that same year, just 39 percent of Amazon’s workforce were women. If the algorithm had been trained on Amazon’s existing hiring data, it’s no wonder it prioritized male applicants — Amazon already was. If its algorithm had a gender bias, “it’s because Amazon’s managers were biased in their hiring decisions,” Morewedge says.
    “Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society,” he says. “Many biases cannot be observed at an individual level. It’s hard to prove bias, for instance, in a single hiring decision. But when we add up decisions within and across persons, as we do when building algorithms, it can reveal structural biases in our systems and organizations.”
    Morewedge and his collaborators — Begüm Çeliktutan and Romain Cadario, both at Erasmus University in the Netherlands — devised a series of experiments designed to tease out people’s social biases (including racism, sexism, and ageism). The team then compared research participants’ recognition of how those biases colored their own decisions versus decisions made by an algorithm. In the experiments, participants sometimes saw the decisions of real algorithms. But there was a catch: other times, the decisions attributed to algorithms were actually the participants’ choices, in disguise.

    Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions. Participants also saw as much bias in the decisions of algorithms as they did in the decisions of other people. (People generally better recognize bias in others than in themselves, a phenomenon called the bias blind spot.) Participants were also more likely to correct for bias in those decisions after the fact, a crucial step for minimizing bias in the future.
    Algorithms Remove the Bias Blind Spot
    The researchers ran sets of participants, more than 6,000 in total, through nine experiments. In the first, participants rated a set of Airbnb listings, which included a few pieces of information about each listing: its average star rating (on a scale of 1 to 5) and the host’s name. The researchers assigned these fictional listings to hosts with names that were “distinctively African American or white,” based on previous research identifying racial bias, according to the paper. The participants rated how likely they were to rent each listing.
    In the second half of the experiment, participants were told about a research finding that explained how the host’s race might bias the ratings. Then, the researchers showed participants a set of ratings and asked them to assess (on a scale of 1 to 7) how likely it was that bias had influenced the ratings.
    Participants saw either their own rating reflected back to them, their own rating under the guise of an algorithm’s, their own rating under the guise of someone else’s, or an actual algorithm rating based on their preferences.
    The researchers repeated this setup several times, testing for race, gender, age, and attractiveness bias in the profiles of Lyft drivers and Airbnb hosts. Each time, the results were consistent. Participants who thought they saw an algorithm’s ratings or someone else’s ratings (whether or not they actually were) were more likely to perceive bias in the results.

    Morewedge attributes this to the different evidence we use to assess bias in others and bias in ourselves. Since we have insight into our own thought process, he says, we’re more likely to trace back through our thinking and decide that it wasn’t biased, perhaps driven by some other factor that went into our decisions. When analyzing the decisions of other people, however, all we have to judge is the outcome.
    “Let’s say you’re organizing a panel of speakers for an event,” Morewedge says. “If all those speakers are men, you might say that the outcome wasn’t the result of gender bias because you weren’t even thinking about gender when you invited these speakers. But if you were attending this event and saw a panel of all-male speakers, you’re more likely to conclude that there was gender bias in the selection.”
    Indeed, in one of their experiments, the researchers found that participants who were more prone to this bias blind spot were also more likely to see bias in decisions attributed to algorithms or others than in their own decisions. In another experiment, they discovered that people more easily saw their own decisions influenced by factors that were fairly neutral or reasonable, such as an Airbnb host’s star rating, compared to a prejudicial bias, such as race — perhaps because admitting to preferring a five-star rental isn’t as threatening to one’s sense of self or how others might view us, Morewedge suggests.
    Algorithms as Mirrors: Seeing and Correcting Human Bias
    In the researchers’ final experiment, they gave participants a chance to correct bias in either their ratings or the ratings of an algorithm (real or not). People were more likely to correct the algorithm’s decisions, which reduced the actual bias in its ratings.
    This is the crucial step for Morewedge and his colleagues, he says. For anyone motivated to reduce bias, being able to see it is the first step. Their research presents evidence that algorithms can be used as mirrors — a way to identify bias even when people can’t see it in themselves.
    “Right now, I think the literature on algorithmic bias is bleak,” Morewedge says. “A lot of it says that we need to develop statistical methods to reduce prejudice in algorithms. But part of the problem is that prejudice comes from people. We should work to make algorithms better, but we should also work to make ourselves less biased.
    “What’s exciting about this work is that it shows that algorithms can codify or amplify human bias, but algorithms can also be tools to help people better see their own biases and correct them,” he says. “Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help better ourselves.”

  • Could new technique for ‘curving’ light be the secret to improved wireless communication?

    While cellular networks and Wi-Fi systems are more advanced than ever, they are also quickly reaching their bandwidth limits. Scientists know that in the near future they’ll need to transition to much higher communication frequencies than what current systems rely on, but before that can happen there are a number of — quite literal — obstacles standing in the way.
    Researchers from Brown University and Rice University say they’ve advanced one step closer to getting around these solid obstacles, like walls, furniture and even people — and they do it by curving light.
    In a new study published in Communications Engineering, the researchers describe how they are helping address one of the biggest logjams emerging in wireless communication. Current systems rely on microwave radiation to carry data, but it’s become clear that the future standard for transmitting data will make use of terahertz waves, which have as much as 100 times the data-carrying capacity of microwaves. One longstanding issue has been that, unlike microwaves, terahertz signals can be blocked by most solid objects, making a direct line of sight between transmitter and receiver a logistical requirement.
    “Most people probably use a Wi-Fi base station that fills the room with wireless signals,” said Daniel Mittleman, a professor in Brown’s School of Engineering and senior author of the study. “No matter where they move, they maintain the link. At the higher frequencies that we’re talking about here, you won’t be able to do that anymore. Instead, it’s going to be a directional beam. If you move around, that beam is going to have to follow you in order to maintain the link, and if you move outside of the beam or something blocks that link, then you’re not getting any signal.”
    The researchers circumvented this by creating a terahertz signal that follows a curved trajectory around an obstacle, instead of being blocked by it. The novel method unveiled in the study could help revolutionize wireless communication and highlights the future feasibility of wireless data networks that run on terahertz frequencies, according to the researchers.
    “We want more data per second,” Mittleman said. “If you want to do that, you need more bandwidth, and that bandwidth simply doesn’t exist using conventional frequency bands.”
    In the study, Mittleman and his colleagues turn to self-accelerating beams: special configurations of electromagnetic waves that naturally bend or curve to one side as they move through space. Such beams have been studied at optical frequencies but are now being explored for terahertz communication.
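    As background on what ‘self-accelerating’ means mathematically (this is the standard optical Airy-beam result, not necessarily the specific beam engineered in the terahertz study): the paraxial wave equation admits solutions whose intensity peak follows a parabolic, curving path even in free space.

    ```latex
    % Standard self-accelerating (Airy) beam in normalized coordinates:
    % s = transverse coordinate, xi = propagation distance. The 1D paraxial equation
    \[
      i\,\frac{\partial \phi}{\partial \xi} + \frac{1}{2}\,\frac{\partial^2 \phi}{\partial s^2} = 0
    \]
    % admits the solution
    \[
      \phi(s,\xi) \;=\; \operatorname{Ai}\!\Bigl(s - \tfrac{\xi^2}{4}\Bigr)\,
      \exp\!\Bigl(i\,\tfrac{s\,\xi}{2} - i\,\tfrac{\xi^3}{12}\Bigr),
    \]
    % whose main lobe follows the parabolic trajectory s = xi^2 / 4,
    % bending to one side as the beam propagates.
    ```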

    The researchers used this idea as a jumping-off point. They engineered transmitters with carefully designed patterns so that the system can manipulate the strength, intensity and timing of the electromagnetic waves that are produced. With this ability to manipulate the light, the researchers make the waves work together to maintain the signal when a solid object blocks a portion of the beam. Essentially, the light beam adjusts to the blockage by shuffling data along the patterns the researchers engineered into the transmitter. When one pattern is blocked, the data transfers to the next one, and then to the next if that one is blocked too, keeping the signal link intact. Without this level of control, the system cannot make any adjustments when the beam is blocked, so no signal gets through.
    This effectively makes the signal bend around objects as long as the transmitter is not completely blocked. If it is completely blocked, another way of getting the data to the receiver will be needed.
    “Curving a beam doesn’t solve all possible blockage problems, but what it does is solve some of them and it solves them in a way that’s better than what others have tried,” said Hichem Guerboukha, who led the study as a postdoctoral researcher at Brown and is now an assistant professor at the University of Missouri — Kansas City.
    The researchers validated their findings through extensive simulations and experiments navigating around obstacles to maintain communication links with high reliability and integrity. The work builds on a previous study from the team that showed terahertz data links can be bounced off walls in a room without dropping too much data.
    By using these curved beams, the researchers hope to one day make wireless networks more reliable, even in crowded or obstructed environments. This could lead to faster and more stable internet connections in places like offices or cities where obstacles are common. Before getting to that point, however, there’s much more basic research to be done and plenty of challenges to overcome as terahertz communication technology is still in its infancy.
    “One of the key questions that everybody asks us is how much can you curve and how far away,” Mittleman said. “We’ve done rough estimations of these things, but we haven’t really quantified it yet, so we hope to map it out.”