More stories

  • Making quantum bits fly

    Two physicists at the University of Konstanz are developing a method that could enable the stable exchange of information in quantum computers. In the leading role: photons that make quantum bits “fly.”
    Quantum computers are considered the next big evolutionary step in information technology. They are expected to solve computing problems that today’s computers simply cannot solve — or would take ages to do so. Research groups around the world are working on making the quantum computer a reality. This is anything but easy, because the basic components of such a computer, the quantum bits or qubits, are extremely fragile. One type of qubit consists of the intrinsic angular momentum (spin) of a single electron, which means it lives at the scale of a single atom. It is hard enough to keep such a fragile system intact, and it is even more difficult to interconnect two or more of these qubits. So how can a stable exchange of information between qubits be achieved?
    Flying qubits
    The two Konstanz physicists Benedikt Tissot and Guido Burkard have now developed a theoretical model of how the information exchange between qubits could succeed by using photons as a “means of transport” for quantum information. The general idea: The information content (electron spin state) of the material qubit is converted into a “flying qubit,” namely a photon. Photons are “light quanta,” the basic building blocks of the electromagnetic radiation field. The special feature of the new model: stimulated Raman emissions are used to convert the qubit state into a photon. This procedure allows more control over the photons. “We are proposing a paradigm shift from optimizing the control during the generation of the photon to directly optimizing the temporal shape of the light pulse in the flying qubit,” explains Guido Burkard.
    Benedikt Tissot compares the basic procedure with the Internet: “In a classic computer, we have our bits, which are encoded on a chip in the form of electrons. If we want to send information over long distances, the information content of the bits is converted into a light signal that is transmitted through optical fibers.” The principle of information exchange between qubits in a quantum computer is very similar: “Here, too, we have to convert the information into states that can be easily transmitted — and photons are ideal for this,” explains Tissot.
    A three-level system for controlling the photon
    “We need to consider several aspects,” says Tissot: “We want to control the direction in which the information flows — as well as when, how quickly and where it flows to. That’s why we need a system that allows for a high level of control.” The researchers’ method makes this control possible by means of resonator-enhanced, stimulated Raman emissions. Behind this term is a three-level system, which leads to a multi-stage procedure. These stages offer the physicists control over the photon that is created. “We have ‘more buttons’ here that we can operate to control the photon,” Tissot illustrates.
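    For readers who want a concrete picture of such a three-level (“Lambda”) system, the sketch below simulates a textbook stimulated-Raman-style (STIRAP) transfer between two ground states via a shared excited state. It is a generic illustration of the kind of three-level control involved, not the authors’ cavity-enhanced model; all pulse shapes and parameters are arbitrary choices of ours.

```python
# Minimal sketch: population transfer in a generic three-level "Lambda" system
# driven by two laser pulses (a stimulated-Raman / STIRAP-style process).
# Textbook illustration only -- not the authors' cavity-enhanced model; all
# parameters and pulse shapes are arbitrary choices for this demo.
import numpy as np
from scipy.integrate import solve_ivp

DELTA = 2 * np.pi * 1.0      # single-photon detuning of the excited state |e>
OMEGA0 = 2 * np.pi * 2.0     # peak Rabi frequency of each drive
T_END = 10.0

def pump(t):
    # couples |0> <-> |e>, applied second
    return OMEGA0 * np.exp(-((t - 6.0) / 1.5) ** 2)

def stokes(t):
    # couples |1> <-> |e>, applied first ("counterintuitive" ordering)
    return OMEGA0 * np.exp(-((t - 4.0) / 1.5) ** 2)

def hamiltonian(t):
    # Rotating frame, two-photon resonance; basis ordering |0>, |1>, |e>
    op, os_ = pump(t), stokes(t)
    return np.array([[0.0,     0.0,     op / 2],
                     [0.0,     0.0,     os_ / 2],
                     [op / 2,  os_ / 2, DELTA]], dtype=complex)

def schroedinger(t, psi):
    return -1j * hamiltonian(t) @ psi

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in ground state |0>
sol = solve_ivp(schroedinger, (0.0, T_END), psi0, max_step=0.01)
populations = np.abs(sol.y[:, -1]) ** 2
# Most of the population should end up in |1>, with |e> barely populated,
# i.e. the information has been moved coherently between the two ground states.
print("final populations |0>, |1>, |e>:", np.round(populations, 3))
```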
    Stimulated Raman emissions are an established method in physics. However, using them to send qubit states directly is unusual. The new method might make it possible to balance the consequences of environmental perturbations against the unwanted side effects of rapid changes in the temporal shape of the light pulse, so that information transport can be implemented more accurately. The detailed procedure was published in the journal Physical Review Research in February 2024.

  • 3D reflector microchips could speed development of 6G wireless

    Cornell University researchers have developed a semiconductor chip that will enable ever-smaller devices to operate at the higher frequencies needed for future 6G communication technology.
    The next generation of wireless communication not only requires greater bandwidth at higher frequencies — it also needs a little extra time. The new chip adds a necessary time delay so signals sent across multiple arrays can align at a single point in space — without disintegrating.
    The team’s paper, “Ultra-Compact Quasi-True-Time-Delay for Boosting Wireless Channel-Capacity,” published March 6 in Nature. The lead author is Bal Govind, a doctoral student in electrical and computer engineering.
    The majority of current wireless communications, such as 5G phones, operate at frequencies below 6 gigahertz (GHz). Technology companies have been aiming to develop a new wave of 6G cellular communications that use frequencies above 20 GHz, where there is more available bandwidth, which means more data can flow and at a faster rate. 6G is expected to be 100 times faster than 5G.
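    The reason more bandwidth translates into a higher data rate is the standard Shannon capacity relation; the numbers below are a rough illustration of ours, not figures from the study.

```latex
% Shannon capacity of a link with bandwidth B and signal-to-noise ratio SNR:
C = B \log_2\left(1 + \mathrm{SNR}\right)
% Rough illustration (our numbers): at a fixed SNR of 15 dB (~31.6 linear),
% widening the band from 0.4 GHz (a typical sub-6 GHz allocation) to 4 GHz of
% millimeter-wave spectrum raises C from roughly 2 Gbit/s to roughly 20 Gbit/s,
% i.e. capacity grows in proportion to the available bandwidth.
```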
    However, since data loss through the environment is greater at higher frequencies, one crucial factor is how the data is relayed. Instead of relying on a single transmitter and a single receiver, most 5G and 6G technologies use a more energy-efficient method: a series of phased arrays of transmitters and receivers.
    “Every frequency in the communication band goes through different time delays,” Govind said. “The problem we’re addressing is decades old — that of transmitting high-bandwidth data in an economical manner so signals of all frequencies line up at the right place and time.”
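    The “line up at the right place and time” problem can be made concrete with a small numerical sketch: steering a wideband array with phase shifters set for one frequency points different frequencies in different directions, while true time delay keeps them aligned. This is a generic textbook model with parameters chosen by us, not the Cornell chip’s architecture.

```python
# Minimal sketch of "beam squint": a phased array steered with phase shifters
# (set for one frequency) points different frequencies in different directions,
# while true time delay keeps every frequency aimed at the same spot.
# Generic textbook model with parameters chosen for illustration -- not the
# Cornell chip's architecture.
import numpy as np

C = 3e8
FC = 28e9                        # assumed mmWave carrier for this demo
n = np.arange(16)                # 16-element linear array
d = C / FC / 2                   # half-wavelength spacing at FC
theta0 = np.deg2rad(30)          # desired steering angle
tau = n * d * np.sin(theta0) / C          # per-element true time delays
phi_fc = 2 * np.pi * FC * tau             # phase-shifter settings frozen at FC

def beam_peak_deg(f, use_time_delay):
    angles = np.deg2rad(np.linspace(0, 60, 2001))
    # phase of each element toward each candidate angle at frequency f
    geom = 2 * np.pi * f * n[:, None] * d * np.sin(angles)[None, :] / C
    applied = (2 * np.pi * f * tau)[:, None] if use_time_delay else phi_fc[:, None]
    array_factor = np.abs(np.exp(1j * (geom - applied)).sum(axis=0))
    return np.rad2deg(angles[np.argmax(array_factor)])

# Sweep a 14 GHz span (echoing the bandwidth quoted in the article).
for f in (21e9, 28e9, 35e9):
    print(f"{f/1e9:4.0f} GHz: phase shifters -> {beam_peak_deg(f, False):5.1f} deg, "
          f"true time delay -> {beam_peak_deg(f, True):5.1f} deg")
```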
    “It’s not just building something with enough delay, it’s building something with enough delay where you still have a signal at the end,” said senior author Alyssa Apsel, professor of engineering. “The trick is that we were able to do it without enormous loss.”
    Govind worked with postdoctoral researcher and co-author Thomas Tapen to design a complementary metal-oxide-semiconductor (CMOS) chip that could tune a time delay over an ultra-broad bandwidth of 14 GHz, with phase resolution as fine as 1 degree.

    “Since the aim of our design was to pack as many of these delay elements as possible,” Govind said, “we imagined what it would be like to wind the path of the signal in three-dimensional waveguides and bounce signals off of them to cause delay, instead of laterally spreading wavelength-long wires across the chip.”
    The team engineered a series of these 3D reflectors strung together to form a “tunable transmission line.”
    The resulting integrated circuit occupies a 0.13-square-millimeter footprint that is smaller than phase shifters yet nearly doubles the channel capacity — i.e., data rate — of conventional wireless arrays. And by boosting the projected data rate, the chip could provide faster service, getting more data to cellphone users.
    “The big problem with phased arrays is this tradeoff between trying to make these things small enough to put on a chip and maintain efficiency,” Apsel said. “The answer that most of the industry has landed on is, ‘Well, we can’t do time delay, so we’re going to do phase delay.’ And that fundamentally limits how much information you can transmit and receive. They just sort of take that hit.
    “I think one of our major innovations is really the question: Do you need to build it this way?” Apsel said. “If we can boost the channel capacity by a factor of 10 by changing one component, that is a pretty interesting game-changer for communications.”

  • Compact chips advance precision timing for communications, navigation and other applications

    The National Institute of Standards and Technology (NIST) and its collaborators have delivered a small but mighty advancement in timing technology: compact chips that seamlessly convert light into microwaves. This chip could improve GPS, the quality of phone and internet connections, the accuracy of radar and sensing systems, and other technologies that rely on high-precision timing and communication.
    This technology reduces something known as timing jitter: small, random changes in the timing of microwave signals. Much like a musician trying to keep a steady beat, the timing of these signals can sometimes waver a bit. The researchers have reduced these timing wavers to a very small fraction of a second — 15 femtoseconds, to be exact, a big improvement over traditional microwave sources — making the signals much more stable and precise in ways that could increase radar sensitivity, the accuracy of analog-to-digital converters and the clarity of astronomical images captured by groups of telescopes.
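    To put 15 femtoseconds in perspective, here is a quick back-of-the-envelope conversion to phase error on a microwave carrier; the 10 GHz carrier frequency is our own illustrative choice, not a figure from the paper.

```python
# Back-of-the-envelope: what 15 femtoseconds of timing jitter means for the
# phase of a microwave carrier. The 10 GHz carrier is our illustrative choice,
# not a figure from the paper.
import math

jitter_s = 15e-15          # 15 femtoseconds
f_carrier = 10e9           # assumed 10 GHz microwave carrier
phase_error_rad = 2 * math.pi * f_carrier * jitter_s
print(f"phase error: {phase_error_rad * 1e3:.2f} mrad "
      f"({math.degrees(phase_error_rad):.3f} degrees of carrier phase)")
# ~0.94 mrad, i.e. about 0.05 degrees: the wave's zero crossings wobble by
# only about 1/6,700 of a cycle.
```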
    The team’s results were published in Nature.
    Shining a Light on Microwaves
    What sets this demonstration apart is the compact design of the components that produce these signals. For the first time, researchers have taken what was once a tabletop-size system and shrunk much of it onto a compact chip, about the same size as a digital camera memory card. Achieving low timing jitter at this scale reduces power usage and makes the technology more usable in everyday devices.
    Right now, several of the components for this technology are located outside of the chip, as researchers test their effectiveness. The ultimate goal of this project is to integrate all the different parts, such as lasers, modulators, detectors and optical amplifiers, onto a single chip.
    By integrating all the components onto a single chip, the team could reduce both the size and power consumption of the system. This means it could be easily incorporated into small devices without requiring lots of energy and specialized training.

    “The current technology takes several labs and many Ph.D.s to make microwave signals happen,” said Frank Quinlan, NIST physical scientist. “A lot of what this research is about is how we utilize the advantages of optical signals by shrinking the size of components and making everything more accessible.”
    To accomplish this, researchers use a semiconductor laser, which acts as a very steady flashlight. They direct the light from the laser into a tiny mirror box called a reference cavity, which is like a miniature room where light bounces around. Inside this cavity, some light frequencies are matched to the size of the cavity so that the peaks and valleys of the light waves fit perfectly between the walls. This causes the light to build up power in those frequencies, which is used to keep the laser’s frequency stable. The stable light is then converted into microwaves using a device called a frequency comb, which changes high-frequency light into lower-pitched microwave signals. These precise microwaves are crucial for technologies like navigation systems, communication networks and radar because they provide accurate timing and synchronization.
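    The chain described above can be summarized with two generic relations (textbook forms, not NIST’s specific parameters): the cavity resonance condition that stabilizes the laser, and the frequency-comb relation that divides the stable optical frequency down to a microwave.

```latex
% Generic textbook relations behind the chain described above (not NIST's
% specific parameters).
% Cavity resonance: only frequencies whose waves "fit" between the mirrors
% build up, where m is an integer, L the cavity length, n the refractive index:
\nu_m = m\,\frac{c}{2 n L}
% Frequency comb: every comb line is an integer multiple of the repetition
% rate f_rep plus a common offset f_ceo:
\nu_N = N f_{\mathrm{rep}} + f_{\mathrm{ceo}}
% Locking the comb to the cavity-stabilized laser ties the microwave f_rep
% (~10 GHz) to an optical frequency (~200 THz). Division by N ~ 2 x 10^4 also
% divides the phase fluctuations by N, so the microwave inherits the optical
% reference's stability.
```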
    “The goal is to make all these parts work together effectively on a single platform, which would greatly reduce the loss of signals and remove the need for extra technology,” said Quinlan. “Phase one of this project was to show that all these individual pieces work together. Phase two is putting them together on the chip.”
    In navigation systems such as GPS, the precise timing of signals is essential for determining location. In communication networks, such as mobile phone and internet systems, accurate timing and synchronization of multiple signals ensure that data is transmitted and received correctly.
    For example, synchronizing signals is important for busy cell networks to handle multiple phone calls. This precise alignment of signals in time enables the cell network to organize and manage the transmission and reception of data from multiple devices, like your cellphone. This ensures that multiple phone calls can be carried over the network simultaneously without experiencing significant delays or drops.
    In radar, which is used for detecting objects like airplanes and weather patterns, precise timing is crucial for accurately measuring how long it takes for signals to bounce back.

    “There are all sorts of applications for this technology. For instance, astronomers who are imaging distant astronomical objects, like black holes, need really low-noise signals and clock synchronization,” said Quinlan. “And this project helps get those low noise signals out of the lab, and into the hands of radar technicians, of astronomers, of environmental scientists, of all these different fields, to increase their sensitivity and ability to measure new things.”
    Working Together Toward a Shared Goal
    Creating this type of technological advancement is not done alone. Researchers from the University of Colorado Boulder, the NASA Jet Propulsion Laboratory, California Institute of Technology, the University of California Santa Barbara, the University of Virginia, and Yale University came together to accomplish this shared goal: to revolutionize how we harness light and microwaves for practical applications.
    “I like to compare our research to a construction project. There’s a lot of moving parts, and you need to make sure everyone is coordinated so the plumber and electrician are showing up at the right time in the project,” said Quinlan. “We all work together really well to keep things moving forward.”
    This collaborative effort underscores the importance of interdisciplinary research in driving technological progress, Quinlan said.

  • AI can speed design of health software

    Artificial intelligence helped clinicians to accelerate the design of diabetes prevention software, a new study finds.
    Published online March 6 in the Journal of Medical Internet Research, the study examined the capabilities of a form of artificial intelligence (AI) called generative AI, or GenAI, which predicts likely options for the next word in any sentence based on how billions of people used words in context on the internet. A side effect of this next-word prediction is that generative AI “chatbots” like ChatGPT can generate replies to questions in realistic language and produce clear summaries of complex texts.
    Led by researchers at NYU Langone Health, the current paper explores the application of ChatGPT to the design of a software program that uses text messages to counter diabetes by encouraging patients to eat healthier and get exercise. The team tested whether AI-enabled interchanges between doctors and software engineers could hasten the development of such a personalized automatic messaging system (PAMS).
    In the current study, eleven evaluators in fields ranging from medicine to computer science successfully used ChatGPT to produce a version of the diabetes tool over the course of 40 hours, whereas an original, non-AI-enabled effort had required more than 200 programmer hours.
    “We found that ChatGPT improves communications between technical and non-technical team members to hasten the design of computational solutions to medical problems,” says study corresponding author Danissa Rodriguez, PhD, assistant professor in the Department of Population Health at NYU Langone, and member of its Healthcare Innovation Bridging Research, Informatics and Design (HiBRID) Lab. “The chatbot drove rapid progress throughout the software development life cycle, from capturing original ideas, to deciding which features to include, to generating the computer code. If this proves to be effective at scale it could revolutionize healthcare software design.”
    AI as Translator
    Generative AI tools are sensitive, say the study authors, and asking a question of the tool in two subtly different ways may yield divergent answers. The skill required to frame the questions asked of chatbots in a way that elicits the desired response, called prompt engineering, combines intuition and experimentation. Physicians and nurses, with their understanding of nuanced medical contexts, are well positioned to engineer strategic prompts that improve communications with engineers, without having to learn to write computer code.
    These design efforts, in which care providers, the would-be users of the new software, advise engineers about what it must include, can be compromised, however, by attempts to converse in “different” technical languages. In the current study, the clinical members of the team were able to type their ideas in plain English, enter them into ChatGPT, and ask the tool to convert their input into the kind of language required to guide coding work by the team’s software engineers. AI could take software design only so far before human software developers were needed for final code generation, but the overall process was greatly accelerated, say the authors.
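    As a purely hypothetical sketch of the kind of “translation” step described above: the study itself used interactive ChatGPT sessions, so the script, model name, and prompt wording below are illustrative assumptions of ours, not part of the published workflow.

```python
# Hypothetical sketch of the "translation" step: turning a clinician's
# plain-English idea into a structured requirement for software engineers.
# The study used interactive ChatGPT sessions; this script, the model name,
# and the prompt wording are illustrative assumptions, not the study's code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

clinician_note = (
    "Patients at risk of diabetes should get a supportive text each morning. "
    "If they logged no exercise for three days, nudge them gently about a walk."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the clinician's request as a numbered list of software "
                "requirements: triggers, message content rules, and data needed."
            ),
        },
        {"role": "user", "content": clinician_note},
    ],
)
print(response.choices[0].message.content)
```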
    “Our study found that ChatGPT can democratize the design of healthcare software by enabling doctors and nurses to drive its creation,” says senior study author Devin Mann, MD, director of the HiBRID Lab, and strategic director of Digital Health Innovation within NYU Langone Medical Center Information Technology (MCIT). “GenAI-assisted development promises to deliver computational tools that are usable, reliable, and in line with the highest coding standards.”
    Along with Rodriguez and Mann, study authors from the Department of Population Health at NYU Langone were Katharine Lawrence, MD, Beatrix Brandfield-Harvey, Lynn Xu, Sumaiya Tasneem, and Defne Levine. Javier Gonzalez, technical lead in the HiBRID Lab, was also a study author. This work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases grant 1R18DK118545-01A1.

  • Can you tell AI-generated people from real ones?

    If you recently had trouble figuring out if an image of a person is real or generated through artificial intelligence (AI), you’re not alone.
    A new study from University of Waterloo researchers found that people had more difficulty than expected distinguishing real people from artificially generated ones.
    In the Waterloo study, 260 participants were provided with 20 unlabeled pictures: 10 of real people obtained from Google searches, and 10 generated by Stable Diffusion or DALL-E, two commonly used AI programs that generate images.
    Participants were asked to label each image as real or AI-generated and explain why they made their decision. Only 61 per cent of participants could tell the difference between AI-generated people and real ones, far below the 85 per cent threshold that researchers expected.
    “People are not as adept at making the distinction as they think they are,” said Andreea Pocol, a PhD candidate in Computer Science at the University of Waterloo and the study’s lead author.
    Participants paid attention to details such as fingers, teeth, and eyes as possible indicators when looking for AI-generated content — but their assessments weren’t always correct.
    Pocol noted that the nature of the study allowed participants to scrutinize photos at length, whereas most internet users look at images in passing.

    “People who are just doomscrolling or don’t have time won’t pick up on these cues,” Pocol said.
    Pocol added that the extremely rapid rate at which AI technology is developing makes it particularly difficult to understand the potential for malicious or nefarious action posed by AI-generated images. The pace of academic research and legislation isn’t often able to keep up: AI-generated images have become even more realistic since the study began in late 2022.
    These AI-generated images are particularly threatening as a political and cultural tool, with which any user could create fake images of public figures in embarrassing or compromising situations.
    “Disinformation isn’t new, but the tools of disinformation have been constantly shifting and evolving,” Pocol said. “It may get to a point where people, no matter how trained they will be, will still struggle to differentiate real images from fakes. That’s why we need to develop tools to identify and counter this. It’s like a new AI arms race.”
    The study, “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” appears in the journal Advances in Computer Graphics.

  • Shortcut to Success: Toward fast and robust quantum control through accelerating adiabatic passage

    Researchers at Osaka University’s Institute of Scientific and Industrial Research (SANKEN) used the shortcuts-to-adiabaticity (STA) method to greatly speed up the adiabatic evolution of spin qubits. The spin-flip fidelity after pulse optimization can be as high as 97.8% in GaAs quantum dots. This work may be applicable to other adiabatic-passage protocols and will be useful for fast, high-fidelity quantum control.
    A quantum computer uses the superposition of “0” and “1” states to perform information processing, which is completely different from classical computing and allows certain problems to be solved at a much faster rate. High-fidelity quantum state operation in large enough programmable qubit spaces is required to achieve the “quantum advantage.” The conventional method for changing quantum states uses pulse control, which is sensitive to noise and control errors. In contrast, adiabatic evolution always keeps the quantum system in its eigenstate. It is robust to noise but requires a certain length of time.
    Recently, a team from SANKEN used the STA method to greatly accelerate the adiabatic evolution of spin qubits in gate-defined quantum dots for the first time. The theory they used was proposed by Xi Chen and colleagues. “We used the transitionless quantum driving style of STA, thus allowing the system to always remain in its ideal eigenstate even under rapid evolution,” co-author Takafumi Fujita explains.
    Based on the target evolution of the spin qubits, the group’s experiment adds an additional effective driving term to suppress diabatic errors, which guarantees a fast and nearly ideal adiabatic evolution. The dynamical properties were also investigated and confirmed the effectiveness of this method. Additionally, the modified pulse after optimization was able to further suppress noise and improve the efficiency of quantum state control. In the end, the group achieved a spin-flip fidelity of up to 97.8%. According to their estimates, the acceleration of adiabatic passage would be even better in Si or Ge quantum dots, which have less nuclear spin noise.
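    For intuition, the sketch below shows transitionless (counterdiabatic) driving for the simplest possible case, a single two-level sweep: adding a small extra term keeps the state in the instantaneous eigenstate even when the sweep is fast. It is a generic textbook example with parameters chosen by us, not a model of the GaAs spin-qubit experiment.

```python
# Minimal sketch of transitionless (counterdiabatic) driving for the simplest
# case: a fast two-level sweep. The extra sigma_y term keeps the state in the
# instantaneous eigenstate even though the sweep is far too fast to be adiabatic.
# Generic textbook example with arbitrary parameters -- not the GaAs experiment.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import eigh

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

T = 2.0                        # total sweep time (deliberately fast)
OMEGA = 1.0                    # transverse coupling (sets the minimum gap)

def delta(t):                  # detuning swept linearly from -10 to +10
    return -10.0 + 20.0 * t / T

def ddelta_dt(t):              # constant sweep rate
    return 20.0 / T

def h0(t):
    return 0.5 * (delta(t) * SZ + OMEGA * SX)

def h_cd(t):
    # counterdiabatic term: (dtheta/dt / 2) * sigma_y, with theta = arctan(OMEGA/delta)
    theta_dot = -OMEGA * ddelta_dt(t) / (delta(t) ** 2 + OMEGA ** 2)
    return 0.5 * theta_dot * SY

def sweep_fidelity(with_cd):
    ham = (lambda t: h0(t) + h_cd(t)) if with_cd else h0
    rhs = lambda t, psi: -1j * (ham(t) @ psi)
    psi0 = eigh(h0(0.0))[1][:, 0].astype(complex)   # initial ground state
    psi_final = solve_ivp(rhs, (0.0, T), psi0, max_step=1e-3).y[:, -1]
    target = eigh(h0(T))[1][:, 0]                   # ground state after the sweep
    return abs(np.vdot(target, psi_final)) ** 2

print(f"fidelity, bare fast sweep     : {sweep_fidelity(False):.3f}")
print(f"fidelity, with counterdiabatic: {sweep_fidelity(True):.3f}")
```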
    “This provides a fast and high-fidelity quantum control method. Our results may also be useful to accelerate other adiabatic passages in quantum dots,” corresponding author Akira Oiwa says. As a promising candidate for quantum computing, gate-defined quantum dots have long coherence times and good compatibility with the modern semiconductor industry. The team is looking for further applications in gate-defined quantum dot systems, such as extending the approach to more spin qubits. They hope this method will lead to a simpler and more feasible route to fault-tolerant quantum information processing.

  • Running performance helped by mathematical research

    How to optimise running? A new mathematical model [1] has shown, with great precision, the impact that physiological and psychological parameters have on running performance and provides tips for optimised training. The model grew out of research conducted by a French-British team including two CNRS researchers [2], the results of which will appear on March 5th 2024 in the journal Frontiers in Sports and Active Living.
    This innovative model was developed thanks to extremely precise data [3] from the performances of Matthew Hudson-Smith (400m), Femke Bol (400m), and Jakob Ingebrigtsen (1500m) at the 2022 European Athletics Championships in Munich, and of Gaia Sabbatini (1500m) at the 2021 European Athletics U23 Championships in Tallinn. It led to an optimal control problem for finishing time, effort, and energy expenditure. This is the first time that such a model has also considered the variability of motor control, i.e., the role of the brain in the process of producing movement. The simulations give the researchers access to the physiological parameters of the runners — especially oxygen consumption (or VO2) [4] and energy expenditure during the race — as well as letting them compute how those parameters vary. Quantifying costs and benefits in the model provides immediate access to the best strategy for achieving the runner’s optimal performance.
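    For a sense of what such an optimal control problem looks like, here is a deliberately simplified, Keller-type formulation; the published model is richer, adding VO2 kinetics, energy replenishment and motor-control variability, so the equations below are our own illustrative reduction, not the authors’ system.

```latex
% Deliberately simplified, Keller-type time-optimal running model (our own
% illustrative reduction; the published model adds VO2 kinetics, energy
% replenishment and motor-control variability):
\begin{aligned}
  &\min_{f(\cdot)}\; T \quad \text{subject to} \\
  &\dot{v}(t) = f(t) - \frac{v(t)}{\tau}, \qquad \int_0^T v(t)\,dt = D, \\
  &\dot{e}(t) = \sigma - f(t)\,v(t), \qquad e(t) \ge 0, \quad e(0) = e_0, \\
  &0 \le f(t) \le f_{\max},
\end{aligned}
% where v is the runner's velocity, f the propulsive force per unit mass,
% tau a friction/fatigue time constant, e the anaerobic energy reserve,
% sigma the aerobic contribution (related to VO2), and D the race distance.
```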
    The study details multiple criteria, such as the importance of a quick start in the first 50 metres (due to the need for fast oxygen kinetics), or reducing the decrease in velocity in a 400m race. The scientists also demonstrated that improving the aerobic metabolism (oxygen uptake) and the ability to maintain VO2 are crucial elements to 1500m race performance.
    The development of this model represents considerable progress in studying variations in physiological parameters during championship races, for which in vivo measurements are not possible.
    Notes:
    [1] For more details on the model, “Be a champion, 40 facts you didn’t know about sports and science,” Amandine Aftalion, Springer, to appear May 14th 2024.
    [2] From the Centre for Analysis and Social Mathematics (CNRS/EHESS), in collaboration with the Jacques-Louis Lions Laboratory (CNRS/Sorbonne Université/Université Paris Cité) and the Carnegie School of Sport at Leeds Beckett University.
    [3] Values measured every 100 milliseconds.
    [4] Rate at which oxygen is transformed into energy.

  • Robotic-assisted surgery for gallbladder cancer as effective as traditional surgery

    Approximately 2,000 people die of gallbladder cancer (GBC) each year in the U.S., with only one in five cases diagnosed at an early stage. With GBC ranked as the most common biliary tract cancer and the 17th most deadly cancer worldwide, proper management of the disease demands pressing attention. For patients who are diagnosed, surgery is the most promising curative treatment. While minimally invasive surgical techniques, including laparoscopic and robotic surgery, have been increasingly adopted for gastrointestinal malignancies, there are reservations about using minimally invasive surgery for gallbladder cancer.
    A new study by researchers at Boston University Chobanian & Avedisian School of Medicine has found that robotic-assisted surgery for GBC is as effective as traditional open and laparoscopic methods, with added benefits in precision and quicker post-operative recovery.
    “Our study demonstrates the viability of robotic surgery for gallbladder cancer treatment, a field where minimally invasive approaches have been cautiously adopted due to concerns over oncologic efficacy and technical challenges,” says corresponding author Eduardo Vega, MD, assistant professor of surgery at the school.
    The researchers conducted a systematic review of the literature focusing on comparing patient outcomes following robotic, open and laparoscopic surgeries. This involved analyzing studies that reported on oncological results and perioperative benefits, such as operation time, blood loss and recovery period.
    According to the researchers, there has been reluctance to utilize robotic surgery for GBC due to fears of dissemination of the tumor via tumor manipulation, bile spillage and technical challenges, including liver resection and adequate removal of lymph nodes. “Since its early use, robotic surgery has advanced in ways that provide surgeons technical advantages over laparoscopic surgery, improving dexterity and visualization of the surgical field. Additionally, robotic assistance has eased the process of detailed dissection around blood vessels as well as knot tying and suturing, and provides high-definition, three-dimensional vision, allowing the surgeon to perform under improved ergonomics,” said Vega.
    The researchers believe these findings are significant since they suggest robotic surgery is a safer and potentially less painful option for gallbladder cancer treatment, with a faster recovery time. “Clinically, it could lead to the adoption of robotic surgery as a standard care option for gallbladder cancer, improving patient outcomes and potentially reducing healthcare costs due to shorter hospital stays,” he added.
    These findings appear online in the American Journal of Surgery.