More stories

  • A linear path to efficient quantum technologies

    Researchers at the University of Stuttgart have demonstrated that a key ingredient for many quantum computation and communication schemes can be performed with an efficiency that exceeds the commonly assumed upper theoretical limit — thereby opening up new perspectives for a wide range of photonic quantum technologies.
    Quantum science has not only revolutionized our understanding of nature, but is also inspiring groundbreaking new computing, communication and sensor devices. Exploiting quantum effects in such ‘quantum technologies’ typically requires a combination of deep insight into the underlying quantum-physical principles, systematic methodological advances, and clever engineering. And it is precisely this combination that researchers in the group of Prof. Stefanie Barz at the University of Stuttgart and the Center for Integrated Quantum Science and Technology (IQST) have delivered in a recent study, in which they improved the efficiency of an essential building block of many quantum devices beyond a seemingly inherent limit.
    From philosophy to technology
    One of the protagonists in the field of quantum technologies is a property known as quantum entanglement. The first step in the development of this concept involved a passionate debate between Albert Einstein and Niels Bohr. In a nutshell, their argument was about how information can be shared across several quantum systems. Importantly, this can happen in ways that have no analogue in classical physics. The discussion that Einstein and Bohr started remained largely philosophical until the 1960s, when the physicist John Stewart Bell devised a way to resolve the disagreement experimentally. Bell’s framework was first explored in experiments with photons, the quanta of light. Three pioneers in this field — Alain Aspect, John Clauser and Anton Zeilinger — were jointly awarded last year’s Nobel Prize in Physics for their groundbreaking work towards quantum technologies.
    Bell himself died in 1990, but his name is immortalized not least in the so-called Bell states. These describe the quantum states of two particles that are as strongly entangled as is possible. There are four Bell states in all, and Bell-state measurements — which determine which of the four states a quantum system is in — are an essential tool for putting quantum entanglement to practical use. Perhaps most famously, Bell-state measurements are the central component in quantum teleportation, which in turn makes most quantum communication and quantum computation possible.
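    For reference, in standard textbook notation (not taken from the Stuttgart paper), the four Bell states of two qubits can be written as
        |\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}} (|00\rangle \pm |11\rangle),
        |\Psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}} (|01\rangle \pm |10\rangle).
    In the standard linear-optics scheme, a Bell-state measurement can unambiguously identify only the two |\Psi^{\pm}\rangle states, which is the origin of the 50 percent bound discussed next.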
    But there is a problem: when experiments are performed using conventional optical elements, such as mirrors, beam splitters and waveplates, then two of the four Bell states have identical experimental signatures and are therefore indistinguishable from each other. This means that the overall probability of success (and thus the success rate of, say, a quantum-teleportation experiment) is inherently limited to 50 percent if only such ‘linear’ optical components are used. Or is it?
    With all the bells and whistles
    This is where the work of the Barz group comes in. As they recently reported in the journal Science Advances, doctoral researchers Matthias Bayerbach and Simone D’Aurelio carried out Bell-state measurements in which they achieved a success rate of 57.9 percent. But how did they reach an efficiency that should have been unattainable with the tools available? More

  • In the age of ChatGPT, what’s it like to be accused of cheating?

    While the public release of the artificial intelligence-driven large language model chatbot, ChatGPT, has created a great deal of excitement around the promise of the technology and expanded use of AI, it has also seeded a good bit of anxiety around what a program that can churn out a passable college-level essay in seconds means for the future of teaching and learning. Naturally, this consternation drove a proliferation of detection programs — of varying effectiveness — and a commensurate increase in accusations of cheating. But how are the students feeling about all of this? Recently published research by Drexel University’s Tim Gorichanaz, Ph.D., provides a first look into some of the reactions of college students who have been accused of using ChatGPT to cheat.
    The study, published in the journal Learning: Research and Practice as part of a series on generative AI, analyzed 49 Reddit posts and their related discussions from college students who had been accused of using ChatGPT on an assignment. Gorichanaz, who is an assistant teaching professor in Drexel’s College of Computing & Informatics, identified a number of themes in these conversations, most notably frustration from wrongly accused students, anxiety about the possibility of being wrongly accused and how to avoid it, and creeping doubt and cynicism about the need for higher education in the age of generative artificial intelligence.
    “As the world of higher ed collectively scrambles to understand and develop best practices and policies around the use of tools like ChatGPT, it’s vital for us to understand how the fascination, anxiety and fear that comes with adopting any new educational technology also affects the students who are going through their own process of figuring out how to use it,” Gorichanaz said.
    Of the 49 students who posted, 38 said they did not use ChatGPT, but detection programs like Turnitin or GPTZero had nonetheless flagged their assignment as being AI-generated. As a result, many of the discussions took on the tenor of a legal argument. Students asked how they could present evidence to prove that they hadn’t cheated, while some commenters advised continuing to deny having used the program, since the detectors are unreliable.
    “Many of the students expressed concern over the possibility of being wrongly accused by an AI detector,” Gorichanaz said. “Some discussions went into great detail about how students could collect evidence to prove that they had written an essay without AI, including tracking draft versions and using screen recording software. Others suggested running a detector on their own writing until it came back without being incorrectly flagged.”
    Another theme that emerged in the discussions was the perceived role of colleges and universities as “gatekeepers” to success and, as a result, the high stakes associated with being wrongly accused of cheating. This led to questions about the institutions’ preparedness for the new technology and concerns that professors would be too dependent on AI detectors — whose accuracy remains in doubt.
    “The conversations happening online evolved from specific doubts about the accuracy of AI detection and universities’ policies around the use of generative AI, to broadly questioning the role of higher education in society and suggesting that the technology will render institutions of higher education irrelevant in the near future,” Gorichanaz said. More

  • Ecology and artificial intelligence: Stronger together

    Many of today’s artificial intelligence systems loosely mimic the human brain. In a new paper, researchers suggest that another branch of biology — ecology — could inspire a whole new generation of AI to be more powerful, resilient, and socially responsible.
    Published September 11 in Proceedings of the National Academy of Sciences, the paper argues for a synergy between AI and ecology that could both strengthen AI and help to solve complex global challenges, such as disease outbreaks, loss of biodiversity, and climate change impacts.
    The idea arose from the observation that AI can be shockingly good at certain tasks, but still far from useful at others — and that AI development is hitting walls that ecological principles could help it to overcome.
    “The kinds of problems that we deal with regularly in ecology are not only challenges that AI could benefit from in terms of pure innovation — they’re also the kinds of problems where if AI could help, it could mean so much for the global good,” explained Barbara Han, a disease ecologist at Cary Institute of Ecosystem Studies, who co-led the paper along with IBM Research’s Kush Varshney. “It could really benefit humankind.”
    How AI can help ecology
    Ecologists — Han included — are already using artificial intelligence to search for patterns in large data sets and to make more accurate predictions, such as whether new viruses might be capable of infecting humans, and which animals are most likely to harbor those viruses.
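    As a purely illustrative sketch of this kind of pattern-finding (the data, features and model below are invented for illustration and are not the Cary Institute’s models), a classifier can be trained on traits of known host species and then used to rank unscreened species by their likelihood of harboring a virus:
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)

        # Hypothetical trait matrix: body mass, litter size, overlap with humans, ...
        X = rng.random((200, 4))
        # Synthetic "known host" labels, loosely tied to one trait for the demo
        y = (X[:, 2] + 0.3 * rng.normal(size=200) > 0.6).astype(int)

        model = GradientBoostingClassifier().fit(X, y)

        candidates = rng.random((5, 4))                # traits of unscreened species
        print(model.predict_proba(candidates)[:, 1])   # ranked likelihood of hosting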
    However, the new paper argues that there are many more possibilities for applying AI in ecology, such as in synthesizing big data and finding missing links in complex systems. More

  • Not too big: Machine learning tames huge data sets

    A machine-learning algorithm demonstrated the capability to process data that exceeds a computer’s available memory by identifying a massive data set’s key features and dividing them into manageable batches that don’t choke computer hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory’s Summit, the world’s fifth-fastest supercomputer.
    Equally efficient on laptops and supercomputers, the highly scalable algorithm solves hardware bottlenecks that prevent processing information from data-rich applications in cancer research, satellite imagery, social media networks, national security science and earthquake research, to name just a few.
    “We developed an ‘out-of-memory’ implementation of the non-negative matrix factorization method that allows you to factorize larger data sets than previously possible on given hardware,” said Ismael Boureima, a computational physicist at Los Alamos National Laboratory. Boureima is first author of the paper in The Journal of Supercomputing on the record-breaking algorithm. “Our implementation simply breaks down the big data into smaller units that can be processed with the available resources. Consequently, it’s a useful tool for keeping up with exponentially growing data sets.”
    “Traditional data analysis demands that data fit within memory constraints. Our approach challenges this notion,” said Manish Bhattarai, a machine learning scientist at Los Alamos and co-author of the paper. “We have introduced an out-of-memory solution. When the data volume exceeds the available memory, our algorithm breaks it down into smaller segments. It processes these segments one at a time, cycling them in and out of the memory. This technique equips us with the unique ability to manage and analyze extremely large data sets efficiently.”
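    The paper describes a distributed GPU implementation; the core idea of cycling blocks through limited memory can nevertheless be sketched in a few lines. The sketch below uses standard multiplicative updates for non-negative matrix factorization on synthetic data, with the matrix supplied as a list of row blocks so that only one block needs to be resident at a time. The block size, rank and iteration count are arbitrary choices for illustration, not values from the Los Alamos code.
        import numpy as np

        def out_of_memory_nmf(blocks, n_cols, rank=8, iters=100, eps=1e-9):
            """Factorize V ~= W H, with V supplied as a sequence of row blocks.

            Each block is visited one at a time (in practice it would be read
            from disk or staged onto a GPU), so the full matrix never has to
            fit in memory at once.
            """
            rng = np.random.default_rng(0)
            H = rng.random((rank, n_cols))
            W_blocks = [rng.random((b.shape[0], rank)) for b in blocks]

            for _ in range(iters):
                # Update each row block of W using only its own slice of V.
                for V_i, W_i in zip(blocks, W_blocks):
                    W_i *= (V_i @ H.T) / (W_i @ (H @ H.T) + eps)
                # Update H by accumulating numerator and denominator over blocks.
                num = np.zeros_like(H)
                den = np.zeros((rank, rank))
                for V_i, W_i in zip(blocks, W_blocks):
                    num += W_i.T @ V_i
                    den += W_i.T @ W_i
                H *= num / (den @ H + eps)
            return W_blocks, H

        # Synthetic demo: a 10,000 x 500 non-negative matrix streamed as 10 row blocks.
        rng = np.random.default_rng(1)
        blocks = [rng.random((1000, 500)) for _ in range(10)]
        W_blocks, H = out_of_memory_nmf(blocks, n_cols=500, rank=8, iters=50)
    The same pattern extends, broadly speaking, to the distributed setting by assigning blocks to different nodes and reducing the accumulated numerator and denominator across them, which is how such block-wise schemes can exploit machines like Summit.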
    The distributed algorithm for modern and heterogeneous high-performance computer systems can be useful on hardware as small as a desktop computer, or as large and complex as Chicoma, Summit or the upcoming Venado supercomputers, Boureima said.
    “The question is no longer whether it is possible to factorize a larger matrix, but rather how long the factorization is going to take,” Boureima said.
    The Los Alamos implementation takes advantage of hardware features such as GPUs to accelerate computation and fast interconnect to efficiently move data between computers. At the same time, the algorithm efficiently gets multiple tasks done simultaneously. More

  • When electronic health records are hard to use, patient safety may be at risk

    New research suggests that hospital electronic health records (EHRs) that are difficult to use are also less likely to catch medical errors that could harm patients.
    As clinicians navigate EHR systems, alerts, reminders, and clinical guidelines pop up to steer decision making. Yet a common complaint is that these notifications are distracting rather than helpful. These frustrations could signal that built-in safety mechanisms similarly suffer from suboptimal design, suggests the new study. Researchers found that EHR systems rated as being difficult to operate did not perform well in safety tests.
    “Poor usability of EHRs is the number one complaint of doctors, nurses, pharmacists, and most health care professionals,” says David Classen, M.D., the study’s corresponding author and a professor of internal medicine at University of Utah Health. “This correlates with poor performance in terms of safety.”
    Classen likens the situation to the software problems that led to two deadly Boeing 737 MAX airplane crashes in 2018 and 2019. In both cases, pilots’ struggles to use the system foreshadowed deeper safety issues.
    “Our findings suggest that we need to improve EHR systems to make them both easier to use and safer,” Classen says. He collaborated on the study with senior author David Bates, M.D., at Brigham and Women’s Hospital and Harvard T.H. Chan School of Public Health, and scientists at University of California San Diego Health; KLAS Enterprises, LLC; and University of California, San Francisco.
    The research appears in the September 11 issue of JAMA Network Open.
    Experts estimate that as many as 400,000 people are injured each year from medical errors that occur in hospitals. Medical professionals predicted that widespread use of EHRs would mitigate the problem. But research published by Classen, Bates and colleagues in 2020 showed that EHRs failed to reliably detect medical errors that could harm patients, including dangerous drug interactions. Additional reports have indicated that poorly designed EHRs could be a contributing factor. More

  • Wifi can read through walls

    Researchers in UC Santa Barbara professor Yasamin Mostofi’s lab have proposed a new foundation that can enable high-quality imaging of still objects with only WiFi signals. Their method uses the Geometrical Theory of Diffraction and the corresponding Keller cones to trace edges of the objects. The technique has also enabled, for the first time, imaging, or reading, the English alphabet through walls with WiFi, a task deemed too difficult for WiFi due to the complex details of the letters.
    “Imaging still scenery with WiFi is considerably challenging due to the lack of motion,” said Mostofi, a professor of electrical and computer engineering. “We have then taken a completely different approach to tackle this challenging problem by focusing on tracing the edges of the objects instead.” The proposed methodology and experimental results appeared in the Proceedings of the 2023 IEEE National Conference on Radar (RadarConf) on June 21, 2023.
    This innovation builds on previous work in the Mostofi Lab, which since 2009 has pioneered sensing with everyday radio frequency signals such as WiFi for several different applications, including crowd analytics, person identification, smart health and smart spaces.
    “When a given wave is incident on an edge point, a cone of outgoing rays emerges according to Keller’s Geometrical Theory of Diffraction (GTD), referred to as a Keller cone,” Mostofi explained. The researchers note that this interaction is not limited to visibly sharp edges but applies to a broader set of surfaces with a small enough curvature.
    “Depending on the edge orientation, the cone then leaves different footprints (i.e., conic sections) on a given receiver grid. We then develop a mathematical framework that uses these conic footprints as signatures to infer the orientation of the edges, thus creating an edge map of the scene,” Mostofi continued.
    More specifically, the team proposed a Keller cone-based imaging projection kernel. This kernel is implicitly a function of the edge orientations, a relationship that is then exploited to infer the existence and orientation of the edges via hypothesis testing over a small set of possible edge orientations. In other words, if the existence of an edge is determined, the edge orientation that best matches the resulting Keller cone-based signature is chosen for the point being imaged.
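    The paper’s projection kernel is not reproduced here, but the hypothesis-testing step can be illustrated generically: for each candidate orientation, a predicted signature on the receiver grid is correlated with the measured response, and the best match is kept only if it clears a detection threshold. The signature model and threshold below are placeholder assumptions, not the Keller cone-based kernel from the RadarConf paper.
        import numpy as np

        def detect_edge(measured, candidate_orientations, signature_model, threshold=0.5):
            """Hypothesis test over a small set of possible edge orientations.

            measured: 2D array of received power on the receiver grid.
            signature_model: function mapping an orientation (radians) to a
                predicted 2D signature on the same grid (stand-in for the
                Keller cone-based imaging projection kernel).
            Returns (orientation, score), or (None, score) if no edge is detected.
            """
            m = (measured - measured.mean()) / (measured.std() + 1e-12)
            best_theta, best_score = None, -np.inf
            for theta in candidate_orientations:
                s = signature_model(theta)
                s = (s - s.mean()) / (s.std() + 1e-12)
                score = float((m * s).mean())        # normalized correlation
                if score > best_score:
                    best_theta, best_score = theta, score
            return (best_theta, best_score) if best_score >= threshold else (None, best_score)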
    “Edges of real-life objects have local dependencies,” said Anurag Pallaprolu, the lead Ph.D. student on the project. “Thus, once we find the high-confidence edge points via the proposed imaging kernel, we then propagate their information to the rest of the points using Bayesian information propagation. This step can further help improve the image, since some of the edges may be in a blind region, or can be overpowered by other edges that are closer to the transmitters.” Finally, once an image is formed, the researchers can further improve it by using image-completion tools from computer vision. More

  • Researchers make a significant step towards reliably processing quantum information

    Using laser light, researchers have developed the most robust method currently known to control individual qubits made of the chemical element barium. The ability to reliably control a qubit is an important achievement for realizing future functional quantum computers.
    This new method, developed at the University of Waterloo’s Institute for Quantum Computing (IQC), uses a small glass waveguide to separate laser beams and focus them four microns apart, about four-hundredths of the width of a single human hair. The precision and extent to which each focused laser beam can be controlled in parallel on its target qubit are unmatched by previous research.
    “Our design limits the amount of crosstalk (the amount of light falling on neighbouring ions) to the very small relative intensity of 0.01 per cent, which is among the best in the quantum community,” said Dr. K. Rajibul Islam, a professor at IQC and Waterloo’s Department of Physics and Astronomy. “Unlike previous methods to create agile controls over individual ions, the fibre-based modulators do not affect each other.
    “This means we can talk to any ion without affecting its neighbours while also retaining the capability to control each individual ion to the maximum possible extent. This is the most flexible ion qubit control system with this high precision that we know of anywhere, in both academia and industry.”
    The researchers targeted barium ions, which are becoming increasingly popular in the field of trapped ion quantum computation. Barium ions have convenient energy states that can be used as the zero and one levels of a qubit and be manipulated with visible green light, unlike the higher energy ultraviolet light needed for other atom types for the same manipulation. This allows the researchers to use commercially available optical technologies that are not available for ultraviolet wavelengths.
    The researchers created a waveguide chip that divides a single laser beam into 16 different channels of light. Each channel is then directed into individual optical fibre-based modulators which independently provide agile control over each laser beam’s intensity, frequency, and phase. The laser beams are then focused down to their small spacing using a series of optical lenses similar to a telescope. The researchers confirmed each laser beam’s focus and control by measuring them with precise camera sensors.
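    The article describes hardware, not software, but the control architecture it outlines (one beam split into 16 channels, each with its own modulator setting intensity, frequency and phase) maps naturally onto a small data structure. The class and method names below are hypothetical illustrations, not IQC’s actual control code.
        from dataclasses import dataclass

        @dataclass
        class ChannelSetting:
            """Target parameters for one fibre-based modulator (one addressed ion)."""
            intensity: float   # relative intensity, 0.0 to 1.0
            frequency: float   # drive-frequency offset in Hz
            phase: float       # phase in radians

        class BeamArrayController:
            """Hypothetical controller for 16 independently modulated beams."""
            def __init__(self, n_channels: int = 16):
                self.settings = [ChannelSetting(0.0, 0.0, 0.0) for _ in range(n_channels)]

            def address(self, channel: int, setting: ChannelSetting) -> None:
                # Only the addressed channel changes; with ~0.01 per cent relative
                # crosstalk, neighbouring ions see roughly 1e-4 of this intensity.
                self.settings[channel] = setting

        ctrl = BeamArrayController()
        ctrl.address(7, ChannelSetting(intensity=1.0, frequency=2.0e5, phase=0.0))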
    “This work is part of our effort at the University of Waterloo to build barium ion quantum processors using atomic systems,” said Dr. Crystal Senko, Islam’s co-principal investigator and a faculty member at IQC and Waterloo’s Department of Physics and Astronomy. “We use ions because they are identical, nature-made qubits, so we don’t need to fabricate them. Our task is to find ways to control them.”
    The new waveguide method demonstrates a simple and precise method of control, showing promise for manipulating ions to encode and process quantum data and for implementation in quantum simulation and computing. More

  • Magnetic whirls pave the way for energy-efficient computing

    Researchers at Johannes Gutenberg University Mainz and the University of Konstanz in Germany, as well as at Tohoku University in Japan, have been able to increase the diffusion of magnetic whirls, so-called skyrmions, by a factor of ten.
    In today’s world, our lives are unimaginable without computers. Until now, these devices have processed information primarily using electrons as charge carriers, with the components themselves heating up significantly in the process. Active cooling is thus necessary, which comes with high energy costs. Spintronics aims to solve this problem: instead of utilizing the electron flow for information processing, it relies on the electrons’ spin, their intrinsic angular momentum. This approach is expected to have a positive impact on the size, speed, and sustainability of computers or specific components.
    Magnetic whirls store and process information
    Researchers often consider not just the spin of an individual electron, but magnetic whirls composed of numerous spins. These whirls, called skyrmions, emerge in thin magnetic metallic layers and can be considered as two-dimensional quasi-particles. On the one hand, the whirls can be deliberately moved by applying a small electric current to the thin layers; on the other hand, they move randomly and extremely efficiently due to diffusion. The feasibility of creating a functional computer based on skyrmions was demonstrated by a team of researchers from Johannes Gutenberg University Mainz (JGU), led by Professor Dr. Mathias Kläui, using an initial prototype. This prototype consisted of thin, stacked metallic layers, some only a few atomic layers thick.
    Energy efficiency: Tenfold increase in whirl diffusion
    In collaboration with the University of Konstanz and Tohoku University in Japan, researchers of Mainz University have now achieved another step towards spin-based, unconventional computing: They were able to increase the diffusion of skyrmions by a factor of about ten using synthetic antiferromagnets, which drastically reduces the energy consumption and increases the speed of such a potential computer. “The reduction of energy usage in electronic devices is one of the biggest challenges in fundamental research,” emphasized Professor Dr. Ulrich Nowak, who led the theoretical part of the project in Konstanz.
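    For orientation, a standard relation (not a result of the paper) connects the diffusion coefficient D of a two-dimensional quasi-particle to how far it wanders in a time t:
        \langle r^{2}(t) \rangle = 4 D t
    A roughly tenfold larger D therefore lets the whirls cover the same distances about ten times faster by thermal motion alone, which is why increased diffusion translates into faster, lower-power operation of a skyrmion-based device.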
    But what is an antiferromagnet, and what is it used for? Normal ferromagnets consist of many small spins that are all coupled to point in the same direction, thereby creating a large magnetic moment. In antiferromagnets, the spins are aligned alternately antiparallel, i.e., a spin and its direct neighbors point in opposite directions. As a result, there is no net magnetic moment, even though the spins remain antiferromagnetically well-ordered. Antiferromagnets have significant advantages, such as switching dynamics that are three orders of magnitude faster, better stability, and the potential for higher storage densities. These properties are being intensively studied in multiple research projects. More
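    A toy calculation makes the distinction concrete (illustrative only, using idealized spins of +1 or -1):
        import numpy as np

        n = 10
        ferromagnet     = np.ones(n)                                # all spins parallel
        antiferromagnet = np.array([(-1) ** i for i in range(n)])   # alternating up/down

        print(ferromagnet.sum())       # net moment scales with n (here 10.0)
        print(antiferromagnet.sum())   # net moment 0, although the order is perfect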