More stories

  • ChatGPT is still no match for humans when it comes to accounting

    Last month, OpenAI launched its newest AI chatbot product, GPT-4. According to the folks at OpenAI, the bot, which uses machine learning to generate natural language text, passed the bar exam with a score in the 90th percentile, passed 13 of 15 AP exams and got a nearly perfect score on the GRE Verbal test.
    Inquiring minds at BYU and 186 other universities wanted to know how OpenAI’s tech would fare on accounting exams. So, they put the original version, ChatGPT, to the test. The researchers say that while the bot still has work to do in the realm of accounting, it’s a game changer that will transform the way everyone teaches and learns — for the better.
    “When this technology first came out, everyone was worried that students could now use it to cheat,” said lead study author David Wood, a BYU professor of accounting. “But opportunities to cheat have always existed. So for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening.”
    Since its debut in November 2022, ChatGPT has become the fastest-growing technology platform ever, reaching 100 million users in under two months. In response to intense debate about how models like ChatGPT should factor into education, Wood decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.
    His co-author recruiting pitch on social media exploded: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions. They also recruited undergrad BYU students (including Wood’s daughter, Jessica) to feed another 2,268 textbook test bank questions to ChatGPT. The questions covered accounting information systems (AIS), auditing, financial accounting, managerial accounting and tax, and varied in difficulty and type (true/false, multiple choice, short answer, etc.).
    Although ChatGPT’s performance was impressive, the students performed better. Students scored an overall average of 76.7%, compared to ChatGPT’s 47.4%. On 11.3% of questions, ChatGPT scored higher than the student average, doing particularly well on AIS and auditing. But the AI bot did worse on tax, financial, and managerial assessments, possibly because it struggled with the mathematical processes those questions required.
    When it came to question type, ChatGPT did better on true/false questions (68.7% correct) and multiple-choice questions (59.5%), but struggled with short-answer questions (scoring between 28.7% and 39.1%). In general, higher-order questions were harder for ChatGPT to answer. In fact, sometimes ChatGPT would provide authoritative written descriptions for incorrect answers, or answer the same question in different ways.
    “It’s not perfect; you’re not going to be using it for everything,” said Jessica Wood, currently a freshman at BYU. “Trying to learn solely by using ChatGPT is a fool’s errand.”
    The researchers also uncovered some other fascinating trends through the study, including:
    • ChatGPT doesn’t always recognize when it is doing math and makes nonsensical errors, such as adding two numbers in a subtraction problem or dividing numbers incorrectly.
    • ChatGPT often provides explanations for its answers, even when they are incorrect. Other times its descriptions are accurate, but it then proceeds to select the wrong multiple-choice answer.
    • ChatGPT sometimes makes up facts. For example, when providing a reference, it generates a real-looking citation that is completely fabricated; the work, and sometimes the authors, do not even exist.
    That said, the authors fully expect GPT-4 to improve exponentially on the accounting questions posed in their study and on the issues mentioned above. What they find most promising is how the chatbot can help improve teaching and learning, including the ability to design and test assignments, or perhaps be used for drafting portions of a project.
    “It’s an opportunity to reflect on whether we are teaching value-added information or not,” said study coauthor and fellow BYU accounting professor Melissa Larson. “This is a disruption, and we need to assess where we go from here. Of course, I’m still going to have TAs, but this is going to force us to use them in different ways.”

  • Reinforcement learning: From board games to protein design

    Scientists have successfully applied reinforcement learning to a challenge in molecular biology.
    The team of researchers developed powerful new protein design software adapted from a strategy proven adept at board games like Chess and Go. In one experiment, proteins made with the new approach were found to be more effective at generating useful antibodies in mice.
    The findings, reported April 21 in Science, suggest that this breakthrough may soon lead to more potent vaccines. More broadly, the approach could lead to a new era in protein design.
    “Our results show that reinforcement learning can do more than master board games. When trained to solve long-standing puzzles in protein science, the software excelled at creating useful molecules,” said senior author David Baker, professor of biochemistry at the UW School of Medicine in Seattle and a recipient of the 2021 Breakthrough Prize in Life Sciences.
    “If this method is applied to the right research problems,” he said, “it could accelerate progress in a variety of scientific fields.”
    The research is a milestone in tapping artificial intelligence to conduct protein science research. The potential applications are vast, from developing more effective cancer treatments to creating new biodegradable textiles.

    Reinforcement learning is a type of machine learning in which a computer program learns to make decisions by trying different actions and receiving feedback. Such an algorithm can learn to play chess, for example, by testing millions of different moves that lead to victory or defeat on the board. The program is designed to learn from these experiences and become better at making decisions over time.
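The trial-and-feedback loop just described can be made concrete with a minimal tabular Q-learning sketch (a standard reinforcement-learning algorithm; the five-state corridor below is an illustrative toy, not the protein-design software from the study):

```python
import random

# Minimal tabular Q-learning on a 5-state corridor: start at state 0,
# reward 1.0 for reaching state 4; actions are 0 (left) and 1 (right).
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move left/right, clamped to the corridor ends."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                      # episodes of trial and error
    s, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the current estimate.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Feedback: nudge Q toward reward plus discounted future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy should always walk right, toward the reward.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy is `[1, 1, 1, 1]`: the program has learned from experience which decisions lead to the goal, the same principle the protein-design software applies to molecular shapes.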
    To make a reinforcement learning program for protein design, the scientists gave the computer millions of simple starting molecules. The software then made ten thousand attempts at randomly improving each toward a predefined goal. The computer lengthened the proteins or bent them in specific ways until it learned how to contort them into desired shapes.
    Isaac D. Lutz, Shunzhi Wang, and Christoffer Norn, all members of the Baker Lab, led the research. Their team’s Science manuscript is titled “Top-down design of protein architectures with reinforcement learning.”
    “Our approach is unique because we use reinforcement learning to solve the problem of creating protein shapes that fit together like pieces of a puzzle,” explained co-lead author Lutz, a doctoral student at the UW Medicine Institute for Protein Design. “This simply was not possible using prior approaches and has the potential to transform the types of molecules we can build.”
    As part of this study, the scientists manufactured hundreds of AI-designed proteins in the lab. Using electron microscopes and other instruments, they confirmed that many of the protein shapes created by the computer were indeed realized in the lab.

    “This approach proved not only accurate but also highly customizable. For example, we asked the software to make spherical structures with no holes, small holes, or large holes. Its potential to make all kinds of architectures has yet to be fully explored,” said co-lead author Shunzhi Wang, a postdoctoral scholar at the UW Medicine Institute for Protein Design.
    The team concentrated on designing new nano-scale structures composed of many protein molecules. This required designing both the protein components themselves and the chemical interfaces that allow the nano-structures to self-assemble.
    Electron microscopy confirmed that numerous AI-designed nano-structures were able to form in the lab. As a measure of how accurate the design software had become, the scientists observed many unique nano-structures in which every atom was found to be in the intended place. In other words, the deviation between the intended and realized nano-structure was on average less than the width of a single atom. This is called atomically accurate design.
    The authors foresee a future in which this approach could enable them and others to create therapeutic proteins, vaccines, and other molecules that could not have been made using prior methods.
    Researchers from the UW Medicine Institute for Stem Cell and Regenerative Medicine used primary cell models of blood vessel cells to show that the designed protein scaffolds outperformed previous versions of the technology. For example, because the receptors that help cells receive and interpret signals were clustered more densely on the more compact scaffolds, they were more effective at promoting blood vessel stability.
    Hannele Ruohola-Baker, a UW School of Medicine professor of biochemistry and one of the study’s authors, spoke to the implications of the investigation for regenerative medicine: “The more accurate the technology becomes, the more it opens up potential applications, including vascular treatments for diabetes, brain injuries, strokes, and other cases where blood vessels are at risk. We can also imagine more precise delivery of factors that we use to differentiate stem cells into various cell types, giving us new ways to regulate the processes of cell development and aging.”
    This work was funded by the National Institutes of Health (P30 GM124169, S10OD018483, 1U19AG065156-01, T90 DE021984, 1P01AI167966); Open Philanthropy Project and The Audacious Project at the Institute for Protein Design; Novo Nordisk Foundation (NNF170C0030446); Microsoft; and Amgen. Research was in part conducted at the Advanced Light Source, a national user facility operated by Lawrence Berkeley National Laboratory on behalf of the Department of Energy.
    News release written by Ian Haydon, UW Medicine Institute for Protein Design.

  • AI system can generate novel proteins that meet structural design targets

    MIT researchers are using artificial intelligence to design new proteins that go beyond those found in nature.
    They developed machine-learning algorithms that can generate proteins with specific structural features, which could be used to make materials that have certain mechanical properties, like stiffness or elasticity. Such biologically inspired materials could potentially replace materials made from petroleum or ceramics, but with a much smaller carbon footprint.
    The researchers from MIT, the MIT-IBM Watson AI Lab, and Tufts University employed a generative model, which is the same type of machine-learning model architecture used in AI systems like DALL-E 2. But instead of using it to generate realistic images from natural language prompts, like DALL-E 2 does, they adapted the model architecture so it could predict amino acid sequences of proteins that achieve specific structural objectives.
    In a paper to be published in Chem, the researchers demonstrate how these models can generate realistic, yet novel, proteins. The models, which learn biochemical relationships that control how proteins form, can produce new proteins that could enable unique applications, says senior author Markus Buehler, the Jerry McAfee Professor in Engineering and professor of civil and environmental engineering and of mechanical engineering.
    For instance, this tool could be used to develop protein-inspired food coatings, which could keep produce fresh longer while being safe for humans to eat. And the models can generate millions of proteins in a few days, quickly giving scientists a portfolio of new ideas to explore, he adds.
    “When you think about designing proteins nature has not discovered yet, it is such a huge design space that you can’t just sort it out with a pencil and paper. You have to figure out the language of life, the way amino acids are encoded by DNA and then come together to form protein structures. Before we had deep learning, we really couldn’t do this,” says Buehler, who is also a member of the MIT-IBM Watson AI Lab.

    Joining Buehler on the paper are lead author Bo Ni, a postdoc in Buehler’s Laboratory for Atomistic and Molecular Mechanics; and David Kaplan, the Stern Family Professor of Engineering and professor of bioengineering at Tufts.
    Adapting new tools for the task
    Proteins are formed by chains of amino acids, folded together in 3D patterns. The sequence of amino acids determines the mechanical properties of the protein. While scientists have identified thousands of proteins created through evolution, they estimate that an enormous number of amino acid sequences remain undiscovered.
    To streamline protein discovery, researchers have recently developed deep learning models that can predict the 3D structure of a protein from its amino acid sequence. But the inverse problem — predicting an amino acid sequence that meets structural design targets — has proven even more challenging.
    A recent advance in machine learning enabled Buehler and his colleagues to tackle this thorny challenge: attention-based diffusion models.

    Attention-based models can learn very long-range relationships, which is key for proteins because one mutation in a long amino acid sequence can make or break the entire design, Buehler says. A diffusion model learns to generate new data through a process that adds noise to training data and then learns to recover the data by removing that noise. Diffusion models are often more effective than other architectures at generating high-quality, realistic data, and their output can be conditioned on a set of target objectives to meet a design demand.
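The add-noise-then-denoise idea can be sketched numerically. This toy forward-process example (NumPy only; not the authors' model) applies the standard diffusion noising step and shows that, if the injected noise is known exactly (which is what a denoising network is trained to estimate), the clean data is recoverable in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": a 1-D signal standing in for a protein representation.
x0 = np.sin(np.linspace(0, 2 * np.pi, 50))

# Standard diffusion forward process at noise level t:
#   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,   eps ~ N(0, I)
abar_t = 0.3                       # cumulative signal-keep fraction at step t
eps = rng.standard_normal(x0.shape)
x_t = np.sqrt(abar_t) * x0 + np.sqrt(1 - abar_t) * eps

# A denoising network is trained to predict eps from (x_t, t). With a
# perfect noise estimate, the clean sample comes back in closed form:
x0_hat = (x_t - np.sqrt(1 - abar_t) * eps) / np.sqrt(abar_t)

print(np.max(np.abs(x0_hat - x0)))  # ~0, up to floating-point error
```

In a real model the noise estimate is only approximate and the reverse process is applied iteratively over many steps, but the forward step above is the training signal.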
    The researchers used this architecture to build two machine-learning models that can predict a variety of new amino acid sequences which form proteins that meet structural design targets.
    “In the biomedical industry, you might not want a protein that is completely unknown because then you don’t know its properties. But in some applications, you might want a brand-new protein that is similar to one found in nature, but does something different. We can generate a spectrum with these models, which we control by tuning certain knobs,” Buehler says.
    Common folding patterns of amino acids, known as secondary structures, produce different mechanical properties. For instance, proteins with alpha helix structures yield stretchy materials while those with beta sheet structures yield rigid materials. Combining alpha helices and beta sheets can create materials that are stretchy and strong, like silks.
    The researchers developed two models: one that operates on overall structural properties of the protein and one that operates at the amino acid level. Both generate proteins by combining these secondary structures. For the first model, a user inputs a desired percentage of each structure type (40 percent alpha helix and 60 percent beta sheet, for instance), and the model generates sequences that meet those targets. For the second model, the scientist also specifies the order of the secondary structures, which gives much finer-grained control.
    The models are connected to an algorithm that predicts protein folding, which the researchers use to determine the protein’s 3D structure. Then they calculate its resulting properties and check those against the design specifications.
    Realistic yet novel designs
    They tested their models by comparing the new proteins to known proteins with similar structural properties. Many overlapped with existing amino acid sequences (about 50 to 60 percent in most cases), but the models also produced some entirely new sequences. The level of similarity suggests that many of the generated proteins are synthesizable, Buehler adds.
    To ensure the predicted proteins are reasonable, the researchers tried to trick the models by inputting physically impossible design targets. They were impressed to see that, instead of producing improbable proteins, the models generated the closest synthesizable solution.
    “The learning algorithm can pick up the hidden relationships in nature. This gives us confidence to say that whatever comes out of our model is very likely to be realistic,” Ni says.
    Next, the researchers plan to experimentally validate some of the new protein designs by making them in a lab. They also want to continue augmenting and refining the models so they can develop amino acid sequences that meet more criteria, such as biological functions.
    “For the applications we are interested in, like sustainability, medicine, food, health, and materials design, we are going to need to go beyond what nature has done. Here is a new design tool that we can use to create potential solutions that might help us solve some of the really pressing societal issues we are facing,” Buehler says.
    This research was supported, in part, by the MIT-IBM Watson AI Lab, the U.S. Department of Agriculture, the U.S. Department of Energy, the Army Research Office, the National Institutes of Health, and the Office of Naval Research.

  • Quantum entanglement could make accelerometers and dark matter sensors more accurate

    The “spooky action at a distance” that once unnerved Einstein may be on its way to being as pedestrian as the accelerometers that currently measure motion in smartphones.
    Quantum entanglement significantly improves the precision of sensors that can be used to navigate without GPS, according to a new study in Nature Photonics.
    “By exploiting entanglement, we improve both measurement sensitivity and how quickly we can make the measurement,” said Zheshen Zhang, associate professor of electrical and computer engineering at the University of Michigan and co-corresponding author of the study. The experiments were done at the University of Arizona, where Zhang was working at the time.
    Optomechanical sensors measure forces that disturb a mechanical sensing device that moves in response. That motion is then measured with light waves. In this experiment, the sensors were membranes, which act like drum heads that vibrate after experiencing a push. Optomechanical sensors can function as accelerometers, which can be used for inertial navigation on a planet that doesn’t have GPS satellites or within a building as a person navigates different floors.
    Quantum entanglement could make optomechanical sensors more accurate than inertial sensors currently in use. It could also enable optomechanical sensors to look for very subtle forces, such as identifying the presence of dark matter. Dark matter is invisible matter believed to account for five times more of the mass in the universe than what we can sense with light. It would tug on the sensor with gravitational force.
    Here’s how entanglement improves optomechanical sensors:
    Optomechanical sensors rely on two synchronized laser beams. One of them is reflected from a sensor, and any movement in the sensor changes the distance that the light travels on its way to the detector. That difference in distance traveled shows up when the second wave overlaps with the first. If the sensor is still, the two waves are perfectly aligned. But if the sensor is moving, they create an interference pattern as the peaks and troughs of their waves cancel each other out in places. That pattern reveals the size and speed of vibrations in the sensor.
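That description is textbook two-beam interference, and can be sketched numerically (the wavelength is illustrative, not a value from the paper): a sensor displacement dx adds 2·dx of optical path on the round trip, and the combined intensity follows the cosine-squared of half the resulting phase shift.

```python
import numpy as np

# Two-beam interference: a displacement dx of the reflecting sensor adds
# 2*dx of path (out and back), i.e. a phase shift phi = 2*pi*(2*dx)/lam.
# Superposing two equal-amplitude beams gives I = I0 * cos^2(phi / 2).
lam = 1064e-9                         # wavelength in metres (illustrative)

def fringe_intensity(dx, lam=lam, i0=1.0):
    phi = 2 * np.pi * (2 * dx) / lam
    return i0 * np.cos(phi / 2) ** 2

print(fringe_intensity(0.0))          # still sensor: waves aligned, I = 1.0
print(fringe_intensity(lam / 4))      # quarter-wave shift: peaks meet troughs, I ~ 0
```

Reading the intensity off the detector thus reveals how far, and how fast, the membrane is moving.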

    Usually in interferometry systems, the farther the light travels, the more accurate the system becomes. The most sensitive interferometry system on the planet, the Laser Interferometer Gravitational-Wave Observatory (LIGO), sends light on 8-kilometer round trips along its 4-kilometer arms. But that’s not going to fit in a smartphone.
    To enable high accuracy in miniaturized optomechanical sensors, Zhang’s team explored quantum entanglement. Rather than splitting the light once so that it bounced off a sensor and a mirror, they split each beam a second time so that the light bounced off two sensors and two mirrors. Dalziel Wilson, an assistant professor of optical sciences at the University of Arizona, along with his doctoral students Aman Agrawal and Christian Pluchar, built the membrane devices. These membranes, just 100 nanometers — or 0.0001 millimeters — thick, move in response to very small forces.
    Doubling the sensors improves the accuracy, as the membranes should be vibrating in sync with each other, but the entanglement adds an extra level of coordination. Zhang’s group created the entanglement by “squeezing” the laser light. In quantum mechanical objects, such as the photons that make up light, there is a fundamental limit on how well the position and momentum of a particle can be known. Because photons are also waves, this translates to the phase of the wave (where it is in its oscillation) and its amplitude (how much energy it carries).
    “Squeezing redistributes the uncertainty, so that the squeezed component is known more precisely, and the anti-squeezed component carries more of the uncertainty. We squeezed the phase because that is what we needed to know for our measurement,” said Yi Xia, a recent Ph.D. graduate from Zhang’s lab at the University of Arizona and co-corresponding author of the paper.
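In standard quadrature notation, the trade-off Xia describes can be written out (a textbook relation, using the convention in which the uncertainty product is bounded by 1/4; not taken from the paper):

```latex
% Heisenberg limit for the field quadratures X_1 (amplitude) and X_2 (phase):
\Delta X_1 \,\Delta X_2 \;\ge\; \tfrac{1}{4}
% Vacuum (unsqueezed) light shares the uncertainty equally:
\Delta X_1 \;=\; \Delta X_2 \;=\; \tfrac{1}{2}
% Phase-squeezed light with squeezing parameter r > 0: the phase quadrature
% is known more precisely, while the amplitude quadrature absorbs the rest.
\Delta X_2 \;=\; \tfrac{1}{2}\,e^{-r}, \qquad \Delta X_1 \;=\; \tfrac{1}{2}\,e^{+r}
```

The product stays at the quantum limit; squeezing only moves the uncertainty into the component the measurement does not need.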
    In squeezed light, the photons are more closely related to one another. Zhang contrasted what happens when the photons go through a beam splitter with cars coming to a fork in the freeway.

    “You have three cars going one way and three cars going the other way. But in quantum superposition, each car goes both ways. Now the cars on the left are entangled with the cars on the right,” he said.
    Because the fluctuations in the two entangled beams are linked, the uncertainties in their phase measurements are correlated. As a result, with some mathematical wizardry, the team was able to get measurements that are 40% more precise than with two unentangled beams, and to make them 60% faster. What’s more, the precision and speed are expected to rise in proportion to the number of sensors.
    “It is envisioned that an array of entanglement-enhanced sensors will offer orders-of-magnitude performance gain over existing sensing technology to enable the detection of particles beyond the present physical model, opening the door to a new world that is yet to be observed,” said Zhang.
    The team’s next steps are to miniaturize the system. Already, they can put a squeezed-light source on a chip that is just half a centimeter to a side. They expect to have a prototype chip with the squeezed-light source, beam splitters, waveguides and inertial sensors within a year or two.
    The study was funded by the Office of Naval Research, National Science Foundation, Department of Energy and Defense Advanced Research Projects Agency.

  • Versatile, high-speed, and efficient crystal actuation with photothermally resonated natural vibrations

    Mechanically responsive molecular crystals are extremely useful in soft robotics, which requires a versatile actuation technology. Crystals driven by the photothermal effect are particularly promising for achieving high-speed actuation. However, the response (bending) observed in these crystals is usually small. Now, scientists from Japan address this issue by illuminating anisole crystals with UV light at the crystals’ natural vibration frequency, inducing large resonant natural vibrations.
    Every material possesses a unique natural vibration frequency; when an external periodic force is applied close to this frequency, the vibrations are greatly amplified. In the parlance of physics, this phenomenon is known as “resonance.” Resonance is ubiquitous in daily life and, depending on the context, can be desirable or undesirable. For instance, musical instruments like the guitar rely on resonance for sound amplification. On the other hand, buildings and bridges are more likely to collapse in an earthquake if the ground vibration frequency matches their natural frequency.
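The amplification near the natural frequency can be quantified with the textbook driven, damped harmonic oscillator (the parameters below are illustrative, not values from the study):

```python
import math

# Steady-state amplitude of a driven, damped oscillator with natural
# frequency w0 and damping gamma (standard physics, not from the study):
#   A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (gamma * w)^2)
def amplitude(w, w0=1.0, gamma=0.05, f0_over_m=1.0):
    return f0_over_m / math.sqrt((w0**2 - w**2) ** 2 + (gamma * w) ** 2)

# Driving at the natural frequency amplifies the response dramatically
# compared with driving well away from it.
print(amplitude(1.0) / amplitude(2.0))   # resonance gain, roughly 60x here
```

The lighter the damping, the sharper and taller this resonance peak, which is why driving a crystal at exactly its natural frequency can turn a tiny photothermal push into a large bending motion.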
    Interestingly, natural vibration has not received much attention in material actuation, which relies on the action of mechanically responsive crystals. Versatile actuation technologies are highly desirable in the field of soft robotics. Although crystal actuation based on processes like photoisomerisation and phase transitions has been widely studied, these processes lack versatility since they require specific crystals to work. One way to improve versatility is by employing photothermal crystals, which bend due to light-induced heating. While promising for achieving high-speed actuation, the bending angle in these crystals is usually small.

  • Two qudits fully entangled

    In the world of computing, we typically think of information as being stored as ones and zeros — also known as binary encoding. In our daily life, however, we use ten digits to represent all possible numbers. In binary, the number 9 is written as 1001, for example, requiring three additional digits to represent the same thing.
    The quantum computers of today grew out of this binary paradigm, but in fact the physical systems that encode their quantum bits (qubits) often have the potential to also encode quantum digits (qudits), as recently demonstrated by a team led by Martin Ringbauer at the Department of Experimental Physics at the University of Innsbruck. According to experimental physicist Pavel Hrmo at ETH Zurich: “The challenge for qudit-based quantum computers has been to efficiently create entanglement between the high-dimensional information carriers.”
    In a study published in the journal Nature Communications, the team at the University of Innsbruck now reports how two qudits can be fully entangled with each other with unprecedented performance, paving the way for more efficient and powerful quantum computers.
    Thinking like a quantum computer
    The example of the number 9 shows that, while humans are able to calculate 9 x 9 = 81 in a single step, a classical computer (or calculator) has to take 1001 x 1001 and perform many steps of binary multiplication behind the scenes before it can display 81 on the screen. Classically, we can afford to do this, but in the quantum world, where computations are inherently sensitive to noise and external disturbances, we need to reduce the number of operations to make the most of available quantum computers.
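The binary bookkeeping behind that 9 x 9 can be sketched directly with a schoolbook shift-and-add multiplier (an illustrative model of binary multiplication, not how any particular processor implements it):

```python
# The article's example: 9 is 1001 in binary, and a binary machine computes
# 9 x 9 by shifting and adding partial products rather than in one step.
assert bin(9) == "0b1001"

def shift_and_add(a, b):
    """Schoolbook binary multiplication: one addition per set bit of b."""
    result, steps, shift = 0, 0, 0
    while b:
        if b & 1:                 # this binary digit contributes a * 2**shift
            result += a << shift
            steps += 1
        b >>= 1
        shift += 1
    return result, steps

product, steps = shift_and_add(9, 9)
print(product, steps)             # 81, reached via 2 additions (two set bits in 1001)
```

Each extra binary digit means more partial products to process, which is exactly the overhead that higher-dimensional qudit encodings aim to avoid.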
    Crucial to any calculation on a quantum computer is quantum entanglement. Entanglement is one of the unique quantum features that underpin the potential for quantum to greatly outperform classical computers in certain tasks. Yet, exploiting this potential requires the generation of robust and accurate higher-dimensional entanglement.
    The natural language of quantum systems
    The researchers at the University of Innsbruck have now fully entangled two qudits, each encoded in up to five states of individual calcium ions. This gives both theoretical and experimental physicists a new tool to move beyond binary information processing, which could lead to faster and more robust quantum computers.
    Martin Ringbauer explains: “Quantum systems have many available states waiting to be used for quantum computing, rather than limiting them to work with qubits.” Many of today’s most challenging problems, in fields as diverse as chemistry, physics or optimisation, can benefit from this more natural language of quantum computing.
    The research was financially supported by the Austrian Science Fund FWF, the Austrian Research Promotion Agency FFG, the European Research Council ERC, the European Union and the Federation of Austrian Industries Tyrol, among others.

  • Quantum computer applied to chemistry

    There are high expectations that quantum computers may deliver revolutionary new possibilities for simulating chemical processes. This could have a major impact on everything from the development of new pharmaceuticals to new materials. Researchers at Chalmers University of Technology have now, for the first time in Sweden, used a quantum computer to undertake calculations within a real-life case in chemistry.
    “Quantum computers could in theory be used to handle cases where electrons and atomic nuclei move in more complicated ways. If we can learn to utilise their full potential, we should be able to advance the boundaries of what is possible to calculate and understand,” says Martin Rahm, Associate Professor in Theoretical Chemistry at the Department of Chemistry and Chemical Engineering, who has led the study.
    Within the field of quantum chemistry, the laws of quantum mechanics are used to understand which chemical reactions are possible, which structures and materials can be developed, and what characteristics they have. Such studies are normally undertaken with the help of supercomputers, built with conventional logic circuits. There is, however, a limit to which calculations conventional computers can handle. Because the laws of quantum mechanics describe the behaviour of nature on a subatomic level, many researchers believe that a quantum computer should be better equipped to perform molecular calculations than a conventional computer.
    “Most things in this world are inherently chemical. For example, our energy carriers, within biology as well as in old or new cars, are made up of electrons and atomic nuclei arranged in different ways in molecules and materials. Some of the problems we solve in the field of quantum chemistry are to calculate which of these arrangements are more likely or advantageous, along with their characteristics,” says Martin Rahm.
    A new method minimises errors in the quantum chemical calculations
    There is still a way to go before quantum computers can achieve what the researchers are aiming for. This field of research is still young, and the small model calculations that are run are complicated by noise from the quantum computer’s surroundings. However, Martin Rahm and his colleagues have now found a method that they see as an important step forward. The method, called Reference-State Error Mitigation (REM), corrects for noise-induced errors by combining calculations from both a quantum computer and a conventional computer.

    “The study is a proof-of-concept that our method can improve the quality of quantum-chemical calculations. It is a useful tool that we will use to improve our calculations on quantum computers moving forward,” says Martin Rahm.
    The principle behind the method is to first consider a reference state by describing and solving the same problem on both a conventional and a quantum computer. This reference state represents a simpler description of a molecule than the original problem intended for the quantum computer. A conventional computer can solve this simpler version quickly, and by comparing the results from both computers, an accurate estimate can be made of the error caused by noise. The difference between the two computers’ solutions for the reference problem can then be used to correct the quantum processor’s solution to the original, more complex problem.
    By combining this new method with data from Chalmers’ quantum computer Särimner*, the researchers have succeeded in calculating the intrinsic energy of small example molecules such as hydrogen and lithium hydride. Equivalent calculations can be carried out more quickly on a conventional computer, but the new method represents an important development and is the first demonstration of a quantum chemical calculation on a quantum computer in Sweden.
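The correction logic described above can be sketched as a toy numeric model (all energies and the noise bias here are invented for illustration; the real method operates on quantum-chemical reference states, not a simple scalar offset):

```python
# Toy numeric model of the Reference-State Error Mitigation idea: assume
# the noisy quantum processor adds a roughly constant bias to the
# energies it measures.

noise_bias = 0.12                       # unknown to the experimenter

def quantum_measure(true_energy):
    """Stand-in for a noisy quantum computation of an energy."""
    return true_energy + noise_bias

# 1) Reference state: simple enough to solve exactly on a classical machine.
ref_exact = -1.1166                     # classical result (made-up value)
ref_noisy = quantum_measure(ref_exact)  # same problem run on the noisy device

# 2) The gap between the two solutions estimates the device's error...
error_estimate = ref_noisy - ref_exact

# 3) ...and is subtracted from the noisy answer to the harder problem.
full_true = -1.1373                     # unknown target (made-up value)
full_noisy = quantum_measure(full_true)
full_corrected = full_noisy - error_estimate

print(round(full_corrected, 4))         # recovers -1.1373 in this toy model
```

The correction is exact here only because the toy bias is constant; on real hardware the noise is more complicated, which is why REM improves rather than eliminates the error.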
    “We see good possibilities for further development of the method to allow calculations of larger and more complex molecules, when the next generation of quantum computers is ready,” says Martin Rahm.
    Quantum computer built at Chalmers
    The research has been conducted in close collaboration with colleagues at the Department of Microtechnology and Nanoscience. They have built the quantum computers that are used in the study, and helped perform the sensitive measurements that are needed for the chemical calculations.

    “It is only by using real quantum algorithms that we can understand how our hardware really works and how we can improve it. Chemical calculations are one of the first areas where we believe that quantum computers will be useful, so our collaboration with Martin Rahm’s group is especially valuable,” says Jonas Bylander, Associate Professor in Quantum Technology at the Department of Microtechnology and Nanoscience.
    More about the research
    Read the article “Reference-State Error Mitigation: A Strategy for High Accuracy Quantum Computation of Chemistry” in the Journal of Chemical Theory and Computation.
    The article is written by Phalgun Lolur, Mårten Skogh, Werner Dobrautz, Christopher Warren, Janka Biznárová, Amr Osman, Giovanna Tancredi, Göran Wendin, Jonas Bylander, and Martin Rahm. The researchers are active at Chalmers University of Technology.
    The research has been conducted in cooperation with the Wallenberg Centre for Quantum Technology (WACQT) and the EU project OpenSuperQ. OpenSuperQ connects universities and companies in 10 European countries with the aim of building a quantum computer, and its extension will contribute further funding to researchers at Chalmers for their work with quantum chemical calculations.
    *Särimner is the name of a quantum processor with five qubits, or quantum bits, built by Chalmers within the framework of the Wallenberg Centre for Quantum Technology (WACQT). Its name is borrowed from Nordic mythology, in which the pig Särimner was butchered and eaten every day, only to be resurrected. Särimner has now been replaced by a larger computer with 25 qubits, and the goal for WACQT is to build a quantum computer with 100 qubits that can solve problems far beyond the capacity of today's best conventional supercomputers.

    Surface steers signals for next-gen networks

    5G communications’ superfast download speeds rely on the high frequencies that drive the transmissions. But the highest frequencies come with a tradeoff.
    Frequencies at the upper end of the 5G spectrum hold the greatest amount of data and could be critical to high-resolution augmented and virtual reality, video streaming, video conferencing, and services in crowded urban areas. But those high-end frequencies are easily blocked by walls, furniture and even people. This has been a hurdle to achieving the technology’s full potential.
    Now, a team led by Princeton researchers has developed a new device to help higher-frequency 5G signals, known as millimeter-wave or mmWave, overcome this obstacle. The device, called mmWall, is about the size of a small tablet. It can steer mmWave signals to reach all corners of a large room, and, when installed in a window, can bring signals from an outdoor transmitter indoors. The researchers presented their work on mmWall at the USENIX Symposium on Networked Systems Design and Implementation in Boston on April 19.
    While computers and smartphones often connect to Wi-Fi indoors to get the best data speeds, outdoor 5G base stations could someday replace Wi-Fi systems and provide high-speed connectivity both indoors and outdoors, preventing glitches when devices switch between networks, said Kun Woo Cho, a Ph.D. student in Princeton’s Department of Computer Science and the lead author of the research. Boosting 5G signals with technology like mmWall will be crucial to this broader adoption, she said.
    The mmWall is an accordion-like array of 76 vertical panels that can both reflect and refract radio waves at frequencies above 24 gigahertz, the lower bound of mmWave signals. These frequencies can provide a bandwidth five to ten times greater than the maximum capability of 4G networks. The device can steer beams around obstacles, as well as efficiently align the beams of transmitter and receiver to establish connections quickly and maintain them seamlessly.
    “Wireless transmissions at these higher frequencies resemble beams of light more than a broadcast in all directions, and so get blocked easily by humans and other obstacles,” said senior study author Kyle Jamieson, a professor of computer science who leads the Princeton Advanced Wireless Systems Lab (PAWS).
    The mmWall surface is the first able to reflect such transmissions so that the angle of reflection does not equal the angle of incidence, sidestepping a classic law of physics. The device can also refract transmissions that hit one side of the surface through at a different angle of departure, and it is “fully electronically reconfigurable within microseconds, allowing it to keep up with the ‘line rate’ of tomorrow’s ultra-fast networks,” said Jamieson.
    Each panel of mmWall holds two meandering lines of thin copper wire, flanking a line of 28 broken circles made of thicker wire, which constitute meta-atoms — materials whose geometry is designed to achieve tunable electrical and magnetic properties. Applying controlled electrical current to these meta-atoms can change the behavior of the mmWave signals that interact with the mmWall surface — dynamically steering the signals around obstacles by shifting their paths by up to 135 degrees.
    “Just by changing the voltage, we can tune the phase,” or the relationship between the incoming and outgoing radio waves, said Cho. “We can basically steer to any angle for transmission and reflection. State-of-the-art surfaces generally only work for reflection or only work for transmission, but with this we can do both at any arbitrary angle with high amplitude.”
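The steering Cho describes follows the standard phase-gradient principle used by phased arrays and reconfigurable surfaces: a constant phase step between neighbouring elements tilts the outgoing wavefront. The sketch below uses that textbook relation; the element count, spacing, and frequency are illustrative assumptions, not mmWall's actual design values.

```python
# Hedged sketch of phase-gradient beam steering, the general principle
# behind electronically reconfigurable surfaces like mmWall.
import math

C = 299_792_458.0  # speed of light, m/s

def element_phases(n_elements, spacing_m, freq_hz, steer_deg):
    """Per-element phase shifts (radians, wrapped to [0, 2*pi)) that
    tilt an outgoing wavefront by steer_deg from the surface normal."""
    wavelength = C / freq_hz
    # A constant phase step between neighbours produces a tilted wavefront:
    # step = 2*pi * d * sin(theta) / lambda
    step = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength
    return [(i * step) % (2 * math.pi) for i in range(n_elements)]

# Example: 28 elements at half-wavelength spacing for a 28 GHz signal,
# steered 30 degrees off the surface normal (illustrative values).
wl = C / 28e9
phases = element_phases(28, wl / 2, 28e9, 30.0)
```

In mmWall the per-element phase is set by the voltage applied to each meta-atom rather than by discrete phase shifters, but the geometry of the resulting wavefront is the same.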
    The process is analogous to light waves slowing down when they pass through a glass of water, said Cho. The water changes the direction of the light waves and makes objects appear distorted when viewed through the water.
    Cho mathematically analyzed different parameters of the meta-atoms’ geometry to arrive at the optimal size, shape and arrangement for the copper meta-atoms and the pathways between them, which were fabricated with standard printed circuit board technology and mounted on a 3D-printed frame. In designing mmWall, the team aimed to use the smallest possible meta-atoms (each has a diameter of less than a millimeter) to optimize their interaction with mmWaves, as well as to simplify the device’s fabrication and minimize the amount of copper. The mmWall also uses only microwatts of electricity, far less than Wi-Fi routers, which use an average of about 6 watts.
    Cho tested mmWall’s ability to transmit and steer mmWave signals in a 900-square-foot lab in Princeton’s Computer Science building. With a transmitter in the room, mmWall improved the signal-to-noise ratio at nearly all of the 23 spots tested around the room. And when the transmitter was placed outdoors, mmWall again boosted signals all around the room, including in roughly 40% of spots that had been completely blocked without the use of mmWall.