More stories

  •

    Major milestone achieved in new quantum computing architecture

    Coherence stands as a pillar of effective communication, whether it is in writing, speaking or information processing. This principle extends to quantum bits, or qubits, the building blocks of quantum computing. A quantum computer could one day tackle previously insurmountable challenges in climate prediction, material design, drug discovery and more.
    A team led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory has achieved a major milestone toward future quantum computing. They have extended the coherence time for their novel type of qubit to an impressive 0.1 milliseconds — nearly a thousand times better than the previous record.
    In everyday life, 0.1 milliseconds is as fleeting as the blink of an eye. In the quantum world, however, it is a long enough window for a qubit to perform many thousands of operations (see the back-of-the-envelope sketch at the end of this item).
    Unlike classical bits, qubits can exist in a superposition of both states, 0 and 1. For any working qubit, maintaining this mixed state for a sufficiently long coherence time is imperative. The challenge is to safeguard the qubit against the constant barrage of disruptive noise from the surrounding environment.
    The team’s qubits encode quantum information in the electron’s motional (charge) states. Because of that, they are called charge qubits.
    “Among various existing qubits, electron charge qubits are especially attractive because of their simplicity in fabrication and operation, as well as compatibility with existing infrastructures for classical computers,” said Dafei Jin, a professor at the University of Notre Dame with a joint appointment at Argonne and the lead investigator of the project. “This simplicity should translate into low cost in building and running large-scale quantum computers.”
    Jin is a former staff scientist at the Center for Nanoscale Materials (CNM), a DOE Office of Science user facility at Argonne. While there, he led the discovery of their new type of qubit, reported last year. More
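    A quick way to see why 0.1 milliseconds matters is to divide the coherence time by a typical gate duration. The sketch below is a rough back-of-the-envelope estimate; the 10-nanosecond gate time is an assumption for illustration, not a figure reported by the Argonne team.

    ```python
    # Rough estimate of how many gate operations fit inside one coherence window.
    # The gate time is an assumed, illustrative value, not one reported by Argonne.

    coherence_time_s = 0.1e-3     # 0.1 milliseconds, as reported
    assumed_gate_time_s = 10e-9   # hypothetical single-qubit gate duration

    operations = coherence_time_s / assumed_gate_time_s
    print(f"~{operations:,.0f} operations per coherence window")  # ~10,000
    ```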

  •

    Energy-saving AI chip

    Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms and robotic applications.
    The basic idea is simple: unlike previous chips, where calculations were carried out only on the transistors, the transistors now serve as the location of data storage as well. That saves time and energy. “As a result, the performance of the chips is also boosted,” says Hussam Amrouch, a professor of AI processor design at TUM. The transistors on which he performs calculations and stores data measure just 28 nanometers, with millions of them placed on each of the new AI chips. The chips of the future will have to be faster and more efficient than earlier ones, which also means they must not heat up as much. This is essential if they are to support such applications as real-time calculations when a drone is in flight, for example. “Tasks like this are extremely complex and energy-hungry for a computer,” explains the professor.
    Modern chips: many steps, low energy consumption
    These key requirements for a chip are summed up mathematically in the parameter TOPS/W: “tera-operations per second per watt.” This can be seen as the currency for the chips of the future. The question is how many trillion operations (TOP) a processor can perform per second (S) when provided with one watt (W) of power. The new AI chip, developed in a collaboration between Bosch and Fraunhofer IPMS and supported in the production process by the US company GlobalFoundries, can deliver 885 TOPS/W. This makes it twice as powerful as comparable AI chips, including an MRAM chip by Samsung. CMOS chips, which are now commonly used, operate in the range of 10-20 TOPS/W. This is demonstrated in results recently published in Nature. (A short conversion of these figures into energy per operation appears at the end of this item.)
    In-memory computing works like the human brain
    The researchers borrowed the principle of modern chip architecture from humans. “In the brain, neurons handle the processing of signals, while synapses are capable of remembering this information,” says Amrouch, describing how people are able to learn and recall complex interrelationships. To do this, the chip uses “ferroelectric” (FeFET) transistors. These are electronic switches that incorporate special additional characteristics (reversal of poles when a voltage is applied) and can store information even when cut off from the power source. In addition, they guarantee the simultaneous storage and processing of data within the transistors. “Now we can build highly efficient chipsets that can be used for such applications as deep learning, generative AI or robotics, for example where data have to be processed where they are generated,” believes Amrouch.
    Market-ready chips will require interdisciplinary collaboration
    The goal is to use the chip to run deep learning algorithms, recognize objects in space or process data from drones in flight with no time lag. However, the professor from the integrated Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM believes that it will be a few years before this is achieved. He thinks that it will be three to five years, at the soonest, before the first in-memory chips suitable for real-world applications become available. One reason for this, among others, lies in the security requirements of industry. Before a technology of this kind can be used in the automotive industry, for example, it is not enough for it to function reliably. It also has to meet the specific criteria of the sector. “This again highlights the importance of interdisciplinary collaboration with researchers from various disciplines such as computer science, informatics and electrical engineering,” says the hardware expert Amrouch. He sees this as a special strength of MIRMI. More
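    To make the TOPS/W figure concrete, the short sketch below inverts it into energy per operation (operations per second per watt is the same as operations per joule). The 885 and 20 TOPS/W values are taken from the article; the conversion itself is standard arithmetic.

    ```python
    # TOPS/W = tera-operations per second per watt = tera-operations per joule.
    # Inverting it gives the energy cost of a single operation.

    def energy_per_op_femtojoules(tops_per_watt: float) -> float:
        ops_per_joule = tops_per_watt * 1e12
        return 1e15 / ops_per_joule  # convert joules per op to femtojoules

    for name, tops_w in [("FeFET AI chip", 885), ("typical CMOS (upper end)", 20)]:
        print(f"{name}: {energy_per_op_femtojoules(tops_w):.1f} fJ per operation")
    # FeFET AI chip: 1.1 fJ per operation
    # typical CMOS (upper end): 50.0 fJ per operation
    ```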

  •

    New quantum effect demonstrated for the first time: Spinaron, a rugby ball in a ball pit

    For the first time, experimental physicists from the Würzburg-Dresden Cluster of Excellence ct.qmat have demonstrated a new quantum effect aptly named the “spinaron.” In a meticulously controlled environment and using an advanced set of instruments, they managed to prove the unusual state a cobalt atom assumes on a copper surface. This revelation challenges the long-held Kondo effect, a theoretical concept developed in the 1960s that has been considered the standard model for the interaction of magnetic materials with metals since the 1980s. These groundbreaking findings were published today in the journal Nature Physics.
    Extreme conditions prevail in the Würzburg laboratory of experimental physicists Professor Matthias Bode and Dr. Artem Odobesko. Affiliated with the Cluster of Excellence ct.qmat, a collaboration between JMU Würzburg and TU Dresden, these visionaries are setting new milestones in quantum research. Their latest endeavor is unveiling the spinaron effect. They strategically placed individual cobalt atoms onto a copper surface, brought the temperature down to 1.4 Kelvin (-271.75° Celsius), and then subjected them to a powerful external magnetic field. “The magnet we use costs half a million euros. It’s not something that’s widely available,” explains Bode. Their subsequent analysis yielded unexpected revelations.
    Tiny Atom, Massive Effect
    “We can see the individual cobalt atoms by using a scanning tunneling microscope. Each atom has a spin, which can be thought of as a magnetic north or south pole. Measuring it was crucial to our surprising discoveries,” explains Bode. “We vapor-deposited a magnetic cobalt atom onto a non-magnetic copper base, causing the atom to interact with the copper’s electrons. Researching such correlation effects within quantum materials is at the heart of ct.qmat’s mission — a pursuit that promises transformative tech innovations down the road.”
    Like a Rugby Ball in a Ball Pit
    Since the 1960s, solid-state physicists have assumed that the interaction between cobalt and copper can be explained by the Kondo effect, with the different magnetic orientations of the cobalt atom and copper electrons canceling each other out. This leads to a state in which the copper electrons are bound to the cobalt atom, forming what’s termed a “Kondo cloud.” However, Bode and his team delved deeper in their laboratory and validated an alternative theory proposed in 2020 by the theorist Samir Lounis of Forschungszentrum Jülich.
    By harnessing the power of an intense external magnetic field and using an iron tip in the scanning tunneling microscope, the Würzburg physicists managed to determine the magnetic orientation of the cobalt’s spin. This spin isn’t rigid, but constantly switches back and forth, i.e. from “spin-up” (positive) to “spin-down” (negative), and vice versa. This switching excites the copper electrons, a phenomenon called the spinaron effect. Bode elucidates it with a vivid analogy: “Because of the constant change in spin alignment, the state of the cobalt atom can be compared to a rugby ball. When a rugby ball spins continuously in a ball pit, the surrounding balls are displaced in a wave-like manner. That’s precisely what we observed — the copper electrons started oscillating in response and bonded with the cobalt atom.” Bode continues: “This combination of the cobalt atom’s changing magnetization and the copper electrons bound to it is the spinaron predicted by our Jülich colleague.”
    The first experimental validation of the spinaron effect, courtesy of the Würzburg team, casts doubt on the Kondo effect. Until now, it was considered the universal model to explain the interaction between magnetic atoms and electrons in quantum materials such as the cobalt-copper duo. Bode quips: “Time to pencil in a significant asterisk in those physics textbooks!” More

  •

    Engineers develop breakthrough ‘robot skin’

    Smart, stretchable and highly sensitive, a new soft sensor developed by UBC and Honda researchers opens the door to a wide range of applications in robotics and prosthetics.
    When applied to the surface of a prosthetic arm or a robotic limb, the sensor skin provides touch sensitivity and dexterity, enabling tasks that can be difficult for machines, such as picking up a piece of soft fruit. The sensor is also soft to the touch, like human skin, which helps make human interactions safer and more lifelike.
    “Our sensor can sense several types of forces, allowing a prosthetic or robotic arm to respond to tactile stimuli with dexterity and precision. For instance, the arm can hold fragile objects like an egg or a glass of water without crushing or dropping them,” said study author Dr. Mirza Saquib Sarwar, who created the sensor as part of his PhD work in electrical and computer engineering at UBC’s faculty of applied science.
    Giving machines a sense of touch
    The sensor is primarily composed of silicone rubber, the same material used to make many skin special effects in movies. The team’s unique design gives it the ability to buckle and wrinkle, just like human skin.
    “Our sensor uses weak electric fields to sense objects, even at a distance, much as touchscreens do. But unlike touchscreens, this sensor is supple and can detect forces into and along its surface. This unique combination is key to adoption of the technology for robots that are in contact with people,” explained Dr. John Madden, senior study author and a professor of electrical and computer engineering who leads the Advanced Materials and Process Engineering Laboratory (AMPEL) at UBC. (A toy capacitive-sensing sketch appears at the end of this item.)
    The UBC team developed the technology in collaboration with Frontier Robotics, Honda’s research institute. Honda has been innovating in humanoid robotics since the 1980s, and developed the well-known ASIMO robot. It has also developed devices to assist walking, and the emerging Honda Avatar Robot. More
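    The article describes sensing with weak electric fields, much as a touchscreen does. The toy model below is not the UBC/Honda design; it only illustrates the general capacitive principle that pressing on a soft dielectric thins the gap between electrodes and raises the capacitance a readout can detect. All dimensions and material values are assumptions.

    ```python
    # Toy parallel-plate model of one capacitive "taxel" (tactile pixel).
    # Purely illustrative: the geometry, dielectric constant and gap values
    # are assumptions, not the published sensor design.

    EPS_0 = 8.854e-12        # vacuum permittivity, F/m
    EPS_R = 2.8              # assumed relative permittivity of silicone rubber
    AREA = (2e-3) ** 2       # assumed 2 mm x 2 mm electrode

    def capacitance(gap_m: float) -> float:
        """Parallel-plate capacitance C = eps0 * epsR * A / d."""
        return EPS_0 * EPS_R * AREA / gap_m

    rest = capacitance(200e-6)     # assumed 200-micron gap at rest
    pressed = capacitance(150e-6)  # gap thinned by an applied force
    print(f"rest: {rest*1e12:.2f} pF, pressed: {pressed*1e12:.2f} pF "
          f"({100 * (pressed / rest - 1):.0f}% increase)")
    ```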

  •

    Vision via sound for the blind

    Australian researchers have developed cutting-edge technology known as “acoustic touch” that helps people ‘see’ using sound. The technology has the potential to transform the lives of those who are blind or have low vision.
    Around 39 million people worldwide are blind, according to the World Health Organisation, and an additional 246 million people live with low vision, impacting their ability to participate in everyday life activities.
    The next-generation smart glasses, which translate visual information into distinct sound icons, were developed by researchers from the University of Technology Sydney and the University of Sydney, together with Sydney start-up ARIA Research.
    “Smart glasses typically use computer vision and other sensory information to translate the wearer’s surroundings into computer-synthesized speech,” said Distinguished Professor Chin-Teng Lin, a global leader in brain-computer interface research from the University of Technology Sydney.
    “However, acoustic touch technology sonifies objects, creating unique sound representations as they enter the device’s field of view. For example, the sound of rustling leaves might signify a plant, or a buzzing sound might represent a mobile phone,” he said. (A schematic sketch of this object-to-sound mapping appears at the end of this item.)
    A study into the efficacy and usability of acoustic touch technology to assist people who are blind, led by Dr Howe Zhu from the University of Technology Sydney, has just been published in the journal PLOS ONE.
    The researchers tested the device with 14 participants: seven individuals with blindness or low vision and seven blindfolded sighted individuals who served as a control group. More
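    Lin’s examples suggest a simple mapping from recognized objects to short “sound icons.” The sketch below is only a schematic of that idea, not the UTS/ARIA implementation; the labels, sound names and panning rule are illustrative assumptions.

    ```python
    # Schematic of "acoustic touch": objects entering the field of view are
    # mapped to sound icons, panned toward the side on which they appear.
    # The labels, file names and panning rule are invented for illustration.

    SOUND_ICONS = {
        "plant": "rustling_leaves.wav",
        "phone": "buzz.wav",
        "cup": "clink.wav",
    }

    def sonify(detections):
        """detections: list of (label, horizontal position in [-1, 1])."""
        cues = []
        for label, x in detections:
            icon = SOUND_ICONS.get(label)
            if icon is None:
                continue  # unknown objects stay silent in this sketch
            pan = "left" if x < -0.2 else "right" if x > 0.2 else "center"
            cues.append((icon, pan))
        return cues

    print(sonify([("plant", -0.7), ("phone", 0.5)]))
    # [('rustling_leaves.wav', 'left'), ('buzz.wav', 'right')]
    ```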

  •

    Using sound to test devices, control qubits

    Acoustic resonators are everywhere. In fact, there is a good chance you’re holding one in your hand right now. Most smart phones today use bulk acoustic resonators as radio frequency filters to filter out noise that could degrade a signal. These filters are also used in most Wi-Fi and GPS systems.
    Acoustic resonators are more stable than their electrical counterparts, but they can degrade over time. There is currently no easy way to actively monitor and analyze the degradation of the material quality of these widely used devices.
    Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with researchers at the OxideMEMS Lab at Purdue University, have developed a system that uses atomic vacancies in silicon carbide to measure the stability and quality of acoustic resonators. What’s more, these vacancies could also be used for acoustically-controlled quantum information processing, providing a new way to manipulate quantum states embedded in this commonly-used material.
    “Silicon carbide, which is the host for both the quantum reporters and the acoustic resonator probe, is a readily available commercial semiconductor that can be used at room temperature,” said Evelyn Hu, the Tarr-Coyne Professor of Applied Physics and of Electrical Engineering and the Robin Li and Melissa Ma Professor of Arts and Sciences, and senior author of the paper. “As an acoustic resonator probe, this technique in silicon carbide could be used in monitoring the performance of accelerometers, gyroscopes and clocks over their lifetime and, in a quantum scheme, has potential for hybrid quantum memories and quantum networking.”
    The research was published in Nature Electronics.
    A look inside acoustic resonators
    Silicon carbide is a common material for microelectromechanical systems (MEMS), which includes bulk acoustic resonators. More

  •

    Clinical trials could yield better data with fewer patients thanks to new tool

    University of Alberta researchers have developed a new statistical tool to evaluate the results of clinical trials, with the aim of allowing smaller trials to ask more complex research questions and get effective treatments to patients more quickly.
    In their paper, the team reports on their new “Chauhan Weighted Trajectory Analysis,” which they developed to improve on the Kaplan-Meier estimator, the standard tool since 1958.
    The Kaplan-Meier estimator limits researchers because it can only assess binary questions, such as whether patients survived or died on a treatment. It can’t include other factors such as adverse drug reactions, or quality-of-life measures such as being able to walk or care for yourself. The new tool allows simultaneous evaluation and visualization of multiple outcomes in one graph. (A sketch of the classical Kaplan-Meier computation appears at the end of this item.)
    “In general, diseases aren’t binary,” explains first author Karsh Chauhan, a fourth-year MD student at the U of A. “Now we can capture the severity of diseases — whether they make patients sick, whether they put them in hospital, whether they lead to death — and we can capture both the rise and the fall of how patients do on different treatments.”
    John Mackey, a breast cancer medical oncologist and professor emeritus of oncology, added that this tool allows researchers to run a smaller, less expensive, quicker trial with fewer patients and get the overall benefit of a new treatment out into the world more rapidly.
    The two began working on the statistical tool three years ago when they were designing a clinical trial for a new device to prevent bedsores, which affect many patients with long-term illness. They wanted to look at how the severity of illness changed during treatment, but the Kaplan-Meier test wasn’t going to help.
    “Dr. Mackey said to me, ‘If the tool doesn’t exist, then why don’t you build it yourself?’ That was very exciting,” says Chauhan, who also has a BSc in engineering physics, which he calls a degree in “problem-solving.” More
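    For context, the baseline the team set out to improve on can be computed by hand: the Kaplan-Meier estimator tracks a single binary event and multiplies the running survival probability by (1 − d_i/n_i) at each event time. The sketch below uses invented data; the article does not give enough detail to sketch the Chauhan Weighted Trajectory Analysis itself, so only the classical estimator is shown.

    ```python
    # Kaplan-Meier estimator: at each event time t_i, multiply the running
    # survival probability by (1 - d_i / n_i), where d_i is the number of
    # events at t_i and n_i the number of patients still at risk.
    # The follow-up times and event flags below are invented for illustration.

    def kaplan_meier(times, events):
        """times: follow-up time per patient; events: 1 = event, 0 = censored."""
        order = sorted(range(len(times)), key=lambda i: times[i])
        at_risk, surv, curve = len(times), 1.0, []
        i = 0
        while i < len(order):
            t = times[order[i]]
            deaths = censored = 0
            while i < len(order) and times[order[i]] == t:
                deaths += events[order[i]]
                censored += 1 - events[order[i]]
                i += 1
            if deaths:
                surv *= 1 - deaths / at_risk
                curve.append((t, round(surv, 3)))
            at_risk -= deaths + censored
        return curve

    print(kaplan_meier([3, 5, 5, 8, 11, 12], [1, 1, 0, 1, 0, 1]))
    # [(3, 0.833), (5, 0.667), (8, 0.444), (12, 0.0)]
    ```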

  •

    Can AI grasp related concepts after learning only one?

    Humans have the ability to learn a new concept and then immediately use it to understand related uses of that concept — once children know how to “skip,” they understand what it means to “skip twice around the room” or “skip with your hands up.”
    But are machines capable of this type of thinking? In the late 1980s, Jerry Fodor and Zenon Pylyshyn, philosophers and cognitive scientists, posited that artificial neural networks — the engines that drive artificial intelligence and machine learning — are not capable of making these connections, known as “compositional generalizations.” However, in the decades since, scientists have been developing ways to instill this capacity in neural networks and related technologies, but with mixed success, thereby keeping alive this decades-old debate.
    Researchers at New York University and Spain’s Pompeu Fabra University have now developed a technique — reported in the journal Nature — that advances the ability of these tools, such as ChatGPT, to make compositional generalizations. This technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and is on par with, and in some cases better than, human performance. MLC centers on training neural networks — the engines driving ChatGPT and related technologies for speech recognition and natural language processing — to become better at compositional generalization through practice.
    Developers of existing systems, including large language models, have hoped that compositional generalization will emerge from standard training methods, or have developed special-purpose architectures in order to achieve these abilities. MLC, in contrast, shows how explicitly practicing these skills allows these systems to unlock new powers, the authors note.
    “For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the authors of the paper. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”
    In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally — for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills. (A toy illustration of this episode structure appears at the end of this item.)
    To test the effectiveness of MLC, Lake, co-director of NYU’s Minds, Brains, and Machines Initiative, and Marco Baroni, a researcher at the Catalan Institute for Research and Advanced Studies and professor at the Department of Translation and Language Sciences of Pompeu Fabra University, conducted a series of experiments with human participants that were identical to the tasks performed by MLC.
    In addition, rather than learn the meaning of actual words — terms humans would already know — they also had to learn the meaning of nonsensical terms (e.g., “zup” and “dax”) defined by the researchers and know how to apply them in different ways. MLC performed as well as the human participants — and, in some cases, better than its human counterparts. MLC and people also outperformed ChatGPT and GPT-4, which, despite their striking general abilities, showed difficulties with this learning task.
    “Large language models such as ChatGPT still struggle with compositional generalization, though they have gotten better in recent years,” observes Baroni, a member of Pompeu Fabra University’s Computational Linguistics and Linguistic Theory research group. “But we think that MLC can further improve the compositional skills of large language models.” More
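    The episodic setup Lake describes can be pictured with a toy compositional grammar. The sketch below only shows what one such episode might look like (a made-up primitive word plus study and query examples); it is not the authors’ MLC training code, and the vocabulary, modifiers and episode format are assumptions for illustration.

    ```python
    # Toy illustration of the episodic setup described above: each episode
    # introduces a new primitive word and asks for compositional uses of it.
    # The grammar and vocabulary are invented; this is not the published
    # MLC training procedure.

    import random

    MODIFIERS = {
        "twice": lambda actions: actions * 2,
        "thrice": lambda actions: actions * 3,
    }

    def interpret(command, primitive, action):
        """Ground a command like 'dax twice', given that the primitive maps to `action`."""
        words = command.split()
        assert words[0] == primitive
        actions = [action]
        for w in words[1:]:
            actions = MODIFIERS[w](actions)
        return actions

    def make_episode(seed=0):
        rng = random.Random(seed)
        primitive = rng.choice(["dax", "zup", "wif"])  # nonsense word for this episode
        action = f"ACTION_{primitive.upper()}"
        study = [(primitive, interpret(primitive, primitive, action))]
        query = [(f"{primitive} {m}", interpret(f"{primitive} {m}", primitive, action))
                 for m in MODIFIERS]
        return {"study": study, "query": query}

    print(make_episode(seed=1))
    # A learner trained across many such episodes is rewarded for generalizing
    # from the study example to the compositional queries.
    ```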