More stories

  • Opening up the potential of thin-film electronics for flexible chip design

    The mass production of conventional silicon chips relies on a successful business model with large ‘semiconductor fabrication plants’ or ‘foundries’. New research by KU Leuven and imec shows that this ‘foundry’ model can also be applied to the field of flexible, thin-film electronics. Adopting this approach would give innovation in the field a huge boost.
    Silicon semiconductors have become the ‘oil’ of the computer age, as the recent chip shortage crisis demonstrated. However, one disadvantage of conventional silicon chips is that they are not mechanically flexible. The field of flexible electronics, by contrast, is driven by an alternative semiconductor technology: the thin-film transistor, or TFT. The applications in which TFTs can be used are legion: from wearable healthcare patches and neuroprobes to digital microfluidics, robotic interfaces, bendable displays and Internet of Things (IoT) electronics.
    TFT technology is well developed, but unlike conventional semiconductor technology, its potential across applications has barely been exploited. In fact, TFTs are currently mass-produced mainly for integration into the displays of smartphones, laptops and smart TVs — where they are used to control pixels individually. This limits the freedom of chip designers who dream of using TFTs in flexible microchips and coming up with innovative, TFT-based applications. “This field can benefit hugely from a foundry business model similar to that of the conventional chip industry,” says Kris Myny, professor in KU Leuven’s Emerging Technologies, Systems and Security unit in Diepenbeek and guest professor at imec.
    Foundry business model
    At the heart of the worldwide microchip market is the so-called foundry model. In this business model, large ‘semiconductor fabrication plants’ or ‘foundries’ (like TSMC from Taiwan) focus on the mass production of chips on silicon wafers. These are then used by the foundries’ clients — the companies that design and order the chips — to integrate them in specific applications. Thanks to this business model, the latter companies have access to complex semiconductor manufacturing to design the chips they need.
    Myny’s group has now shown that such a business model is also viable in the field of thin-film electronics. They designed a TFT-based microprocessor, had it produced by two foundries, and then successfully tested it in their lab. The same chip was produced in two versions, based on two separate, mainstream TFT technologies using different substrates. Their research paper is published in Nature.
    Multi-project approach
    The microprocessor Myny and his colleagues built is the iconic MOS 6502. Today this chip is a ‘museum piece’, but in the 1970s it drove early Apple, Commodore and Nintendo computers. The group fabricated the 6502 chip on a wafer (using amorphous indium-gallium-zinc-oxide) and on a plate (using low-temperature polycrystalline silicon). In both cases the chips were manufactured on the substrate together with other chips, or ‘projects’. This ‘multi-project’ approach enables foundries to produce different designers’ chips on demand on a single substrate.
    The chip Myny’s group made is less than 30 micrometres thick, thinner than a human hair. That makes it ideal for medical applications such as wearable patches. Such ultra-thin wearables can be used to record electrocardiograms or electromyograms, to monitor the condition of the heart and muscles, respectively. They would feel just like a sticker, whereas patches with a silicon-based chip always feel knobbly.
    Although the 6502 microprocessor’s performance is not comparable with that of modern chips, this research demonstrates that flexible chips, too, can be designed and produced in a multi-project approach, analogous to the way this happens in the conventional chip industry. Myny concludes: “We will not compete with silicon-based chips; we want to stimulate and accelerate innovation based on flexible, thin-film electronics.”

  • A simple ‘twist’ improves the engine of clean fuel generation

    Researchers have found a way to super-charge the ‘engine’ of sustainable fuel generation — by giving the materials a little twist.
    The researchers, led by the University of Cambridge, are developing low-cost light-harvesting semiconductors that power devices for converting water into clean hydrogen fuel, using just the power of the sun. These semiconducting materials, known as copper oxides, are cheap, abundant and non-toxic, but their performance does not come close to silicon, which dominates the semiconductor market.
    However, the researchers found that by growing the copper oxide crystals in a specific orientation so that electric charges move through the crystals at a diagonal, the charges move much faster and further, greatly improving performance. Tests of a copper oxide light harvester, or photocathode, based on this fabrication technique showed a 70% improvement over existing state-of-the-art oxide photocathodes, while also showing greatly improved stability.
    The researchers say their results, reported in the journal Nature, show how low-cost materials could be fine-tuned to power the transition away from fossil fuels and toward clean, sustainable fuels that can be stored and used with existing energy infrastructure.
    Copper (I) oxide, or cuprous oxide, has been touted as a cheap potential replacement for silicon for years, since it is reasonably effective at capturing sunlight and converting it into electric charge. However, much of that charge tends to get lost, limiting the material’s performance.
    “Like other oxide semiconductors, cuprous oxide has its intrinsic challenges,” said co-first author Dr Linfeng Pan from Cambridge’s Department of Chemical Engineering and Biotechnology. “One of those challenges is the mismatch between how deep light is absorbed and how far the charges travel within the material, so most of the oxide below the top layer of material is essentially dead space.”
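The mismatch Pan describes can be sketched with a toy Beer-Lambert estimate. The absorption coefficient and diffusion length below are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative, assumed numbers -- not from the paper:
alpha = 2e4      # absorption coefficient, 1/cm (assumed)
L_d = 100e-7     # charge diffusion length, cm (100 nm, assumed)

# Fraction of incoming photons absorbed within one diffusion length
# of the surface, i.e. close enough for their charges to be collected:
f_useful = 1 - math.exp(-alpha * L_d)
print(f"{f_useful:.1%} of the light is absorbed within reach; the rest lands in 'dead space'")
```

Under these assumed numbers, most of the light is absorbed deeper than the charges can travel, which is the "dead space" problem the quote refers to.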
    “For most solar cell materials, it’s defects on the surface of the material that cause a reduction in performance, but with these oxide materials, it’s the other way round: the surface is largely fine, but something about the bulk leads to losses,” said Professor Sam Stranks, who led the research. “This means the way the crystals are grown is vital to their performance.”
    To develop cuprous oxides to the point where they can be a credible contender to established photovoltaic materials, they need to be optimised so they can efficiently generate and move electric charges — pairs of electrons and positively charged ‘holes’ — when sunlight hits them.

    One potential optimisation approach is single-crystal thin films — very thin slices of material with a highly-ordered crystal structure, which are often used in electronics. However, making these films is normally a complex and time-consuming process.
    Using thin film deposition techniques, the researchers were able to grow high-quality cuprous oxide films at ambient pressure and room temperature. By precisely controlling growth and flow rates in the chamber, they were able to ‘shift’ the crystals into a particular orientation. Then, using high temporal resolution spectroscopic techniques, they were able to observe how the orientation of the crystals affected how efficiently electric charges moved through the material.
    “These crystals are basically cubes, and we found that when the electrons move through the cube at a body diagonal, rather than along the face or edge of the cube, they move an order of magnitude further,” said Pan. “The further the electrons move, the better the performance.”
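In crystallographic terms, the ‘edge’, ‘face diagonal’ and ‘body diagonal’ of the cube correspond to the [100], [110] and [111] directions of the cubic cell. A short sketch of these directions (the lattice constant is set to 1 for illustration):

```python
import numpy as np

a = 1.0  # cubic lattice constant (arbitrary units)
directions = {
    "edge [100]":          a * np.array([1.0, 0.0, 0.0]),
    "face diagonal [110]": a * np.array([1.0, 1.0, 0.0]),
    "body diagonal [111]": a * np.array([1.0, 1.0, 1.0]),
}
for name, d in directions.items():
    unit = d / np.linalg.norm(d)   # direction the charges would travel along
    print(f"{name}: length {np.linalg.norm(d):.3f} a, unit vector {unit.round(3)}")
```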
    “Something about that diagonal direction in these materials is magic,” said Stranks. “We need to carry out further work to fully understand why and optimise it further, but it has so far resulted in a huge jump in performance.” Tests of a cuprous oxide photocathode made using this technique showed an increase in performance of more than 70% over existing state-of-the-art electrodeposited oxide photocathodes.
    “In addition to the improved performance, we found that the orientation makes the films much more stable, but factors beyond the bulk properties may be at play,” said Pan.
    The researchers say that much more research and development is still needed, but this and related families of materials could have a vital role in the energy transition.
    “There’s still a long way to go, but we’re on an exciting trajectory,” said Stranks. “There’s a lot of interesting science to come from these materials, and it’s interesting for me to connect the physics of these materials with their growth, how they form, and ultimately how they perform.”
    The research was a collaboration with École Polytechnique Fédérale de Lausanne, Nankai University and Uppsala University. The research was supported in part by the European Research Council, the Swiss National Science Foundation, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Sam Stranks is Professor of Optoelectronics in the Department of Chemical Engineering and Biotechnology, and a Fellow of Clare College, Cambridge.

  • Child pedestrians, self-driving vehicles: What’s the safest scenario for crossing the road?

    Crossing a busy street safely is typically the result of a social exchange. Pedestrians look for cues — a wave, a head nod, a winking flash of the headlights, and, of course, a full vehicle stop — to know it’s safe to cross.
    But those cues could be absent or different with self-driving vehicles. How will children and adults know when it’s safe to cross the road?
    In a new study, University of Iowa researchers investigated how pre-teenage children decided when it was safe to cross a residential street with oncoming self-driving cars. The researchers found children made the safest choices when a self-driving car signalled, via a green light on top of the vehicle, that it was safe to cross only once it had arrived at the intersection and stopped. When self-driving cars turned on the green light farther away from the crossing point, even while slowing down, children made riskier crossings, the researchers learned.
    “Children exhibited much safer behavior when the light turned green later,” says Jodie Plumert, professor in the Department of Psychological and Brain Sciences and the study’s senior author. “They seemed to treat it like a walk light and waited for that light to come on before starting to cross. Our recommendation, then, for autonomous vehicle design is that their signals should turn on when the car comes to a stop, but not before.”
    The difference in the timing of the green light signal from the self-driving car is important: Children are inclined to use the light as the vehicle’s clearance to go ahead and cross, trusting that it will stop as it gets closer to the intersection. But as Plumert and co-author Elizabeth O’Neal point out, that could invite peril.
    “This could be dangerous if the car for some reason does not stop, though pedestrians will have the benefit of getting across the road sooner,” says Plumert, who is the Russell B. and Florence D. Day Chair in Liberal Arts and Sciences.
    “So, even though it may be tempting to make the traffic flow more efficient by having these signals come on early, it’s probably pretty dangerous for kids in particular,” adds O’Neal, assistant professor in the Department of Community and Behavioral Health and the study’s corresponding author.

    Some may see self-driving vehicles as a futuristic technology, but they are operating right now in American cities. The Insurance Institute for Highway Safety projects there will be 3.5 million vehicles with self-driving functionality on U.S. roads by next year, and 4.5 million by 2030. This year, an autonomous-vehicle taxi service, called Waymo One, will operate in four cities, including new routes in Los Angeles and Austin, Texas.
    This comes as pedestrian deaths from motor vehicles remain a serious concern. According to the Governors Highway Safety Association, more than 7,500 pedestrians were killed by drivers in 2022, a 40-year high.
    “The fact is drivers don’t always come to a complete stop, even with stop signs,” notes Plumert, who has studied vehicle-pedestrian interactions since 2012. “People are running stop signs all the time. Sometimes drivers don’t see people. Sometimes they’re just spacing out.”
    The researchers aimed to understand how children respond to two different cues from self-driving cars when deciding when to cross a road: gradual versus a sudden (later) slowing; and the distance from the crossing point when a green light signal atop the vehicle was activated. The researchers placed nearly 100 children ages 8 to 12 in a realistic simulated environment and asked them to cross one lane of a road with oncoming driverless vehicles. The crossings took place in an immersive, 3D interactive space at the Hank Virtual Environments Lab on the UI campus.
    Researchers observed and recorded the children’s crossing actions and spoke with them after the sessions to learn more about how they responded to the green light signaling and the timing of the vehicle slowing.
    One major difference in crossing behavior: when the car’s green light turned on farther away from the crossing point, child participants entered the intersection on average 1.5 seconds sooner than children in the scenario where the light came on only after the vehicle had stopped at the crossing point.
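To put that 1.5-second head start in perspective, a quick back-of-the-envelope calculation shows how far an approaching car travels in that time. The vehicle speed is an assumption for illustration, not a figure from the study:

```python
# How much closer does an approaching car get during the 1.5 s head start?
speed_mph = 25                            # typical residential limit (assumed)
speed_ms = speed_mph * 1609.344 / 3600    # convert mph to metres per second
head_start_s = 1.5                        # timing difference reported by the study

print(f"{speed_ms:.1f} m/s -> {speed_ms * head_start_s:.1f} m closer to the crossing")
```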

    “That time difference is actually quite significant,” Plumert notes. “A green light signal that flashes early is potentially dangerous because kids and even adults will use it as a cue to begin crossing, trusting that the car is going to come to a stop.”
    The results build on findings published in 2017 by Plumert and O’Neal that children up to their early teenage years had difficulty consistently crossing a street safely in a virtual environment, with accident rates as high as 8% among 6-year-olds.
    That danger underscores the need for clear, easy-to-understand signaling to children from self-driving vehicles, the researchers say. Researchers are testing various communicative signals, including flashing lights, projecting eyes on the windshield, splashing racing stripes on the edge of the windshield, and written words (like walk/don’t walk).
    “All have some utility, but children are a special case,” says O’Neal, who earned a doctorate in psychology at Iowa in 2018 and had been working as a postdoctoral researcher in Plumert’s lab before joining the faculty in the College of Public Health. “They may not always be able to incorporate a flashing light or a racing light to indicate that it’s slowing or that it’s going to yield to you.”
    Children naturally understood signaling using a green light and a red light, the researchers found. But timing is critical, they learned.
    “We think vehicle manufacturers should not consider the idea of turning the light on early or having the signal present early,” Plumert says, “because people will definitely use that, and they’ll get out there in front of the approaching vehicle. People hate to wait.”
    The study is titled, “Deciding when to cross in front of an autonomous vehicle: How child and adult pedestrians respond to eHMI timing and vehicle kinematics.” It published online on April 24 in the journal Accident Analysis and Prevention.
    Lakshmi Subramanian, who earned a doctorate from Iowa and now is at Kean University in New Jersey, shares first authorship on the study. Joseph Kearney, professor emeritus in the Department of Computer Science, is a senior author. Contributing authors include Nam-Yoon Kim and Megan Noonan in the Department of Psychological and Brain Sciences.
    The U.S. National Science Foundation and the U.S. Department of Transportation funded the research.

  • Condensed matter physics: Novel one-dimensional superconductor

    In a significant development in the field of superconductivity, researchers at The University of Manchester have successfully achieved robust superconductivity in high magnetic fields using a newly created one-dimensional (1D) system. This breakthrough offers a promising pathway to achieving superconductivity in the quantum Hall regime, a longstanding challenge in condensed matter physics.
    Superconductivity, the ability of certain materials to conduct electricity with zero resistance, holds profound potential for the advancement of quantum technologies. However, achieving superconductivity in the quantum Hall regime, characterised by quantised electrical conductance, has proven to be a formidable challenge.
    The research, published this week (25 April 2024) in Nature, details the extensive work of the Manchester team led by Professor Andre Geim, Dr Julien Barrier and Dr Na Xin to achieve superconductivity in the quantum Hall regime. Their initial efforts followed the conventional route, where counterpropagating edge states were brought into close proximity to each other. However, this approach proved to be limited.
    “Our initial experiments were primarily motivated by the strong persistent interest in proximity superconductivity induced along quantum Hall edge states,” explains Dr Barrier, the paper’s lead author. “This possibility has led to numerous theoretical predictions regarding the emergence of new particles known as non-abelian anyons.”
    The team then explored a new strategy inspired by their earlier work demonstrating that boundaries between domains in graphene could be highly conductive. By placing such domain walls between two superconductors, they achieved the desired ultimate proximity between counterpropagating edge states while minimising effects of disorder.
    “We were encouraged to observe large supercurrents at relatively ‘balmy’ temperatures up to one Kelvin in every device we fabricated,” Dr Barrier recalls.
    Further investigation revealed that the proximity superconductivity originated not from the quantum Hall edge states propagating along domain walls, but rather from strictly 1D electronic states existing within the domain walls themselves. These 1D states, proven to exist by Professor Vladimir Fal’ko’s theory group at the National Graphene Institute, exhibited a greater ability to hybridise with superconductivity than quantum Hall edge states. The inherent one-dimensional nature of the interior states is believed to be responsible for the observed robust supercurrents at high magnetic fields.

    This discovery of single-mode 1D superconductivity opens exciting avenues for further research. “In our devices, electrons propagate in two opposite directions within the same nanoscale space and without scattering,” Dr Barrier elaborates. “Such 1D systems are exceptionally rare and hold promise for addressing a wide range of problems in fundamental physics.”
    The team has already demonstrated the ability to manipulate these electronic states using a gate voltage and to observe standing electron waves that modulate the superconducting properties.
    “It is fascinating to think what this novel system can bring us in the future. The 1D superconductivity presents an alternative path towards realising topological quasiparticles combining the quantum Hall effect and superconductivity,” concludes Dr Xin. “This is just one example of the vast potential our findings hold.”
    20 years after the advent of the first 2D material, graphene, this research by The University of Manchester represents another step forward in the field of superconductivity. The development of this novel 1D superconductor is expected to open doors for advancements in quantum technologies and pave the way for further exploration of new physics, attracting interest from various scientific communities.

  • A novel universal light-based technique to control valley polarization in bulk materials

    An ICFO team, together with international collaborators, reports in Nature a new method that achieves, for the first time, valley polarization in centrosymmetric bulk materials in a non-material-specific way.
    This “universal technique” may have major applications linked to the control and analysis of different properties of 2D and 3D materials, which can in turn enable the advancement of cutting-edge fields such as information processing and quantum computing.
    Electrons inside solid materials can only take certain values of energy. The allowed energy ranges are called “bands” and the spaces between them, the forbidden energies, are known as “band-gaps.” Together they constitute the “band structure” of the material, which is a unique characteristic of each specific material.
    When physicists plot the band structure, they usually see that the resulting curves resemble mountains and valleys. In fact, a local energy maximum or minimum in the bands is technically termed a “valley,” and the field that studies and exploits how electrons in the material switch from one valley to another has been coined “valleytronics.”
    In standard semiconductor electronics, the electrons’ electric charge is the property most commonly exploited to encode and manipulate information. But these particles have other properties that could also be used for the same purpose, such as the valley they occupy. In the past decade, the main aim of valleytronics has been to achieve control of valley population (also known as valley polarization) in materials. Such an achievement could be used to create classical and quantum gates and bits, something that could really drive the development of computing and quantum information processing.
    Previous attempts had several drawbacks. For example, the light used to manipulate and change valley polarization had to be resonant: the energy of its photons (the particles that constitute light) had to correspond exactly to the energy of the band-gap of that particular material. Any small deviation reduced the efficiency of the method, so, given that each material has its own band-gap, generalizing the proposed mechanism seemed out of reach. Moreover, this process had only been achieved for monolayer structures (2D materials, just one atom thick). This requirement hindered its practical implementation, as monolayers are usually limited in size and quality, and difficult to engineer.
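The resonance condition can be made concrete: a photon of wavelength λ carries energy E = hc/λ, so it must be tuned to the band-gap of each specific material. A quick sketch, where the 1.8 eV gap is an assumed, typical monolayer value rather than a figure from the article:

```python
h = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e8      # speed of light, m/s
E_gap = 1.8           # band-gap of a typical monolayer semiconductor, eV (assumed)

lam_nm = h * c / E_gap * 1e9   # resonant wavelength in nanometres
print(f"photons must be tuned to about {lam_nm:.0f} nm for a {E_gap} eV gap")
```

A material with a different gap would demand a different wavelength, which is why the earlier resonant schemes could not be generalized.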
    Now, ICFO researchers Igor Tyulnev, Julita Poborska and Dr. Lenard Vamos, led by ICREA Prof. Jens Biegert, in collaboration with researchers from the Max Born Institute, the Max Planck Institute for the Science of Light, and the Instituto de Ciencia de Materiales de Madrid, have found a new universal method to induce valley polarization in centrosymmetric bulk materials. The discovery, published in Nature, unlocks the possibility to control and manipulate valley population without being restricted by the specific chosen material. At the same time, the method can be used to obtain a more detailed characterization of crystals and 2D materials.

    Valley polarization in bulk materials is possible
    The adventure began with the experimental group led by ICREA Prof. Jens Biegert at ICFO, who initially wanted to produce valley polarization with their particular method in 2D materials, following a previous theoretical proposal by Álvaro Jiménez, Rui Silva and Misha Ivanov. To set up the experiment, the initial measurement was attempted on bulk MoS2 (a bulk material is made of many monolayers stacked together), with the surprising result that they saw the signature of valley polarization. “When we started working on this project, we were told by our theory collaborators that showing valley polarization in bulk materials was rather impossible,” explains Julita Poborska.
    The theory team likewise notes that, at the very beginning, their model was only suitable for single 2D layers. “At first glance, it seemed that adding more layers would hinder the selection of specific valleys in the sample. But after the first experimental results, we adjusted the simulation to bulk materials and it validated the observations surprisingly well. We did not even try to fit anything. It is just the way it came out,” adds Prof. Misha Ivanov, the lead theorist. In the end, “it turned out that yes, you can actually valley-polarize bulk materials that are centrosymmetric, because of the symmetry conditions,” concludes Poborska.
    As Igor Tyulnev, first author of the article, explains: “Our experiment consisted in creating an intense light pulse with a polarization that fitted this internal structure. The result was the so-called ‘trefoil field’, whose symmetry matched the triangular sub-lattices that constitute hetero-atomic hexagonal materials.”
    This symmetry-matched strong field breaks the space and time symmetry within the material, and, more importantly, the resulting configuration depends on the orientation of the trefoil field with respect to the material. Therefore, “by simply rotating the incident light field, we were able to modulate the valley polarization,” concludes Tyulnev, a major achievement in the field and a confirmation of a novel universal technique that can control and manipulate the electron valleys in bulk materials.
    The experimental process
    The experiment can be explained in three main steps: First, the synthesis of the trefoil field; then its characterization; and finally, the actual production of valley polarization.

    The researchers emphasize the incredibly high precision that the characterization process required, as the trefoil field is made of not just one, but two coherently combined optical fields. One of them had to be circularly polarized in one direction, and the other needed to be the second harmonic of the first beam, polarized with the opposite handedness. They superimposed these fields onto each other, so that the total polarization in time traced the desired trefoil shape.
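A minimal numerical sketch of such a bicircular field, assuming equal strengths for the two colours: a circularly polarized fundamental plus a counter-rotating second harmonic. Advancing time by a third of the fundamental period rotates the field vector by 120 degrees, which is the threefold ‘trefoil’ symmetry the experiment matches to the lattice:

```python
import numpy as np

w = 1.0   # fundamental angular frequency (arbitrary units)
r = 1.0   # ratio of the two field strengths (assumed equal)

def trefoil(t):
    """Counter-rotating w + 2w field; its trace is a three-lobed 'trefoil'."""
    Ex = np.cos(w * t) + r * np.cos(2 * w * t)
    Ey = np.sin(w * t) - r * np.sin(2 * w * t)
    return Ex, Ey

t = np.linspace(0.0, 2 * np.pi / w, 1201)
Ex, Ey = trefoil(t)

# Check the threefold symmetry: a T/3 time shift equals a 120-degree rotation.
Ex_s, Ey_s = trefoil(t + 2 * np.pi / (3 * w))
cth, sth = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
err = max(np.abs(Ex_s - (cth * Ex - sth * Ey)).max(),
          np.abs(Ey_s - (sth * Ex + cth * Ey)).max())
print(f"max deviation from 120-degree symmetry: {err:.2e}")
```

Rotating the whole trefoil relative to the crystal, as the team did with the incident light, changes which sub-lattice the field lobes line up with.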
    Three years after the initial experimental attempts, Igor Tyulnev is thrilled by the recent Nature publication.
    The new universal method, he states, “can be used not only to control the properties of a wide variety of chemical species, but also to characterize crystals and 2D materials.”
    As ICREA Prof. Jens Biegert of ICFO remarks: “Our method may provide an important ingredient to engineer energy-efficient materials for efficient information storage and fast switching. This addresses the pressing need for low-energy-consumption devices and increased computational speed. I cannot promise that what we have provided is THE solution, but it is probably one solution to this big challenge.”

  • Lead-vacancy centers in diamond as building blocks for large-scale quantum networks

    Much like how electric circuits use components to control electronic signals, quantum networks rely on special components and nodes to transfer quantum information between different points, forming the foundation for building quantum systems. In the case of quantum networks, color centers in diamond, which are defects intentionally added to a diamond crystal, are crucial for generating and maintaining stable quantum states over long distances.
    When stimulated by external light, these color centers in diamond emit photons carrying information about their internal electronic states, especially the spin states. The interaction between the emitted photons and the spin states of the color centers enables quantum information to be transferred between different nodes in quantum networks.
    A well-known example of color centers in diamond is the nitrogen-vacancy (NV) center, where a nitrogen atom is added adjacent to missing carbon atoms in the diamond lattice. However, the photons emitted from NV color centers do not have well-defined frequencies and are affected by interactions with the surrounding environment, making it challenging to maintain a stable quantum system.
    To address this, an international group of researchers, including Associate Professor Takayuki Iwasaki from Tokyo Institute of Technology, has developed a single negatively charged lead-vacancy (PbV) center in diamond, where a lead atom is inserted between neighboring vacancies in a diamond crystal. In the study published in the journal Physical Review Letters on February 15, 2024, the researchers reveal that the PbV center emits photons of specific frequencies that are not influenced by the crystal’s vibrational energy. These characteristics make the photons dependable carriers of quantum information for large-scale quantum networks.
    For stable and coherent quantum states, the emitted photon must be transform-limited, meaning it should have the minimum possible spread in its frequency. Additionally, it should emit into the zero-phonon line (ZPL), meaning that the energy associated with the emission of photons is used only to change the electronic configuration of the quantum system, and is not exchanged with the vibrational modes (phonons) of the crystal lattice.
    To fabricate the PbV center, the researchers introduced lead ions beneath the diamond surface through ion implantation. An annealing process was then carried out to repair any damage caused by the lead ion implantation. The resulting PbV center exhibits a spin-1/2 system with four distinct energy states: the ground and excited states are each split into two levels. On photoexciting the PbV center, electron transitions between the energy levels produced four distinct ZPLs, which the researchers labelled A, B, C, and D in order of decreasing transition energy. Among these, the C transition was found to have a transform-limited linewidth of 36 MHz.
    “We investigated the optical properties of single PbV centers under resonant excitation and demonstrated that the C transition, one of the ZPLs, nearly reaches the transform limit at 6.2 K without prominent phonon-induced relaxation or spectral diffusion,” says Dr. Iwasaki.
    The PbV center stands out by being able to maintain its linewidth at approximately 1.2 times the transform-limit at temperatures as high as 16 K. This is important to achieve around 80% visibility in two-photon interference. In contrast, color centers like SiV, GeV, and SnV need to be cooled to much lower temperatures (4 K to 6 K) for similar conditions. By generating well-defined photons at relatively high temperatures compared to other color centers, the PbV center can function as an efficient quantum light-matter interface, which enables quantum information to be carried long distances by photons via optical fibers.
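Two of the quoted figures can be cross-checked with simple estimates. This is a hedged sketch: the visibility relation below is a simplified Lorentzian approximation, not the paper's full analysis:

```python
import math

# A transform-limited linewidth dv implies an excited-state lifetime tau = 1/(2*pi*dv):
dv = 36e6                      # Hz, the C-transition linewidth from the study
tau_ns = 1e9 / (2 * math.pi * dv)
print(f"implied excited-state lifetime: {tau_ns:.1f} ns")

# For Lorentzian lines, two-photon interference visibility scales roughly as
# (transform-limited width) / (actual width) -- a simplified estimate:
visibility = 1 / 1.2           # linewidth at 16 K is ~1.2x the transform limit
print(f"estimated two-photon visibility: {visibility:.0%}")
```

The rough estimate of about 83% is consistent with the "around 80% visibility" figure quoted in the text.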
    “These results can pave the way for the PbV center to become a building block for constructing large-scale quantum networks,” concludes Dr. Iwasaki.

  • This tiny chip can safeguard user data while enabling efficient computing on a smartphone

    Health-monitoring apps can help people manage chronic diseases or stay on track with fitness goals, using nothing more than a smartphone. However, these apps can be slow and energy-inefficient because the vast machine-learning models that power them must be shuttled between a smartphone and a central memory server.
    Engineers often speed things up using hardware that reduces the need to move so much data back and forth. While these machine-learning accelerators can streamline computation, they are susceptible to attackers who can steal secret information.
    To reduce this vulnerability, researchers from MIT and the MIT-IBM Watson AI Lab created a machine-learning accelerator that is resistant to the two most common types of attacks. Their chip can keep a user’s health records, financial information, or other sensitive data private while still enabling huge AI models to run efficiently on devices.
    The team developed several optimizations that enable strong security while only slightly slowing the device. Moreover, the added security does not impact the accuracy of computations. This machine-learning accelerator could be particularly beneficial for demanding AI applications like augmented and virtual reality or autonomous driving.
    While implementing the chip would make a device slightly more expensive and less energy-efficient, that is sometimes a worthwhile price to pay for security, says lead author Maitreyi Ashok, an electrical engineering and computer science (EECS) graduate student at MIT.
    “It is important to design with security in mind from the ground up. If you are trying to add even a minimal amount of security after a system has been designed, it is prohibitively expensive. We were able to effectively balance a lot of these tradeoffs during the design phase,” says Ashok.
    Her co-authors include Saurav Maji, an EECS graduate student; Xin Zhang and John Cohn of the MIT-IBM Watson AI Lab; and senior author Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of EECS. The research will be presented at the IEEE Custom Integrated Circuits Conference.

    Side-channel susceptibility
    The researchers targeted a type of machine-learning accelerator called digital in-memory compute (IMC). A digital IMC chip performs computations inside a device’s memory, where pieces of a machine-learning model are stored after being moved over from a central server.
    The entire model is too big to store on the device, but by breaking it into pieces and reusing those pieces as much as possible, IMC chips reduce the amount of data that must be moved back and forth.
    But IMC chips can be susceptible to hackers. In a side-channel attack, a hacker monitors the chip’s power consumption and uses statistical techniques to reverse-engineer data as the chip computes. In a bus-probing attack, the hacker can steal bits of the model and dataset by probing the communication between the accelerator and the off-chip memory.
    Digital IMC speeds computation by performing millions of operations at once, but this complexity makes it tough to prevent attacks using traditional security measures, Ashok says.
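    A side-channel attack of this kind can be simulated in a few lines. The sketch below is illustrative only: the leakage model (power draw proportional to the Hamming weight of an intermediate value), the secret byte, and the sample count are all invented, and this is not the MIT team's setup.

    ```python
    import random

    def hw(x: int) -> int:
        """Hamming weight: number of 1 bits, a common model for power leakage."""
        return bin(x).count("1")

    SECRET = 0x5A
    random.seed(1)
    # Simulated power traces: leakage of (input XOR secret) plus measurement noise.
    inputs = [random.randrange(256) for _ in range(2000)]
    traces = [hw(p ^ SECRET) + random.gauss(0, 1.0) for p in inputs]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    # The attacker tries every candidate byte and keeps the best-correlated guess:
    best = max(range(256), key=lambda g: pearson([hw(p ^ g) for p in inputs], traces))
    print(hex(best))
    ```

    With only a few thousand traces the statistical correlation singles out the secret, which is why the masking countermeasure described next is needed.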
    She and her collaborators took a three-pronged approach to blocking side-channel and bus-probing attacks.

    First, they employed a security measure in which the data in the IMC are split into random pieces. For instance, a zero bit might be split into three bits that still equal zero after a logical operation. Because the IMC never computes with all the pieces in the same operation, a side-channel attack can never reconstruct the real information.
    But for this technique to work, random bits must be added to split the data. Because digital IMC performs millions of operations at once, generating so many random bits would involve too much computing. For their chip, the researchers found a way to simplify computations, making it easier to effectively split data while eliminating the need for random bits.
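    The splitting step described above matches a standard countermeasure known as Boolean masking (secret sharing over XOR). A minimal sketch of the idea; the share count and operations here are illustrative, not the chip's actual circuit:

    ```python
    import secrets
    from functools import reduce
    from operator import xor

    def split_bit(b: int, n: int = 3) -> list[int]:
        """Split one bit into n random shares; XORing all shares recovers the bit,
        while any proper subset of shares is uniformly random and reveals nothing."""
        shares = [secrets.randbits(1) for _ in range(n - 1)]
        shares.append(reduce(xor, shares, b))  # last share makes the XOR come out to b
        return shares

    def recombine(shares: list[int]) -> int:
        return reduce(xor, shares)

    # Linear operations such as XOR can even be computed share-by-share,
    # so the real value never has to be reassembled during the computation:
    x, y = split_bit(1), split_bit(0)
    z = [xi ^ yi for xi, yi in zip(x, y)]
    assert recombine(z) == 1 ^ 0
    ```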
    Second, they prevented bus-probing attacks using a lightweight cipher that encrypts the model stored in off-chip memory. This lightweight cipher only requires simple computations. In addition, they only decrypted the pieces of the model stored on the chip when necessary.
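    The article does not name the lightweight cipher used. As a rough sketch of the decrypt-on-demand pattern, the example below uses a hash-based counter-mode keystream as a stand-in; the key, chunk names, and cipher choice are all assumptions:

    ```python
    import hashlib

    def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        """Counter-mode keystream from a hash: a stand-in for the (unspecified) lightweight cipher."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key = b"\x01" * 16
    model_chunk = b"weights-tile-0"  # one piece of the model held in off-chip memory
    ct = xor_bytes(model_chunk, keystream(key, b"tile0", len(model_chunk)))
    # Only the chunk actually needed on-chip is decrypted, and only when needed:
    assert xor_bytes(ct, keystream(key, b"tile0", len(ct))) == model_chunk
    ```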
    Third, to improve security, they generated the key that decrypts the cipher directly on the chip, rather than moving it back and forth with the model. They generated this unique key from random variations in the chip that are introduced during manufacturing, using what is known as a physically unclonable function.
    “Maybe one wire is going to be a little bit thicker than another. We can use these variations to get zeros and ones out of a circuit. For every chip, we can get a random key that should be consistent because these random properties shouldn’t change significantly over time,” Ashok explains.
    They reused the memory cells on the chip, leveraging the imperfections in these cells to generate the key. This requires less computation than generating a key from scratch.
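    A physically unclonable function can be simulated by modeling each memory cell's power-up bias. Everything below is an illustrative toy (the bias values, cell count, and majority-vote scheme are invented; real designs use proper error correction rather than simple voting):

    ```python
    import random

    random.seed(7)
    N = 64
    # Manufacturing variation: each cell powers up as 1 with its own fixed,
    # chip-specific probability, strongly biased one way or the other.
    biases = [random.choice([0.05, 0.95]) for _ in range(N)]

    def power_up() -> list[int]:
        """One noisy read of all cells at power-up."""
        return [1 if random.random() < p else 0 for p in biases]

    def derive_key(reads: int = 15) -> int:
        """Majority-vote several power-up reads per cell to average out the noise."""
        samples = [power_up() for _ in range(reads)]
        bits = [1 if sum(s[i] for s in samples) > reads // 2 else 0 for i in range(N)]
        return int("".join(map(str, bits)), 2)

    # The same chip reproduces the same key, run after run, without ever storing it:
    assert derive_key() == derive_key()
    ```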
    “As security has become a critical issue in the design of edge devices, there is a need to develop a complete system stack focusing on secure operation. This work focuses on security for machine-learning workloads and describes a digital processor that uses cross-cutting optimization. It incorporates encrypted data access between memory and processor, approaches to preventing side-channel attacks using randomization, and exploiting variability to generate unique codes. Such designs are going to be critical in future mobile devices,” says Chandrakasan.
    Safety testing
    To test their chip, the researchers took on the role of hackers and tried to steal secret information using side-channel and bus-probing attacks.
    Even after making millions of attempts, they couldn’t reconstruct any real information or extract pieces of the model or dataset. The cipher also remained unbreakable. By contrast, it took only about 5,000 samples to steal information from an unprotected chip.
    The addition of security did reduce the energy efficiency of the accelerator, and it also required a larger chip area, which would make it more expensive to fabricate.
    The team is planning to explore methods that could reduce the energy consumption and size of their chip in the future, which would make it easier to implement at scale.
    “As it becomes too expensive, it becomes harder to convince someone that security is critical. Future work could explore these tradeoffs. Maybe we could make it a little less secure but easier to implement and less expensive,” Ashok says.
    The research is funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and a Mathworks Engineering Fellowship.

  • in

    Super Mario hackers’ tricks could protect software from bugs

    Video gamers who exploit glitches in games can help experts better understand buggy software, students at the University of Bristol suggest.
    Known as ‘speedrunners’, these gamers complete games as quickly as possible, often by discovering and exploiting the games’ malfunctions.
    The students examined four classic Super Mario games, and analysed 237 known glitches within them, classifying a variety of weaknesses. This research explores whether these are the same as the bugs exploited in more conventional software.
    Nintendo’s Super Mario is the quintessential video game. To understand the sorts of glitches speedrunners exploit, the team examined four of the earliest Mario platforming games — Super Mario Bros (1985), Super Mario Bros. 3 (1988), Super Mario World (1990) and Super Mario 64 (1996). Whilst these games are old, they are still competitively run by speedrunners, with new records reported in the news. The games are also well understood, having been studied by speedrunners for decades, ensuring that there are large numbers of well-researched glitches available for analysis.
    Currently the world record time for conquering Super Mario World stands at a blistering 41 seconds. The team set out to see whether understanding the glitches behind such runs can help software engineers make applications more robust.
    In the Super Mario platforming games Mario must rescue Princess Peach by jumping through an obstacle course of various platforms to reach a goal, avoiding baddies or defeating them by jumping on their heads. Players can collect power-ups along the way to unlock special abilities, and coins to increase their score. The Mario series of games is one of Nintendo’s flagship products, and one of the most influential video game series of all time.
    Dr Joseph Hallett from Bristol’s School of Computer Science explained: “Many early video games, such as the Super Mario games we have examined, were written for consoles that differ from the more uniform PC-like hardware of modern gaming systems.

    “Constraints stemming from the hardware, such as limited memory and buses, meant that aggressive optimization and tricks were required to make games run well.
    “Many of these techniques (for example, the NES’s memory mapping) are niche and can lead to bugs, by being so different to how many programmers usually expect things to work.”
    “Programming for these systems is closer to embedded development than most modern software, as it requires working around the limits of the hardware to create games. Despite the challenges of programming these systems, new and retro-inspired games are still being released.”
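    As an illustration of the kind of bug speedrunners exploit on these constrained systems, consider 8-bit counter wraparound, a classic class of retro-console glitch. The example below is invented for illustration and is not taken from the study:

    ```python
    def add_lives(lives: int, bonus: int) -> int:
        """8-bit counter arithmetic as on retro consoles: values wrap at 256."""
        return (lives + bonus) & 0xFF

    # Collecting "too many" extra lives wraps the counter around to a tiny value —
    # the same unchecked-arithmetic weakness (CWE-190, integer overflow or
    # wraparound) found in conventional software.
    assert add_lives(250, 10) == 4
    ```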
    Categorizing bugs in software helps developers recognize and understand similar problems.
    The Common Weakness Enumeration (CWE) is a category system for hardware and software weaknesses and vulnerabilities. The team identified seven new categories of weakness that were not previously captured by it.
    Dr Hallett explained: “We found that some of the glitches speedrunners use don’t have neat categorizations in existing software defect taxonomies and that there may be new kinds of bugs to look for in more general software.”
    The team thematically analysed the glitches using a code book of existing software weaknesses (the CWE) — a qualitative research method that helps categorize complex phenomena.

    Dr Hallett continued: “The cool bit of this research is that academia is starting to treat and appreciate the work speedrunners do and study something that hasn’t really been treated seriously before.
    “By studying speedrunners’ glitches we can better understand how they do it and whether the bugs they use are the same ones other software gets hacked with.
    “It turns out the speedrunners have some tricks that we didn’t know about before.”
    Now the team have turned their hand to studying Pokémon video games.