More stories

  • Researchers develop solid-state thermal transistor for better heat management

    A team of researchers from UCLA has unveiled a first-of-its-kind stable and fully solid-state thermal transistor that uses an electric field to control a semiconductor device’s heat movement.
    The group’s study, to be published in the Nov. 3 issue of Science, details how the device works and its potential applications. With its high switching speed and performance, the transistor could open new frontiers in the heat management of computer chips through atomic-level design and molecular engineering. The advance could also further the understanding of how heat is regulated in the human body.
    “The precision control of how heat flows through materials has been a long-held but elusive dream for physicists and engineers,” said the study’s co-author Yongjie Hu, a professor of mechanical and aerospace engineering at the UCLA Samueli School of Engineering. “This new design principle takes a big leap toward that, as it manages the heat movement with the on-off switching of an electric field, just as has been done with electrical transistors for decades.”
    Electrical transistors are the foundational building blocks of modern information technology. First developed at Bell Labs in the 1940s, they have three terminals: a gate, a source and a drain. When an electric field is applied through the gate, it regulates how electricity (in the form of electrons) moves through the chip. These semiconductor devices can amplify or switch electrical signals and power. But as transistors have shrunk over the years, billions can now fit on a single chip, and the heat generated by the movement of electrons increasingly affects chip performance. Conventional heat sinks passively draw heat away from hotspots, but a more dynamic control to actively regulate heat has remained a challenge.
    Previous efforts to tune thermal conductivity have relied on moving parts, ionic motion or liquid components, and their performance has suffered as a result: switching speeds for heat movement on the order of minutes or far slower, problems with reliability, and incompatibility with semiconductor manufacturing.
    The new thermal transistor combines a field effect (the modulation of a material’s thermal conductivity by an external electric field) with a fully solid-state design (no moving parts), offering high performance and compatibility with integrated circuits in semiconductor manufacturing processes. The team’s design uses the field effect on charge dynamics at an atomic interface to switch and amplify a heat flux continuously while consuming negligible power.
    The UCLA team demonstrated electrically gated thermal transistors that achieved record-high performance, with a switching speed of more than 1 megahertz (1 million cycles per second), 1,300% tunability in thermal conductance and reliable performance over more than 1 million switching cycles.
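    For context, a minimal worked reading of that figure (assuming, as is common, that tunability is quoted relative to the off state; the press release does not spell out the convention):

        \[ \text{tunability} = \frac{G_{\mathrm{on}} - G_{\mathrm{off}}}{G_{\mathrm{off}}} \times 100\% \]

    Under this convention, 1,300% means the gated (“on”) thermal conductance is about 14 times the ungated (“off”) value; if the figure were instead a simple on/off ratio, the factor would be 13.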

  • AI trained to identify least green homes

    ‘Hard-to-decarbonize’ (HtD) houses are responsible for over a quarter of all direct housing emissions — a major obstacle to achieving net zero — but are rarely identified or targeted for improvement.
    Now a new ‘deep learning’ model trained by researchers from Cambridge University’s Department of Architecture promises to make it far easier, faster and cheaper to identify these high-priority problem properties and develop strategies to improve their green credentials.
    Houses can be ‘hard to decarbonize’ for various reasons, including their age, structure, location, socioeconomic barriers and the availability of data. Policymakers have tended to focus mostly on generic buildings or specific hard-to-decarbonize technologies, but the study, published in the journal Sustainable Cities and Society, could help to change this.
    Maoran Sun, an urban researcher and data scientist, and his PhD supervisor Dr Ronita Bardhan, who leads Cambridge’s Sustainable Design Group, show that their AI model can classify HtD houses with 90% precision and expect this to rise as they add more data, work which is already underway.
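    For readers weighing that figure, precision is the share of houses the model flags as HtD that truly are HtD; it is not overall accuracy:

        \[ \text{precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}} \]

    A high-precision model rarely sends auditors to the wrong houses, though it says nothing about how many HtD houses go unflagged (that is recall).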
    Dr Bardhan said: “This is the first time that AI has been trained to identify hard-to-decarbonize buildings using open-source data.
    “Policymakers need to know how many houses they have to decarbonize, but they often lack the resources to perform detailed audits on every house. Our model can direct them to high-priority houses, saving them precious time and resources.”
    The model also helps authorities to understand the geographical distribution of HtD houses, enabling them to target and deploy interventions efficiently.

  • What a ‘2D’ quantum superfluid feels like to the touch

    Researchers from Lancaster University in the UK have discovered how superfluid helium-3 (3He) would feel if you could put your hand into it.
    The interface between the exotic world of quantum physics and the classical physics of the human experience is one of the major open problems in modern physics.
    Dr Samuli Autti is the lead author of the research published in Nature Communications.
    Dr Autti said: “In practical terms, we don’t know the answer to the question ‘how does it feel to touch quantum physics?’
    “These experimental conditions are extreme and the techniques complicated, but I can now tell you how it would feel if you could put your hand into this quantum system.
    “Nobody has been able to answer this question during the 100-year history of quantum physics. We now show that, at least in superfluid 3He, this question can be answered.”
    The experiments were carried out at about one ten-thousandth of a degree above absolute zero in a special refrigerator and made use of a mechanical resonator the size of a finger to probe the very cold superfluid.

  • Optical-fiber based single-photon light source at room temperature for next-generation quantum processing

    Quantum-based systems promise faster computing and stronger encryption for computation and communication systems. These systems can be built on fiber networks involving interconnected nodes which consist of qubits and single-photon generators that create entangled photon pairs.
    In this regard, rare-earth (RE) atoms and ions in solid-state materials are highly promising as single-photon generators. These materials are compatible with fiber networks and emit photons across a broad range of wavelengths. Due to their wide spectral range, optical fibers doped with these RE elements could find use in various applications, such as free-space telecommunication, fiber-based telecommunications, quantum random number generation, and high-resolution image analysis. However, so far, single-photon light sources have been developed using RE-doped crystalline materials at cryogenic temperatures, which limits the practical applications of quantum networks based on them.
    In a study published in Volume 20, Issue 4 of the journal Physical Review Applied on 16 October 2023, a team of researchers from Japan, led by Associate Professor Kaoru Sanaka from Tokyo University of Science (TUS), successfully developed a single-photon light source consisting of ytterbium ions (Yb3+) doped into an amorphous silica optical fiber, operating at room temperature. Associate Professor Mark Sadgrove and Mr. Kaito Shimizu from TUS and Professor Kae Nemoto from the Okinawa Institute of Science and Technology Graduate University were also part of this study. This newly developed single-photon light source eliminates the need for expensive cooling systems and has the potential to make quantum networks more cost-effective and accessible.
    “Single-photon light sources are devices that control the statistical properties of photons, which represent the smallest energy units of light,” explains Dr. Sanaka. “In this study, we have developed a single-photon light source using an optical fiber material doped with optically active RE elements. Our experiments also reveal that such a source can be generated directly from an optical fiber at room temperature.”
    Ytterbium is an RE element with favorable optical and electronic properties, making it a suitable candidate for doping the fiber. It has a simple energy-level structure, and the ytterbium ion’s excited state has a long fluorescence lifetime of around one millisecond.
    To fabricate the ytterbium-doped optical fiber, the researchers tapered a commercially available ytterbium-doped fiber using a heat-and-pull technique, where a section of the fiber is heated and then pulled with tension to gradually reduce its diameter.
    Within the tapered fiber, individual RE atoms emit photons when excited with a laser. The separation between these RE atoms plays a crucial role in defining the fiber’s optical properties. For instance, if the average separation between the individual RE atoms exceeds the optical diffraction limit, which is determined by the wavelength of the emitted photons, the emitted light from these atoms appears as though it is coming from clusters rather than distinct individual sources.
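    As a rough guide, the textbook Abbe criterion sets the scale of that diffraction limit; the wavelength and numerical aperture below are illustrative assumptions, not values from the paper:

        \[ d \approx \frac{\lambda}{2\,\mathrm{NA}} \]

    For emission near \(\lambda \approx 1\ \mu\mathrm{m}\) (typical of Yb\(^{3+}\)) collected with a numerical aperture of 0.5, this gives roughly 1 micrometre, so ions spaced farther apart than that can in principle be resolved as individual emitters.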
    To confirm the nature of these emitted photons, the researchers employed an analytical method known as autocorrelation, which assesses the similarity between a signal and a delayed version of itself. By analyzing the emitted photon pattern using autocorrelation, the researchers observed non-resonant emissions and obtained evidence of photon emission from a single ytterbium ion in the doped fiber.
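    To make the autocorrelation step concrete, the sketch below shows one common way to estimate the second-order correlation g2(τ) from photon arrival timestamps recorded on two detectors (a Hanbury Brown-Twiss arrangement). It is a minimal illustration under assumed inputs, not the team’s analysis code; the usual single-emitter signature is g2(0) < 0.5.

        import numpy as np

        def g2_estimate(clicks_a, clicks_b, bin_width, max_delay):
            """Histogram coincidences between two detectors and normalize by
            the coincidence rate expected for uncorrelated (Poissonian) light.
            clicks_a, clicks_b: sorted arrays of photon arrival times (s)."""
            delays = []
            j = 0
            for t in clicks_a:
                # advance to the first detector-B click inside the +/- max_delay window
                while j < len(clicks_b) and clicks_b[j] < t - max_delay:
                    j += 1
                k = j
                while k < len(clicks_b) and clicks_b[k] <= t + max_delay:
                    delays.append(clicks_b[k] - t)
                    k += 1
            bins = np.arange(-max_delay, max_delay + bin_width, bin_width)
            counts, edges = np.histogram(delays, bins=bins)
            # expected coincidences per bin for two independent Poissonian streams
            duration = max(clicks_a[-1], clicks_b[-1])
            expected = len(clicks_a) * len(clicks_b) * bin_width / duration
            return edges[:-1] + bin_width / 2, counts / expected  # (tau, g2)

    A dip of the normalized histogram below 0.5 at zero delay indicates that at most one photon is being emitted at a time, the defining property of a single-photon source.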
    While the quality and quantity of the emitted photons can be enhanced further, the developed ytterbium-doped optical fiber can be manufactured without the need for expensive cooling systems. This overcomes a significant hurdle and opens doors to various next-generation quantum information technologies. “We have demonstrated a low-cost single-photon light source with a selectable wavelength and without the need for a cooling system. Going ahead, it can enable various next-generation quantum information technologies such as true random number generators, quantum communication, quantum logic operations and high-resolution image analysis beyond the diffraction limit,” concludes Dr. Sanaka.

  • Learning to forget — a weapon in the arsenal against harmful AI

    With the AI summit well underway, researchers are keen to raise the very real problem associated with the technology — teaching it how to forget.
    Society is now abuzz with modern AI and its exceptional capabilities; we are constantly reminded of its potential benefits, across so many areas, permeating practically all facets of our lives — but also of its dangers.
    In an emerging field of research, scientists are highlighting an important weapon in our arsenal towards mitigating the risks of AI — ‘machine unlearning’. They are helping to figure out new ways of making AI models known as Deep Neural Networks (DNNs) forget data which poses a risk to society.
    The problem is that re-training AI programmes to ‘forget’ data is an expensive and arduous task. Modern DNNs, such as those based on ‘Large Language Models’ (like ChatGPT, Bard, etc.), require massive resources to train, taking weeks or months. They also consume tens of gigawatt-hours of energy per training run, with some research estimating this at as much energy as it takes to power thousands of households for a year.
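    As a rough sanity check on that estimate (the per-household figure below is an assumption; averages vary widely by country):

        \[ \frac{30\ \mathrm{GWh}}{\sim 10\ \mathrm{MWh\ per\ household\ per\ year}} \approx 3{,}000\ \text{household-years} \]

    So a training run in the tens of gigawatt-hours is indeed comparable to the annual electricity use of a few thousand households.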
    Machine unlearning is a burgeoning field of research that could remove troublesome data from DNNs quickly, cheaply and with fewer resources. The goal is to do so while continuing to ensure high accuracy. Computer science experts at the University of Warwick, in collaboration with Google DeepMind, are at the forefront of this research.
    Professor Peter Triantafillou, Department of Computer Science, University of Warwick, recently co-authored a publication ‘Towards Unbounded Machine Unlearning’. He said: “DNNs are extremely complex structures, composed of up to trillions of parameters. Often, we lack a solid understanding of exactly how and why they achieve their goals. Given their complexity, and the complexity and size of the datasets they are trained on, DNNs may be harmful to society.
    “DNNs may be harmful, for example, by being trained on data with biases — thus propagating negative stereotypes. The data might reflect existing prejudices, stereotypes and faulty societal assumptions — such as a bias that doctors are male, nurses female — or even racial prejudices.
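    For a concrete sense of what machine unlearning involves, below is a minimal sketch of one simple baseline: gradient ascent on the data to be forgotten, balanced by ordinary training on retained data. It illustrates the general idea only; it is not the method of the ‘Towards Unbounded Machine Unlearning’ paper, and all names here are assumed.

        import torch
        import torch.nn.functional as F

        def unlearn_epoch(model, forget_loader, retain_loader, lr=1e-4):
            """One pass of a naive unlearning baseline: push the loss up on the
            forget set while holding it down on the retained data."""
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
                opt.zero_grad()
                loss_forget = F.cross_entropy(model(xf), yf)
                loss_retain = F.cross_entropy(model(xr), yr)
                # descend on retained data, ascend on the forget set
                (loss_retain - loss_forget).backward()
                opt.step()

    The attraction is cost: a few such passes touch far fewer examples than retraining from scratch, although guaranteeing that the forgotten data leaves no trace in the weights is precisely the hard research question.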

  • Researchers discover new ultra strong material for microchip sensors

    Researchers at Delft University of Technology, led by assistant professor Richard Norte, have unveiled a remarkable new material with potential to impact the world of material science: amorphous silicon carbide (a-SiC). Beyond its exceptional strength, this material demonstrates mechanical properties crucial for vibration isolation on a microchip. Amorphous silicon carbide is therefore particularly suitable for making ultra-sensitive microchip sensors.
    The range of potential applications is vast, from ultra-sensitive microchip sensors and advanced solar cells to pioneering space exploration and DNA sequencing technologies. The material’s strength combined with its scalability makes it exceptionally promising.
    Ten medium-sized cars
    “To better understand the crucial characteristic of ‘amorphous’, think of most materials as being made up of atoms arranged in a regular pattern, like an intricately built Lego tower,” explains Norte. “These are termed ‘crystalline’ materials; a diamond, for example, has its carbon atoms perfectly aligned, contributing to its famed hardness.” However, amorphous materials are akin to a randomly piled set of Legos, where atoms lack consistent arrangement. But contrary to expectations, this randomisation doesn’t result in fragility. In fact, amorphous silicon carbide is a testament to strength emerging from such randomness.
    The tensile strength of this new material is 10 gigapascals (GPa). “To grasp what this means, imagine trying to stretch a piece of duct tape until it breaks. If you wanted to simulate a tensile stress equivalent to 10 GPa, you’d need to hang about ten medium-sized cars end-to-end off that strip before it breaks,” says Norte.
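    The arithmetic checks out for plausible tape dimensions (the width and thickness here are assumptions for illustration, not figures from the study):

        \[ F = \sigma A = 10^{10}\ \mathrm{Pa} \times (0.048\ \mathrm{m} \times 0.00025\ \mathrm{m}) = 1.2 \times 10^{5}\ \mathrm{N} \]

    That is the weight of about 12 tonnes (\(F/g \approx 12{,}200\ \mathrm{kg}\)), i.e. roughly ten cars of about 1.2 tonnes each.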
    Nanostrings
    The researchers adopted an innovative method to test this material’s tensile strength. Instead of traditional methods that might introduce inaccuracies from the way the material is anchored, they turned to microchip technology. By growing films of amorphous silicon carbide on a silicon substrate and suspending them, they leveraged the geometry of the nanostrings to induce high tensile forces. By fabricating many such structures with increasing tensile forces, they meticulously observed the point of breakage. This microchip-based approach not only ensures unprecedented precision but also paves the way for future material testing.

  • AI should be better understood and managed — new research warns

    Artificial Intelligence (AI) and algorithms can be, and are being, used to radicalize, polarize and spread racism and political instability, says a Lancaster University academic.
    Professor of International Security at Lancaster University Joe Burton argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can be contributors to polarization, radicalism and political violence — posing a threat to national security.
    Further to this, he says, securitization processes (presenting technology as an existential threat) have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated.
    Professor Burton’s article, ‘Algorithmic extremism? The securitization of Artificial Intelligence (AI) and its impact on radicalism, polarization and political violence’, is published in Elsevier’s high-impact journal Technology in Society.
    “AI is often framed as a tool to be used to counter violent extremism,” says Professor Burton. “Here is the other side of the debate.”
    The paper looks at how AI has been securitized throughout its history and in media and popular-culture depictions, and explores modern examples of AI having polarizing, radicalizing effects that have contributed to political violence.
    The article cites the classic film series The Terminator, which depicted a holocaust committed by a ‘sophisticated and malignant’ artificial intelligence, as having done more than anything else to frame popular awareness of artificial intelligence and the fear that machine consciousness could lead to devastating consequences for humanity — in this case a nuclear war and a deliberate attempt to exterminate a species.

  • Contrary to common belief, artificial intelligence will not put you out of work

    New research in the INFORMS journal Management Science is providing insights for business leaders on how work experience affects employees interacting with AI.
    The study, “Friend or Foe? Teaming Between Artificial Intelligence and Workers with Variation in Experience,” looks at the influence of two major types of human work experience (narrow experience based on the specific task volume and broad experience based on seniority) on the human-AI team dynamics.
    “We developed an AI solution for medical chart coding in a publicly traded company and conducted a field study among the knowledge workers,” says Weiguang Wang of the University of Rochester, lead author of the study. “We were surprised by what we found in the study. The different dimensions of work experience have distinct interactions with AI and play unique roles in human-AI teaming.”
    “While one might think that less experienced workers should benefit more from the help of AI, we find the opposite — AI benefits workers with greater task-based experience. At the same time, senior workers, despite their greater experience, gain less from AI than their junior colleagues,” says Guodong (Gordon) Gao of Johns Hopkins Carey Business School, and study co-author.
    Further investigation reveals that senior workers’ relatively lower productivity lift from AI is not a result of seniority per se, but rather of their higher sensitivity to the imperfections of AI, which lowers their trust in it.
    “This finding presents a dilemma: Employees with greater experience are in a better position to leverage AI for productivity, but the senior employees who assume greater responsibilities and care about the organization tend to shy away from AI because they see the risks of relying on AI’s assistance. As a result, they are not effectively leveraging AI,” says Ritu Agarwal of Johns Hopkins Carey Business School, a co-author of the study.
    The researchers urge employers to carefully consider workers’ different experience types and levels when introducing AI into the workplace. New workers with less task experience are at a disadvantage in leveraging AI, while senior workers with more organizational experience may be concerned about the potential risks it poses. Addressing these distinct challenges is key to productive human-AI teaming.