More stories

  • Measuring trust in AI

    Prompted by the increasing prominence of artificial intelligence (AI) in society, University of Tokyo researchers investigated public attitudes toward the ethics of AI. Their findings quantify how different demographics and ethical scenarios affect these attitudes. As part of this study, the team developed an octagonal visual metric, analogous to a rating system, which could be useful to AI researchers who wish to know how their work may be perceived by the public.
    Many people feel the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI in particular exemplifies this as it has become so pervasive in everyday life for so many, seemingly overnight. This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and mistrust of this key component of modern living. Who distrusts AI and in what ways are matters that would be useful to know for developers and regulators of AI technology, but these kinds of questions are not easy to quantify.
    Researchers at the University of Tokyo, led by Professor Hiromi Yokoyama from the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes toward ethical issues around AI. Through analysis of surveys, the team sought to answer two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how the respondents' own demographics affect their attitudes.
    Ethics cannot really be quantified, so to measure attitudes toward the ethics of AI, the team employed eight themes common to many AI applications that raised ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group has termed “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.
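    The octagonal metric can be pictured as a simple radar (spider) chart with one axis per theme. Below is a minimal sketch of such a chart in Python; the eight axes are the themes named above, while the scores are purely hypothetical placeholders, not values from the study.

    ```python
    # Minimal sketch of an "octagon measurement" drawn as a radar chart.
    # The eight themes come from the article; the scores are hypothetical.
    import numpy as np
    import matplotlib.pyplot as plt

    themes = [
        "Privacy", "Accountability", "Safety & security",
        "Transparency & explainability", "Fairness & non-discrimination",
        "Human control of technology", "Professional responsibility",
        "Promotion of human values",
    ]
    scores = [3.1, 2.8, 2.2, 3.4, 2.9, 2.5, 3.0, 3.3]  # hypothetical ratings, e.g. on a 1-5 scale

    # One angle per theme; repeat the first point to close the octagon.
    angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False).tolist()
    angles += angles[:1]
    values = scores + scores[:1]

    fig, ax = plt.subplots(subplot_kw={"polar": True})
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.2)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(themes, fontsize=7)
    ax.set_title("Octagon measurement (hypothetical scores)")
    plt.show()
    ```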
    Survey respondents were given a series of four scenarios to judge according to these eight criteria. Each scenario looked at a different application of AI. They were: AI-generated art, customer service AI, autonomous weapons and crime prediction.
    The survey respondents also gave the researchers information about themselves such as age, gender, occupation and level of education, as well as a measure of their level of interest in science and technology by way of an additional set of questions. This information was essential for the researchers to see what characteristics of people would correspond to certain attitudes.
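    As a rough illustration of how such survey data might be related to demographics, the sketch below groups hypothetical responses by scenario and age group. The column names, values and groupings are invented for illustration and are not the study's actual data or method.

    ```python
    # Toy example: relate respondent demographics to attitude scores.
    # All rows and column names are made up for illustration.
    import pandas as pd

    responses = pd.DataFrame({
        "age_group":    ["18-29", "30-49", "50+", "50+", "18-29", "30-49"],
        "gender":       ["F", "M", "F", "M", "M", "F"],
        "scenario":     ["AI weapons", "AI art", "AI weapons",
                         "Customer service AI", "Crime prediction", "AI art"],
        "ethics_score": [1.5, 3.8, 1.2, 3.1, 2.4, 3.6],  # higher = more positive attitude
    })

    # Mean attitude per scenario, and per demographic group within each scenario.
    print(responses.groupby("scenario")["ethics_score"].mean())
    print(responses.groupby(["scenario", "age_group"])["ethics_score"].mean())
    ```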
    “Prior studies have shown that risk is perceived more negatively by women, older people, and those with more subject knowledge. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”
    The team hopes the results could lead to the creation of a sort of universal scale to measure and compare ethical issues around AI. This survey was limited to Japan, but the team has already begun gathering data in several other countries.
    “With a universal scale, researchers, developers and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and questionnaire is that many topics within AI require significant explanation, more so than we realized. This goes to show there is a huge gap between perception and reality when it comes to AI.”
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Fully 3D-printed, flexible OLED display

    In a groundbreaking new study, researchers at the University of Minnesota Twin Cities used a customized printer to fully 3D print a flexible organic light-emitting diode (OLED) display. The discovery could result in low-cost OLED displays in the future that could be widely produced using 3D printers by anyone at home, instead of by technicians in expensive microfabrication facilities.
    The OLED display technology is based on the conversion of electricity into light using an organic material layer. OLEDs function as high quality digital displays, which can be made flexible and used in both large-scale devices such as television screens and monitors as well as handheld electronics such as smartphones. OLED displays have gained popularity because they are lightweight, power-efficient, thin and flexible, and offer a wide viewing angle and high contrast ratio.
    “OLED displays are usually produced in big, expensive, ultra-clean fabrication facilities,” said Michael McAlpine, a University of Minnesota Kuhrmeyer Family Chair Professor in the Department of Mechanical Engineering and the senior author of the study. “We wanted to see if we could basically condense all of that down and print an OLED display on our table-top 3D printer, which was custom built and costs about the same as a Tesla Model S.”
    The group had previously tried 3D printing OLED displays, but they struggled with the uniformity of the light-emitting layers. Other groups partially printed displays but also relied on spin-coating or thermal evaporation to deposit certain components and create functional devices.
    In this new study, the University of Minnesota research team combined two different modes of printing to print the six device layers that resulted in a fully 3D-printed, flexible organic light-emitting diode display. The electrodes, interconnects, insulation, and encapsulation were all extrusion printed, while the active layers were spray printed using the same 3D printer at room temperature. The display prototype was about 1.5 inches on each side and had 64 pixels. Every pixel worked and displayed light.
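    For a sense of scale, a quick back-of-envelope calculation of the pixel pitch follows, assuming the 64 pixels form a square 8 x 8 array (an assumption; the article does not state the layout).

    ```python
    # Back-of-envelope pixel pitch for the prototype: ~1.5 inches per side, 64 pixels.
    # Assumes a square 8 x 8 array, which the article does not state explicitly.
    side_mm = 1.5 * 25.4          # display side length in millimetres
    pixels_per_side = 64 ** 0.5   # 8 pixels per side if the array is square
    pitch_mm = side_mm / pixels_per_side
    print(f"Approximate pixel pitch: {pitch_mm:.1f} mm")  # roughly 4.8 mm
    ```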
    “I thought I would get something, but maybe not a fully working display,” said Ruitao Su, the first author of the study and a 2020 University of Minnesota mechanical engineering Ph.D. graduate who is now a postdoctoral researcher at MIT. “But then it turned out all the pixels were working, and I could display the text I designed. My first reaction was ‘It is real!’ I was not able to sleep the whole night.”
    Su said the 3D-printed display was also flexible and could be packaged in an encapsulating material, which could make it useful for a wide variety of applications.

  • Light–matter interactions simulated on the world’s fastest supercomputer

    Light-matter interactions form the basis of many important technologies, including lasers, light-emitting diodes (LEDs), and atomic clocks. However, conventional computational approaches for modeling such interactions have limited applicability. Now, researchers from Japan have developed a technique that overcomes these limitations.
    In a study published this month in The International Journal of High Performance Computing Applications, a research team led by the University of Tsukuba describes a highly efficient method for simulating light-matter interactions at the atomic scale.
    What makes these interactions so difficult to simulate? One reason is that phenomena associated with the interactions encompass many areas of physics, involving both the propagation of light waves and the dynamics of electrons and ions in matter. Another reason is that such phenomena can cover a wide range of length and time scales.
    Given the multiphysics and multiscale nature of the problem, light-matter interactions are typically modeled using two separate computational methods. The first is electromagnetic analysis, whereby the electromagnetic fields of the light are studied; the second is a quantum-mechanical calculation of the optical properties of the matter. But these methods assume that the electromagnetic fields are weak and that the length scales of the light and the matter are well separated.
    “Our approach provides a unified and improved way to simulate light-matter interactions,” says senior author of the study Professor Kazuhiro Yabana. “We achieve this feat by simultaneously solving three key physics equations: the Maxwell equation for the electromagnetic fields, the time-dependent Kohn-Sham equation for the electrons, and the Newton equation for the ions.”
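    In schematic textbook form, the three coupled equations named by Yabana can be written as below; this is an illustrative sketch, not necessarily the exact gauge or formulation implemented in SALMON.

    ```latex
    % Schematic forms of the three coupled equations (illustrative, not the
    % exact formulation used in SALMON).
    \begin{align}
      % Maxwell equations for the electromagnetic fields E and B
      \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, \qquad
      \nabla \times \mathbf{B} = \mu_0 \mathbf{J}
        + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \\
      % Time-dependent Kohn-Sham equation for the electron orbitals \psi_j
      i\hbar \frac{\partial \psi_j(\mathbf{r},t)}{\partial t} &=
        \left[ \frac{1}{2m}\bigl(-i\hbar\nabla + e\mathbf{A}(\mathbf{r},t)\bigr)^2
          + v_{\mathrm{KS}}[\rho](\mathbf{r},t) \right] \psi_j(\mathbf{r},t) \\
      % Newton equation of motion for ion a with mass M_a under force F_a
      M_a \frac{\mathrm{d}^2 \mathbf{R}_a}{\mathrm{d}t^2} &= \mathbf{F}_a
    \end{align}
    ```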
    The researchers implemented the method in their in-house software SALMON (Scalable Ab initio Light-Matter simulator for Optics and Nanoscience), and they thoroughly optimized the simulation computer code to maximize its performance. They then tested the code by modeling light-matter interactions in a thin film of amorphous silicon dioxide, composed of more than 10,000 atoms. This simulation was carried out using almost 28,000 nodes of the fastest supercomputer in the world, Fugaku, at the RIKEN Center for Computational Science in Kobe, Japan.
    “We found that our code is extremely efficient, achieving the goal of one second per time step of the calculation that is needed for practical applications,” says Professor Yabana. “The performance is close to its maximum possible value, set by the bandwidth of the computer memory, and the code has the desirable property of excellent weak scalability.”
    Although the team simulated light-matter interactions in a thin film in this work, their approach could be used to explore many phenomena in nanoscale optics and photonics.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • Integrated photonics for quantum technologies

    An international team of leading scientists, headed up by Paderborn physicist Professor Klaus Jöns, has compiled a comprehensive overview of the potential, global outlook, background and frontiers of integrated photonics. The paper — a roadmap for integrated photonic circuits for quantum technologies — has now been published in the journal Nature Reviews Physics. The review outlines underlying technologies, presents the current state of play of research and describes possible future applications.
    “Photonic quantum technologies have reached a number of important milestones over the last 20 years. However, scalability remains a major challenge when it comes to translating results from the lab to everyday applications. Applications often require more than 1,000 optical components, all of which have to be individually optimised. Photonic quantum technologies can, though, benefit from the parallel developments in classical photonic integration,” explains Jöns. According to the scientists, more research is required. “The integrated photonic platforms, which involve a variety of materials, component designs and integration strategies, bring multiple challenges, in particular signal losses, which are not easily compensated for in the quantum world,” continues Jöns. In their paper, the authors state that the complex innovation cycle for integrated photonic quantum technologies (IPQT) requires investments, the resolution of specific technological challenges, the development of the necessary infrastructure and further structuring towards a mature ecosystem. They conclude that there is an increasing demand for scientists and engineers with substantial knowledge of quantum mechanics and its technological applications.
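    To see why losses are so hard to tolerate at this scale, note that the transmission through a chain of components falls off exponentially with component count. The sketch below assumes a hypothetical 0.05 dB insertion loss per component, a figure chosen purely for illustration rather than taken from the paper.

    ```python
    # Why losses dominate at scale: total loss grows linearly in dB, so the
    # fraction of photons surviving falls off exponentially with component count.
    # The 0.05 dB per-component loss is an illustrative assumption.
    loss_per_component_db = 0.05
    n_components = 1000

    total_loss_db = loss_per_component_db * n_components      # 50 dB
    transmission = 10 ** (-total_loss_db / 10)                 # surviving fraction
    print(f"Total loss: {total_loss_db:.0f} dB -> transmission {transmission:.1e}")  # ~1e-5
    ```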
    Integrated quantum photonics uses classical integrated photonic technologies and devices for quantum applications, whereby chip-level integration is critical for scaling up and translating laboratory demonstrators to real-life technologies. Jöns explains: “Efforts in the field of integrated quantum photonics are broad-ranging and include the development of quantum photonic circuits, which can be monolithically, hybrid or heterogeneously integrated. In our paper, we discuss what applications may become possible in the future by overcoming the current roadblocks.” The scientists also provide an overview of the research landscape and discuss the innovation and market potential. The aim is to stimulate further research and research funding by outlining not only the scientific issues, but also the challenges related to the development of the necessary manufacturing infrastructure and supply chains for bringing the technologies to market.
    According to the scientists, there is an urgent need to invest heavily in education in order to train the next generation of IPQT engineers. Jöns says: “Regardless of the type of technology that will be used in commercial quantum devices, the underlying principles of quantum mechanics are the same. We predict an increasing demand for scientists and engineers with substantial knowledge of both quantum mechanics and its technological applications. Investing in educating the next generation will contribute to pushing the scientific and technological frontiers.”
    Story Source:
    Materials provided by Universität Paderborn. Note: Content may be edited for style and length.

  • Seeking a way of preventing audio models for AI machine learning from being fooled

    Warnings have emerged about the unreliability of the metrics used to detect whether an audio perturbation designed to fool AI models can be perceived by humans. Researchers at the UPV/EHU-University of the Basque Country show that the distortion metrics used to detect intentional perturbations in audio signals are not a reliable measure of human perception, and have proposed a series of improvements. These perturbations, designed to be imperceptible, can be used to cause erroneous predictions in artificial intelligence. Distortion metrics are applied to assess how effective the methods are in generating such attacks.
    Artificial intelligence (AI) is increasingly based on machine learning models, trained using large datasets. Likewise, human-computer interaction is increasingly dependent on speech communication, mainly due to the remarkable performance of machine learning models in speech recognition tasks.
    However, these models can be fooled by “adversarial” examples, in other words, inputs intentionally perturbed to produce a wrong prediction without the changes being noticed by humans. “Suppose we have a model that classifies audio (e.g. voice command recognition) and we want to deceive it, in other words, generate a perturbation that maliciously prevents the model from working properly. If a signal can be heard clearly, a person can tell whether it says ‘yes’, for example. When we add an adversarial perturbation we will still hear ‘yes’, but the model will start to hear ‘no’, or ‘turn right’ instead of left or any other command we don’t want to execute,” explained Jon Vadillo, a researcher in the UPV/EHU’s Department of Computer Science and Artificial Intelligence.
    This could have “very serious implications at the level of applying these technologies to real-world or highly sensitive problems,” added Vadillo. It remains unclear why this happens. Why would a model that behaves so intelligently suddenly stop working properly when it receives even slightly altered signals?
    Deceiving the model by using an undetectable perturbation
    “It is important to know whether a model or a programme has vulnerabilities,” added the researcher from the Faculty of Informatics. “Firstly, we investigate these vulnerabilities, to check that they exist, and because that is the first step in eventually fixing them.” While much research has focused on the development of new techniques for generating adversarial perturbations, less attention has been paid to the factors that determine whether those perturbations can be perceived by humans. This issue is important, as the proposed adversarial perturbation strategies only pose a threat if the perturbations cannot be detected by humans.
    This study has investigated the extent to which the distortion metrics proposed in the literature for audio adversarial examples can reliably measure the human perception of perturbations. In an experiment in which 36 people evaluated adversarial examples or audio perturbations according to various factors, the researchers showed that “the metrics that are being used by convention in the literature are not completely robust or reliable. In other words, they do not adequately represent the auditory perception of humans; they may tell you that a perturbation cannot be detected, but then when we evaluate it with humans, it turns out to be detectable. So we want to issue a warning that due to the lack of reliability of these metrics, the study of these audio attacks is not being conducted very well,” said the researcher.
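    The sketch below illustrates the kind of measurement at issue: a toy audio signal, a small additive perturbation, and one conventional distortion measure (the signal-to-noise ratio of the perturbation). The signal and perturbation are invented for illustration; a real adversarial attack would optimise the perturbation against a specific model rather than using random noise.

    ```python
    # Toy adversarial-audio setup plus a conventional distortion metric (SNR).
    # The article's point: a high SNR (small measured distortion) does not
    # guarantee the perturbation is actually imperceptible to human listeners.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 16000)                 # 1 s of audio at 16 kHz
    clean = np.sin(2 * np.pi * 440 * t)          # stand-in for a spoken command
    perturbation = 0.002 * rng.standard_normal(t.shape)   # not optimised, just noise
    adversarial = clean + perturbation           # ideally sounds identical to a listener

    # Distortion metric: SNR in dB between the signal and the perturbation.
    snr_db = 10 * np.log10(np.sum(clean**2) / np.sum(perturbation**2))
    print(f"Perturbation SNR: {snr_db:.1f} dB")
    ```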
    In addition, the researchers have proposed a more robust evaluation method that is the outcome of the “analysis of certain properties or factors in the audio that are relevant when assessing detectability, for example, the parts of the audio in which a perturbation is most detectable.” Even so, “this problem remains open because it is very difficult to come up with a mathematical metric that is capable of modelling auditory perception. Depending on the type of audio signal, different metrics will probably be required or different factors will need to be considered. Achieving general audio metrics that are representative is a complex task,” concluded Vadillo.
    Story Source:
    Materials provided by University of the Basque Country. Note: Content may be edited for style and length.

  • Magnetic surprise revealed in 'magic-angle' graphene

    When two sheets of the carbon nanomaterial graphene are stacked together at a particular angle with respect to each other, it gives rise to some fascinating physics. For instance, when this so-called “magic-angle graphene” is cooled to near absolute zero, it suddenly becomes a superconductor, meaning it conducts electricity with zero resistance.
    Now, a research team from Brown University has found a surprising new phenomenon that can arise in magic-angle graphene. In research published in the journal Science, the team showed that by inducing a phenomenon known as spin-orbit coupling, magic-angle graphene becomes a powerful ferromagnet.
    “Magnetism and superconductivity are usually at opposite ends of the spectrum in condensed matter physics, and it’s rare for them to appear in the same material platform,” said Jia Li, an assistant professor of physics at Brown and senior author of the research. “Yet we’ve shown that we can create magnetism in a system that originally hosts superconductivity. This gives us a new way to study the interplay between superconductivity and magnetism, and provides exciting new possibilities for quantum science research.”
    Magic-angle graphene has caused quite a stir in physics in recent years. Graphene is a two-dimensional material made of carbon atoms arranged in a honeycomb-like pattern. Single sheets of graphene are interesting on their own — displaying remarkable material strength and extremely efficient electrical conductance. But things get even more interesting when graphene sheets are stacked. Electrons begin to interact not only with other electrons within a graphene sheet, but also with those in the adjacent sheet. Changing the angle of the sheets with respect to each other changes those interactions, giving rise to interesting quantum phenomena like superconductivity.
    This new research adds a new wrinkle — spin-orbit coupling — to this already interesting system. Spin-orbit coupling is a state of electron behavior in certain materials in which each electron’s spin — its tiny magnetic moment that points either up or down — becomes linked to its orbit around the atomic nucleus.
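    In textbook form, this coupling is often written as an extra term in the Hamiltonian that ties the spin operator to the orbital angular momentum operator; the expression below is a generic illustration, not the specific model used in the Brown study.

    ```latex
    % Generic spin-orbit coupling term: orbital angular momentum L couples to
    % spin S with strength \lambda (illustrative textbook form only).
    H_{\mathrm{SO}} = \lambda \, \mathbf{L} \cdot \mathbf{S}
    ```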
    “We know that spin-orbit coupling gives rise to a wide range of interesting quantum phenomena, but it’s not normally present in magic-angle graphene,” said Jiang-Xiazi Lin, a postdoctoral researcher at Brown and the study’s lead author. “We wanted to introduce spin-orbit coupling, and then see what effect it had on the system.”
    To do that, Li and his team interfaced magic-angle graphene with a block of tungsten diselenide, a material that has strong spin-orbit coupling. Aligning the stack precisely induces spin-orbit coupling in the graphene. From there, the team probed the system with external electrical currents and magnetic fields.

  • Mass production of revolutionary computer memory moves closer with ULTRARAM™ on silicon wafers for the first time

    A pioneering type of patented computer memory known as ULTRARAM™ has been demonstrated on silicon wafers in what is a major step towards its large-scale manufacture.
    ULTRARAM™ is a novel type of memory with extraordinary properties. It combines the non-volatility of a data storage memory, like flash, with the speed, energy-efficiency and endurance of a working memory, like DRAM. To do this it utilises the unique properties of compound semiconductors, commonly used in photonic devices such as LEDs, laser diodes and infrared detectors, but not in digital electronics, which is the preserve of silicon.
    Initially patented in the US, further patents on the technology are currently being progressed in key technology markets around the world.
    Now, in a collaboration between the Physics and Engineering Departments at Lancaster University and the Department of Physics at Warwick, ULTRARAM™ has been implemented on silicon wafers for the very first time.
    Professor Manus Hayne of the Department of Physics at Lancaster, who leads the work, said: “ULTRARAM™ on silicon is a huge advance for our research, overcoming very significant materials challenges of large crystalline lattice mismatch, the change from elemental to compound semiconductor and differences in thermal contraction.”
    Digital electronics, which is the core of all gadgetry from smart watches and smart phones through to personal computers and datacentres, uses processor and memory chips made from the semiconductor element silicon.
    Due to the maturity of the silicon chip-making industry and the multi-billion dollar cost of building chip factories, implementation of any digital electronic technology on silicon wafers is essential for its commercialisation.
    Remarkably, the ULTRARAM™ on silicon devices actually outperform previous incarnations of the technology on GaAs compound semiconductor wafers, demonstrating (extrapolated) data storage times of at least 1000 years, fast switching speed (for device size) and program-erase cycling endurance of at least 10 million, which is one hundred to one thousand times better than flash.
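    As a quick sanity check on the endurance comparison, the sketch below divides the quoted 10 million program-erase cycles by typical flash endurance figures; the flash figures of roughly 10,000 to 100,000 cycles are a common rule of thumb assumed here, not numbers from the article.

    ```python
    # Sanity check on the "one hundred to one thousand times better than flash" claim.
    # Typical flash endurance of 1e4-1e5 program-erase cycles is an assumption here.
    ultraram_cycles = 10_000_000
    flash_cycles_low, flash_cycles_high = 10_000, 100_000

    print(ultraram_cycles / flash_cycles_high)  # 100x vs. the more durable flash
    print(ultraram_cycles / flash_cycles_low)   # 1000x vs. the less durable flash
    ```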
    Story Source:
    Materials provided by Lancaster University. Note: Content may be edited for style and length.

  • Physicists watch as ultracold atoms form a crystal of quantum tornadoes

    The world we experience is governed by classical physics. How we move, where we are, and how fast we’re going are all determined by the classical assumption that we can only exist in one place at any one moment in time.
    But in the quantum world, the behavior of individual atoms is governed by the eerie principle that a particle’s location is a probability. An atom, for instance, has a certain chance of being in one location and another chance of being at another location, at the same exact time.
    When particles interact, purely as a consequence of these quantum effects, a host of odd phenomena should ensue. But observing such purely quantum mechanical behavior of interacting particles amid the overwhelming noise of the classical world is a tricky undertaking.
    Now, MIT physicists have directly observed the interplay of interactions and quantum mechanics in a particular state of matter: a spinning fluid of ultracold atoms. Researchers have predicted that, in a rotating fluid, interactions will dominate and drive the particles to exhibit exotic, never-before-seen behaviors.
    In a study published today in Nature, the MIT team rapidly rotated a quantum fluid of ultracold atoms. They watched as the initially round cloud of atoms first deformed into a thin, needle-like structure. Then, at the point when classical effects should be suppressed, leaving solely interactions and quantum laws to dominate the atoms’ behavior, the needle spontaneously broke into a crystalline pattern, resembling a string of miniature quantum tornadoes.
    “This crystallization is driven purely by interactions, and tells us we’re going from the classical world to the quantum world,” says Richard Fletcher, assistant professor of physics at MIT.