More stories

  • Creating artificial intelligence that acts more human by 'knowing that it knows'

    A research group from the Graduate School of Informatics, Nagoya University, has taken a big step towards creating a neural network with metamemory through a computer-based evolution experiment.
    In recent years, there has been rapid progress in designing artificial intelligence technology using neural networks that imitate brain circuits. One goal of this field of research is to understand how metamemory evolved and to use that understanding to create artificial intelligence with a human-like mind.
    Metamemory is the process by which we ask ourselves whether we remember what we had for dinner yesterday and then use that memory to decide whether to eat something different tonight. While this may seem like a simple question, answering it involves a complex process. Metamemory is important because it involves a person having knowledge of their own memory capabilities and adjusting their behavior accordingly.
    “In order to elucidate the evolutionary basis of the human mind and consciousness, it is important to understand metamemory,” explains lead author Professor Takaya Arita. “A truly human-like artificial intelligence, which can be interacted with and enjoyed like a family member in a person’s home, is an artificial intelligence that has a certain amount of metamemory, as it has the ability to remember things that it once heard or learned.”
    When studying metamemory, researchers often employ a ‘delayed matching-to-sample task’. In humans, this task consists of the participant seeing an object, such as a red circle, remembering it, and then taking part in a test to select the thing that they had previously seen from multiple similar objects. Correct answers are rewarded and wrong answers punished. However, the subject can choose not to do the test and still earn a smaller reward.
    A human performing this task would naturally use their metamemory to consider if they remembered seeing the object. If they remembered it, they would take the test to get the bigger reward, and if they were unsure, they would avoid risking the penalty and receive the smaller reward instead. Previous studies reported that monkeys could perform this task as well.
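    To make the task concrete, here is a minimal Python sketch of a single delayed matching-to-sample trial with an opt-out option. The reward values, memory-noise level, and confidence threshold are illustrative assumptions rather than numbers from the study; the point is only how an estimate of one's own memory can drive the decision to take the test or decline it.

    ```python
    import random

    # Illustrative reward structure for a delayed matching-to-sample task with an
    # opt-out option. All numbers here are assumptions, not values from the paper.
    BIG_REWARD, SMALL_REWARD, PENALTY = 1.0, 0.3, -1.0
    MEMORY_RELIABILITY = 0.8   # probability the sample is stored correctly

    def run_trial(confidence_threshold: float = 0.5) -> float:
        options = ["red circle", "blue square", "green triangle"]
        sample = random.choice(options)

        # Imperfect memory: the stored item is sometimes wrong, and the agent's
        # confidence reflects how well the sample was encoded.
        if random.random() < MEMORY_RELIABILITY:
            memory, confidence = sample, 0.9
        else:
            memory, confidence = random.choice(options), 0.3

        # Metamemory-style decision: take the test only when confident enough,
        # otherwise decline and collect the smaller but guaranteed reward.
        if confidence < confidence_threshold:
            return SMALL_REWARD
        return BIG_REWARD if memory == sample else PENALTY

    average = sum(run_trial() for _ in range(10_000)) / 10_000
    print(f"average reward per trial: {average:.2f}")
    ```

    An agent that sets this threshold sensibly earns more on average than one that always takes the test or always opts out, which is the pattern of behaviour described for both the monkeys and the evolved networks.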
    The Nagoya University team comprising Professor Takaya Arita, Yusuke Yamato, and Reiji Suzuki of the Graduate School of Informatics created an artificial neural network model that performed the delayed matching-to-sample task and analyzed how it behaved.
    Despite starting from random neural networks that did not even have a memory function, the model evolved to the point that it performed similarly to the monkeys in previous studies. The evolved network could examine its memories, retain them, and produce different outputs depending on them. It did this without any assistance or intervention from the researchers, suggesting that it plausibly possessed metamemory mechanisms. “The need for metamemory depends on the user’s environment. Therefore, it is important for artificial intelligence to have a metamemory that adapts to its environment by learning and evolving,” says Professor Arita of the finding. “The key point is that the artificial intelligence learns and evolves to create a metamemory that adapts to its environment.”
    Creating an adaptable intelligence with metamemory is a big step towards making machines that have memories like ours. The team is enthusiastic about the future, “This achievement is expected to provide clues to the realization of artificial intelligence with a ‘human-like mind’ and even consciousness.”
    The research results were published in the online edition of the international scientific journal Scientific Reports. The study was partly supported by a JSPS/MEXT Grant-in-Aid for Scientific Research (KAKENHI JP17H06383 in #4903).
    Story Source:
    Materials provided by Nagoya University. Note: Content may be edited for style and length.

  • Scientists develop a 'fabric' that turns body movement into electricity

    Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a stretchable and waterproof ‘fabric’ that turns energy generated from body movements into electrical energy.
    A crucial component in the fabric is a polymer that, when pressed or squeezed, converts mechanical stress into electrical energy. It is also made with stretchable spandex as a base layer and integrated with a rubber-like material to keep it strong, flexible, and waterproof.
    In a proof-of-concept experiment reported in the scientific journal Advanced Materials in April, the NTU Singapore team showed that tapping on a 3cm by 4cm piece of the new fabric generated enough electrical energy to light up 100 LEDs.
    Washing, folding, and crumpling the fabric did not cause any performance degradation, and it could maintain stable electrical output for up to five months, demonstrating its potential for use as a smart textile and wearable power source.
    Materials scientist and NTU Associate Provost (Graduate Education) Professor Lee Pooi See, who led the study, said: “There have been many attempts to develop fabric or garments that can harvest energy from movement, but a big challenge has been to develop something that does not degrade in function after being washed, and at the same time retains excellent electrical output. In our study, we demonstrated that our prototype continues to function well after washing and crumpling. We think it could be woven into t-shirts or integrated into soles of shoes to collect energy from the body’s smallest movements, piping electricity to mobile devices.”
    Harvesting an alternative source of energy
    The electricity-generating fabric developed by the NTU team is an energy harvesting device that turns vibrations produced from the smallest body movements in everyday life into electricity.

  • 6G component provides speed, efficiency needed for next-gen network

    Even though consumers won’t see it for years, researchers around the world are already laying the foundation for the next generation of wireless communications, 6G. An international team led by researchers at The University of Texas at Austin has developed components that will allow future devices to achieve increased speeds necessary for such a technological jump.
    In a new paper published in Nature Electronics, the researchers demonstrated new radio frequency switches that are responsible for keeping devices connected by jumping between networks and frequencies while receiving data. In contrast with the switches present in most electronics today, these new devices are made of two-dimensional materials that take significantly less energy to operate, which means more speed and better battery life for the device.
    “Anything that is battery-operated and needs to access the cloud or the 5G and eventually 6G network, these switches can provide those low-energy, high-speed functions,” said Deji Akinwande, professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the principal leader of the project.
    Because of the increased demand for speed and power, 6G devices will probably have hundreds of switches in them, many more than the electronics currently on the market. To reach increased speeds, 6G devices will have to access higher frequency spectrum bands than today’s electronics, and these switches are key to achieving that.
    Making these switches, and other components, more efficient is another important part of cracking the code for 6G. That efficiency goes beyond battery life. Because the potential uses for 6G are so vast, including driverless cars and smart cities, every device will need to virtually eliminate latency.
    Akinwande previously developed switches for 5G devices. One of the main differences this time is the materials used. These new switches are made of molybdenum disulfide, also known as MoS2, sandwiched between two electrodes.

  • Time crystals 'impossible' but obey quantum physics

    Scientists have created the first “time-crystal” two-body system in an experiment that seems to bend the laws of physics.
    It comes after the same team recently witnessed the first interaction of the new phase of matter.
    Time crystals were long believed to be impossible because they are made from atoms in never-ending motion. The discovery, published in Nature Communications, shows that not only can time crystals be created, but they have potential to be turned into useful devices.
    Time crystals are different from a standard crystal — like metals or rocks — which is composed of atoms arranged in a regularly repeating pattern in space.
    First theorised in 2012 by Nobel Laureate Frank Wilczek and identified in 2016, time crystals exhibit the bizarre property of being in constant, repeating motion in time despite no external input. Their atoms are constantly oscillating, spinning, or moving first in one direction, and then the other.
    EPSRC Fellow Dr Samuli Autti, lead author from Lancaster University’s Department of Physics, explained: “Everybody knows that perpetual motion machines are impossible. However, in quantum physics perpetual motion is okay as long as we keep our eyes closed. By sneaking through this crack we can make time crystals.”
    “It turns out putting two of them together works beautifully, even if time crystals should not exist in the first place. And we already know they also exist at room temperature.”
    A “two-level system” is a basic building block of a quantum computer. Time crystals could be used to build quantum devices that work at room temperature.
    An international team of researchers from Lancaster University, Royal Holloway London, Landau Institute, and Aalto University in Helsinki observed time crystals using helium-3, a rare isotope of helium that is missing one neutron. The experiment was carried out at Aalto University.
    They cooled superfluid helium-3 to within about one ten-thousandth of a degree of absolute zero (0.0001 K, roughly -273.15°C). The researchers created two time crystals inside the superfluid and brought them into contact. They then watched the two time crystals interact, as described by quantum physics.
    Story Source:
    Materials provided by Lancaster University. Note: Content may be edited for style and length.

  • 'Fruitcake' structure observed in organic polymers

    Researchers have analysed the properties of an organic polymer with potential applications in flexible electronics and uncovered variations in hardness at the nanoscale, the first time such a fine structure has been observed in this type of material.
    The field of organic electronics has benefited from the discovery of new semiconducting polymers with molecular backbones that are resilient to twists and bends, meaning they can transport charge even if they are flexed into different shapes.
    It had been assumed that these materials resemble a plate of spaghetti at the molecular scale, without any long-range order. However, an international team of researchers found that for at least one such material, there are tiny pockets of order within. These ordered pockets, just a few ten-billionths of a metre across, are stiffer than the rest of the material, giving it a ‘fruitcake’ structure with harder and softer regions.
    The work was led by the University of Cambridge and Park Systems UK Limited, with KTH Stockholm in Sweden, the Universities of Namur and Mons in Belgium, and Wake Forest University in the USA. Their results, reported in the journal Nature Communications, could be used in the development of next-generation microelectronic and bioelectronic devices.
    Studying and understanding the mechanical properties of these materials at the nanoscale — a field known as nanomechanics — could help scientists fine-tune those properties and make the materials suitable for a wider range of applications.
    “We know that the fabric of nature on the nanoscale isn’t uniform, but finding uniformity and order where we didn’t expect to see it was a surprise,” said Dr Deepak Venkateshvaran from Cambridge’s Cavendish Laboratory, who led the research.
    The researchers used an imaging technique called higher eigenmode imaging to take nanoscale pictures of the regions of order within a semiconducting polymer called indacenodithiophene-co-benzothiadiazole (C16-IDTBT). These pictures showed clearly how individual polymer chains line up next to each other in some regions of the polymer film. These regions of order are between 10 and 20 nanometres across.
    “The sensitivity of these detection methods allowed us to map out the self-organisation of polymers down to the individual molecular strands,” said co-author Dr Leszek Spalek, also from the Cavendish Laboratory. “Higher eigenmode imaging is a valuable method for characterising nanomechanical properties of materials, given the relatively easy sample preparation that is required.”
    Further measurements of the stiffness of the material on the nanoscale showed that the areas where the polymers self-organised into ordered regions were harder, while the disordered regions of the material were softer. The experiments were performed in ambient conditions as opposed to an ultra-high vacuum, which had been a requirement in earlier studies.
    “Organic polymers are normally studied for their applications in large area, centimetre scale, flexible electronics,” said Venkateshvaran. “Nanomechanics can augment these studies by developing an understanding of their mechanical properties at ultra-small scales with unprecedented resolutions.
    “Together, the fundamental knowledge gained from both types of studies could inspire a new generation of soft microelectronic and bioelectronic devices. These futuristic devices will combine the benefits of centimetre scale flexibility, micrometre scale homogeneity, and nanometre scale electrically controlled mechanical motion of polymer chains with superior biocompatibility.”
    The research was funded in part by the Royal Society.
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • Machine learning models: In bias we trust?

    When the stakes are high, machine-learning models are sometimes used to aid human decision-makers. For instance, a model could predict which law school applicants are most likely to pass the bar exam to help an admissions officer determine which students should be accepted.
    These models often have millions of parameters, so how they make predictions is nearly impossible for researchers to fully understand, let alone an admissions officer with no machine-learning experience. Researchers sometimes employ explanation methods that mimic a larger model by creating simple approximations of its predictions. These approximations, which are far easier to understand, help users determine whether to trust the model’s predictions.
    But are these explanation methods fair? If an explanation method provides better approximations for men than for women, or for white people than for Black people, it may encourage users to trust the model’s predictions for some people but not for others.
    MIT researchers took a hard look at the fairness of some widely used explanation methods. They found that the approximation quality of these explanations can vary dramatically between subgroups and that the quality is often significantly lower for minoritized subgroups.
    In practice, this means that if the approximation quality is lower for female applicants, there is a mismatch between the explanations and the model’s predictions that could lead the admissions officer to wrongly reject more women than men.
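    One way to picture such a fidelity gap is to train a simple surrogate to mimic a black-box model and then measure how often the two agree within each subgroup. The sketch below does this on synthetic data with scikit-learn; it is only an illustration of the general idea, not the explanation methods, models, or data the MIT team evaluated, and whether a gap actually appears depends on the data.

    ```python
    # Illustrative sketch (not the MIT study's setup): measure how faithfully a simple
    # surrogate reproduces a black-box model's predictions in each subgroup.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)          # 0 = majority subgroup, 1 = minoritized subgroup
    X = rng.normal(size=(n, 5))
    X[:, 0] += group                       # the subgroups occupy different regions of feature space
    y = (X[:, 0] * X[:, 1] + X[:, 2] ** 2 + rng.normal(scale=0.5, size=n) > 1).astype(int)

    black_box = GradientBoostingClassifier().fit(X, y)
    model_preds = black_box.predict(X)

    # The "explanation" here is a global surrogate: a linear model trained to mimic the black box.
    surrogate = LogisticRegression(max_iter=1000).fit(X, model_preds)
    surrogate_preds = surrogate.predict(X)

    for g in (0, 1):
        fidelity = (surrogate_preds[group == g] == model_preds[group == g]).mean()
        print(f"subgroup {g}: surrogate agrees with the model on {fidelity:.1%} of cases")
    ```

    If the agreement rate is noticeably lower for one subgroup, the explanations are a less reliable guide to the model's behaviour for that group, which is the kind of gap the researchers describe.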
    Once the MIT researchers saw how pervasive these fairness gaps are, they tried several techniques to level the playing field. They were able to shrink some gaps, but couldn’t eradicate them.

  • VoxLens: Adding one line of code can make some interactive visualizations accessible to screen-reader users

    Interactive visualizations have changed the way we understand our lives. For example, they can showcase the number of coronavirus infections in each state.
    But these graphics often are not accessible to people who use screen readers, software programs that scan the contents of a computer screen and make the contents available via a synthesized voice or Braille. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity.
    University of Washington researchers worked with screen-reader users to design VoxLens, a JavaScript plugin that — with one additional line of code — allows people to interact with visualizations. VoxLens users can gain a high-level summary of the information described in a graph, listen to a graph translated into sound or use voice-activated commands to ask specific questions about the data, such as the mean or the minimum value.
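    As a rough illustration of this kind of query-driven access, the snippet below answers a few spoken-style questions about a small data series in Python. It is not VoxLens itself, which is a JavaScript plugin, and none of the names or numbers here come from its API or from the UW study.

    ```python
    # Generic illustration of answering queries about a visualization's underlying data.
    # This is not VoxLens (a JavaScript plugin); the data values are made up.
    infections = {"WA": 1800, "OR": 950, "CA": 5200, "TX": 4100}

    def answer(query: str, data: dict) -> str:
        values = list(data.values())
        if "mean" in query or "average" in query:
            return f"The average value is {sum(values) / len(values):.0f}."
        if "minimum" in query or "lowest" in query:
            key = min(data, key=data.get)
            return f"The minimum is {data[key]}, in {key}."
        if "maximum" in query or "highest" in query:
            key = max(data, key=data.get)
            return f"The maximum is {data[key]}, in {key}."
        # Fall back to a high-level summary of the whole series.
        return "Summary: " + ", ".join(f"{k}: {v}" for k, v in data.items())

    print(answer("what is the minimum value", infections))   # The minimum is 950, in OR.
    ```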
    The team presented this project May 3 at CHI 2022 in New Orleans.
    “If I’m looking at a graph, I can pull out whatever information I am interested in, maybe it’s the overall trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”
    Screen readers can inform users about the text on a screen because it’s what researchers call “one-dimensional information.”
    “There is a start and an end of a sentence and everything else comes in between,” said co-senior author Jacob O. Wobbrock, UW professor in the Information School. “But as soon as you move things into two dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”

  • Study evaluates how to eliminate telemedicine's virtual waiting room

    Your virtual visit with your doctor is at 1:00 p.m. It’s now 1:20 p.m. and your physician has not yet logged in. Do you call the clinic? Hang up and log back in? Groan in frustration?
    Being stuck in a virtual waiting room, staring at a blank computer or device screen, is a major source of dissatisfaction among telemedicine patients. To respect patients’ time and provide the best possible experience, UC San Diego Health conducted a 10-week quality improvement study to evaluate whether texting patients a link when their doctor is ready could connect patients and doctors more efficiently, without relying on the virtual waiting room.
    Results of the study were published in the May 27 online issue of Quality Management in Health Care.
    “Borrowing from the airline and restaurant industries, we tested whether we could contact patients via text to log into their appointment when their doctor is ready. The goal of the feasibility study was to determine if this flexibility led to an improved perception of waiting time and an enhanced experience, while assessing time savings for both patients and providers,” said Brett C. Meyer, MD, neurologist, co-director of the UC San Diego Health Stroke Center, and clinical director of telehealth at UC San Diego Health.
    “We stepped back and asked, ‘Do we need a virtual waiting room at all? Can we let patients know when their provider is available instead of making them wait online?'” said Emily S. Perrinez, RN, MSN, MPH, study co-author and director of telehealth operations at UC San Diego Health. “The reality is that wait times and lack of timely communication both correlate with patient experience. Real-time text notification that the provider is ready improved patient satisfaction and this experience is the kind of feedback we love to see.”
    Twenty-two patients at a stroke clinic participated in the two-and-a-half-month study. Patients chose either to receive a text containing a visit link when their provider was ready, or to follow the standard telehealth routine of logging in at a scheduled time and waiting in front of a camera in a virtual waiting room.