More stories

  • Overcoming the limitations of scanning electron microscopy with AI

    What if a super-resolution imaging technique like those used in the latest 8K premium TVs were applied to scanning electron microscopy, an essential tool for materials research?
    A joint research team from POSTECH and the Korea Institute of Materials Science (KIMS) applied deep learning to scanning electron microscopy (SEM) to develop a super-resolution imaging technique that can convert low-resolution electron backscatter diffraction (EBSD) microstructure images obtained from conventional analysis equipment into super-resolution images. The findings were recently published in npj Computational Materials.
    In modern materials research, SEM images play a crucial role in developing new materials, from microstructure visualization and characterization to numerical analysis of material behavior. However, acquiring high-quality microstructure image data can be laborious or highly time-consuming because of the hardware limitations of the SEM. This can compromise the accuracy of subsequent material analysis, so overcoming the technical limitations of the equipment is paramount.
    To this end, the joint research team developed a faster and more accurate microstructure imaging technique using deep learning. In particular, by using a convolutional neural network, the resolution of existing microstructure images was enhanced by factors of 4, 8, and 16, reducing imaging time by up to 256 times compared to the conventional SEM system (a 16-fold increase in resolution along each axis means roughly 256 times fewer points need to be scanned).
    In addition, microstructure characterization and finite element analysis verified that super-resolution imaging restores the morphological details of the microstructure with high accuracy.
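    As a rough illustration of the approach (not the authors’ published network), a minimal convolutional super-resolution model could be sketched in PyTorch as follows; the layer sizes, the single image channel, and the 4x upscaling factor are assumptions made purely for illustration.

      # Minimal sketch of a convolutional super-resolution upscaler.
      # Not the study's architecture; layer sizes and the 4x factor are assumed.
      import torch
      import torch.nn as nn

      class SRNet(nn.Module):
          def __init__(self, scale=4, channels=1):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                  nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
                  # Produce scale**2 sub-pixel channels, then rearrange them
                  # into an image that is `scale` times larger along each axis.
                  nn.Conv2d(32, channels * scale**2, kernel_size=3, padding=1),
                  nn.PixelShuffle(scale),
              )

          def forward(self, low_res):
              return self.body(low_res)

      # Hypothetical usage: upscale a 64x64 low-resolution map to 256x256.
      model = SRNet(scale=4)
      low_res = torch.rand(1, 1, 64, 64)   # batch, channel, height, width
      high_res = model(low_res)            # -> shape (1, 1, 256, 256)

    Trained on pairs of low- and high-resolution scans, such a network learns to fill in fine morphological detail that would otherwise require much longer imaging times.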
    “Through the EBSD technique developed in this study, we anticipate the time it takes to develop new materials will be drastically reduced,” explained Professor Hyoung Seop Kim of POSTECH who led the research.
    This research was conducted with support from the Mid-career Researcher Program of the National Research Foundation of Korea, the AI Graduate School Program of the Institute for Information & Communications Technology Promotion (IITP), and Phase 4 of the Brain Korea 21 Program of the Ministry of Education, and with support from the Korea Materials Research Institute.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Unlocking the AI algorithm ‘black box’ – new machine learning technology to find out what makes plants and humans tick

    The inner 24-hour cycles, or circadian rhythms, are key to maintaining human, plant and animal health, and new machine learning tools for studying them could provide valuable insight into how broken clocks impact health.
    Circadian rhythms, such as the sleep-wake cycle, are innate to most living organisms and critical to life on Earth. The word circadian originates from the Latin phrase ‘circa diem’ which means ‘around a day’.
    Biologically, the circadian clock temporally orchestrates physiology, biochemistry, and metabolism across the 24-hour day-night cycle. This is why being out of kilter can affect our fitness levels, our health, or our ability to survive. For example, experiencing jet lag is a chronobiological problem — our body clocks are out of sync because the normal external cues such as light or temperature have changed.
    The circadian clock isn’t unique to humans. In plants, an accurate clock helps to regulate flowering and is crucial to synchronising metabolism and physiology with the rising and setting sun. Understanding circadian rhythms can help to improve plant growth and yields, not to mention revealing new avenues for tackling human diseases.
    Beyond plants
    For this latest research, the team applied machine learning (ML) to predict complex temporal circadian gene expression patterns in the model plant Arabidopsis thaliana. Taking newly generated datasets, published temporal datasets, and Arabidopsis genomes, the team of scientists trained ML models to make predictions about circadian gene regulation and expression patterns.
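    The article does not describe the models in detail; purely as an illustration of the general workflow, the sketch below trains a classifier on hypothetical time-course expression data to predict whether a gene is circadian-regulated. The feature layout, labels, and choice of a random forest are all assumptions.

      # Illustrative sketch only: data, labels, and model are hypothetical,
      # not the study's actual pipeline.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_genes, n_timepoints = 1000, 12   # e.g., expression sampled every 2 h over 24 h

      X = rng.normal(size=(n_genes, n_timepoints))   # expression per gene per time point
      y = rng.integers(0, 2, size=n_genes)           # 1 = circadian-regulated (made-up label)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))   # ~0.5 on random data, as expected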

  • Brain connectivity can build better AI

    A new study shows that artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.
    By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern, and applied it to an artificial neural network (ANN). An ANN is a computing system consisting of multiple input and output units, much like the biological brain. A team of researchers from The Neuro (Montreal Neurological Institute-Hospital) and the Quebec Artificial Intelligence Institute trained the ANN to perform a cognitive memory task and observed how it worked to complete the assignment.
    This is a unique approach in two ways. First, previous work on brain connectivity, also known as connectomics, focused on describing brain organization without looking at how it actually performs computations and functions. Second, traditional ANNs have arbitrary structures that do not reflect how real brain networks are organized. By integrating brain connectomics into the construction of ANN architectures, researchers hoped both to learn how the wiring of the brain supports specific cognitive skills and to derive novel design principles for artificial networks.
    They found that ANNs with human brain connectivity, known as neuromorphic neural networks, performed cognitive memory tasks more flexibly and efficiently than other benchmark architectures. The neuromorphic neural networks were able to use the same underlying architecture to support a wide range of learning capacities across multiple contexts.
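    The paper’s exact architecture is not reproduced here; as a minimal illustration of the general idea of constraining an ANN’s wiring with an empirical connectivity matrix, the PyTorch sketch below masks the recurrent weights of a small network with a (here random) binary connectome. The region count, task setup, and mask are all assumptions.

      # Sketch: a recurrent network whose recurrent weights are masked by a
      # (hypothetical) binary brain-connectivity matrix.
      import torch
      import torch.nn as nn

      class ConnectomeRNN(nn.Module):
          def __init__(self, n_regions, n_in, n_out, mask):
              super().__init__()
              self.w_in = nn.Linear(n_in, n_regions)
              self.w_rec = nn.Parameter(0.1 * torch.randn(n_regions, n_regions))
              self.register_buffer("mask", mask)   # 1 where an anatomical connection exists
              self.w_out = nn.Linear(n_regions, n_out)

          def forward(self, x):                    # x: (time, batch, n_in)
              h = torch.zeros(x.shape[1], self.w_rec.shape[0])
              for t in range(x.shape[0]):
                  # Only masked (anatomically present) connections carry signal.
                  h = torch.tanh(self.w_in(x[t]) + h @ (self.w_rec * self.mask).T)
              return self.w_out(h)                 # read out after the final time step

      # Hypothetical usage: 100 regions, random mask standing in for real connectome data.
      mask = (torch.rand(100, 100) < 0.2).float()
      model = ConnectomeRNN(n_regions=100, n_in=10, n_out=2, mask=mask)
      out = model(torch.randn(20, 4, 10))          # 20 time steps, batch of 4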
    “The project unifies two vibrant and fast-paced scientific disciplines,” says Bratislav Misic, a researcher at The Neuro and the paper’s senior author. “Neuroscience and AI share common roots, but have recently diverged. Using artificial networks will help us to understand how brain structure supports brain function. In turn, using empirical data to make neural networks will reveal design principles for building better AI. So, the two will help inform each other and enrich our understanding of the brain.”
    This study, published in the journal Nature Machine Intelligence on Aug. 9, 2021, was funded with the help of the Canada First Research Excellence Fund, awarded to McGill University for the Healthy Brains, Healthy Lives initiative, the Natural Sciences and Engineering Research Council of Canada, Fonds de Recherche du Quebec — Santé, Canadian Institute for Advanced Research, Canada Research Chairs, Fonds de Recherche du Quebec — Nature et Technologies, and Centre UNIQUE (Union of Neuroscience and Artificial Intelligence).
    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.

  • New study examines privacy and security perceptions of online education proctoring services

    In response to the COVID-19 pandemic, educational institutions have had to quickly transition to remote learning and exam taking. This has led to an increase in the use of online proctoring services to curb student cheating, including restricted browser modes, video/screen monitoring, local network traffic analysis and eye tracking.
    In a first-of-its-kind study, researchers led by Adam Aviv, an associate professor of computer science at the George Washington University, explored the security and privacy perceptions of students taking proctored exams. After analyzing user reviews of eight proctoring services’ browser extensions and subsequently performing an online survey of students, the researchers found:
      • Exam proctoring browser extensions use a technique called “URL match patterns” to turn on whenever they find a given URL. These URL patterns match a wide variety of URLs, most associated with online course content. However, generic URL patterns (e.g., any URL that contains /courses/ or /quizzes/) can also activate the browser extension regardless of whether the student is taking an exam. As a result, the data collection and monitoring features of proctoring browser extensions could be active on a number of websites, even when a student is not taking an exam (a simplified illustration appears below).
      • Students understood they would need to give up some aspects of their privacy in order to take exams safely from home during the pandemic. However, a large number of students had concerns about sharing personal information with proctoring companies in order to take an exam. These concerns include the process of identity verification, the amount of information collected by these companies, and having to install third-party online exam proctoring software on their personal computers.
      • When reviewing exam proctoring browser extensions in the Google Chrome web store, there was a noticeable increase in February 2020 in the total number of ratings combined with a sharp decrease in the “star ratings” for these extensions. This likely indicates an extreme dislike for exam proctoring services.
    “Institutional support for third-party proctoring software conveys credibility and makes the exam proctoring software appear safer and less potentially problematic because students assume that institutions have done proper vetting of both the software and the methods employed by the proctoring services,” David Balash, a PhD student at GW and a lead researcher on the study, said. “We recommend that institutions and educators follow a principle of least monitoring when using exam proctoring tools by using the minimum number of monitoring types necessary, given the class size and knowledge of expected student behavior.”
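    To make the over-matching concrete, the sketch below approximates extension URL match patterns with Python glob matching; the pattern and URLs are invented, and this is not the behavior of any specific proctoring product.

      from fnmatch import fnmatch

      # A generic pattern keyed on "/quizzes/" activates on any site whose URL
      # contains that path segment, not only on the institution's exam pages.
      PATTERN = "*://*/quizzes/*"

      urls = [
          "https://lms.example.edu/quizzes/final-exam",                 # an actual exam page
          "https://recipes.example.com/quizzes/which-pasta-are-you",    # unrelated website
      ]

      for url in urls:
          status = "extension active" if fnmatch(url, PATTERN) else "inactive"
          print(url, "->", status)    # both URLs match the generic pattern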
    “As many universities and colleges return to the classroom, students may be less willing to trade their privacy for personal safety going forward,” Rahel Fainchtein, a PhD student at Georgetown University and a lead researcher on the study, said. “However, at the same time, online exam proctoring technology appears here to stay.”
    The paper, “Examining the Examiners: Students’ Privacy and Security Perceptions of Online Proctoring Services,” will be presented at the 17th Symposium on Usable Privacy and Security on August 10, 2021. In addition to Aviv, Balash and Fainchtein, the research team included Dongkun Kim and Darikia Shaibekova at GW and Micah Sherr at Georgetown.
    Story Source:
    Materials provided by George Washington University. Note: Content may be edited for style and length.

  • This touchy-feely glove senses and maps tactile stimuli

    When you pick up a balloon, the pressure to keep hold of it is different from what you would exert to grasp a jar. And now engineers at MIT and elsewhere have a way to precisely measure and map such subtleties of tactile dexterity.
    The team has designed a new touch-sensing glove that can “feel” pressure and other tactile stimuli. The inside of the glove is threaded with a system of sensors that detects, measures, and maps small changes in pressure across the glove. The individual sensors are highly attuned and can pick up very weak vibrations across the skin, such as from a person’s pulse.
    When subjects wore the glove while picking up a balloon versus a beaker, the sensors generated pressure maps specific to each task. Holding a balloon produced a relatively even pressure signal across the entire palm, while grasping a beaker created stronger pressure at the fingertips.
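    As a purely hypothetical illustration of what such a task-specific pressure map might look like in software (the grid size, region boundaries, and readings are invented, not the glove’s actual sensor layout):

      # Summarize an invented 16x16 sensor grid into fingertip vs. palm pressure.
      import numpy as np

      rng = np.random.default_rng(1)
      grid = rng.uniform(0.0, 1.0, size=(16, 16))   # normalized sensor readings (made up)

      fingertips = grid[:4, :]    # assume the top four rows sit under the fingertips
      palm = grid[8:, :]          # assume the bottom eight rows sit under the palm

      print("mean fingertip pressure:", fingertips.mean().round(3))
      print("mean palm pressure:", palm.mean().round(3))
      print("fingertip-dominant grasp" if fingertips.mean() > palm.mean() else "palm-dominant grasp")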
    The researchers say the tactile glove could help to retrain motor function and coordination in people who have suffered a stroke or other fine motor condition. The glove might also be adapted to augment virtual reality and gaming experiences. The team envisions integrating the pressure sensors not only into tactile gloves but also into flexible adhesives to track pulse, blood pressure, and other vital signs more accurately than smart watches and other wearable monitors.
    “The simplicity and reliability of our sensing structure holds great promise for a diversity of health care applications, such as pulse detection and recovering the sensory capability in patients with tactile dysfunction,” says Nicholas Fang, professor of mechanical engineering at MIT.
    Fang and his collaborators detail their results in a study appearing today in Nature Communications. The study’s co-authors include Huifeng Du and Liu Wang at MIT, along with professor Chuanfei Guo’s group at the Southern University of Science and Technology (SUSTech) in China.

  • Decades of research brings quantum dots to brink of widespread use

    A new article in Science magazine gives an overview of almost three decades of research into colloidal quantum dots, assesses the technological progress for these nanometer-sized specks of semiconductor matter, and weighs the remaining challenges on the path to widespread commercialization for this promising technology, with applications in everything from TVs to highly efficient sunlight collectors.
    “Thirty years ago, these structures were just a subject of scientific curiosity studied by a small group of enthusiasts. Over the years, quantum dots have become industrial-grade materials exploited in a range of traditional and emerging technologies, some of which have already found their way into commercial markets,” said Victor I. Klimov, a coauthor of the paper and leader of the team conducting quantum dot research at Los Alamos National Laboratory.
    Many advances described in the Science article originated at Los Alamos, including the first demonstration of colloidal quantum dot lasing, the discovery of carrier multiplication, pioneering research into quantum dot light emitting diodes (LEDs) and luminescent solar concentrators, and recent studies of single-dot quantum emitters.
    Using modern colloidal chemistry, researchers can manipulate the dimensions and internal structure of quantum dots with near-atomic precision, which allows for highly accurate control of their physical properties and, thereby, of their behavior in practical devices.
    A number of ongoing efforts on practical applications of colloidal quantum dots have exploited the size-controlled tunability of their emission color and their high emission quantum yields, which approach the ideal 100 percent limit. These properties are attractive for screen displays and lighting, the technologies where quantum dots are used as color-converting phosphors. Due to their narrowband, spectrally tunable emission, quantum dots allow for improved color purity and more complete coverage of the entire color space compared to existing phosphor materials. Some of these devices, such as quantum dot TVs, have already reached technological maturity and are available in commercial markets.
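    This size dependence of the emission color follows from quantum confinement. As a rough, textbook-level illustration (not a calculation from the Science review), the sketch below applies the Brus effective-mass approximation with commonly cited CdSe parameters; the material choice, parameter values, and radii are assumptions, and the approximation is known to be crude for the smallest dots.

      # Rough estimate of quantum-dot emission vs. size via the Brus equation.
      # CdSe parameters are commonly cited literature values, used only for illustration.
      import math

      h = 6.626e-34      # Planck constant, J*s
      e = 1.602e-19      # elementary charge, C
      m0 = 9.109e-31     # electron rest mass, kg
      eps0 = 8.854e-12   # vacuum permittivity, F/m

      E_gap = 1.74              # assumed bulk CdSe band gap, eV
      m_e, m_h = 0.13, 0.45     # assumed effective masses, in units of m0
      eps_r = 10.6              # assumed relative permittivity

      def emission_wavelength_nm(radius_nm):
          R = radius_nm * 1e-9
          confinement = (h**2 / (8 * R**2)) * (1 / (m_e * m0) + 1 / (m_h * m0)) / e   # eV
          coulomb = 1.8 * e / (4 * math.pi * eps0 * eps_r * R)                        # eV
          return 1239.8 / (E_gap + confinement - coulomb)                             # nm

      for r_nm in (2.0, 3.0, 4.0):
          print(f"radius {r_nm} nm -> roughly {emission_wavelength_nm(r_nm):.0f} nm emission")

    Smaller dots yield larger confinement energies and therefore bluer emission, which is the size-based tunability exploited in display phosphors.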
    The next frontier is creating technologically viable LEDs, powered by electrically driven quantum dots. The Science review describes various approaches to implement these devices and discusses the existing challenges. Quantum LEDs have already reached impressive brightness and almost ideal efficiencies near the theoretically defined limits. Much of this progress has been driven by continuing advances in understanding the performance-limiting factors such as nonradiative Auger recombination.

  • All in your head: Exploring human-body communications with binaural hearing aids

    Modern portable devices are the result of great progress in miniaturization and wireless communications. Now that these devices can be made even smaller and lighter without loss of functionality, it’s likely that a great part of next-generation electronics will revolve around wearable technology. However, for wearables to truly transcend portables, we will need to rethink the way in which devices communicate with each other as “wireless body area networks” (WBANs). The usual approach of using an antenna to radiate signals into the surrounding area while hoping to reach a receiver won’t cut it for wearables: this method of transmission not only demands a lot of energy but can also be unsafe from a cybersecurity standpoint. Moreover, the human body itself constitutes a large obstacle because it absorbs electromagnetic radiation and blocks signals.
    But what alternatives do we have for wearable technology? One promising approach is “human body communication” (HBC), which involves using the body itself as a medium to transmit signals. The main idea is that some electric fields can propagate inside the body very efficiently without leaking to the surrounding area. By interfacing skin-worn devices with electrodes, we can enable them to communicate with each other using lower frequencies than those used in conventional wireless protocols like Bluetooth. However, even though research on HBC began over two decades ago, this technology hasn’t been put to use on a large scale.
    To explore the full potential of HBC, researchers from Japan, including Dr. Dairoku Muramatsu from Tokyo University of Science and Professor Ken Sasaki from The University of Tokyo, focused on a yet-unexplored use for HBC: binaural hearing aids. Such hearing aids come in pairs, one for each ear, and greatly improve intelligibility and sound localization for the wearer by communicating with each other to adapt to the sound field. Because these hearing aids are in direct contact with the skin, they make for a perfect candidate application for HBC. In a recent study published in the journal Electronics, the researchers investigated, through detailed numerical simulations, how electric fields emitted from an electrode in one ear distribute themselves in the human head and reach a receiving electrode on the opposite ear, and whether this could be leveraged in a digital communication system. In fact, the researchers had previously conducted an experimental study on HBC with real human subjects, the results of which were also published in Electronics.
    Using human-body models of different degrees of complexity, the researchers first determined the best representation to ensure accurate results in their simulations. Once this was settled, they proceeded to explore the effects of various system parameters and characteristics. As Dr. Muramatsu puts it, “We calculated the input impedance characteristics of the transceiver electrodes, the transmission characteristics between transceivers, and the electric field distributions in and around the head. In this way, we clarified the transmission mechanisms of the proposed HBC system.” Finally, with these results, they determined the best electrode structure out of the ones they tested. They also calculated the levels of electromagnetic exposure caused by their system and found that it would be completely safe for humans according to modern safety standards.
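    The study’s figures are not reproduced here; as a generic, hypothetical illustration of how a simulated ear-to-ear transmission characteristic feeds into a digital-link decision, the sketch below runs a simple link-budget check with invented numbers.

      # Generic link-budget check for a body channel. All values are placeholders,
      # not results from the study.
      def received_power_dbm(tx_power_dbm, channel_gain_db):
          # Received power (dBm) = transmit power (dBm) + channel gain (dB, negative for loss)
          return tx_power_dbm + channel_gain_db

      tx_power_dbm = -10.0        # assumed low-power transmitter
      channel_gain_db = -55.0     # assumed ear-to-ear transmission gain (e.g., simulated S21)
      rx_sensitivity_dbm = -90.0  # assumed receiver sensitivity

      rx_dbm = received_power_dbm(tx_power_dbm, channel_gain_db)
      margin_db = rx_dbm - rx_sensitivity_dbm
      print(f"received {rx_dbm:.1f} dBm, link margin {margin_db:.1f} dB ->",
            "link feasible" if margin_db > 0 else "link not feasible")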
    Overall, this study showcases the potential of HBC and extends the applicability of this promising technology. After all, hearing aids are but one of many modern head-worn wireless devices. For example, HBC could be implemented in wireless earphones to enable them to communicate with each other using far less power. Moreover, because the radio waves used in HBC attenuate quickly outside of the body, HBC-based devices on separate people could operate at similar frequencies in the same space without causing noise or interference. “With our results, we have made great progress towards reliable, low-power communication systems that are not limited to hearing aids but also applicable to other head-mounted wearable devices. Not just this, accessories such as earrings and piercings could also be used to create new communication systems,” concludes Dr. Muramatsu.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Brain-inspired highly scalable neuromorphic hardware

    KAIST researchers fabricated brain-inspired, highly scalable neuromorphic hardware by co-integrating single-transistor neurons and synapses. Because it is built with standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures.
    The research team, led by Yang-Kyu Choi and Sung-Yool Choi, produced single-transistor-based neurons and synapses for highly scalable neuromorphic hardware and demonstrated the hardware’s ability to recognize text and face images. This research was featured in Science Advances on August 4.
    Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power, less than 20 watts, by mimicking the human brain. To make neuromorphic hardware work, a neuron that integrates incoming signals and generates a spike once a certain level is reached, and a synapse that remembers the connection between two neurons, are necessary, just as in the biological brain. However, since neurons and synapses constructed from digital or analog circuits occupy a large area, there is a limit in terms of hardware efficiency and cost. Since the human brain consists of about 10^11 neurons and 10^14 synapses, the hardware cost must be reduced in order to apply neuromorphic hardware to mobile and IoT devices.
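    To make the neuron and synapse roles concrete, here is a minimal software sketch of a leaky integrate-and-fire neuron driven through weighted synapses. This is a generic textbook model shown only for illustration, not a model of KAIST’s single-transistor devices, and all parameters are assumptions.

      # Generic leaky integrate-and-fire neuron with weighted synaptic inputs.
      import numpy as np

      rng = np.random.default_rng(2)
      n_synapses, n_steps = 5, 100
      weights = rng.uniform(0.0, 0.5, size=n_synapses)                     # synaptic strengths
      spikes_in = (rng.random((n_steps, n_synapses)) < 0.2).astype(float)  # incoming spike trains

      v, v_threshold, leak = 0.0, 1.0, 0.95
      out_spikes = []
      for t in range(n_steps):
          v = leak * v + weights @ spikes_in[t]   # leaky integration of weighted inputs
          if v >= v_threshold:                    # fire once the integrated signal reaches threshold
              out_spikes.append(t)
              v = 0.0                             # reset the membrane potential
      print("output spike times:", out_spikes)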
    To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor, and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the neuromorphic transistors proved for the first time that they can be implemented with a ‘Janus structure’ that functions as both neuron and synapse, just like coins have heads and tails.
    Professor Yang-Kyu Choi said that this work can dramatically reduce the hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with single transistors. “We have demonstrated that neurons and synapses can be implemented using a single transistor,” said Joon-Kyu Han, the first author. “By co-integrating single transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware,” Han added. This research was supported by the National Research Foundation (NRF) and the IC Design Education Center (IDEC).
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.