More stories

  • Decades of research brings quantum dots to brink of widespread use

    A new article in Science magazine gives an overview of almost three decades of research into colloidal quantum dots, assesses the technological progress for these nanometer-sized specks of semiconductor matter, and weighs the remaining challenges on the path to widespread commercialization of this promising technology, with applications in everything from TVs to highly efficient sunlight collectors.
    “Thirty years ago, these structures were just a subject of scientific curiosity studied by a small group of enthusiasts. Over the years, quantum dots have become industrial-grade materials exploited in a range of traditional and emerging technologies, some of which have already found their way into commercial markets,” said Victor I. Klimov, a coauthor of the paper and leader of the team conducting quantum dot research at Los Alamos National Laboratory.
    Many advances described in the Science article originated at Los Alamos, including the first demonstration of colloidal quantum dot lasing, the discovery of carrier multiplication, pioneering research into quantum dot light emitting diodes (LEDs) and luminescent solar concentrators, and recent studies of single-dot quantum emitters.
    Using modern colloidal chemistry, the dimensions and internal structure of quantum dots can be manipulated with near-atomic precision, allowing highly accurate control of their physical properties and, thereby, their behavior in practical devices.
    A number of ongoing efforts on practical applications of colloidal quantum dots exploit the size-controlled tunability of their emission color and high emission quantum yields near the ideal 100 percent limit. These properties are attractive for screen displays and lighting, technologies in which quantum dots are used as color-converting phosphors. Due to their narrowband, spectrally tunable emission, quantum dots allow for improved color purity and more complete coverage of the color space than existing phosphor materials. Some of these devices, such as quantum dot TVs, have already reached technological maturity and are available in commercial markets.
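    As a rough, back-of-the-envelope illustration of the size-tunability described above, the following Python sketch estimates a spherical dot’s emission energy and wavelength from its radius using the simplified Brus (effective-mass) approximation; the CdSe-like material parameters and the brus_emission_energy helper are illustrative assumptions, not values taken from the article.

        import numpy as np

        # Physical constants (SI units)
        HBAR = 1.054571817e-34       # reduced Planck constant, J*s
        E_CHARGE = 1.602176634e-19   # elementary charge, C
        M_E = 9.1093837015e-31       # free-electron mass, kg
        EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
        H_C = 1.98644586e-25         # Planck constant times speed of light, J*m

        def brus_emission_energy(radius_nm, e_gap_ev=1.74, m_e_eff=0.13, m_h_eff=0.45, eps_r=10.6):
            """Approximate emission energy (eV) of a spherical dot of the given radius.

            Simplified Brus effective-mass model with illustrative CdSe-like defaults:
              E = Eg + (hbar^2 pi^2 / 2R^2)(1/me* + 1/mh*) - 1.8 e^2 / (4 pi eps0 eps_r R)
            """
            r = radius_nm * 1e-9
            confinement = (HBAR**2 * np.pi**2) / (2 * r**2) * (1 / (m_e_eff * M_E) + 1 / (m_h_eff * M_E))
            coulomb = 1.8 * E_CHARGE**2 / (4 * np.pi * EPS0 * eps_r * r)
            return e_gap_ev + (confinement - coulomb) / E_CHARGE

        for radius_nm in (1.5, 2.0, 2.5, 3.0):
            energy_ev = brus_emission_energy(radius_nm)
            wavelength_nm = H_C / (energy_ev * E_CHARGE) * 1e9
            print(f"R = {radius_nm:.1f} nm -> ~{energy_ev:.2f} eV (~{wavelength_nm:.0f} nm emission)")

    Under these assumed parameters, shrinking the radius from about 3 nm to 1.5 nm shifts the estimated emission to markedly shorter wavelengths, the qualitative blue-shift that display applications exploit when tuning dot size.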
    The next frontier is creating technologically viable LEDs powered by electrically driven quantum dots. The Science review describes various approaches for implementing these devices and discusses the remaining challenges. Quantum dot LEDs have already reached impressive brightness and near-ideal efficiencies approaching the theoretically defined limits. Much of this progress has been driven by continuing advances in understanding performance-limiting factors such as nonradiative Auger recombination.

  • All in your head: Exploring human-body communications with binaural hearing aids

    Modern portable devices are the result of great progress in miniaturization and wireless communications. Now that these devices can be made even smaller and lighter without loss of functionality, it’s likely that a great part of next-generation electronics will revolve around wearable technology. However, for wearables to truly transcend portables, we will need to rethink the way in which devices communicate with each other in “wireless body area networks” (WBANs). The usual approach of using an antenna to radiate signals into the surrounding area while hoping to reach a receiver won’t cut it for wearables: this method of transmission not only demands a lot of energy but can also be unsafe from a cybersecurity standpoint. Moreover, the human body itself constitutes a large obstacle because it absorbs electromagnetic radiation and blocks signals.
    But what alternatives do we have for wearable technology? One promising approach is “human body communication” (HBC), which involves using the body itself as a medium to transmit signals. The main idea is that some electric fields can propagate inside the body very efficiently without leaking to the surrounding area. By interfacing skin-worn devices with electrodes, we can enable them to communicate with each other using much lower frequencies than those used in conventional wireless protocols like Bluetooth. However, even though research on HBC began over two decades ago, this technology hasn’t been put to use on a large scale.
    To explore the full potential of HBC, researchers from Japan, including Dr. Dairoku Muramatsu from Tokyo University of Science and Professor Ken Sasaki from The University of Tokyo, focused on a yet-unexplored application of HBC: binaural hearing aids. Such hearing aids come in pairs, one for each ear, and greatly improve intelligibility and sound localization for the wearer by communicating with each other to adapt to the sound field. Because these hearing aids are in direct contact with the skin, they are a natural candidate application for HBC. In a recent study published in the journal Electronics, the researchers investigated, through detailed numerical simulations, how electric fields emitted from an electrode in one ear distribute themselves in the human head and reach a receiving electrode on the opposite ear, and whether this mechanism could be leveraged in a digital communication system. The researchers had previously conducted an experimental study on HBC with real human subjects, the results of which were also published in Electronics.
    Using human-body models of different degrees of complexity, the researchers first determined which representation would ensure accurate simulation results. Once this was settled, they explored the effects of various system parameters and characteristics. As Dr. Muramatsu puts it, “We calculated the input impedance characteristics of the transceiver electrodes, the transmission characteristics between transceivers, and the electric field distributions in and around the head. In this way, we clarified the transmission mechanisms of the proposed HBC system.” With these results, they determined the best electrode structure out of the ones they tested. They also calculated the levels of electromagnetic exposure caused by their system and found that it would be completely safe for humans according to modern safety standards.
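    For a sense of how such transmission characteristics feed into a communication-system design, the short Python sketch below converts transmitted and received electrode voltages at several candidate frequencies into a channel gain in decibels and compares it against an assumed minimum gain a receiver would need; the frequencies, voltage values, and the -55 dB threshold are hypothetical placeholders, not numbers from the study.

        import numpy as np

        # Hypothetical ear-to-ear HBC channel data: drive and received voltages at the
        # transmit/receive electrode pairs (placeholder values, not study results).
        freqs_mhz = [10, 20, 30, 40, 50]                              # candidate carrier frequencies
        v_tx = 1.0                                                    # 1 V drive at the transmit electrode
        v_rx = np.array([3.2e-3, 4.1e-3, 4.8e-3, 4.5e-3, 3.9e-3])    # simulated receive-electrode voltages

        # Voltage transmission gain in dB
        gain_db = 20 * np.log10(v_rx / v_tx)

        # Assume the receiver needs at least -55 dB of channel gain to decode reliably
        required_gain_db = -55.0

        for f, g in zip(freqs_mhz, gain_db):
            status = "ok" if g >= required_gain_db else "too weak"
            print(f"{f:2d} MHz: gain = {g:6.1f} dB ({status})")

        best = freqs_mhz[int(np.argmax(gain_db))]
        print(f"Best-coupled carrier under these assumptions: {best} MHz")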
    Overall, this study showcases the potential of HBC and extends the applicability of this promising technology. After all, hearing aids are but one of many modern head-worn wireless devices. For example, HBC could be implemented in wireless earphones to enable them to communicate with each other using far less power. Moreover, because the radio waves used in HBC attenuate quickly outside of the body, HBC-based devices on separate people could operate at similar frequencies in the same space without causing noise or interference. “With our results, we have made great progress towards reliable, low-power communication systems that are not limited to hearing aids but also applicable to other head-mounted wearable devices. Not just this, accessories such as earrings and piercings could also be used to create new communication systems,” concludes Dr. Muramatsu.
    Story Source:
    Materials provided by Tokyo University of Science.

  • Brain-inspired highly scalable neuromorphic hardware

    KAIST researchers fabricated brain-inspired, highly scalable neuromorphic hardware by co-integrating single-transistor neurons and synapses. Because it uses standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures.
    The research team, led by Yang-Kyu Choi and Sung-Yool Choi, produced neurons and synapses based on single transistors for highly scalable neuromorphic hardware and demonstrated its ability to recognize text and face images. This research was featured in Science Advances on August 4.
    Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power, less than 20 watts, by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when it integrates a certain signal and a synapse that remembers the connection between two neurons are necessary, just as in the biological brain. However, since neurons and synapses constructed from digital or analog circuits occupy a large area, there is a limit in terms of hardware efficiency and cost. Since the human brain consists of about 10^11 neurons and 10^14 synapses, the hardware cost must be reduced before neuromorphic hardware can be applied to mobile and IoT devices.
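    To make the neuron-plus-synapse requirement concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron fed through two weighted synapses; it illustrates the generic spiking behavior the paragraph describes, not the single-transistor device physics reported by the KAIST team, and all parameter values are illustrative.

        import numpy as np

        # Minimal leaky integrate-and-fire (LIF) neuron driven by two input spike trains
        # through weighted "synapses". All values are illustrative.
        rng = np.random.default_rng(0)

        dt = 1e-3                          # time step: 1 ms
        steps = 200
        tau = 20e-3                        # membrane time constant, s
        v_rest, v_thresh = 0.0, 1.0        # resting potential and firing threshold
        weights = np.array([0.3, 0.5])     # synaptic weights (the "memory" of the connections)

        # Random presynaptic spikes from two input neurons (5% chance per time step)
        pre_spikes = (rng.random((steps, 2)) < 0.05).astype(float)

        v = v_rest
        spike_times = []
        for t in range(steps):
            syn_input = weights @ pre_spikes[t]        # weighted synaptic input this step
            v += dt / tau * (v_rest - v) + syn_input   # leaky integration of the signal
            if v >= v_thresh:                          # threshold crossed: emit a spike
                spike_times.append(t * dt)
                v = v_rest                             # reset membrane potential
        print(f"Output neuron fired {len(spike_times)} spikes in {steps * dt * 1e3:.0f} ms")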
    To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the team showed for the first time that a neuromorphic transistor can be implemented as a ‘Janus structure’ that functions as both neuron and synapse, just as a coin has heads and tails.
    Professor Yang-Kyu Choi said that this work can dramatically reduce hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with a single transistor. “We have demonstrated that neurons and synapses can be implemented using a single transistor,” said Joon-Kyu Han, the first author. “By co-integrating single transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware,” Han added. This research was supported by the National Research Foundation (NRF) and the IC Design Education Center (IDEC).
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST).

  • Leaping squirrels! Parkour is one of their many feats of agility

    Videos of squirrels leaping from bendy branches across impossibly large gaps, parkouring off walls, scrambling to recover from tricky landings.
    Just more YouTube content documenting the crazy antics of squirrels hell-bent on reaching peanuts?
    No, these videos are part of a research study to understand the split-second decisions squirrels make routinely as they race through the tree canopy, jumping from branch to branch, using skills honed to elude deadly predators.
    The payoff of understanding how squirrels learn the limits of their agility could be robots with better control, able to move nimbly through varied landscapes, such as the rubble of a collapsed building in search of survivors, or to quickly assess an environmental threat.
    Biologists like Robert Full at the University of California, Berkeley, have shown over the last few decades how animals like geckos, cockroaches and squirrels physically move and how their bodies and limbs help them in sticky situations — all of which have been applied to making more agile robots. But now they are tackling a harder problem: How do animals decide whether or not to take a leap? How do they assess their biomechanical abilities to know whether they can stick the landing?
    “I see this as the next frontier: How are the decisions of movement shaped by our body? This is made far more challenging, because you also must assess your environment,” said Full, a professor of integrative biology. “That’s an important fundamental biology question. Fortunately, now we can understand how to embody control and explain innovation by creating physical models, like the most agile smart robots ever built.”
    In a paper appearing this week in the journal Science, Full and former UC Berkeley doctoral student Nathaniel Hunt, now an assistant professor of biomechanics at the University of Nebraska, Omaha, report on their most recent experiments on free-ranging squirrels, quantifying how they learn to leap from different types of launching pads — some bendy, some not — in just a few attempts, how they change their body orientation in midair based on the quality of their launch, and how they alter their landing maneuvers in real time, depending on the stability of the final perch.

  • Researchers develop a new AI-powered tool to identify and recommend jobs

    Car manufacturing workers, long-haul airline pilots, coal workers, shop assistants — many employees are forced to undertake the difficult and sometimes distressing challenge of finding a new occupation quickly due to technological and economic change, or crises such as the COVID-19 pandemic.
    To make the job transition process easier, and increase the chances of success, researchers from the University of Technology Sydney (UTS) and UNSW Sydney have developed a machine learning-based method that can identify and recommend jobs with similar underlying skill sets to someone’s current occupation.
    The system can also respond in real-time to changes in job demand and provide recommendations of the precise skills needed to transition to a new occupation.
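    The study itself describes a skill-driven recommendation method; as a loose, simplified illustration of the underlying idea, the Python sketch below ranks occupations by cosine similarity between binary skill vectors and lists the skills a worker would still need to acquire. The occupations, skill sets, and the recommend_transitions helper are invented for this example and are not the UTS/UNSW system or its data.

        import numpy as np

        # Toy occupation -> skill-set map (invented examples, not the published system's data)
        occupations = {
            "long-haul pilot":   {"navigation", "monitoring", "communication", "safety procedures"},
            "logistics planner": {"navigation", "scheduling", "monitoring", "communication"},
            "train driver":      {"monitoring", "safety procedures", "communication"},
            "data analyst":      {"statistics", "programming", "communication"},
        }

        def recommend_transitions(current, top_k=2):
            """Rank other occupations by cosine similarity of their binary skill vectors."""
            all_skills = sorted({s for skills in occupations.values() for s in skills})
            def vec(job):
                return np.array([1.0 if s in occupations[job] else 0.0 for s in all_skills])
            v_cur = vec(current)
            ranked = []
            for job in occupations:
                if job == current:
                    continue
                v = vec(job)
                cos = float(v_cur @ v / (np.linalg.norm(v_cur) * np.linalg.norm(v)))
                missing = occupations[job] - occupations[current]   # skills still to acquire
                ranked.append((cos, job, missing))
            ranked.sort(key=lambda t: t[0], reverse=True)
            return ranked[:top_k]

        for score, job, missing in recommend_transitions("long-haul pilot"):
            print(f"{job}: similarity {score:.2f}, skills to add: {sorted(missing) or 'none'}")

    A production system would of course use weighted skill importance and live labor-market data rather than binary sets, but the same similarity-and-gap logic underlies the transition recommendations described above.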
    Developed by Dr Nikolas Dawson and Dr Marian-Andrei Rizoiu from the UTS Data Science Institute and Professor Mary-Anne Williams, the Michael J Crouch Chair in Innovation at UNSW Business School, the system is based on findings from their new study, “Skill-driven Recommendations for Job Transition Pathways,” published in the international journal PLOS ONE.
    What are the benefits of using AI to find a job?
    Dr Dawson says while workplace change is inevitable, if we can make the job transition process easier and more efficient, there are significant productivity and equity benefits not only for individuals, but also for businesses and government.

  • Solving solar puzzle could help save Earth from planet-wide blackouts

    Scientists in Australia and in the USA have solved a long-standing mystery about the Sun that could help astronomers predict space weather and help us prepare for potentially devastating geomagnetic storms if they were to hit Earth.
    The Sun’s internal magnetic field is directly responsible for space weather — streams of high-energy particles from the Sun that can be triggered by solar flares, sunspots or coronal mass ejections that produce geomagnetic storms. Yet it is unclear how these events arise, and it has been impossible to predict when they will occur.
    Now, a new study led by Dr Geoffrey Vasil from the School of Mathematics & Statistics at the University of Sydney could provide a strong theoretical framework to help improve our understanding of the Sun’s internal magnetic dynamo that helps drive near-Earth space weather.
    The Sun is made up of several distinct regions. The convection zone is one of the most important — a 200,000-kilometre-deep ocean of super-hot, rolling, turbulent plasma taking up the outer 30 percent of the star’s radius.
    Existing solar theory suggests that the largest swirls and eddies span the convection zone, imagined as giant circular convection cells, as depicted in NASA illustrations.
    However, these cells have never been found, a long-standing problem known as the ‘Convective Conundrum’.

  • Building blocks: New evidence-based system predicts element combination forming high entropy alloy

    Building prediction models for high entropy alloys (HEAs) using material data is challenging, as datasets are often lacking or heavily biased. Now, researchers have developed a new evidence-based recommender system that determines various element combinations for potential HEAs. Unlike conventional data-driven techniques, this method has the added ability to recommend potential HEA candidates from limited amounts of experimental data. Their method can facilitate the development of alloys that have applications as advanced functional materials.
    High entropy alloys (HEAs) have desirable physical and chemical properties, such as high tensile strength and corrosion and oxidation resistance, which make them suitable for a wide range of applications. HEAs are a recent development, and their synthesis methods are an area of active research. But before these alloys can be synthesized, it is necessary to predict which element combinations would result in an HEA, in order to expedite and reduce the cost of materials research. One way of doing this is the inductive approach.
    The inductive method relies on theory-derived “descriptors” and parameters fitted from experimental data to represent an alloy of a particular element combination and predict its formation. Being data-dependent, this method is only as good as the data. However, experimental data regarding HEA formation are often biased. Additionally, different datasets might not be directly comparable for integration, making the inductive approach challenging and mathematically difficult.
    These drawbacks have led researchers to develop a novel evidence-based material recommender system (ERS) that can predict the formation of HEAs without the need for material descriptors. In a collaborative work published in Nature Computational Science, researchers from the Japan Advanced Institute of Science and Technology (JAIST); the National Institute for Materials Science, Japan; the National Institute of Advanced Industrial Science and Technology, Japan; HPC SYSTEMS Inc., Japan; and the Université de technologie de Compiègne, France, introduced a method that rationally transforms materials data into evidence about similarities between material compositions, and combines this evidence to draw conclusions about the properties of new materials.
    The research team consisted of Professor Hieu-Chi Dam from JAIST and his colleagues, Professor Van-Nam Huynh, Assistant Professor Duong-Nguyen Nguyen, and Minh-Quyet Ha, PhD student (JAIST); Dr. Takahiro Nagata, Dr. Toyohiro Chikyow, and Dr. Hiori Kino (National Institute for Materials Science, Japan); Dr. Takashi Miyake, (National Institute of Advanced Industrial Science and Technology, Japan); Dr. Viet-Cuong Nguyen (HPC SYSTEMS Inc., Japan); and Professor Thierry Denœux (Université de technologie de Compiègne, France).
    Regarding their novel approach to this issue, Prof. Hieu-Chi Dam elaborates: “We developed a data-driven materials development system that uses the theory of evidence to collect reasonable evidence for the composition of potential materials from multiple data sources, i.e., clues that indicate the possibility of the existence of unknown compositions, and to propose the composition of new materials based on this evidence.” The basis of their method is as follows: elements in existing alloys are initially substituted with chemically similar counterparts, and the newly substituted alloys are considered as candidates. Then, the collected evidence regarding the similarity between material compositions is used to draw conclusions about these candidates. Finally, the candidates are ranked to recommend the most promising potential HEAs.
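    A heavily simplified Python sketch of that substitute-and-rank idea is shown below: candidates are generated by swapping one element of a known alloy for a chemically similar one and are then ranked by a similarity score. The similarity table and scoring rule are placeholder stand-ins for illustration; the published system combines evidence using the theory of evidence, which is not reproduced here.

        # Toy chemical-similarity scores between element pairs (placeholder values, not study data)
        similarity = {
            ("Cr", "Mn"): 0.80, ("Cr", "Fe"): 0.70, ("Ni", "Cu"): 0.75,
            ("Fe", "Mn"): 0.85, ("Co", "Fe"): 0.80, ("Ni", "Co"): 0.90,
        }

        def sim(a, b):
            return similarity.get((a, b), similarity.get((b, a), 0.0))

        known_hea = ("Fe", "Co", "Ni", "Cr")   # toy "evidence": one known equimolar HEA
        substitutes = ["Mn", "Cu", "V"]        # elements considered as replacements

        # Generate candidates by swapping one element for a substitute and score each
        # candidate by the chemical similarity of the swapped pair.
        candidates = []
        for i, old in enumerate(known_hea):
            for new in substitutes:
                if new in known_hea:
                    continue
                cand = known_hea[:i] + (new,) + known_hea[i + 1:]
                candidates.append((sim(old, new), cand, f"{old} -> {new}"))

        candidates.sort(key=lambda t: t[0], reverse=True)
        for score, cand, swap in candidates[:3]:
            print("-".join(cand), f"(swap {swap}, similarity {score:.2f})")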
    The researchers used their method to recommend Fe-Co-based HEAs, as these have potential applications in next-generation high-power devices. Out of all possible combinations of elements, their method recommended an alloy consisting of iron, manganese, cobalt, and nickel (FeMnCoNi) as the most probable HEA. Using this information as a basis, the researchers successfully synthesized the Fe0.25Co0.25Mn0.25Ni0.25 alloy, confirming the validity of their method.
    The newly developed method is a breakthrough that paves the way to synthesizing a wide variety of materials without the need for large and consistent datasets of material properties. As Prof. Dam explains, “Instead of forcibly merging data from multiple datasets, our system rationally considers each dataset as a source of evidence and combines the evidence to reasonably draw the final conclusions for recommending HEA, where the uncertainty can be quantitatively evaluated.”
    While furthering research on functional materials, the findings of Prof. Dam and his team are also a noteworthy contribution to the field of computational science and artificial intelligence, as they allow the quantitative measurement of uncertainty in decision making in a data-driven manner.

  • Neural network model shows why people with autism read facial expressions differently

    People with autism spectrum disorder have difficulty interpreting facial expressions.
    Using a neural network model that reproduces the brain on a computer, a group of researchers based at Tohoku University have unraveled how this comes to be.
    The journal Scientific Reports published the results on July 26, 2021.
    “Humans recognize different emotions, such as sadness and anger, by looking at facial expressions. Yet little is known about how we come to recognize different emotions based on the visual information of facial expressions,” said paper coauthor Yuta Takahashi.
    “It is also not clear what changes occur in this process that lead to people with autism spectrum disorder struggling to read facial expressions.”
    The research group employed predictive processing theory to help understand more. According to this theory, the brain constantly predicts the next sensory stimulus and adapts when its prediction is wrong. Sensory information, such as facial expressions, helps reduce prediction error.
    The artificial neural network model incorporated the predictive processing theory and reproduced the developmental process by learning to predict how parts of the face would move in videos of facial expressions. After this, clusters of emotions self-organized in the neural network model’s higher-level neuron space, without the model knowing which emotion each facial expression in the videos corresponds to.
    The model could generalize to unknown facial expressions not given in the training data, reproducing facial part movements and minimizing prediction errors.
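    As a loose illustration of the predictive-processing idea (predict the next sensory input, then reduce the prediction error), the Python sketch below trains a tiny linear model to predict the next frame of synthetic “facial landmark” positions; it is a didactic stand-in, not the network used in the Scientific Reports study, and the data are randomly generated.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic sequence of 8 "facial landmark" coordinates drifting smoothly
        # over 100 frames (randomly generated stand-in data, not from the study).
        T, D = 100, 8
        seq = np.cumsum(0.05 * rng.standard_normal((T, D)), axis=0)

        # Tiny linear next-frame predictor trained by gradient descent on the
        # prediction error, echoing the predictive-processing idea of minimizing the
        # mismatch between predicted and actual next sensory input.
        W = np.zeros((D, D))
        lr = 0.05
        for epoch in range(200):
            pred = seq[:-1] @ W                      # predicted next frames
            err = seq[1:] - pred                     # prediction error
            W += lr * seq[:-1].T @ err / (T - 1)     # update weights to reduce the error

        mse = float(np.mean((seq[1:] - seq[:-1] @ W) ** 2))
        print(f"Mean squared prediction error after training: {mse:.5f}")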
    Following this, the researchers conducted experiments in which they induced abnormalities in the neurons’ activities to investigate the effects on learning development and cognitive characteristics. In the model where heterogeneity of activity in the neural population was reduced, the generalization ability also decreased, and the formation of emotional clusters in higher-level neurons was inhibited. This led to a tendency to fail in identifying the emotion of unknown facial expressions, which resembles a symptom of autism spectrum disorder.
    According to Takahashi, the study clarified that predictive processing theory can explain emotion recognition from facial expressions using a neural network model.
    “We hope to further our understanding of the process by which humans learn to recognize emotions and the cognitive characteristics of people with autism spectrum disorder,” added Takahashi. “The study will help advance developing appropriate intervention methods for people who find it difficult to identify emotions.”
    Story Source:
    Materials provided by Tohoku University.