More stories

  • Can the AI driving ChatGPT help to detect early signs of Alzheimer's disease?

    The artificial intelligence algorithms behind the chatbot program ChatGPT, which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries, might one day help doctors detect Alzheimer's disease in its early stages. Research from Drexel University's School of Biomedical Engineering, Science and Health Systems recently demonstrated that OpenAI's GPT-3 program can identify clues in spontaneous speech that predict the early stages of dementia with 80% accuracy.
    Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing programs for early prediction of Alzheimer’s — leveraging current research suggesting that language impairment can be an early indicator of neurodegenerative disorders.
    Finding an Early Sign
    The current practice for diagnosing Alzheimer's disease typically involves a medical history review and a lengthy set of physical and neurological evaluations and tests. While there is still no cure for the disease, spotting it early can give patients more options for therapeutics and support. Because language impairment is a symptom in 60-80% of dementia patients, researchers have been focusing on programs that can pick up on subtle clues, such as hesitation, grammar and pronunciation mistakes, and forgetting the meaning of words, as a quick test that could indicate whether a patient should undergo a full examination.
    “We know from ongoing research that the cognitive effects of Alzheimer’s Disease can manifest themselves in language production,” said Hualou Liang, PhD, a professor in Drexel’s School of Biomedical Engineering, Science and Health Systems and a coauthor of the research. “The most commonly used tests for early detection of Alzheimer’s look at acoustic features, such as pausing, articulation and vocal quality, in addition to tests of cognition. But we believe the improvement of natural language processing programs provide another path to support early identification of Alzheimer’s.”
    A Program that Listens and Learns
    GPT-3, officially the third generation of OpenAI's Generative Pre-trained Transformer (GPT), uses a deep learning model trained on vast swaths of text from the internet, with a particular focus on how words are used and how language is constructed. This training allows it to produce human-like responses to any task that involves language, from answering simple questions to writing poems or essays. More
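    The article does not spell out the study's pipeline, but the general idea of such a screening tool can be sketched: represent each speech transcript as a fixed-length vector and train a simple classifier to separate early-dementia speech from control speech. The snippet below is a hypothetical, self-contained stand-in (a hashing vectorizer plays the role of the GPT-3-derived text features, and the example transcripts and labels are invented), not the Drexel team's code.

```python
# Hypothetical sketch of a language-feature-based dementia screen; NOT the study's code.
from sklearn.feature_extraction.text import HashingVectorizer  # stand-in for GPT-3 features
from sklearn.linear_model import LogisticRegression

# Invented example transcripts of spontaneous speech (1 = early dementia, 0 = control)
transcripts = [
    "and the boy is um reaching for the uh the thing up on the shelf",
    "the girl asks her brother to hand her a cookie from the jar",
]
labels = [1, 0]

vectorizer = HashingVectorizer(n_features=2**12)   # the study used LLM-derived features instead
X = vectorizer.transform(transcripts)

clf = LogisticRegression(max_iter=1000).fit(X, labels)
# With a real labelled corpus, cross-validated accuracy would be the figure to compare
# against the ~80% reported in the article.
```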

  • New X-ray imaging technique to study the transient phases of quantum materials

    The use of light to produce transient phases in quantum materials is fast becoming a novel way to engineer new properties in them, such as the generation of superconductivity or nanoscale topological defects. However, visualizing the growth of a new phase in a solid is not easy, due in part to the wide range of spatial and time scales involved in the process.
    Although over the last two decades scientists have explained light-induced phase transitions by invoking nanoscale dynamics, real-space images had never been produced, and thus no one had actually seen them.
    In the new study published in Nature Physics, ICFO researchers Allan S. Johnson and Daniel Pérez-Salinas, led by former ICFO Prof. Simon Wall, in collaboration with colleagues from Aarhus University, Sogang University, Vanderbilt University, the Max Born Institute, the Diamond Light Source, ALBA Synchrotron, Utrecht University, and the Pohang Accelerator Laboratory, have pioneered a new imaging method that allows the capture of the light-induced phase transition in vanadium oxide (VO2) with high spatial and temporal resolution.
    The new technique implemented by the researchers is based on coherent X-ray hyperspectral imaging at a free electron laser, which has allowed them to visualize and better understand, at the nanoscale, the insulator-to-metal phase transition in this very well-known quantum material.
    The crystal VO2 has been widely used to study light-induced phase transitions. It was the first material to have its solid-solid transition tracked by time-resolved X-ray diffraction, and the first to have its electronic nature studied with ultrafast X-ray absorption techniques. At room temperature, VO2 is in the insulating phase. However, if light is applied to the material, it is possible to break the dimers of the vanadium ion pairs and drive the transition from an insulating to a metallic phase.
    In their experiment, the authors of the study prepared thin samples of VO2 with a gold mask to define the field of view. Then, the samples were taken to the X-ray Free Electron Laser facility at the Pohang Accelerator Laboratory, where an optical laser pulse induced the transient phase, before being probed by an ultrafast X-ray laser pulse. A camera captured the scattered X-rays, and the coherent scattering patterns were converted into images by using two different approaches: Fourier Transform Holography (FTH) and Coherent Diffractive Imaging (CDI). Images were taken at a range of time delays and X-ray wavelengths to build up a movie of the process with 150 femtosecond time resolution and 50 nm spatial resolution, but also with full hyperspectral information. More
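    The reconstruction step behind Fourier Transform Holography is conceptually simple: a single inverse Fourier transform of the recorded far-field intensity yields the sample autocorrelation, and the off-centre cross-correlation terms with the reference aperture appear as direct real-space images. The sketch below illustrates only that core step; it is not the authors' full hyperspectral FTH/CDI pipeline, and the array name is illustrative.

```python
import numpy as np

def fth_reconstruct(hologram: np.ndarray) -> np.ndarray:
    """Core FTH step: inverse-FFT the measured far-field intensity pattern.
    The off-centre cross-correlation terms between the sample and the reference
    pinhole appear as direct real-space images of the illuminated area."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(hologram)))

# hologram: 2-D detector image recorded at one pump-probe delay and X-ray wavelength
# image = np.abs(fth_reconstruct(hologram))  # then crop the cross-correlation region
```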

  • Words prove their worth as teaching tools for robots

    Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate a simulated robotic arm's learning to lift and use a variety of tools.
    The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.
    Adding descriptions of a tool’s form and function to the training process for the robot improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning on Dec. 14.
    Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.
    “Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.
    The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool. More
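    Because the prompt template is stated explicitly, the querying step is easy to illustrate. The snippet below is a hedged sketch using OpenAI's pre-1.0 Python client and completions endpoint; the model name, decoding parameters, and function name are assumptions rather than the authors' code, and the returned text would subsequently be encoded and fed into the robot-learning pipeline.

```python
import openai  # assumes the pre-1.0 OpenAI Python client with OPENAI_API_KEY set

def describe_tool(tool: str, feature: str) -> str:
    """Query GPT-3 with the paper's prompt template, where feature is 'shape' or 'purpose'."""
    prompt = f"Describe the {feature} of {tool} in a detailed and scientific response."
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed model; the article only says GPT-3
        prompt=prompt,
        max_tokens=128,
        temperature=0.0,           # deterministic descriptions for training data
    )
    return response["choices"][0]["text"].strip()

# Example usage (hypothetical): describe_tool("hammer", "purpose")
```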

  • New and improved multi-band operational receiver for 5G new radio communication

    In a new study, Tokyo Tech researchers have developed an ultra-wide-band receiver based on a harmonic-selection technique to improve the operational bandwidth of 5G networks. Fifth-generation (5G) mobile networks are now in use worldwide, increasingly at millimeter-wave frequencies of tens of gigahertz. To keep up with the data traffic in these networks, appropriate receivers are necessary. In this regard, the proposed technology could revolutionize the world of next-generation communications.
    As next-generation communication networks are developed, the technology used to deploy them must evolve alongside them. Fifth-generation mobile network New Radio (5G NR) bands are continuously expanding to improve channel capacity and data rates. To realize cross-standard communication and worldwide application of 5G NR, multi-band compatibility is therefore essential.
    Recently, millimeter-wave (mmW) communication has been considered a promising candidate for managing the ever-increasing data traffic in 5G NR networks. In the past few years, many studies have shown that a phased-array architecture improves signal quality for 5G NR communication at mmW frequencies. Unfortunately, multiple chips are needed for multi-band operation, which increases system size and complexity. Moreover, operating in multi-band modes exposes the receivers to changing electromagnetic environments, leading to cross-talk and cluttered signals with unwanted echoes.
    To address these issues, a team of researchers from Tokyo Institute of Technology (Tokyo Tech) in Japan has now developed a novel “harmonic-selection technique” for extending the operational bandwidth of 5G NR communication. The study, led by Professor Kenichi Okada, was published in the IEEE Journal of Solid-State Circuits. “Compared to conventional systems, our proposed network operates at low power consumption. Additionally, the frequency coverage makes it compatible with all existing 5G bands, as well as the 60 GHz earmarked as the next potential licensed band. As such, our receiver could be the key to utilizing the ever-growing 5G bandwidth,” says Prof. Okada.
    To fabricate the proposed dual-channel multi-band phased-array receiver, the team used a 65-nm CMOS process. The chip, which includes both receiver channels, measures just 3.2 mm × 1.4 mm.
    The team took a three-pronged approach to tackling these problems. The first prong was the harmonic-selection technique itself, which uses a tri-phase local oscillator (LO) to drive the mixer; it reduces the required LO frequency coverage while allowing multi-band down-conversion. The second was a dual-mode multi-band low-noise amplifier (LNA). The LNA structure not only improved power efficiency and tolerance of inter-band blockers (reducing interference from other bands) but also struck a good balance between circuit performance and chip area. The third was the receiver architecture, which follows a Hartley topology to improve image rejection; the team introduced a single-stage hybrid-type polyphase filter (PPF) for sideband selection and image-rejection calibration.
    The team found that the proposed technique outperformed other state-of-the-art multi-band receivers. The harmonic-selection technique enabled operation from 24.25 to 71 GHz while providing more than 36 dB of inter-band blocker rejection. Additionally, the receiver's power consumption was low: 36 mW, 32 mW, 51 mW, and 75 mW at 28 GHz, 39 GHz, 47.2 GHz, and 60.1 GHz, respectively.
    “By combining a dual-mode multi-band LNA with a polyphase filter, the device realizes rejections to inter-band blockers better than other state-of-the-art filters. This means that for currently used bands, the rejections are better than 50 dB, and over 36 dB for the entire supported 24-71 GHz operation region. With new 5G frequency bands on the horizon, such low-noise broadband receivers will prove to be useful,” concludes an optimistic Prof. Okada.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length. More

  • Cheerful chatbots don't necessarily improve customer service

    Imagine messaging an artificial intelligence (AI) chatbot about a missing package and getting the response that it would be “delighted” to help. Once the bot creates the new order, it says it is “happy” to resolve the issue. Afterward, you receive a survey about your interaction. Would you be likely to rate it as positive or negative?
    This scenario isn’t that far from reality, as AI chatbots are already taking over online commerce. By 2025, 95% of companies will have an AI chatbot, according to Finance Digest. AI might not be sentient yet, but it can be programmed to express emotions.
    Humans displaying positive emotions in customer service interactions have long been known to improve customer experience, but researchers at the Georgia Institute of Technology’s Scheller College of Business wanted to see if this also applied to AI. They conducted experimental studies to determine if positive emotional displays improved customer service and found that emotive AI is only appreciated if the customer expects it, and it may not be the best avenue for companies to invest in.
    “It is commonly believed and repeatedly shown that human employees can express positive emotion to improve customers’ service evaluations,” said Han Zhang, the Steven A. Denning Professor in Technology & Management. “Our findings suggest that the likelihood of AI’s expression of positive emotion to benefit or hurt service evaluations depends on the type of relationship that customers expect from the service agent.”
    The researchers presented their findings in the paper, “Bots With Feelings: Should AI Agents Express Positive Emotion in Customer Service?,” in Information Systems Research in December.
    Studying AI Emotion
    The researchers conducted three studies to expand the understanding of emotional AI in customer service transactions. Although the participants and scenario changed in each study, the AI chatbots imbued with emotion consistently used positive emotional adjectives, such as excited, delighted, happy, or glad. They also deployed more exclamation points. More

  • Characters' actions in movie scripts reflect gender stereotypes

    Researchers have developed a novel machine-learning framework that uses scene descriptions in movie scripts to automatically recognize different characters’ actions. Applying the framework to hundreds of movie scripts showed that these actions tend to reflect widespread gender stereotypes, some of which are found to be consistent across time. Victor Martinez and colleagues at the University of Southern California, U.S., present these findings in the open-access journal PLOS ONE on December 21.
    Movies, TV shows, and other media consistently portray traditional gender stereotypes, some of which may be harmful. To deepen understanding of this issue, some researchers have explored the use of computational frameworks as an efficient and accurate way to analyze large amounts of character dialogue in scripts. However, some harmful stereotypes might be communicated not through what characters say, but through their actions.
    To explore how characters’ actions might reflect stereotypes, Martinez and colleagues used a machine-learning approach to create a computational model that can automatically analyze scene descriptions in movie scripts and identify different characters’ actions. Using this model, the researchers analyzed over 1.2 million scene descriptions from 912 movie scripts produced from 1909 to 2013, identifying fifty thousand actions performed by twenty thousand characters.
    Next, the researchers conducted statistical analyses to examine whether there were differences between the types of actions performed by characters of different genders. These analyses identified a number of differences that reflect known gender stereotypes.
    For instance, they found that female characters tend to display less agency than male characters, and that female characters are more likely to show affection. Male characters are less likely to “sob” or “cry,” and female characters are more likely to be subjected to “gawking” or “watching” by other characters, highlighting an emphasis on female appearance.
    While the model is limited in its ability to fully capture the nuanced societal context relating each scene to the script and the overall narrative, these findings align with prior research on gender stereotypes in popular media, and they could help raise awareness of how media can perpetuate harmful stereotypes and thereby influence people's real-life beliefs and actions. In the future, the machine-learning framework could be refined to incorporate notions of intersectionality, such as the interplay between gender, age, and race, to deepen understanding of this issue.
    The authors add: “Researchers have proposed using machine-learning methods to identify stereotypes in character dialogues in media, but these methods do not account for harmful stereotypes communicated through character actions. To address this issue, we developed a large-scale machine-learning framework that can identify character actions from movie script descriptions. By collecting 1.2 million scene descriptions from 912 movie scripts, we were able to study systematic gender differences in movie portrayals at a large scale.”
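    The framework itself is a learned model, but the underlying task of pulling character actions out of scene descriptions can be illustrated with a much simpler heuristic. The snippet below uses spaCy dependency parsing to extract rough (character, action) pairs; it is a stand-in for illustration only, not the authors' method, and the example sentence is invented.

```python
# Illustrative stand-in (NOT the authors' learned framework): extract rough
# (character, action) pairs from a scene description via dependency parsing.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_actions(scene_description: str):
    """Return (subject, verb-lemma) pairs: a character and the action they perform."""
    doc = nlp(scene_description)
    return [
        (token.text, token.head.lemma_)
        for token in doc
        if token.dep_ == "nsubj" and token.head.pos_ == "VERB"
    ]

print(extract_actions("MARY sobs quietly while JOHN watches her from the doorway."))
# e.g. [('MARY', 'sob'), ('JOHN', 'watch')]
```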
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length. More

  • At the edge of graphene-based electronics

    A pressing quest in the field of nanoelectronics is the search for a material that could replace silicon. Graphene has seemed promising for decades. But its potential faltered along the way, due to damaging processing methods and the lack of a new electronics paradigm to embrace it. With silicon nearly maxed out in its ability to accommodate faster computing, the next big nanoelectronics platform is needed now more than ever.
    Walter de Heer, Regents’ Professor in the School of Physics at the Georgia Institute of Technology, has taken a critical step forward in making the case for a successor to silicon. De Heer and his collaborators developed a new nanoelectronics platform based on graphene — a single sheet of carbon atoms. The technology is compatible with conventional microelectronics manufacturing, a necessity for any viable alternative to silicon. In the course of their research, published in Nature Communications, the team may have also discovered a new quasiparticle. Their discovery could lead to manufacturing smaller, faster, more efficient, and more sustainable computer chips, and has potential implications for quantum and high-performance computing.
    “Graphene’s power lies in its flat, two-dimensional structure that is held together by the strongest chemical bonds known,” de Heer said. “It was clear from the beginning that graphene can be miniaturized to a far greater extent than silicon — enabling much smaller devices, while operating at higher speeds and producing much less heat. This means that, in principle, more devices can be packed on a single chip of graphene than with silicon.”
    In 2001, de Heer proposed an alternative form of electronics based on epitaxial graphene, or epigraphene — a layer of graphene that was found to spontaneously form on top of silicon carbide crystal, a semiconductor used in high power electronics. At the time, researchers found that electric currents flow without resistance along epigraphene’s edges, and that graphene devices could be seamlessly interconnected without metal wires. This combination allows for a form of electronics that relies on the unique light-like properties of graphene electrons.
    “Quantum interference has been observed in carbon nanotubes at low temperatures, and we expect to see similar effects in epigraphene ribbons and networks,” de Heer said. “This important feature of graphene is not possible with silicon.”
    Building the Platform
    To create the new nanoelectronics platform, the researchers created a modified form of epigraphene on a silicon carbide crystal substrate. In collaboration with researchers at the Tianjin International Center for Nanoparticles and Nanosystems at the University of Tianjin, China, they produced unique silicon carbide chips from electronics-grade silicon carbide crystals. The graphene itself was grown at de Heer’s laboratory at Georgia Tech using patented furnaces. More

  • The physical intelligence of ant and robot collectives

    Individual ants are relatively simple creatures, and yet a colony of ants can perform remarkably complex tasks, such as intricate construction, foraging, and defense.
    Recently, Harvard researchers took inspiration from ants to design a team of relatively simple robots that can work collectively to perform complex tasks using only a few basic parameters.
    The research was published in eLife.
    “This project continued along an abiding interest in understanding the collective dynamics of social insects such as termites and bees, especially how these insects can manipulate the environment to create complex functional architectures,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and Physics, and senior author of the paper.
    The research team began by studying how black carpenter ants work together to excavate out of and escape from a soft corral.
    “At first, the ants inside the corral moved around randomly, communicating via their antennae before they started working together to escape the corral,” said S Ganga Prasath, a postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences and one of the lead authors of the paper. More