More stories

  • Shrinking hydrogels enlarge nanofabrication options

    Carnegie Mellon University’s Yongxin (Leon) Zhao and the Chinese University of Hong Kong’s Shih-Chi Chen have a big idea for manufacturing nanodevices.
    Zhao’s Biophotonics Lab develops novel techniques to study biological and pathological processes in cells and tissues. Through a process called expansion microscopy, the lab proportionally enlarges microscopic samples embedded in a hydrogel, allowing researchers to view fine details without upgrading their microscopes.
    In 2019, an inspiring conversation with Chen, who was visiting Carnegie Mellon as an invited speaker and is a professor in the Chinese University of Hong Kong’s Department of Mechanical and Automation Engineering, sparked a collaboration between the two researchers. They believed their combined expertise could yield novel solutions to a long-standing challenge in microfabrication: reducing the size of printable nanodevices to as small as tens of nanometers or several atoms thick.
    Their solution is the opposite of expansion microscopy: create the 3D pattern of a material in a hydrogel, then shrink it to reach nanoscale resolution.
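    As a rough back-of-the-envelope illustration of the idea, consider the arithmetic below; the printed feature size and the shrink factor are assumptions for the sketch, not values reported in the paper.

    ```python
    # Illustrative arithmetic only: both numbers are assumptions,
    # not values reported in the Science paper.
    printed_feature_nm = 200.0  # assumed feature size from two-photon lithography
    linear_shrink = 10.0        # assumed linear shrink factor of the hydrogel

    final_feature_nm = printed_feature_nm / linear_shrink
    print(f"{printed_feature_nm:.0f} nm printed features shrink to "
          f"~{final_feature_nm:.0f} nm")  # tens of nanometers
    ```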
    “Shih-Chi is known for inventing the ultrafast two-photon lithography system,” said Zhao, the Eberly Family Career Development Associate Professor of Biological Sciences. “We met during his visit to Carnegie Mellon and decided to combine our techniques and expertise to pursue this radical idea.”
    The results of the collaboration open new doors for designing sophisticated nanodevices and are published in the journal Science. More

  • Can the AI driving ChatGPT help to detect early signs of Alzheimer's disease?

    The artificial intelligence algorithms behind the chatbot program ChatGPT — which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries — might one day be able to help doctors detect Alzheimer’s disease in its early stages. Research from Drexel University’s School of Biomedical Engineering, Science and Health Systems recently demonstrated that OpenAI’s GPT-3 program can identify clues in spontaneous speech that predict the early stages of dementia with 80% accuracy.
    Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing programs for early prediction of Alzheimer’s — leveraging current research suggesting that language impairment can be an early indicator of neurodegenerative disorders.
    Finding an Early Sign
    The current practice for diagnosing Alzheimer’s disease typically involves a medical history review and a lengthy set of physical and neurological evaluations and tests. While there is still no cure for the disease, spotting it early can give patients more options for therapeutics and support. Because language impairment is a symptom in 60-80% of dementia patients, researchers have been focusing on programs that can pick up on subtle clues, such as hesitation, grammar and pronunciation mistakes, and forgetting the meaning of words, as a quick test that could indicate whether a patient should undergo a full examination.
    “We know from ongoing research that the cognitive effects of Alzheimer’s disease can manifest themselves in language production,” said Hualou Liang, PhD, a professor in Drexel’s School of Biomedical Engineering, Science and Health Systems and a coauthor of the research. “The most commonly used tests for early detection of Alzheimer’s look at acoustic features, such as pausing, articulation and vocal quality, in addition to tests of cognition. But we believe improvements in natural language processing programs provide another path to support early identification of Alzheimer’s.”
    A Program that Listens and Learns
    GPT-3, the third generation of OpenAI’s Generative Pre-trained Transformer (GPT), uses a deep learning algorithm trained by processing vast swaths of text from the internet, with a particular focus on how words are used and how language is constructed. This training allows it to produce humanlike responses to any task that involves language, from answering simple questions to writing poems or essays. More
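    A minimal sketch of the kind of pipeline such work suggests: embed each speech transcript with a GPT-3-era text embedding model, then train an off-the-shelf classifier. The model name, the `transcripts`/`labels` placeholders, and the classifier choice are all assumptions for illustration, not details from the Drexel study.

    ```python
    # Sketch: classify speech transcripts via text embeddings plus a linear model.
    # Assumes the legacy (pre-1.0) `openai` Python client; data are placeholders.
    import numpy as np
    import openai
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    openai.api_key = "YOUR_KEY"              # placeholder
    transcripts = ["...", "...", "...", "..."]  # speech transcripts (placeholder)
    labels = [0, 1, 0, 1]                    # 1 = early dementia, 0 = control (placeholder)

    def embed(text):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return resp["data"][0]["embedding"]

    X = np.array([embed(t) for t in transcripts])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```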

  • Extreme weather in 2022 showed the global impact of climate change

    It was another shattering year.

    Climate change amped up weather extremes around the globe, smashing temperature records, sinking river levels to historic lows and raising rainfall to devastating highs. Droughts set the stage for wildfires and worsened food insecurity. Researchers found themselves pondering the limits of humans’ ability to tolerate extreme heat (SN: 7/27/22).

    The extreme events of 2022 highlighted here are just a sample of the year’s climate disasters. Each was exacerbated by human-caused climate change or is in line with projections of regional impacts.

    In its Sixth Assessment Report, released in 2021 and 2022, the United Nations’ Intergovernmental Panel on Climate Change, or IPCC, warned that humans are dramatically overhauling Earth’s climate (SN: 8/9/21). Earth’s average surface temperature has already risen by at least 1.1 degrees Celsius since preindustrial times, thanks to human inputs of heat-trapping gases to the atmosphere, particularly carbon dioxide and methane (SN: 3/10/22). That warming has shifted the flow of energy around the planet, altering weather patterns, raising sea levels and turning past extremes into new normals (SN: 2/1/22).

    And the world will have to weather more such climate extremes as carbon keeps accumulating in the atmosphere and global temperatures continue to rise. But IPCC scientists and others hope that, by highlighting the regional and local effects of climate change, the world will ramp up its efforts to reduce climate-warming emissions — averting a more disastrous future. More

  • New X-ray imaging technique to study the transient phases of quantum materials

    The use of light to produce transient phases in quantum materials is fast becoming a novel way to engineer new properties in them, such as the generation of superconductivity or nanoscale topological defects. However, visualizing the growth of a new phase in a solid is not easy, due in part to the wide range of spatial and time scales involved in the process.
    Although scientists have spent the last two decades explaining light-induced phase transitions by invoking nanoscale dynamics, real-space images had never been produced, and thus no one had actually seen them.
    In the new study published in Nature Physics, ICFO researchers Allan S. Johnson and Daniel Pérez-Salinas, led by former ICFO Prof. Simon Wall, in collaboration with colleagues from Aarhus University, Sogang University, Vanderbilt University, the Max Born Institute, the Diamond Light Source, ALBA Synchrotron, Utrecht University, and the Pohang Accelerator Laboratory, have pioneered a new imaging method that allows the capture of the light-induced phase transition in vanadium oxide (VO2) with high spatial and temporal resolution.
    The new technique implemented by the researchers is based on coherent X-ray hyperspectral imaging at a free electron laser, which has allowed them to visualize and better understand, at the nanoscale, the insulator-to-metal phase transition in this very well-known quantum material.
    The crystal VO2 has been widely used to study light-induced phase transitions. It was the first material to have its solid-solid transition tracked by time-resolved X-ray diffraction, and the first whose electronic nature was studied with ultrafast X-ray absorption techniques. At room temperature, VO2 is in the insulating phase. Illuminating the material, however, can break the dimers of the vanadium ion pairs and drive the transition from the insulating to the metallic phase.
    In their experiment, the authors of the study prepared thin samples of VO2 with a gold mask to define the field of view. Then, the samples were taken to the X-ray Free Electron Laser facility at the Pohang Accelerator Laboratory, where an optical laser pulse induced the transient phase, before being probed by an ultrafast X-ray laser pulse. A camera captured the scattered X-rays, and the coherent scattering patterns were converted into images by using two different approaches: Fourier Transform Holography (FTH) and Coherent Diffractive Imaging (CDI). Images were taken at a range of time delays and X-ray wavelengths to build up a movie of the process with 150 femtosecond time resolution and 50 nm spatial resolution, but also with full hyperspectral information. More
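    In FTH, the reconstruction step itself is remarkably simple: an inverse Fourier transform of the measured scattering pattern yields the sample image as its cross-correlation with a reference pinhole, offset from the center. The sketch below uses synthetic data; the geometry and sizes are invented for illustration and are not the experiment’s parameters.

    ```python
    # Sketch: Fourier Transform Holography (FTH) reconstruction on toy data.
    import numpy as np

    N = 256
    y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
    obj = ((x**2 + y**2) < 20**2).astype(float)        # toy "sample"
    ref = (((x - 80)**2 + y**2) < 2**2).astype(float)  # reference pinhole
    field = obj + ref

    intensity = np.abs(np.fft.fft2(field))**2          # detector records |FFT|^2
    recon = np.fft.fftshift(np.fft.ifft2(intensity))   # autocorrelation of the field
    # The object image appears in `recon` offset by the object-reference distance
    # (here +-80 pixels in x). Repeating this for each X-ray wavelength and
    # pump-probe delay builds up the hyperspectral movie described above.
    ```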

  • Words prove their worth as teaching tools for robots

    Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm as it lifts and uses a variety of tools.
    The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.
    Adding descriptions of a tool’s form and function to the training process for the robot improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning on Dec. 14.
    Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.
    “Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.
    The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool. More
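    A minimal sketch of that querying step, assuming the legacy `openai` Completion API and a GPT-3 engine name (the engine choice is an assumption; the prompt template is the one quoted above):

    ```python
    # Sketch: query GPT-3 for a tool description using the article's prompt template.
    import openai

    openai.api_key = "YOUR_KEY"  # placeholder

    def describe_tool(tool, feature):
        prompt = f"Describe the {feature} of {tool} in a detailed and scientific response"
        resp = openai.Completion.create(
            engine="text-davinci-002",  # assumed GPT-3 engine
            prompt=prompt,
            max_tokens=128,
            temperature=0.0,
        )
        return resp["choices"][0]["text"].strip()

    print(describe_tool("hammer", "shape"))  # feature is "shape" or "purpose"
    ```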

  • New and improved multi-band operational receiver for 5G new radio communication

    An ultra-wide-band receiver based on a harmonic selection technique to improve the operational bandwidth of 5G networks has been developed by Tokyo Tech researchers in a new study. Fifth generation (5G) mobile networks are now being used worldwide, at millimeter-wave frequencies reaching tens of gigahertz. To keep up with the data traffic in these networks, appropriate receivers are necessary. In this regard, the proposed technology could revolutionize the world of next-generation communications.
    As next-generation communication networks are developed, the technology used to deploy them must evolve alongside them. Fifth generation mobile network New Radio (5G NR) bands are continuously expanding to improve channel capacity and data rates. To realize cross-standard communication and worldwide application using 5G NR, multi-band compatibility is therefore essential.
    Recently, millimeter-wave (mmW) communication has been considered a promising candidate for managing the ever-increasing data traffic in 5G NR networks. In the past few years, many studies have shown that a phased-array architecture improves signal quality for 5G NR communication at mmW frequencies. Unfortunately, multiple chips are needed for multi-band operation, which increases system size and complexity. Moreover, operating in multi-band modes exposes receivers to changing electromagnetic environments, leading to cross-talk and cluttered signals with unwanted echoes.
    To address these issues, a team of researchers from Tokyo Institute of Technology (Tokyo Tech) in Japan has now developed a novel “harmonic-selection technique” for extending the operational bandwidth of 5G NR communication. The study, led by Professor Kenichi Okada, was published in the IEEE Journal of Solid-State Circuits. “Compared to conventional systems, our proposed network operates at low power consumption. Additionally, the frequency coverage makes it compatible with all existing 5G bands, as well as the 60 GHz earmarked as the next potential licensed band. As such, our receiver could be the key to utilizing the ever-growing 5G bandwidth,” says Prof. Okada.
    To fabricate the proposed dual-channel multi-band phased-array receiver, the team used a 65-nm CMOS process. The resulting chip, which includes both receiver channels, measures just 3.2 mm x 1.4 mm.
    The team took a three-pronged approach to tackling the problems with 5G NR communication. The first prong was a harmonic-selection technique using a tri-phase local oscillator (LO) to drive the mixer; this decreased the required LO frequency coverage while allowing multi-band down-conversion. The second was a dual-mode multi-band low-noise amplifier (LNA), whose structure not only improved power efficiency and tolerance of inter-band blockers (reducing interference from other bands) but also struck a good balance between circuit performance and chip area. The third was the receiver itself, which used a Hartley architecture to improve image rejection; the team introduced a single-stage hybrid-type polyphase filter (PPF) for sideband selection and image rejection calibration.
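    The Hartley principle named here is standard enough to sketch numerically: mixing with quadrature LO phases, then shifting the Q path by 90 degrees and summing, cancels the image band. The simulation below illustrates the general idea only, with arbitrary frequencies; it is not a model of the Tokyo Tech circuit.

    ```python
    # Sketch: image rejection in an idealized Hartley receiver.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1e6                                    # sample rate (arbitrary units)
    t = np.arange(0, 0.02, 1/fs)
    f_lo, f_if = 200e3, 20e3
    desired = np.cos(2*np.pi*(f_lo + f_if)*t)   # wanted channel at LO + IF
    image   = np.cos(2*np.pi*(f_lo - f_if)*t)   # image channel at LO - IF
    rf = desired + image

    b, a = butter(5, 2*f_if/(fs/2))             # low-pass filter keeping the IF
    i = filtfilt(b, a, rf * np.cos(2*np.pi*f_lo*t))  # in-phase mix
    q = filtfilt(b, a, rf * np.sin(2*np.pi*f_lo*t))  # quadrature mix

    out = i + np.imag(hilbert(q))               # 90-degree shift on Q, then sum
    # `out` keeps the desired IF tone and cancels the image tone; real designs
    # are limited by gain/phase mismatch, hence the PPF calibration above.
    ```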
    The team found that the proposed technique outperformed other state-of-the-art multi-band receivers. The harmonic-selection technique enabled operation from 24.25 to 71 GHz while providing more than 36 dB of inter-band blocker rejection. Additionally, the receiver’s power consumption was low: 36 mW, 32 mW, 51 mW, and 75 mW at frequencies of 28 GHz, 39 GHz, 47.2 GHz, and 60.1 GHz, respectively.
    “By combining a dual-mode multi-band LNA with a polyphase filter, the device realizes rejection of inter-band blockers better than other state-of-the-art filters. This means that for currently used bands, the rejections are better than 50 dB, and over 36 dB for the entire supported 24-71 GHz operation region. With new 5G frequency bands on the horizon, such low-noise broadband receivers will prove to be useful,” concludes an optimistic Prof. Okada.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length. More

  • Cheerful chatbots don't necessarily improve customer service

    Imagine messaging an artificial intelligence (AI) chatbot about a missing package and getting the response that it would be “delighted” to help. Once the bot creates the new order, it says it is “happy” to resolve the issue. Afterward, you receive a survey about your interaction. Would you be likely to rate it as positive or negative?
    This scenario isn’t that far from reality, as AI chatbots are already taking over online commerce. By 2025, 95% of companies will have an AI chatbot, according to Finance Digest. AI might not be sentient yet, but it can be programmed to express emotions.
    Humans displaying positive emotions in customer service interactions have long been known to improve customer experience, but researchers at the Georgia Institute of Technology’s Scheller College of Business wanted to see whether this also applies to AI. They conducted experimental studies to determine whether positive emotional displays improve customer service, and found that emotive AI is appreciated only if the customer expects it, and that it may not be the best avenue for companies to invest in.
    “It is commonly believed and repeatedly shown that human employees can express positive emotion to improve customers’ service evaluations,” said Han Zhang, the Steven A. Denning Professor in Technology & Management. “Our findings suggest that the likelihood of AI’s expression of positive emotion to benefit or hurt service evaluations depends on the type of relationship that customers expect from the service agent.”
    The researchers presented their findings in the paper, “Bots With Feelings: Should AI Agents Express Positive Emotion in Customer Service?,” in Information Systems Research in December.
    Studying AI Emotion
    The researchers conducted three studies to expand the understanding of emotional AI in customer service transactions. Although the participants and scenario changed in each study, the chatbots imbued with emotion used positive emotional adjectives, such as excited, delighted, happy, or glad, and deployed more exclamation points. More
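    A toy illustration of that manipulation: the same service message, rendered with and without positive-emotion markers. The wording is invented for illustration, not taken from the study materials; only the adjective list comes from the article.

    ```python
    # Toy sketch: the same bot reply with and without positive-emotion markers.
    import random

    ADJECTIVES = ["excited", "delighted", "happy", "glad"]  # from the article

    def bot_reply(action, emotive=False):
        if not emotive:
            return f"I will {action}."
        return f"I am {random.choice(ADJECTIVES)} to {action}!"

    print(bot_reply("reissue your missing package"))
    print(bot_reply("reissue your missing package", emotive=True))
    ```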

  • Characters' actions in movie scripts reflect gender stereotypes

    Researchers have developed a novel machine-learning framework that uses scene descriptions in movie scripts to automatically recognize different characters’ actions. Applying the framework to hundreds of movie scripts showed that these actions tend to reflect widespread gender stereotypes, some of which have remained consistent over time. Victor Martinez and colleagues at the University of Southern California, U.S., present these findings in the open-access journal PLOS ONE on December 21.
    Movies, TV shows, and other media consistently portray traditional gender stereotypes, some of which may be harmful. To deepen understanding of this issue, some researchers have explored the use of computational frameworks as an efficient and accurate way to analyze large amounts of character dialogue in scripts. However, some harmful stereotypes might be communicated not through what characters say, but through their actions.
    To explore how characters’ actions might reflect stereotypes, Martinez and colleagues used a machine-learning approach to create a computational model that can automatically analyze scene descriptions in movie scripts and identify different characters’ actions. Using this model, the researchers analyzed over 1.2 million scene descriptions from 912 movie scripts produced from 1909 to 2013, identifying 50,000 actions performed by 20,000 characters.
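    A minimal sketch of how (character, action) pairs can be pulled from a scene description with off-the-shelf dependency parsing (spaCy); this illustrates the kind of extraction involved, not the authors’ actual model.

    ```python
    # Sketch: extract (character, action) pairs from scene descriptions with spaCy.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

    def character_actions(scene_description):
        doc = nlp(scene_description)
        pairs = []
        for token in doc:
            if token.pos_ == "VERB":
                for child in token.children:
                    if child.dep_ == "nsubj":  # the verb's subject (the actor)
                        pairs.append((child.text, token.lemma_))
        return pairs

    print(character_actions("Mary sobs quietly while John watches from the doorway."))
    # e.g. [('Mary', 'sob'), ('John', 'watch')]
    ```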
    Next, the researchers conducted statistical analyses to examine whether there were differences between the types of actions performed by characters of different genders. These analyses identified a number of differences that reflect known gender stereotypes.
    For instance, they found that female characters tend to display less agency than male characters, and that female characters are more likely to show affection. Male characters are less likely to “sob” or “cry,” and female characters are more likely to be subjected to “gawking” or “watching” by other characters, highlighting an emphasis on female appearance.
    While the researchers’ model is limited in its ability to fully capture the nuanced societal context linking the script to each scene and the overall narrative, these findings align with prior research on gender stereotypes in popular media, and could help raise awareness of how media might perpetuate harmful stereotypes and thereby influence people’s real-life beliefs and actions. In the future, the new machine-learning framework could be refined and applied to incorporate notions of intersectionality, such as the interplay of gender, age, and race, to deepen understanding of this issue.
    The authors add: “Researchers have proposed using machine-learning methods to identify stereotypes in character dialogues in media, but these methods do not account for harmful stereotypes communicated through character actions. To address this issue, we developed a large-scale machine-learning framework that can identify character actions from movie script descriptions. By collecting 1.2 million scene descriptions from 912 movie scripts, we were able to study systematic gender differences in movie portrayals at a large scale.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length. More