More stories

  • Researchers show a new way to induce useful defects using invisible material properties

    Much of modern electronic and computing technology is based on one idea: add chemical impurities, or defects, to semiconductors to change their ability to conduct electricity. These altered materials are then combined in different ways to produce devices such as transistors and diodes, which form the basis of digital computing. Indeed, some quantum information technologies are based on a similar principle: adding defects and specific atoms to materials can produce qubits, the fundamental information-storage units of quantum computing.
    Gaurav Bahl, professor of mechanical science and engineering at the University of Illinois Urbana-Champaign and member of the Illinois Quantum Information Sciences and Technology Center, is exploring how special non-linear properties in engineered materials can achieve similar functionalities without the need to add intentional defects. As his research group reports in their article “Self-Induced Dirac Boundary State and Digitization in a Nonlinear Resonator Chain” published in Physical Review Letters, a metamaterial can change its functionality on its own depending on the power level of the input.
    A metamaterial is an artificial system that replicates the behavior of real materials made of natural atoms. The researchers constructed a metamaterial whose behavior is analogous to a special kind of semiconductor called a Dirac material. It consisted of a chain of magneto-mechanical resonators, where the magnetic interactions acted like bonds between atoms in a one-dimensional crystal. When any of these “atoms” was mechanically excited, that is, made to move periodically, the excitation spread to the rest of the crystal, just like electrons injected into a semiconductor.
    After demonstrating that a completely uniform Dirac metamaterial does not allow mechanical excitations to pass through (just as electrons are forbidden from flowing through an insulating semiconductor), the researchers introduced a specific set of nonlinearities into the system. This new property made the material sensitive to the level of mechanical excitation and could subtly change the resonance energy of the magneto-mechanical atoms. With the right choice of nonlinearity, the researchers observed a sharp transition from insulating to conducting behavior depending on how strong an input was provided.
    This intriguing behavior resulted from the spontaneous appearance of a new boundary where the effective mass of the mechanical excitation, an invisible internal property of Dirac materials, changed sign depending on the level of the excitation. The researchers were surprised to find that this boundary was accompanied by a new state that “popped in” at the boundary and allowed input energy to transmit through the material. The effect was very similar to how a defect atom acts within a semiconductor.
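    The sign-flipping Dirac mass has a standard textbook analogue that can be checked numerically. The sketch below is not the authors' magneto-mechanical system: it is the Su-Schrieffer-Heeger (SSH) tight-binding chain, in which alternating strong and weak bonds play the role of the Dirac mass, and a hand-placed flip of the bond pattern stands in for the boundary (in the paper, the nonlinearity creates that boundary dynamically). Diagonalizing the chain shows a mid-gap state pinned to the wall:

```python
import numpy as np

def dirac_chain_with_wall(n_sites=81, t_strong=1.5, t_weak=0.5):
    """Tight-binding chain whose bond alternation (a stand-in for the
    Dirac mass) flips at the center, creating a domain wall."""
    H = np.zeros((n_sites, n_sites))
    wall = n_sites // 2  # bond index where the pattern flips
    for j in range(n_sites - 1):
        if j < wall:
            t = t_strong if j % 2 == 0 else t_weak  # strong-weak-strong...
        else:
            t = t_weak if j % 2 == 0 else t_strong  # weak-strong-weak...
        H[j, j + 1] = H[j + 1, j] = t
    return H

H = dirac_chain_with_wall()
energies, modes = np.linalg.eigh(H)
idx = int(np.argmin(np.abs(energies)))            # the mid-gap state
wall_site = int(np.argmax(np.abs(modes[:, idx])))  # where it lives
print(abs(round(energies[idx], 6)))  # ~0: an energy inside the band gap
print(wall_site)                     # localized at the central wall site
```

    Without the flip, the same chain has a gapped spectrum with no state near zero energy, mirroring the insulating behavior of the unperturbed metamaterial.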
    “In photonics and electronics,” Bahl said, “nonlinear properties like this could be engineered to form the foundation of new computational systems that don’t rely on the conventional semiconductor approach.”
    Whenever we add defect states and special atoms, we interrupt the uniformity of the material, which can lead to other undesirable effects. However, a material in which a defect state can be formed on demand through an invisible property, such as the Dirac mass used in this work, has profound implications for quantum information systems, where it promises qubits that can be produced dynamically wherever they are needed. The next challenge is finding or synthesizing real materials based on natural atoms that can replicate this effect.
    The experiments were performed by Physics graduate student Gengming Liu in collaboration with postdoc Dr. Jiho Noh and MechSE graduate student Jianing Zhao.
    Story Source:
    Materials provided by University of Illinois Grainger College of Engineering. Original written by Michael O’Boyle. Note: Content may be edited for style and length.

  • Shrinking hydrogels enlarge nanofabrication options

    Carnegie Mellon University’s Yongxin (Leon) Zhao and the Chinese University of Hong Kong’s Shih-Chi Chen have a big idea for manufacturing nanodevices.
    Zhao’s Biophotonics Lab develops novel techniques to study biological and pathological processes in cells and tissues. Through a process called expansion microscopy, the lab works to advance techniques to proportionally enlarge microscopic samples embedded in a hydrogel, allowing researchers to be able to view fine details without upgrading their microscopes.
    In 2019, an inspiring conversation with Shih-Chi Chen, who was visiting Carnegie Mellon as an invited speaker and is a professor in the Chinese University of Hong Kong’s Department of Mechanical and Automation Engineering, sparked a collaboration between the two researchers. They thought they could use their combined expertise to find novel solutions to a long-standing challenge in microfabrication: developing ways to reduce the size of printable nanodevices to as small as tens of nanometers, or several atoms thick.
    Their solution is the opposite of expansion microscopy: create the 3D pattern of a material in hydrogel and shrink it for nanoscale resolution.
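    The resolution gain follows from simple geometry: every printed dimension is divided by the gel's linear shrink factor, while the volume drops by its cube. A minimal sketch with purely illustrative numbers (the paper's actual shrink factor is not quoted here):

```python
def after_shrink(printed_nm: float, linear_shrink: float):
    """Feature size (nm) and volume-reduction factor after isotropic
    shrinkage of a hydrogel-embedded print. Illustrative only."""
    return printed_nm / linear_shrink, linear_shrink ** 3

# A hypothetical 500 nm printed line and a hypothetical 10x linear shrink
size_nm, volume_factor = after_shrink(500, 10)
print(size_nm)         # 50.0 -> tens-of-nanometers features
print(volume_factor)   # 1000 -> the gel's volume shrinks a thousandfold
```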
    “Shih-Chi is known for inventing the ultrafast two-photon lithography system,” said Zhao, the Eberly Family Career Development Associate Professor of Biological Sciences. “We met during his visit to Carnegie Mellon and decided to combine our techniques and expertise to pursue this radical idea.”
    The results of the collaboration open new doors for designing sophisticated nanodevices and are published in the journal Science.

  • Can the AI driving ChatGPT help to detect early signs of Alzheimer's disease?

    The artificial intelligence algorithms behind the chatbot program ChatGPT — which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries — might one day be able to help doctors detect Alzheimer’s Disease in its early stages. Research from Drexel University’s School of Biomedical Engineering, Science and Health Systems recently demonstrated that OpenAI’s GPT-3 program can identify clues in spontaneous speech that predict the early stages of dementia with 80% accuracy.
    Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing programs for early prediction of Alzheimer’s — leveraging current research suggesting that language impairment can be an early indicator of neurodegenerative disorders.
    Finding an Early Sign
    The current practice for diagnosing Alzheimer’s Disease typically involves a medical history review and a lengthy set of physical and neurological evaluations and tests. While there is still no cure for the disease, spotting it early can give patients more options for therapeutics and support. Because language impairment is a symptom in 60-80% of dementia patients, researchers have been focusing on programs that can pick up on subtle clues — such as hesitation, grammar and pronunciation mistakes, and forgetting the meaning of words — as a quick test that could indicate whether a patient should undergo a full examination.
    “We know from ongoing research that the cognitive effects of Alzheimer’s Disease can manifest themselves in language production,” said Hualou Liang, PhD, a professor in Drexel’s School of Biomedical Engineering, Science and Health Systems and a coauthor of the research. “The most commonly used tests for early detection of Alzheimer’s look at acoustic features, such as pausing, articulation and vocal quality, in addition to tests of cognition. But we believe the improvement of natural language processing programs provides another path to support early identification of Alzheimer’s.”
    A Program that Listens and Learns
    GPT-3, officially the third generation of OpenAI’s Generative Pre-trained Transformer (GPT), uses a deep learning algorithm trained by processing vast swaths of information from the internet, with a particular focus on how words are used and how language is constructed. This training allows it to produce humanlike responses to any task that involves language, from answering simple questions to writing poems or essays.
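    A typical pipeline of this kind turns each speech transcript into a fixed-length embedding vector (the Drexel work used GPT-3 to produce such vectors) and then trains a classifier to separate impaired from healthy speakers. The sketch below uses random stand-in vectors with an artificial class shift rather than real embeddings, and a simple nearest-centroid rule rather than the study's classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 50, 64
# Hypothetical stand-in "embeddings": in practice each row would be an
# embedding of one speech transcript; the class shift here is artificial.
healthy = rng.normal(0.0, 1.0, (n, dim))
impaired = rng.normal(0.5, 1.0, (n, dim))

# Simple train/test split
Xtr = np.vstack([healthy[:40], impaired[:40]])
ytr = np.array([0] * 40 + [1] * 40)
Xte = np.vstack([healthy[40:], impaired[40:]])
yte = np.array([0] * 10 + [1] * 10)

# Nearest-centroid classifier: label each held-out vector by the closer
# class mean in embedding space
centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == yte).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

    With real embeddings the separation is far noisier than this toy shift; the 80% figure above comes from the study's own evaluation, not from a sketch like this one.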

  • Extreme weather in 2022 showed the global impact of climate change

    It was another shattering year.

    Climate change amped up weather extremes around the globe, smashing temperature records, sinking river levels to historic lows and raising rainfall to devastating highs. Droughts set the stage for wildfires and worsened food insecurity. Researchers found themselves pondering the limits of humans’ ability to tolerate extreme heat (SN: 7/27/22).

    The extreme events from 2022 pinpointed on the map below are just a sample of this year’s climate disasters. Each was exacerbated by human-caused climate change or is in line with projections of regional impacts.

    In its Sixth Assessment Report, released in 2021 and 2022, the United Nations’ Intergovernmental Panel on Climate Change, or IPCC, warned that humans are dramatically overhauling Earth’s climate (SN: 8/9/21). Earth’s average surface temperature has already risen by at least 1.1 degrees Celsius since preindustrial times, thanks to human inputs of heat-trapping gases to the atmosphere, particularly carbon dioxide and methane (SN: 3/10/22). That warming has shifted the flow of energy around the planet, altering weather patterns, raising sea levels and turning past extremes into new normals (SN: 2/1/22).

    And the world will have to weather more such climate extremes as carbon keeps accumulating in the atmosphere and global temperatures continue to rise. But IPCC scientists and others hope that, by highlighting the regional and local effects of climate change, the world will ramp up its efforts to reduce climate-warming emissions — averting a more disastrous future.

  • New X-ray imaging technique to study the transient phases of quantum materials

    The use of light to produce transient phases in quantum materials is fast becoming a novel way to engineer new properties in them, such as the generation of superconductivity or nanoscale topological defects. However, visualizing the growth of a new phase in a solid is not easy, due in part to the wide range of spatial and time scales involved in the process.
    Although in the last two decades scientists have explained light-induced phase transitions by invoking nanoscale dynamics, real-space images had never been produced, so no one had actually seen these dynamics.
    In the new study published in Nature Physics, ICFO researchers Allan S. Johnson and Daniel Pérez-Salinas, led by former ICFO Prof. Simon Wall, in collaboration with colleagues from Aarhus University, Sogang University, Vanderbilt University, the Max Born Institute, the Diamond Light Source, ALBA Synchrotron, Utrecht University, and the Pohang Accelerator Laboratory, have pioneered a new imaging method that allows the capture of the light-induced phase transition in vanadium oxide (VO2) with high spatial and temporal resolution.
    The new technique implemented by the researchers is based on coherent X-ray hyperspectral imaging at a free electron laser, which has allowed them to visualize and better understand, at the nanoscale, the insulator-to-metal phase transition in this very well-known quantum material.
    The crystal VO2 has been widely used to study light-induced phase transitions. It was the first material to have its solid-solid transition tracked by time-resolved X-ray diffraction, and the first to have its electronic nature studied with ultrafast X-ray absorption techniques. At room temperature, VO2 is in the insulating phase. However, if light is applied to the material, it is possible to break the dimers of the vanadium ion pairs and drive the transition from an insulating to a metallic phase.
    In their experiment, the authors of the study prepared thin samples of VO2 with a gold mask to define the field of view. The samples were then taken to the X-ray free-electron laser facility at the Pohang Accelerator Laboratory, where an optical laser pulse induced the transient phase before it was probed by an ultrafast X-ray laser pulse. A camera captured the scattered X-rays, and the coherent scattering patterns were converted into images using two different approaches: Fourier transform holography (FTH) and coherent diffractive imaging (CDI). Images were taken at a range of time delays and X-ray wavelengths to build up a movie of the process with 150-femtosecond time resolution and 50-nm spatial resolution, as well as full hyperspectral information.
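    Of the two approaches, Fourier transform holography is simple enough to demonstrate numerically: the recorded hologram is the far-field intensity of the object-plus-reference field, so its inverse Fourier transform is an autocorrelation, and the cross term with a pinhole reference is an undistorted, displaced copy of the object. A synthetic sketch (sizes and geometry are illustrative, not the experiment's):

```python
import numpy as np

N = 256
y, x = np.mgrid[:N, :N]

# Toy object (a small square "domain") and a reference pinhole placed
# far enough away that the reconstruction terms do not overlap
obj = ((np.abs(x - 128) < 8) & (np.abs(y - 128) < 8)).astype(float)
ref = np.zeros((N, N))
ref[128, 30] = 1.0

# The detector records only the far-field intensity; all phase is lost
hologram = np.abs(np.fft.fft2(obj + ref)) ** 2

# Inverse FFT of the intensity = autocorrelation of (object + reference);
# the object-reference cross term is a copy of the object displaced by
# (object center - pinhole position) = (0, 98) in wrapped coordinates
recon = np.abs(np.fft.ifft2(hologram))
print(recon[0, 98])       # ~1.0: the object copy, recovered directly
print(recon[0, 0] > 200)  # the large central autocorrelation peak
```

    In broad terms, CDI instead recovers the lost phase iteratively, while FTH trades some resolution (set by the reference pinhole) for this direct, single-step inversion.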

  • Words prove their worth as teaching tools for robots

    Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm lifting and using a variety of tools.
    The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.
    Adding descriptions of a tool’s form and function to the training process for the robot improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning on Dec. 14.
    Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.
    “Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.
    The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool.
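    Filling that template in for each tool and feature is simple string assembly. The sketch below is a hypothetical helper, not the team's code, and the actual GPT-3 API call is omitted:

```python
# Prompt template quoted from the article; the tool name is illustrative
TEMPLATE = "Describe the {feature} of {tool} in a detailed and scientific response"

def tool_prompts(tool: str) -> list[str]:
    """One prompt per feature (shape, purpose), as in the ATLA query step."""
    return [TEMPLATE.format(feature=f, tool=tool) for f in ("shape", "purpose")]

prompts = tool_prompts("hammer")
print(prompts[0])
# Each prompt would then be sent to the language model, and the returned
# descriptions added to the robot's training input.
```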

  • New and improved multi-band operational receiver for 5G new radio communication

    An ultra-wide-band receiver based on a harmonic selection technique to improve the operational bandwidth of 5G networks has been developed by Tokyo Tech researchers in a new study. Fifth generation (5G) mobile networks are now being used worldwide, with frequencies reaching into the millimeter-wave range of tens of gigahertz. To keep up with the data traffic in these networks, appropriate receivers are necessary. In this regard, the proposed technology could revolutionize the world of next-generation communications.
    As next-generation communication networks are developed, the technology used to deploy them must evolve alongside them. Fifth generation mobile network New Radio (5G NR) bands are continuously expanding to improve channel capacity and data rates. To realize cross-standard communication and worldwide application using 5G NR, multi-band compatibility is therefore essential.
    Recently, millimeter-wave (mmW) communication has been considered a promising candidate for managing the ever-increasing data traffic between large devices in 5G NR networks. In the past few years, many studies have shown that a phased-array architecture improves the signal quality for 5G NR communication at mmW frequencies. Unfortunately, multiple chips are needed for multi-band operation, which increases the system size and complexity. Moreover, operating in multi-band modes exposes the receivers to changing electromagnetic environments, leading to cross-talk and cluttered signals with unwanted echoes.
    To address these issues, a team of researchers from Tokyo Institute of Technology (Tokyo Tech) in Japan has now developed a novel “harmonic-selection technique” for extending the operational bandwidth of 5G NR communication. The study, led by Professor Kenichi Okada, was published in the IEEE Journal of Solid-State Circuits. “Compared to conventional systems, our proposed network operates at low power consumption. Additionally, the frequency coverage makes it compatible with all existing 5G bands, as well as the 60 GHz earmarked as the next potential licensed band. As such, our receiver could be the key to utilizing the ever-growing 5G bandwidth,” says Prof. Okada.
    To fabricate the proposed dual-channel multi-band phased-array receiver, the team used a 65-nm CMOS process. The chip measured just 3.2 mm x 1.4 mm, including both receiver channels.
    The team took a three-pronged approach to the problems of 5G NR communication. The first prong was a harmonic-selection technique that uses a tri-phase local oscillator (LO) to drive the mixer; this decreased the required LO frequency coverage while allowing multi-band down-conversion. The second was a dual-mode multi-band low-noise amplifier (LNA). The LNA structure not only improved power efficiency and tolerance to inter-band blockers (reducing interference from other bands) but also achieved a good balance between circuit performance and chip area. The third was the receiver itself, which used a Hartley architecture to improve image rejection; the team introduced a single-stage hybrid-type polyphase filter (PPF) for sideband selection and image-rejection calibration.
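    The image-rejection idea behind the Hartley architecture can be illustrated numerically. After a quadrature (I/Q) mix, the wanted channel lands at +f_IF and the image at -f_IF, on opposite sides of zero frequency; that separation is what lets a sideband-selecting polyphase filter keep one and discard the other. The frequencies below are illustrative, not the chip's bands:

```python
import numpy as np

fs, n = 4096.0, 4096          # 1 Hz bin spacing keeps the tones on-grid
t = np.arange(n) / fs
f_lo, f_if = 200.0, 50.0

rf = (np.cos(2 * np.pi * (f_lo + f_if) * t)           # wanted channel
      + 0.5 * np.cos(2 * np.pi * (f_lo - f_if) * t))  # image channel

# Quadrature mix: I = cos path, Q = -sin path, i.e. a complex LO
baseband = rf * np.exp(-2j * np.pi * f_lo * t)

spectrum = np.abs(np.fft.fft(baseband)) / n
freqs = np.fft.fftfreq(n, 1 / fs)
wanted = spectrum[np.argmin(np.abs(freqs - f_if))]   # bin at +f_if
image = spectrum[np.argmin(np.abs(freqs + f_if))]    # bin at -f_if
print(f"wanted at +f_if: {wanted:.2f}, image at -f_if: {image:.2f}")
# A sideband-selecting filter that keeps only positive frequencies would
# pass the wanted tone and reject the image (ideal rejection in this sketch).
```

    In hardware the complex LO is realized as separate I and Q mixer paths whose outputs the polyphase filter combines; the complex exponential above is the mathematical equivalent.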
    The team found that the proposed technique outperformed other state-of-the-art multi-band receivers. The harmonic-selection technique enabled operation between 24.25 and 71 GHz while achieving more than 36 dB of inter-band blocker rejection. Additionally, the power consumed by the receiver was low (36 mW, 32 mW, 51 mW, and 75 mW at frequencies of 28 GHz, 39 GHz, 47.2 GHz, and 60.1 GHz, respectively).
    “By combining a dual-mode multi-band LNA with a polyphase filter, the device realizes rejections to inter-band blockers better than other state-of-the-art filters. This means that for currently used bands, the rejections are better than 50 dB, and over 36 dB for the entire supported 24-71 GHz operation region. With new 5G frequency bands on the horizon, such low-noise broadband receivers will prove to be useful,” concludes an optimistic Prof. Okada.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Cheerful chatbots don't necessarily improve customer service

    Imagine messaging an artificial intelligence (AI) chatbot about a missing package and getting the response that it would be “delighted” to help. Once the bot creates the new order, it says it is “happy” to resolve the issue. Afterward, you receive a survey about your interaction. Would you rate it as positive or negative?
    This scenario isn’t that far from reality, as AI chatbots are already taking over online commerce. By 2025, 95% of companies will have an AI chatbot, according to Finance Digest. AI might not be sentient yet, but it can be programmed to express emotions.
    Humans displaying positive emotions in customer service interactions have long been known to improve customer experience, but researchers at the Georgia Institute of Technology’s Scheller College of Business wanted to see if this also applied to AI. They conducted experimental studies to determine if positive emotional displays improved customer service and found that emotive AI is only appreciated if the customer expects it, and it may not be the best avenue for companies to invest in.
    “It is commonly believed and repeatedly shown that human employees can express positive emotion to improve customers’ service evaluations,” said Han Zhang, the Steven A. Denning Professor in Technology & Management. “Our findings suggest that the likelihood of AI’s expression of positive emotion to benefit or hurt service evaluations depends on the type of relationship that customers expect from the service agent.”
    The researchers presented their findings in the paper, “Bots With Feelings: Should AI Agents Express Positive Emotion in Customer Service?,” in Information Systems Research in December.
    Studying AI Emotion
    The researchers conducted three studies to expand the understanding of emotional AI in customer service transactions. Although they changed the participants and scenario in each study, AI chatbots imbued with emotion used positive emotional adjectives, such as excited, delighted, happy, or glad. They also deployed more exclamation points.